---
name: Apache Spark Optimizer
description: Analyzes and optimizes Apache Spark jobs for performance, cost, and resource utilization
version: 1.0.0
category: Distributed Processing
skillId: SK-DEA-001
allowed-tools:
- Read
- Write
- Edit
- Glob
- Grep
- Bash
---
# Apache Spark Optimizer
## Overview
Analyzes and optimizes Apache Spark jobs for performance, cost, and resource utilization. This skill provides deep expertise in Spark execution plans, partitioning strategies, and resource configuration to maximize efficiency.
## Capabilities
- Spark execution plan analysis and optimization
- Partition strategy recommendations
- Shuffle reduction techniques
- Memory and executor configuration tuning
- Catalyst optimizer hints generation
- Data skew detection and mitigation
- Broadcast join optimization
- Caching strategy recommendations
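To give a feel for the partition-sizing and skew-detection capabilities above, here is a minimal, hypothetical sketch of the kind of heuristic involved. The 128 MB-per-partition target is a common community rule of thumb, and the 2x skew threshold is an illustrative assumption, not a guaranteed behavior of this skill:

```python
# Hypothetical sketch of a partition-sizing heuristic: aim for roughly
# 128 MB of data per shuffle partition, a common Spark rule of thumb.
def recommend_shuffle_partitions(volume_gb: float, target_mb: int = 128) -> int:
    """Suggest a value for spark.sql.shuffle.partitions from input volume."""
    partitions = int((volume_gb * 1024) / target_mb)
    return max(partitions, 1)  # never recommend zero partitions

# Illustrative skew check: flag skew when the largest partition exceeds
# the average partition size by more than `threshold` times.
def is_skewed(skew_factor: float, threshold: float = 2.0) -> bool:
    return skew_factor > threshold

print(recommend_shuffle_partitions(500))  # 500 GB -> 4000 partitions of ~128 MB
print(is_skewed(3.5))                     # True: 3.5x skew exceeds the 2x threshold
```

In practice the skill would combine such size-based estimates with observed task metrics from the Spark UI, but the core idea is the same: derive partition counts from data volume rather than relying on the default of 200.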
## Input Schema
```json
{
  "sparkCode": "string",
  "clusterConfig": "object",
  "executionMetrics": "object",
  "dataCharacteristics": {
    "volumeGB": "number",
    "partitionCount": "number",
    "skewFactor": "number"
  }
}
```
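For example, a request for a skewed 500 GB batch job might look like this (all values and the code fragment are illustrative):

```json
{
  "sparkCode": "df.join(dim, \"customer_id\").groupBy(\"region\").count()",
  "clusterConfig": {
    "executors": 20,
    "coresPerExecutor": 4,
    "memoryPerExecutorGB": 8
  },
  "executionMetrics": {
    "durationMinutes": 42,
    "shuffleReadGB": 180
  },
  "dataCharacteristics": {
    "volumeGB": 500,
    "partitionCount": 200,
    "skewFactor": 3.5
  }
}
```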
## Output Schema
```json
{
  "optimizedCode": "string",
  "recommendations": ["string"],
  "expectedImprovement": {
    "executionTime": "percentage",
    "resourceUsage": "percentage",
    "cost": "percentage"
  },
  "configChanges": "object"
}
```
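A corresponding response might look like the following (again illustrative; actual recommendations and percentages depend on the job analyzed):

```json
{
  "optimizedCode": "df.join(broadcast(dim), \"customer_id\").groupBy(\"region\").count()",
  "recommendations": [
    "Broadcast the small dimension table to eliminate a shuffle",
    "Enable AQE skew-join handling for the skewed customer_id key",
    "Raise spark.sql.shuffle.partitions from 200 to 4000"
  ],
  "expectedImprovement": {
    "executionTime": "-35%",
    "resourceUsage": "-20%",
    "cost": "-25%"
  },
  "configChanges": {
    "spark.sql.adaptive.enabled": "true",
    "spark.sql.shuffle.partitions": "4000"
  }
}
```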
## Target Processes
- ETL/ELT Pipeline
- Streaming Pipeline
- Feature Store Setup
- Pipeline Migration
## Usage Guidelines
1. Provide the Spark code or job definition for analysis
2. Include cluster configuration details (executors, memory, cores)
3. Share execution metrics if available (from Spark UI or history server)
4. Describe data characteristics including volume, partitions, and known skew
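As a sketch of step 2, cluster details typically map onto `spark-submit` settings like these. The flags and `spark.sql.adaptive.*` keys are standard Spark 3.x configuration; the specific values and the job filename are illustrative:

```shell
spark-submit \
  --num-executors 20 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.sql.adaptive.enabled=true \
  --conf spark.sql.adaptive.skewJoin.enabled=true \
  --conf spark.sql.shuffle.partitions=4000 \
  my_job.py
```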
## Best Practices
- Always analyze execution plans before and after optimization
- Test optimizations on representative data samples first
- Monitor resource utilization during optimization validation
- Document configuration changes for reproducibility
- Consider cost implications alongside performance gains
## How It Works
You provide Spark code, cluster configuration, execution metrics, and basic data characteristics. The skill inspects the logical and physical execution plans, detects data skew and costly shuffles, and generates tuned settings for executors, memory, partitions, and Catalyst hints. It returns optimized code snippets, a prioritized recommendation list, and expected improvements in execution time, resource use, and cost.

## FAQ
**What inputs produce the best recommendations?**
Provide the Spark job code, cluster config (executors, cores, memory), recent execution metrics, and data volume/partition info for the most precise guidance.

**Will suggested changes always reduce cost?**
Not always; some optimizations trade higher resource allocation for faster runtime. The skill reports expected changes in execution time, resource usage, and cost so you can decide.