
This skill analyzes and optimizes Apache Spark jobs for performance, cost, and resource utilization, providing actionable tuning recommendations.

npx playbooks add skill a5c-ai/babysitter --skill apache-spark-optimizer


SKILL.md
---
name: Apache Spark Optimizer
description: Analyzes and optimizes Apache Spark jobs for performance, cost, and resource utilization
version: 1.0.0
category: Distributed Processing
skillId: SK-DEA-001
allowed-tools:
  - Read
  - Write
  - Edit
  - Glob
  - Grep
  - Bash
---

# Apache Spark Optimizer

## Overview

Analyzes and optimizes Apache Spark jobs for performance, cost, and resource utilization. This skill provides deep expertise in Spark execution plans, partitioning strategies, and resource configuration to maximize efficiency.

## Capabilities

- Spark execution plan analysis and optimization
- Partition strategy recommendations
- Shuffle reduction techniques
- Memory and executor configuration tuning
- Catalyst optimizer hints generation
- Data skew detection and mitigation
- Broadcast join optimization
- Caching strategy recommendations
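
As one concrete illustration of the partition-strategy capability, a recommendation like "set `spark.sql.shuffle.partitions` so each shuffle partition is roughly 128 MB" can be derived from data volume alone. This is a hedged sketch of that rule of thumb, not the skill's actual implementation; the 128 MB target and the parallelism floor are illustrative assumptions.

```python
import math

def recommend_shuffle_partitions(volume_gb: float, target_partition_mb: int = 128) -> int:
    """Suggest a spark.sql.shuffle.partitions value so that each shuffle
    partition lands near the target size. 128 MB is a common rule of
    thumb, not a Spark default for this setting."""
    partitions = math.ceil(volume_gb * 1024 / target_partition_mb)
    # Keep a small floor so tiny jobs still get some parallelism
    # (the floor of 8 is an illustrative assumption).
    return max(partitions, 8)
```

For example, a 100 GB shuffle suggests around 800 partitions, well above Spark's default of 200.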

## Input Schema

```json
{
  "sparkCode": "string",
  "clusterConfig": "object",
  "executionMetrics": "object",
  "dataCharacteristics": {
    "volumeGB": "number",
    "partitionCount": "number",
    "skewFactor": "number"
  }
}
```
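
A request matching this schema might look like the following; all values are illustrative, and the metric field names inside `clusterConfig` and `executionMetrics` are assumptions since the schema leaves those objects open:

```json
{
  "sparkCode": "df.groupBy('user_id').agg(...)",
  "clusterConfig": { "executors": 10, "executorCores": 4, "executorMemoryGB": 16 },
  "executionMetrics": { "durationSec": 5400, "shuffleReadGB": 220 },
  "dataCharacteristics": { "volumeGB": 500, "partitionCount": 200, "skewFactor": 8.5 }
}
```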

## Output Schema

```json
{
  "optimizedCode": "string",
  "recommendations": ["string"],
  "expectedImprovement": {
    "executionTime": "percentage",
    "resourceUsage": "percentage",
    "cost": "percentage"
  },
  "configChanges": "object"
}
```
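
A hypothetical response for the skewed-aggregation input above, with illustrative numbers only (the skill computes actual estimates from the supplied metrics):

```json
{
  "optimizedCode": "df.repartition(800, 'user_id').groupBy('user_id').agg(...)",
  "recommendations": [
    "Increase spark.sql.shuffle.partitions from 200 to 800",
    "Salt the skewed user_id keys before aggregating"
  ],
  "expectedImprovement": { "executionTime": "-35%", "resourceUsage": "-20%", "cost": "-25%" },
  "configChanges": { "spark.sql.shuffle.partitions": "800" }
}
```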

## Target Processes

- ETL/ELT Pipeline
- Streaming Pipeline
- Feature Store Setup
- Pipeline Migration

## Usage Guidelines

1. Provide the Spark code or job definition for analysis
2. Include cluster configuration details (executors, memory, cores)
3. Share execution metrics if available (from Spark UI or history server)
4. Describe data characteristics including volume, partitions, and known skew
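
The `skewFactor` requested in step 4 can be read off per-partition sizes from the Spark UI. One common definition, sketched here as an assumption (the skill may use a different formula), is the ratio of the largest partition to the mean:

```python
def skew_factor(partition_sizes_mb: list[float]) -> float:
    """Ratio of the largest partition to the mean partition size.
    A value near 1 means balanced data; values much greater than 1
    indicate skew worth mitigating (e.g. by key salting)."""
    mean = sum(partition_sizes_mb) / len(partition_sizes_mb)
    return max(partition_sizes_mb) / mean
```

Four partitions of 10, 10, 10, and 370 MB average 100 MB, giving a skew factor of 3.7.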

## Best Practices

- Always analyze execution plans before and after optimization
- Test optimizations on representative data samples first
- Monitor resource utilization during optimization validation
- Document configuration changes for reproducibility
- Consider cost implications alongside performance gains
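
When validating an optimization, the before-and-after comparison reduces to a simple percentage, which is how the `expectedImprovement` fields can be checked against reality. A minimal sketch:

```python
def improvement_pct(before: float, after: float) -> float:
    """Percentage reduction from a before/after measurement pair
    (positive = improvement). Applies equally to execution time,
    resource usage, or cost."""
    return round((before - after) / before * 100, 1)
```

A job that drops from 200 to 130 minutes shows a 35% execution-time improvement.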

Overview

This skill analyzes and optimizes Apache Spark jobs for performance, cost, and resource utilization. It provides actionable code changes, cluster configuration recommendations, and expected improvement estimates to make Spark workloads faster and cheaper. The focus is on practical, measurable wins across batch and streaming pipelines.

How this skill works

You provide Spark code, cluster configuration, execution metrics, and basic data characteristics. The skill inspects the physical and logical execution plans, detects data skew and costly shuffles, and generates tuned settings for executors, memory, partitions, and Catalyst hints. It returns optimized code snippets, a prioritized recommendation list, and expected improvements in time, resource use, and cost.
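
The executor and memory tuning mentioned above often follows a well-known sizing heuristic: reserve a core and some memory per node for the OS and daemons, cap executors at about 5 cores each, and deduct roughly 10% for off-heap overhead. This is a sketch of that heuristic under those stated assumptions, not the skill's exact algorithm:

```python
def executor_layout(node_cores: int, node_memory_gb: int, nodes: int,
                    cores_per_executor: int = 5) -> dict:
    """Common executor-sizing heuristic: leave 1 core and 1 GB per node
    for the OS/daemons, use ~5 cores per executor, and reserve ~10% of
    executor memory for overhead."""
    usable_cores = node_cores - 1
    usable_mem_gb = node_memory_gb - 1
    execs_per_node = usable_cores // cores_per_executor
    mem_per_exec = usable_mem_gb / execs_per_node
    heap_gb = int(mem_per_exec * 0.9)  # ~10% headroom for off-heap overhead
    return {
        "num_executors": execs_per_node * nodes,
        "executor_cores": cores_per_executor,
        "executor_memory_gb": heap_gb,
    }
```

On ten 16-core, 64 GB nodes this yields 30 executors with 5 cores and 18 GB of heap each.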

When to use it

  • Improving slow ETL or ELT jobs with high execution time
  • Reducing cloud cost by lowering resource consumption for frequent pipelines
  • Diagnosing and mitigating data skew or excessive shuffling
  • Tuning streaming jobs to meet latency and throughput targets
  • Validating migration from one cluster size or provider to another

Best practices

  • Supply real execution metrics (Spark UI or history server) for accurate recommendations
  • Test suggested changes on representative datasets before full rollout
  • Compare before-and-after execution plans to confirm improvements
  • Document configuration changes and keep reproducible runbooks
  • Balance performance gains against cost and operational complexity

Example use cases

  • Analyze a nightly ETL job and reduce runtime by changing partitioning and caching
  • Optimize a streaming aggregation to lower latency through executor and memory tuning
  • Recommend broadcast joins for small dimension tables to eliminate large shuffles
  • Detect and correct heavy key skew in feature store ingestion pipelines
  • Produce Catalyst optimizer hints and code edits to improve query plan efficiency
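
The broadcast-join use case above hinges on one check: is the small side under the broadcast threshold? Spark's `spark.sql.autoBroadcastJoinThreshold` defaults to 10 MB; a sketch of the decision (the helper name is hypothetical):

```python
def should_broadcast(table_size_mb: float, threshold_mb: float = 10.0) -> bool:
    """Tables at or under the broadcast threshold (10 MB by default in
    Spark) are good candidates for a broadcast join, which ships the
    small table to every executor and avoids shuffling the large side."""
    return table_size_mb <= threshold_mb
```

For larger dimension tables, the threshold can be raised explicitly if executors have the memory to hold the broadcast copy.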

FAQ

What inputs produce the best recommendations?

Provide the Spark job code, cluster config (executors, cores, memory), recent execution metrics, and data volume/partition info for the most precise guidance.

Will suggested changes always reduce cost?

Not always; some optimizations trade higher resource allocation for faster runtime. The skill reports expected changes in execution time, resource usage, and cost so you can decide.