
data skill

/skills/data

This skill helps you design and optimize data pipelines and warehouses using SQL, dbt, Spark, and orchestrators to support analytics.

npx playbooks add skill pluginagentmarketplace/custom-plugin-typescript --skill data

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
1.8 KB
---
name: data-engineering
description: Master data engineering, ETL/ELT, data warehousing, SQL optimization, and analytics. Use when building data pipelines, designing data systems, or working with large datasets.
sasmp_version: "1.3.0"
bonded_agent: 04-data-engineering-analytics
bond_type: PRIMARY_BOND
---

# Data Engineering & Analytics Skill

## Quick Start - SQL Data Pipeline

```sql
-- Create staging table
CREATE TABLE staging_events AS
SELECT 
  event_id,
  user_id,
  event_type,
  event_time,
  properties
FROM raw_events
WHERE event_time >= CURRENT_DATE - INTERVAL '1 day'
AND event_type IN ('click', 'purchase', 'view');

-- Aggregate metrics
SELECT
  DATE(event_time) AS event_date,
  user_id,
  COUNT(*) AS event_count,
  COUNT(DISTINCT event_type) AS unique_event_types
FROM staging_events
GROUP BY 1, 2
ORDER BY event_date DESC, event_count DESC;
```

## Core Technologies

### Data Processing
- Apache Spark
- Apache Flink
- Pandas / Polars
- dbt (data transformation)

### Data Warehousing
- Snowflake
- BigQuery (GCP)
- Redshift (AWS)
- Azure Synapse

### ETL/ELT Tools
- dbt
- Airflow
- Talend
- Informatica

### Streaming
- Apache Kafka
- AWS Kinesis
- Apache Pulsar

### ML & Analytics
- scikit-learn
- TensorFlow
- Tableau / Power BI

## Best Practices

1. **Data Quality** - Validation and testing
2. **Documentation** - Clear metadata
3. **Performance** - Query optimization
4. **Governance** - Data security
5. **Monitoring** - Pipeline alerts
6. **Scalability** - Design for growth
7. **Version Control** - Git for code and configs
8. **Testing** - Data and pipeline testing

## Resources

- [Apache Spark Documentation](https://spark.apache.org/)
- [dbt Documentation](https://docs.getdbt.com/)
- [Mode SQL Tutorial](https://mode.com/sql-tutorial/)
- [Kaggle](https://www.kaggle.com/)

Overview

This skill teaches practical data engineering and analytics: building ETL/ELT pipelines, designing data warehouses, optimizing SQL, and preparing data for analytics or ML. It focuses on scalable tooling and real-world patterns so you can move from raw events to reliable, queryable datasets quickly. The guidance covers batch and streaming, testing, monitoring, and governance to keep pipelines production-ready.

How this skill works

The skill inspects pipeline design, transformation logic, and operational patterns to recommend improvements and templates. It evaluates data ingestion, staging, transformation, and serving layers using SQL snippets, orchestration patterns, and tooling choices. Recommendations include query optimizations, partitioning and clustering, schema design for analytics, and monitoring/test strategies.
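As an illustration of the partitioning and clustering recommendations, here is a minimal sketch in BigQuery (one of the warehouses listed above); the dataset, table, and column names are assumptions reusing the Quick Start schema:

```sql
-- Sketch: a date-partitioned, clustered events table in BigQuery.
-- Partitioning by event date prunes scans for time-bounded queries;
-- clustering by event_type and user_id speeds up common filters.
CREATE TABLE analytics.events
PARTITION BY DATE(event_time)
CLUSTER BY event_type, user_id AS
SELECT
  event_id,
  user_id,
  event_type,
  event_time,
  properties
FROM raw.events;
```

Equivalent levers exist on other platforms: Snowflake uses automatic micro-partitions with optional clustering keys, and Redshift uses distribution and sort keys.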

When to use it

  • Building or refactoring ETL/ELT pipelines for batch or streaming data
  • Designing a data warehouse schema or choosing a warehousing platform
  • Optimizing slow SQL queries and analytics performance
  • Preparing datasets for machine learning or BI reporting
  • Establishing data quality, testing, and monitoring practices

Best practices

  • Implement data quality checks and automated validation at every stage (see the validation sketch after this list)
  • Use version control for transformations and modular, testable code (dbt, CI)
  • Design partitions and clustering based on query patterns and cardinality
  • Instrument pipelines with alerts, metrics, and lineage for observability
  • Apply least-privilege governance and encrypt sensitive data in transit and at rest
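A minimal validation sketch in plain SQL, run against the staging table from the Quick Start (the check names are illustrative): each returned row represents a failed check, so an orchestrator task can fail the run whenever this query returns anything.

```sql
-- Sketch: stage-level quality gates against staging_events.
-- Any returned row is a failed check; wire this into the orchestrator
-- so a non-empty result fails the pipeline run.
SELECT 'null_user_id' AS failed_check, COUNT(*) AS bad_rows
FROM staging_events
WHERE user_id IS NULL
HAVING COUNT(*) > 0

UNION ALL

SELECT 'unexpected_event_type' AS failed_check, COUNT(*) AS bad_rows
FROM staging_events
WHERE event_type NOT IN ('click', 'purchase', 'view')
HAVING COUNT(*) > 0;
```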

Example use cases

  • Create daily staging tables from raw event streams and produce aggregated metrics for dashboards
  • Migrate on-prem ETL jobs to a cloud data warehouse with optimized SQL and partitioning
  • Build a streaming pipeline using Kafka + Spark/Flink to power real-time analytics
  • Implement dbt-based transformations with automated tests and CI/CD for analytics models (a model sketch follows this list)
  • Tune slow BI queries by adding appropriate indexes, materialized views, or denormalized tables
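For the dbt use case above, an incremental staging model might look like the following sketch; the file path and source names are hypothetical, while `config`, `source`, `is_incremental`, and `this` are standard dbt macros:

```sql
-- models/staging/stg_events.sql (hypothetical file path)
-- Incremental materialization processes only new events on each run.
{{ config(materialized='incremental', unique_key='event_id') }}

SELECT
  event_id,
  user_id,
  event_type,
  event_time,
  properties
FROM {{ source('raw', 'events') }}
{% if is_incremental() %}
  -- On incremental runs, pick up only events newer than the last load
  WHERE event_time > (SELECT MAX(event_time) FROM {{ this }})
{% endif %}
```

Pairing a model like this with schema tests (e.g., not_null and unique on event_id) provides the automated-testing half of the CI/CD loop.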

FAQ

Which tools should I choose for a cloud-first pipeline?

Pick a managed warehouse (BigQuery, Snowflake, or Redshift) and pair it with Airflow or a managed orchestrator plus dbt for transformations; choose Kafka or Kinesis for high-throughput streaming.

How do I ensure data quality before reporting?

Implement schema checks, null/consistency tests, row-count comparisons, and end-to-end integration tests; fail fast and surface issues to monitoring dashboards and alerts.
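As a concrete sketch of a row-count comparison, again reusing the Quick Start tables (names are illustrative):

```sql
-- Sketch: reconcile yesterday's staged rows against the filtered source.
-- The orchestrator should fail the run (or alert) when the counts diverge.
SELECT
  (SELECT COUNT(*)
   FROM raw_events
   WHERE event_time >= CURRENT_DATE - INTERVAL '1 day'
     AND event_type IN ('click', 'purchase', 'view')) AS source_rows,
  (SELECT COUNT(*) FROM staging_events) AS staged_rows;
```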