
This skill helps you implement data lineage tracker workflows with production-ready code, validation, and best-practice configurations for data pipelines.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill data-lineage-tracker

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (2.1 KB)
---
name: "data-lineage-tracker"
description: |
  Track data lineage operations across data pipelines. Auto-activating skill for the Data Pipelines category.
  Use when working with data lineage functionality. Triggers on phrases like "data lineage tracker",
  "data tracker", and "data".
allowed-tools: "Read, Write, Edit, Bash(cmd:*), Grep"
version: 1.0.0
license: MIT
author: "Jeremy Longshore <[email protected]>"
---

# Data Lineage Tracker

## Overview

This skill provides automated assistance for data lineage tracking tasks within the Data Pipelines domain.

## When to Use

This skill activates automatically when you:
- Mention "data lineage tracker" in your request
- Ask about data lineage tracker patterns or best practices
- Need help with data pipeline tasks covering ETL, data transformation, workflow orchestration, or streaming data processing

## Instructions

1. Provide step-by-step guidance for data lineage tracking
2. Follow industry best practices and patterns
3. Generate production-ready code and configurations
4. Validate outputs against common standards

## Examples

**Example: Basic Usage**
Request: "Help me with data lineage tracker"
Result: Provides step-by-step guidance and generates appropriate configurations


## Prerequisites

- Relevant development environment configured
- Access to necessary tools and services
- Basic understanding of data pipelines concepts


## Output

- Generated configurations and code
- Best practice recommendations
- Validation results


## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |


## Resources

- Official documentation for related tools
- Best practices guides
- Community examples and tutorials

## Related Skills

Part of the **Data Pipelines** skill category.
Tags: etl, airflow, spark, streaming, data-engineering

Overview

This skill provides automated assistance for tracking data lineage in production data pipelines. It activates automatically for requests mentioning data lineage tracker and helps generate configurations, code snippets, and validation checks. The goal is to make lineage capture, traceability, and auditing practical and repeatable across ETL, streaming, and orchestration stacks.

How this skill works

The skill inspects pipeline definitions, transformation steps, and metadata stores to map upstream and downstream relationships. It generates instrumentation templates, configuration fragments, and validation tests, and it surfaces missing metadata or permission issues. Outputs include code snippets, config files, and actionable remediation steps to improve traceability.
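As a sketch of the kind of instrumentation template the skill generates, the following Python decorator records a lineage event with the standardized metadata fields described below (producer, job_id, timestamp, inputs, outputs). The `emit_lineage` name and the in-memory `LINEAGE_EVENTS` sink are illustrative assumptions; a real deployment would forward events to a metadata store or catalog.

```python
import functools
import time

# In-memory sink for captured lineage events; a real deployment would
# forward these to a metadata store or catalog (hypothetical setup).
LINEAGE_EVENTS = []

def emit_lineage(producer, inputs, outputs):
    """Decorator that records a lineage event each time the wrapped
    transform runs, tagging it with standardized metadata fields."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            LINEAGE_EVENTS.append({
                "producer": producer,
                "job_id": fn.__name__,
                "timestamp": time.time(),
                "inputs": list(inputs),
                "outputs": list(outputs),
            })
            return result
        return wrapper
    return decorator

@emit_lineage(producer="etl-team", inputs=["raw.orders"], outputs=["clean.orders"])
def clean_orders(rows):
    # Example transform: drop rows missing an order id.
    return [r for r in rows if r.get("order_id")]

cleaned = clean_orders([{"order_id": 1}, {"amount": 5}])
print(len(cleaned), LINEAGE_EVENTS[0]["outputs"])
```

Capturing at the transform boundary like this keeps the lineage event tied to the code that produced the data, which makes later auditing straightforward.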

When to use it

  • When you need to map end-to-end lineage for an ETL or streaming job
  • When instrumenting pipelines to emit lineage metadata for auditing
  • While designing or validating data catalog and metadata integrations
  • To generate configuration or code for lineage capture (hooks, decorators)
  • When troubleshooting incomplete lineage or permission-related errors

Best practices

  • Capture lineage at transform boundaries and at ingestion/egress points
  • Emit standardized metadata fields (producer, job_id, timestamp, schema)
  • Integrate lineage with your catalog and monitoring systems
  • Validate lineage with automated tests that cover schema and ancestry
  • Limit sensitive metadata exposure; apply RBAC and encryption to lineage stores
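The ancestry-validation practice above can be sketched as a small graph check. The toy `LINEAGE` mapping and dataset names are hypothetical stand-ins for a graph built from recorded events; the second function flags datasets that are referenced as upstreams but never registered, a common symptom of incomplete capture.

```python
# Toy lineage graph: dataset -> list of direct upstream datasets.
# Names are illustrative; a real graph would come from recorded events.
LINEAGE = {
    "raw.orders": [],
    "clean.orders": ["raw.orders"],
    "reports.daily_sales": ["clean.orders", "clean.customers"],
}

def ancestors(dataset, graph):
    """Walk upstream edges and return every ancestor of a dataset."""
    seen = set()
    stack = list(graph.get(dataset, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def missing_upstreams(graph):
    """Report datasets referenced as an upstream but never registered,
    which usually indicates a gap in lineage capture."""
    registered = set(graph)
    referenced = {u for ups in graph.values() for u in ups}
    return sorted(referenced - registered)

print(sorted(ancestors("reports.daily_sales", LINEAGE)))
print(missing_upstreams(LINEAGE))  # flags "clean.customers" as unregistered
```

Running checks like these in CI turns lineage completeness into an automated test rather than a manual audit.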

Example use cases

  • Generate instrumentation code for Spark jobs to emit lineage events
  • Create Airflow operator hooks that record upstream/downstream relationships
  • Produce configuration for a metadata store (e.g., recording dataset IDs and schema changes)
  • Validate lineage completeness across a multi-step ETL pipeline
  • Diagnose missing lineage caused by permissions or misconfigured hooks
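For the metadata-store use case above, a minimal sketch is to fingerprint each dataset's schema and flag changes on re-registration, so downstream lineage can be re-validated. The `METADATA_STORE` dict, function names, and field types here are assumptions for illustration, not a specific catalog's API.

```python
import hashlib
import json

# Minimal in-memory metadata store mapping dataset IDs to schema
# fingerprints. Hypothetical structure; real catalogs persist this durably.
METADATA_STORE = {}

def schema_fingerprint(schema):
    """Stable hash of a schema definition, used to detect changes."""
    canonical = json.dumps(schema, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

def register_schema(dataset_id, schema):
    """Record a dataset's schema; return True if it changed since the
    last registration (a signal to re-validate downstream lineage)."""
    fp = schema_fingerprint(schema)
    previous = METADATA_STORE.get(dataset_id)
    METADATA_STORE[dataset_id] = fp
    return previous is not None and previous != fp

changed_first = register_schema("clean.orders", {"order_id": "int", "amount": "float"})
changed_second = register_schema("clean.orders", {"order_id": "int", "amount": "float", "currency": "str"})
print(changed_first, changed_second)  # first registration is not a change
```

Sorting keys before hashing makes the fingerprint stable under field reordering, so only genuine schema changes trigger re-validation.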

FAQ

What triggers this skill?

It auto-activates when requests mention phrases like "data lineage tracker", "data tracker", or "data" in the Data Pipelines context.

What outputs can I expect?

You get production-ready snippets, configuration fragments, validation checks, and remediation steps tailored to your pipeline framework.