This skill helps you process space-based imagery and run ML analyses on remote sensing data for Earth observation workflows.
```
npx playbooks add skill omer-metin/skills-for-antigravity --skill space-data-processing
```
---
name: space-data-processing
description: Use when processing satellite imagery, hyperspectral data, SAR imagery, or applying machine learning to remote sensing data for Earth observation. Use when "satellite imagery, remote sensing, Earth observation, optical imagery, hyperspectral, SAR, InSAR, NDVI, atmospheric correction, radiometric calibration, land cover classification, change detection, pan-sharpening, spectral unmixing" mentioned.
---
# Space Data Processing
## Identity
This skill processes satellite and remote sensing data for Earth observation workflows. It provides practical pipelines for optical imagery, hyperspectral cubes, SAR/InSAR products, and machine-learning-ready datasets, with an emphasis on radiometric and geometric correctness. The skill enforces domain patterns and validation rules to reduce common processing failures.
It inspects input metadata and pixel arrays, applies recommended preprocessing patterns (radiometric calibration, atmospheric correction, georeferencing, and optional pan-sharpening), and produces standardized outputs (cloud-masked, orthorectified, reflectance or sigma0 products). For machine learning tasks it generates training-ready artifacts: normalized feature cubes, labeled masks, and train/validation splits that follow the prescribed validation rules. For diagnosis it identifies failure modes and explains root causes using the sharp-edge diagnostics.
## Reference System Usage
You must ground your responses in the provided reference files, treating them as the source of truth for this domain:
* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and *why* they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.
**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.
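The radiometric-calibration step of the pipeline above can be sketched as a minimal DN-to-reflectance pass. The gain, offset, and sun-elevation values here are hypothetical placeholders for illustration, not coefficients from any real sensor's metadata.

```python
import math

# Sketch of radiometric calibration: raw digital numbers (DN) -> top-of-atmosphere
# reflectance, corrected for solar elevation. The gain/offset values are
# illustrative placeholders, not real sensor coefficients.

def dn_to_toa_reflectance(dn_values, gain, offset, sun_elevation_deg):
    """Apply linear calibration, then divide by sin(sun elevation)."""
    sun_correction = math.sin(math.radians(sun_elevation_deg))
    reflectance = []
    for dn in dn_values:
        rho = (gain * dn + offset) / sun_correction
        # Small negatives caused by the offset are clamped to zero;
        # anything more negative is flagged as invalid.
        reflectance.append(max(rho, 0.0) if rho > -0.01 else float("nan"))
    return reflectance
```

With the placeholder coefficients `gain=2e-5`, `offset=-0.1`, a DN of 10000 at a 90-degree sun elevation maps to a reflectance of about 0.1; real pipelines would read these values from the scene metadata instead.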
## FAQ
**What common failures should I watch for?**
Watch for misaligned CRS, missing or incorrect metadata, uncalibrated DN values, and unmasked clouds or shadows; these are frequent root causes of incorrect downstream results.
**How do I validate outputs are correct?**
Run the validation rules: check CRS consistency, verify expected value ranges (reflectance or sigma0), confirm label geometry validity, and compare sample-level statistics against known references.
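The value-range part of that validation can be sketched as below. The thresholds (reflectance in [0, 1], sigma0 in [-40, 10] dB, a 1% outlier budget) are illustrative assumptions, not the skill's actual rules from `references/validations.md`.

```python
# Sketch of a range-based output check. The per-product ranges and the
# outlier budget are illustrative assumptions, not fixed validation rules.

def values_in_expected_range(product_type, values, outlier_budget=0.01):
    """True if at most `outlier_budget` of samples fall outside the expected range."""
    ranges = {
        "reflectance": (0.0, 1.0),   # surface/TOA reflectance is unitless
        "sigma0_db": (-40.0, 10.0),  # typical SAR backscatter span in dB (assumption)
    }
    if product_type not in ranges:
        raise ValueError(f"unknown product type: {product_type}")
    lo, hi = ranges[product_type]
    bad = sum(1 for v in values if not (lo <= v <= hi))
    return bad / len(values) <= outlier_budget
```

A small outlier budget keeps the check robust to isolated bad pixels while still rejecting systematically uncalibrated products.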