
space-data-processing skill


This skill helps you process space-based imagery and run ML analyses on remote sensing data for Earth observation workflows.

npx playbooks add skill omer-metin/skills-for-antigravity --skill space-data-processing

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
1.3 KB
---
name: space-data-processing
description: Use when processing satellite imagery, hyperspectral data, SAR imagery, or applying machine learning to remote sensing data for Earth observation. Use when "satellite imagery, remote sensing, Earth observation, optical imagery, hyperspectral, SAR, InSAR, NDVI, atmospheric correction, radiometric calibration, land cover classification, change detection, pan-sharpening, spectral unmixing" mentioned.
---

# Space Data Processing

## Identity



## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

Overview

This skill processes satellite and remote sensing data for Earth observation workflows. It provides practical pipelines for optical imagery, hyperspectral cubes, SAR/InSAR products, and machine-learning-ready datasets, with an emphasis on radiometric and geometric correctness. The skill enforces domain patterns and validation rules to reduce common processing failures.

How this skill works

It inspects input metadata and pixel arrays, applies recommended preprocessing patterns (radiometric calibration, atmospheric correction, georeferencing, and optional pan-sharpening), and produces standardized outputs (cloud-masked, orthorectified, reflectance or sigma0 products). For machine learning tasks it generates training-ready artifacts: normalized feature cubes, labeled masks, and train/validation splits following the prescribed validation rules. For diagnosis it identifies failure modes and explains root causes using the sharp-edge diagnostics.
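
As a concrete illustration of the radiometric calibration step, here is a minimal Python sketch using rasterio. The `GAIN` and `OFFSET` coefficients, the file paths, and the `dn_to_reflectance` name are hypothetical stand-ins; real pipelines read the calibration coefficients from the scene metadata (for example, a Landsat MTL file or a Sentinel-2 product XML).

```python
import numpy as np
import rasterio

# Hypothetical linear calibration coefficients; real values come from
# the scene metadata shipped with each product.
GAIN = 2.75e-05
OFFSET = -0.2

def dn_to_reflectance(src_path: str, dst_path: str) -> None:
    """Convert raw digital numbers (DN) to top-of-atmosphere reflectance."""
    with rasterio.open(src_path) as src:
        dn = src.read(1).astype("float32")
        profile = src.profile

    # Linear radiometric calibration: reflectance = GAIN * DN + OFFSET,
    # clipped to the physically valid reflectance range.
    reflectance = np.clip(GAIN * dn + OFFSET, 0.0, 1.0)

    profile.update(dtype="float32", count=1)
    with rasterio.open(dst_path, "w", **profile) as dst:
        dst.write(reflectance, 1)
```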

When to use it

  • Preparing optical imagery for land cover classification or NDVI time series
  • Processing hyperspectral data for spectral unmixing or anomaly detection
  • Working with SAR or InSAR for deformation, change detection, or flood mapping
  • Producing machine-learning-ready datasets from remote sensing inputs
  • Validating geometric and radiometric integrity before publishing datasets

Best practices

  • Always run radiometric calibration and atmospheric correction before feature extraction
  • Verify and harmonize CRS and geotransform, reprojecting only when necessary (see the sketch after this list)
  • Apply cloud and shadow masking for optical time-series analyses
  • Use prescribed validation checks to confirm label geometry and pixel value ranges
  • Keep provenance and metadata for every processing step to ensure reproducibility
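
To make the CRS bullet above concrete, here is a minimal sketch of the "verify first, reproject only when necessary" practice, assuming rasterio; the target EPSG code and the `harmonize_crs` name are illustrative, not part of this skill's API.

```python
import rasterio
from rasterio.crs import CRS
from rasterio.warp import calculate_default_transform, reproject, Resampling

def harmonize_crs(src_path: str, dst_path: str, target_crs: str = "EPSG:32633") -> None:
    """Reproject a raster only if its CRS differs from the target CRS."""
    target = CRS.from_string(target_crs)
    with rasterio.open(src_path) as src:
        if src.crs == target:
            # Already harmonized; avoid an unnecessary resampling pass.
            return
        transform, width, height = calculate_default_transform(
            src.crs, target, src.width, src.height, *src.bounds
        )
        profile = src.profile.copy()
        profile.update(crs=target, transform=transform, width=width, height=height)
        with rasterio.open(dst_path, "w", **profile) as dst:
            for band in range(1, src.count + 1):
                reproject(
                    source=rasterio.band(src, band),
                    destination=rasterio.band(dst, band),
                    src_transform=src.transform,
                    src_crs=src.crs,
                    dst_transform=transform,
                    dst_crs=target,
                    resampling=Resampling.bilinear,
                )
```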

Example use cases

  • Convert raw optical satellite tiles to surface reflectance and compute NDVI time series for agricultural monitoring (see the NDVI sketch after this list)
  • Calibrate SAR scenes to sigma0, filter speckle, and generate coherence maps for InSAR deformation studies
  • Produce hyperspectral endmember libraries and run spectral unmixing for mineral mapping
  • Create balanced training datasets with augmented patches and strict validation splits for land cover classification models
  • Detect land cover change by harmonizing multitemporal images, applying radiometric normalization, and running change-detection algorithms
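
The NDVI computation referenced in the first use case is a simple band ratio, NDVI = (NIR - Red) / (NIR + Red). The sketch below assumes both bands have already been calibrated to reflectance; the small `eps` guard against division by zero is an implementation convenience, not part of the definition.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Compute NDVI = (NIR - Red) / (NIR + Red) from reflectance bands."""
    nir = nir.astype("float32")
    red = red.astype("float32")
    # eps guards against division by zero over water or nodata pixels.
    return (nir - red) / (nir + red + eps)
```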

FAQ

What common failures should I watch for?

Watch for mismatched or misaligned CRSs, missing or incorrect metadata, uncalibrated DN values, and unmasked clouds and shadows; these are frequent root causes of incorrect downstream results.

How do I validate outputs are correct?

Run the validation rules: check CRS consistency, expected value ranges (reflectance/sigma0), label geometry validity, and sample-level statistics against known references.
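
As a rough sketch of what such checks can look like in code, assuming rasterio, a surface-reflectance product scaled to [0, 1], and an illustrative expected CRS (the `validate_reflectance` name and EPSG code are hypothetical):

```python
import rasterio
from rasterio.crs import CRS

def validate_reflectance(path: str, expected_crs: str = "EPSG:4326") -> list[str]:
    """Collect basic integrity problems for a surface-reflectance product."""
    problems: list[str] = []
    with rasterio.open(path) as src:
        # CRS consistency: a mismatch here silently corrupts any overlay.
        if src.crs is None or src.crs != CRS.from_string(expected_crs):
            problems.append(f"CRS mismatch: got {src.crs}, expected {expected_crs}")
        data = src.read(1, masked=True)

    # Surface reflectance should lie in [0, 1]; values far outside this
    # range usually mean uncalibrated DNs or a wrong scale factor.
    lo, hi = float(data.min()), float(data.max())
    if lo < 0.0 or hi > 1.0:
        problems.append(f"value range [{lo:.3f}, {hi:.3f}] outside [0, 1]")
    return problems
```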