This skill helps teams monitor data, concept, and performance drift and automate alerting and retraining actions.

```text
npx playbooks add skill amnadtaowsoam/cerebraskills --skill drift-detection
```
---
name: Drift Detection (Alias)
description: Alias skill path for drift detection; points to canonical drift detection + AI observability skills.
---
# Drift Detection
## Overview
This is an **alias skill** so docs can reference `77-mlops-data-engineering/drift-detection`. In this repo, drift guidance is covered by:
- `77-mlops-data-engineering/drift-detection-retraining` (triggering retraining)
- `06-ai-ml-production/ai-observability` (monitoring + alerts)
## Best Practices
- Monitor **data drift** (input distribution), **concept drift** (relationship changes), and **performance drift** (metric degradation).
- Use alert thresholds with rate limiting to avoid paging storms.
- Tie drift signals to actions: investigate → rollback → retrain → redeploy.
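The data-drift metric named in the practices above can be sketched with a from-scratch PSI check; this is a minimal illustration, not a canonical implementation, and the 0.1/0.25 thresholds are a common rule of thumb, not values prescribed by this repo:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty buckets so log(0) never occurs.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
shifted = rng.normal(1.0, 1.0, 10_000)   # simulated input distribution shift

score = psi(baseline, shifted)
# Rule of thumb: <0.1 stable, 0.1-0.25 moderate drift, >0.25 significant drift.
if score > 0.25:
    print(f"ALERT: significant data drift (PSI={score:.2f})")
```

In production you would compute this per feature group against a frozen reference window, and rate-limit the resulting alerts as noted above.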
## Code Examples
```text
# Canonical references:
77-mlops-data-engineering/drift-detection-retraining/SKILL.md
06-ai-ml-production/ai-observability/SKILL.md
```
## Checklist
- [ ] Choose drift metrics per feature group (KS/PSI/JS, embedding drift, etc.)
- [ ] Define retraining triggers and human approval gates (if needed)
- [ ] Add dashboards + runbooks for drift alerts
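The checklist's retraining triggers and approval gates can be expressed as a small decision policy. The class name, fields, and cutoffs below are hypothetical placeholders for whatever signals and thresholds your canonical retraining skill defines:

```python
from dataclasses import dataclass

@dataclass
class DriftSignal:
    psi: float          # data drift score for a feature group
    auc_drop: float     # performance degradation vs. the training baseline

def decide_action(sig: DriftSignal, require_approval: bool = True) -> str:
    """Map drift signals onto the investigate -> rollback -> retrain path."""
    if sig.auc_drop > 0.10:
        return "rollback"                      # severe metric loss: revert first
    if sig.psi > 0.25 or sig.auc_drop > 0.05:
        # Retraining is gated behind human approval when required.
        return "await_approval" if require_approval else "retrain"
    if sig.psi > 0.10:
        return "investigate"                   # moderate drift: inspect before acting
    return "noop"

print(decide_action(DriftSignal(psi=0.30, auc_drop=0.02)))  # await_approval
```

Keeping the policy in one pure function makes the gate auditable: the same inputs always yield the same action, which simplifies both dashboards and runbooks.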
## References
- Canonical skill: `77-mlops-data-engineering/drift-detection-retraining/SKILL.md`
## FAQ

**Is this skill the canonical source for drift detection?**

No. This alias points to the canonical retraining and observability resources, which contain the detailed guidance and code examples.

**Which types of drift should I prioritize?**

All three: data drift (shifts in input distributions), concept drift (changes in the input-output relationship), and performance drift (metric degradation). The relative priority depends on your risk profile and model use case.
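Performance drift, the third category above, can be checked with a rolling-window monitor once delayed labels arrive. The class, window size, and tolerance below are illustrative assumptions, not part of the canonical skills:

```python
from collections import deque

class PerformanceDriftMonitor:
    """Flag performance drift when rolling accuracy falls below a floor."""

    def __init__(self, baseline_acc: float, window: int = 500, tolerance: float = 0.05):
        self.floor = baseline_acc - tolerance
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> bool:
        """Record one labeled prediction; return True if drift is flagged."""
        self.outcomes.append(correct)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                      # wait for a full window first
        acc = sum(self.outcomes) / len(self.outcomes)
        return acc < self.floor

monitor = PerformanceDriftMonitor(baseline_acc=0.90, window=100)
flag = False
for i in range(200):
    # Simulate degradation: the second half of traffic is mostly wrong.
    flag = monitor.record(correct=(i < 100 or i % 3 == 0))
```

Because labels often lag predictions, this kind of check usually complements, rather than replaces, the input-distribution checks that fire immediately.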