This skill routes data processing and analytics tasks through knowledge graphs, batch processing, and vector search to optimize workflows.
```
npx playbooks add skill zpankz/mcp-skillset --skill data-router
```
---
name: data-router
description: Routes data processing, knowledge graph, and analytics tasks. Triggers on graph, vector, knowledge, ontology, process, batch, etl, database, query, csv, json.
---
# Data Router
Routes data processing, knowledge graph, and analytics tasks.
## Subcategories
### Knowledge Graphs
```yaml
triggers: [graph, knowledge-graph, entity, relation, neo4j, networkx]
skills:
- hkgb: Hybrid Knowledge Graph building
- ontolog: Holarchic reasoning over graphs
```
### Batch Processing
```yaml
triggers: [batch, process, etl, migrate, transform]
skills:
- obsidian-batch: Obsidian vault batch operations
- process: Batch processing workflows
```
### Vector / Semantic
```yaml
triggers: [vector, embedding, semantic, similarity, rag]
skills:
- skill-discovery: Semantic skill search
```
### Analytics
```yaml
triggers: [analyze-data, query, sql, csv, json, aggregate]
skills:
- sc:analyze: Data analysis
```
## Routing Decision Tree
```
data request
│
├── Knowledge graph?
│ ├── Building? → hkgb
│ └── Reasoning → ontolog
│
├── Batch processing?
│ ├── Obsidian? → obsidian-batch
│ └── General → process
│
├── Vector/semantic?
│ └── skill-discovery
│
└── Analytics?
└── sc:analyze
```
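The decision tree above can be sketched as a first-match keyword classifier. This is a minimal sketch, not the skill's actual implementation: the `route` function and the branch conditions (e.g. checking for "build" or "obsidian" in the request) are illustrative assumptions; the trigger lists mirror the subcategory configs above.

```python
# Minimal sketch of the routing decision tree: the first matching
# trigger set wins, checked in the same order as the tree above.
ROUTES = [
    ({"graph", "knowledge-graph", "entity", "relation", "neo4j", "networkx"},
     lambda text: "hkgb" if "build" in text else "ontolog"),
    ({"batch", "process", "etl", "migrate", "transform"},
     lambda text: "obsidian-batch" if "obsidian" in text else "process"),
    ({"vector", "embedding", "semantic", "similarity", "rag"},
     lambda text: "skill-discovery"),
    ({"analyze-data", "query", "sql", "csv", "json", "aggregate"},
     lambda text: "sc:analyze"),
]

def route(request):
    """Return the managed skill for a request, or None if nothing matches."""
    words = set(request.lower().split())
    for triggers, pick in ROUTES:
        if words & triggers:  # any trigger word present in the request
            return pick(request.lower())
    return None
```

For example, `route("build a neo4j knowledge graph")` hits the knowledge-graph branch and, because the request mentions building, selects hkgb.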
## Managed Skills
| Skill | Purpose | Trigger |
|-------|---------|---------|
| hkgb | Hybrid KG building | "knowledge graph", "neo4j" |
| ontolog | Graph reasoning | "ontology", "holarchic" |
| obsidian-batch | Vault processing | "obsidian", "vault" |
| process | Batch processing | "batch", "process" |
| skill-discovery | Semantic search | "find skill", "discover" |
| sc:analyze | Data analysis | "analyze-data", "sql" |
## How It Works
The router matches request keywords and metadata against the decision tree to classify each task as knowledge graph, batch processing, vector/semantic, or analytics. Once classified, it dispatches the request to the corresponding managed skill (for example, hkgb for graph building or sc:analyze for analytics) and returns the selected handler along with its routing rationale. It recognizes the triggers listed above (graph, embedding, batch, query, csv, json, and so on) and can be extended with new triggers and skills.
## FAQ
**How does the router decide between similar triggers?**
It matches prioritized triggers defined in the decision tree and falls back to explicit metadata or a default handler if ambiguous.
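One way to implement prioritized matching is with per-trigger weights. This is a hypothetical sketch: the weight values, the trigger-to-skill mapping, and the `sc:analyze` default below are illustrative assumptions, not part of the skill's actual configuration.

```python
# Hypothetical priority-weighted matching: when a request hits triggers
# from several subcategories, the highest total weight wins; ties and
# unmatched requests fall back to a default handler.
TRIGGER_WEIGHTS = {
    "knowledge-graph": 3, "neo4j": 3, "graph": 2,   # knowledge graphs
    "etl": 3, "batch": 2, "process": 1,             # batch processing
    "embedding": 3, "vector": 2,                    # vector/semantic
    "sql": 3, "csv": 2, "json": 1,                  # analytics
}
TRIGGER_SKILL = {
    "knowledge-graph": "hkgb", "neo4j": "hkgb", "graph": "hkgb",
    "etl": "process", "batch": "process", "process": "process",
    "embedding": "skill-discovery", "vector": "skill-discovery",
    "sql": "sc:analyze", "csv": "sc:analyze", "json": "sc:analyze",
}

def route_by_priority(request, default="sc:analyze"):
    words = set(request.lower().split())
    scores = {}
    for word in words & TRIGGER_WEIGHTS.keys():
        skill = TRIGGER_SKILL[word]
        scores[skill] = scores.get(skill, 0) + TRIGGER_WEIGHTS[word]
    if not scores:
        return default  # no trigger matched at all
    best = max(scores.values())
    winners = [s for s, v in scores.items() if v == best]
    return winners[0] if len(winners) == 1 else default  # ambiguous tie
```

So a request like "batch etl over csv" scores process at 5 and sc:analyze at 2, and routes to process despite the analytics trigger.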
**Can I add new skills and triggers?**
Yes. Add the new trigger keywords and map them to the managed skill in the routing configuration; include tests for the new branch.
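For instance, a new subcategory could be registered alongside the existing ones in the same format. The skill name and triggers here are hypothetical, shown only to illustrate the shape of the mapping:

```yaml
triggers: [timeseries, forecast, resample]
skills:
- ts-analyze: Time-series analysis
```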
**What happens on routing failures?**
Failures should be logged and a fallback or error handler invoked; include validation steps to catch unsupported inputs early.
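A minimal sketch of that pattern, assuming a `route` function like the one in the decision-tree example; the validation rule and the `sc:analyze` fallback are illustrative choices, not prescribed by the skill:

```python
import logging

logger = logging.getLogger("data-router")

def dispatch(request, route, fallback="sc:analyze"):
    """Validate the request, route it, and fall back on failure."""
    # Catch unsupported inputs early, before routing.
    if not request or not request.strip():
        raise ValueError("empty request cannot be routed")
    try:
        skill = route(request)
        if skill is None:
            raise LookupError(f"no trigger matched: {request!r}")
        return skill
    except LookupError as exc:
        # Log the failure and invoke the fallback handler.
        logger.warning("routing failed (%s); using fallback %s", exc, fallback)
        return fallback
```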