
infranodus-reasoning skill

/infranodus-skills/infranodus-reasoning

This skill analyzes text with state-aware reasoning, detects patterns, identifies gaps, and guides ontology design and strategic content development using InfraNodus

npx playbooks add skill zpankz/mcp-skillset --skill infranodus-reasoning

---
name: infranodus-reasoning
description: State-aware cognitive reasoning engine combining pattern detection, critical questioning, ontology validation, and gap analysis through InfraNodus integration. Uses programmatic Python modules for algorithmic precision with Claude interpreting results contextually. Handles writing analysis, cognitive diagnosis, ontology generation, and strategic content development.
---

# InfraNodus Reasoning Engine

Programmatic cognitive reasoning system integrating:
- **Temporal state tracking** (BIASED/FOCUSED/DIVERSIFIED/DISPERSED with energy economics)
- **Writing pattern analysis** (grammatical signals of cognitive states)
- **Critical perspective generation** (state-aware questioning)
- **Ontology validation** (anti-hierarchy enforcement)
- **Gap interpretation** (contextual InfraNodus integration)
- **Intelligent routing** (context detection and pipeline selection)

## Architecture

**Programmatic Layer** (Python modules in `scripts/`):
- `router.py` - Context detection and route selection
- `coordinator.py` - Pipeline orchestration
- `state_manager.py` - Temporal state persistence
- `pattern_detector.py` - Writing pattern analysis
- `question_engine.py` - Critical question generation
- `gap_analyzer.py` - Gap interpretation with state context
- `ontology_validator.py` - Anti-hierarchy validation
- `infranodus_bridge.py` - MCP tool integration interface
- `utils.py` - Shared data structures and utilities

**Interpretive Layer** (Claude):
- `SKILL.md` (this file) - Orchestration and natural language synthesis
- `components/guidance.md` - Philosophical context for interpretation
- `components/*.md` - Reference component skills

## Usage Workflow

### Step 1: Route Detection

Invoke the router to analyze user intent and select appropriate pipeline:

```bash
python3 scripts/router.py "user message here"
```

**Router output** (JSON):
```json
{
  "route": "text_analysis",
  "confidence": 0.85,
  "reason": "Substantial text provided for comprehensive analysis",
  "components": ["pattern_detector", "gap_analyzer", "infranodus_bridge"],
  "options": null
}
```
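
In a driver script, route detection can be wrapped so that any failure falls back to the "clarify" route, as the error-handling section specifies. A minimal sketch, assuming `router.py` prints its JSON verdict on stdout:

```python
import json
import subprocess

def detect_route(message: str) -> dict:
    """Run router.py and parse its JSON verdict, defaulting to the
    'clarify' route on any failure (the documented error policy)."""
    try:
        result = subprocess.run(
            ["python3", "scripts/router.py", message],
            capture_output=True, text=True, check=True, timeout=30,
        )
        return json.loads(result.stdout)
    except (subprocess.SubprocessError, json.JSONDecodeError, OSError):
        # Router missing, crashed, timed out, or emitted invalid JSON.
        return {"route": "clarify", "confidence": 0.0,
                "reason": "router unavailable", "components": [],
                "options": None}
```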

### Step 2: Pipeline Execution

Invoke coordinator with the selected route:

```bash
python3 scripts/coordinator.py <route> "user message" "text to analyze"
```

**Coordinator output** (JSON):
```json
{
  "route": "text_analysis",
  "state_before": {...},
  "state_after": {...},
  "patterns": {...},
  "questions": [...],
  "gaps": [...],
  "recommendations": [...],
  "requires_mcp": true,
  "mcp_requests": [...],
  "errors": []
}
```
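
Before any interpretation, this JSON can be consumed mechanically. A hedged sketch using the field names above (the error-raising policy is an assumption):

```python
def plan_mcp_calls(result: dict) -> list:
    """Return (tool, parameters, parser) triples for every MCP request
    the coordinator emitted; empty when the pipeline is self-contained."""
    if result.get("errors"):
        # Surface pipeline failures before touching any MCP tool.
        raise RuntimeError(f"pipeline errors: {result['errors']}")
    if not result.get("requires_mcp"):
        return []
    return [(r["tool"], r["parameters"], r["parser"])
            for r in result.get("mcp_requests", [])]
```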

### Step 3: MCP Tool Invocation (if required)

If `requires_mcp` is `true`, invoke the InfraNodus MCP tools described by the `mcp_requests` data:

```javascript
// Each request contains:
{
  "tool": "generate_knowledge_graph",
  "parameters": {...},
  "parser": "parse_graph_response"
}
```

Invoke the tool, then parse the response using the specified parser from `infranodus_bridge.py`.
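
One way to wire that dispatch, assuming the parsers (e.g. `parse_graph_response`) are module-level functions in `scripts/infranodus_bridge.py`; the `bridge` parameter is injectable purely so the sketch is testable without that module:

```python
import importlib
from types import SimpleNamespace

def parse_mcp_response(request: dict, raw_response: dict, bridge=None):
    """Look up the parser named in the request and apply it to the raw
    MCP response. `bridge` defaults to scripts.infranodus_bridge."""
    if bridge is None:
        bridge = importlib.import_module("scripts.infranodus_bridge")
    parser = getattr(bridge, request["parser"])
    return parser(raw_response)
```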

### Step 4: Result Interpretation

Combine programmatic output with MCP data and interpret using `components/guidance.md` context:

1. **Pattern → State correlation**: Reference guidance.md cognitive states
2. **Questions → Priority**: Use state-aware question interpretation
3. **Gaps → Strategies**: Apply state-dependent gap interpretation
4. **Recommendations → Natural language**: Synthesize into user-facing guidance

## Routes and Pipelines

### pattern_detection_only

**When**: Text provided without specific request
**Components**: [pattern_detector]
**Output**: Patterns, state detection
**MCP**: No

**Usage**:
```bash
python3 scripts/coordinator.py pattern_detection_only "analyze" "text here"
```

**Interpret**: Report patterns detected and any cognitive state shifts.

---

### text_analysis

**When**: Grammar fixes, text analysis, "analyze" keyword + text
**Components**: [pattern_detector, gap_analyzer, infranodus_bridge]
**Output**: Patterns, gap analysis request
**MCP**: Yes (generate_content_gaps)

**Usage**:
```bash
python3 scripts/coordinator.py text_analysis "fix grammar" "text here"
```

**Interpret**:
1. Report pattern findings
2. Invoke InfraNodus MCP tool with mcp_requests
3. Present grammar-corrected text with pattern-based insights
4. Suggest gap development if relevant

---

### cognitive_diagnosis

**When**: "stuck", "cognitive", "state", "thinking" keywords
**Components**: [state_manager, pattern_detector, question_engine]
**Output**: State analysis, diagnostic questions
**MCP**: No

**Usage**:
```bash
python3 scripts/coordinator.py cognitive_diagnosis "I feel stuck" "user text"
```

**Interpret**:
1. Report current cognitive state, dwelling time, energy level
2. Present diagnostic questions generated by question_engine
3. Explain state dynamics using guidance.md
4. Recommend state transition if needed

---

### critical_intervention

**When**: Energy <0.2, dwelling exceeded, "challenge" keyword
**Components**: [question_engine, gap_analyzer]
**Output**: Maximum challenge questions, state recommendations
**MCP**: No

**Usage**:
```bash
python3 scripts/coordinator.py critical_intervention "challenge assumptions" "user text"
```

**Interpret**:
1. Present challenging questions (8+ questions)
2. Explain intervention reason (energy/dwelling)
3. Recommend state transition
4. Provide blind spot analysis

---

### ontology_generation

**When**: "ontology", "knowledge graph" keywords
**Components**: [ontology_validator, infranodus_bridge]
**Output**: Validation results, graph creation request
**MCP**: Yes (create_knowledge_graph) if valid

**Usage**:
```bash
python3 scripts/coordinator.py ontology_generation "create ontology" "ontology text"
```

**Interpret**:
1. Report validation results (errors, warnings, metrics)
2. If invalid: Explain anti-hierarchy or relation code violations
3. If valid: Invoke create_knowledge_graph MCP tool
4. Provide improvement recommendations
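
For intuition, the anti-hierarchy check can be approximated by counting how often each entity appears across `[[entity]] relation [[entity]] [code]` statements. The 50% dominance threshold and the parsing regex are assumptions for illustration; the authoritative rules live in `ontology_validator.py`:

```python
import re
from collections import Counter

STATEMENT = re.compile(r"\[\[(.+?)\]\]\s+\S+\s+\[\[(.+?)\]\]")

def dominance_check(ontology_text: str, threshold: float = 0.5):
    """Return (entity, share) pairs for entities appearing in more than
    `threshold` of all statements. Threshold is illustrative only."""
    pairs = STATEMENT.findall(ontology_text)
    if not pairs:
        return []
    counts = Counter()
    for subject, obj in pairs:
        counts[subject] += 1
        counts[obj] += 1
    total = len(pairs)
    return [(entity, n / total) for entity, n in counts.items()
            if n / total > threshold]
```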

---

### full_pipeline

**When**: Substantial text (>200 words) + "develop"/"strategic" keywords
**Components**: [pattern_detector, gap_analyzer, infranodus_bridge, question_engine]
**Output**: Comprehensive analysis
**MCP**: Yes (develop_text_tool, generate_content_gaps)

**Usage**:
```bash
python3 scripts/coordinator.py full_pipeline "develop this" "long text"
```

**Interpret**:
1. Report pattern analysis
2. Invoke multiple InfraNodus MCP tools (develop_text_tool, generate_content_gaps)
3. Parse and contextualize gap data with gap_analyzer
4. Present research questions
5. Provide development strategy recommendations
6. Generate follow-up questions

---

### clarify

**When**: Ambiguous or very short messages
**Components**: []
**Output**: Clarification request
**MCP**: No

**Interpret**: Ask user to specify intent (grammar? analysis? ontology? diagnosis?)

---

## State-Aware Interpretation

Always check current conversation state before interpreting results:

```bash
python3 -c "from scripts.state_manager import load_state; import json; print(json.dumps(load_state(), indent=2))"
```

**Key state factors**:
- `current_state`: BIASED/FOCUSED/DIVERSIFIED/DISPERSED
- `dwelling_time`: Exchanges in current state
- `energy_level`: 0.0 to 1.0
- `state_history`: Transition record
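
As an illustration, the critical_intervention trigger (energy below 0.2 or dwelling past threshold) can be computed directly from these fields. Only the BIASED threshold of 3 is documented (see Example 2); the other thresholds here are invented for the sketch:

```python
# Dwelling thresholds: BIASED -> 3 is documented; the rest are
# placeholder values for this illustration.
DWELL_THRESHOLDS = {"BIASED": 3, "FOCUSED": 5, "DIVERSIFIED": 4, "DISPERSED": 2}

def needs_intervention(state: dict) -> bool:
    """Mirror the critical_intervention trigger: energy below 0.2 or
    dwelling time past the current state's threshold."""
    threshold = DWELL_THRESHOLDS.get(state["current_state"], 3)
    return state["energy_level"] < 0.2 or state["dwelling_time"] > threshold
```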

**State affects**:
- Question intensity and type
- Gap interpretation strategy
- Intervention priority
- Recommendation tone

Reference `components/guidance.md` for state-specific interpretation guidelines.

## Examples

### Example 1: Grammar Correction with Pattern Analysis

**User**: "Fix grammar: Machine learning help us understand patterns. Its about connections not just data itself."

**Workflow**:
```bash
# Route detection
python3 scripts/router.py "Fix grammar: Machine learning..."
# Output: route="text_analysis", confidence=0.85

# Execute pipeline
python3 scripts/coordinator.py text_analysis "Fix grammar" "Machine learning help us..."
# Output: patterns detected, gap analysis request
```

**Interpret**:
1. Correct grammar: "Machine learning helps us understand patterns. It's about connections, not just the data itself."
2. Report patterns: repetitive_structures=false, punctuation_rhythm=mixed
3. No significant cognitive state concerns
4. Skip MCP gap analysis (text too short)

---

### Example 2: Cognitive Diagnosis

**User**: "I keep thinking about the same problem over and over. Can't move forward."

**Workflow**:
```bash
# Route
python3 scripts/router.py "I keep thinking..."
# Output: route="cognitive_diagnosis"

# Execute
python3 scripts/coordinator.py cognitive_diagnosis "I keep thinking..." "same problem over and over"
# Output: state=BIASED, dwelling=4, energy=0.65, questions=[8 challenging questions]
```

**Interpret**:
1. Current state: BIASED (dwelling 4 exchanges, threshold 3)
2. Energy level: 65% (sustainable but declining)
3. Present diagnostic questions from question_engine
4. Recommend transition to FOCUSED state
5. Explain BIASED state dynamics from guidance.md

---

### Example 3: Ontology Validation

**User**: "Validate this ontology: [[ML]] uses [[data]] [relatedTo]\n[[ML]] has [[accuracy]] [hasAttribute]..."

**Workflow**:
```bash
# Route
python3 scripts/router.py "Validate this ontology..."
# Output: route="ontology_generation"

# Execute
python3 scripts/coordinator.py ontology_generation "validate" "[[ML]] uses [[data]]..."
# Output: validation results with errors/warnings
```

**Interpret**:
1. Report validation status
2. If errors: Explain anti-hierarchy violations ("ML dominates with 80% of statements")
3. Provide correction strategy: "Distribute relationships across multiple entity pairs"
4. If warnings: Note relation code imbalance
5. If valid: Offer to save to InfraNodus via create_knowledge_graph

---

### Example 4: Full Strategic Development

**User**: "Help me develop this 800-word article about heart rate variability for SEO."

**Workflow**:
```bash
# Route
python3 scripts/router.py "Help me develop..."
# Output: route="full_pipeline"

# Execute
python3 scripts/coordinator.py full_pipeline "develop article" "[800-word HRV article]"
# Output: patterns, mcp_requests=[develop_text_tool, generate_content_gaps]

# Invoke MCP tools
# 1. develop_text_tool → research questions, latent topics
# 2. generate_content_gaps → structural gaps

# Re-run coordinator with MCP data for gap interpretation
```

**Interpret**:
1. Present pattern analysis
2. Invoke InfraNodus MCP tools
3. Interpret gaps contextually (current state: FOCUSED → "productive expansion opportunities")
4. Present research questions
5. Recommend specific topic development
6. Provide SEO alignment suggestions (if `generate_seo_report` is used)

---

## Error Handling

**If the router errors**: Default to the "clarify" route
**If the coordinator errors**: Check the `errors` array in the output and report to the user
**If MCP tools are unavailable**: Skip MCP-dependent routes and use pattern-only analysis
**If the state file is corrupt**: The state manager auto-initializes a fresh state
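
The MCP-unavailable rule can be sketched as a simple route downgrade. The set of degradable routes below is an assumption drawn from the route tables (ontology_generation is excluded because validation runs locally and only the final `create_knowledge_graph` save needs MCP):

```python
# Routes whose output depends on InfraNodus gap analysis.
MCP_DEPENDENT = {"text_analysis", "full_pipeline"}

def degrade_route(route: str, mcp_available: bool) -> str:
    """Fall back to pattern-only analysis when InfraNodus MCP tools
    are unreachable, per the error-handling policy above."""
    if not mcp_available and route in MCP_DEPENDENT:
        return "pattern_detection_only"
    return route
```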

## Component Skill Reference

When additional context is needed beyond the programmatic output:

**Writing philosophy**: `components/writing-assistant.md`
**Ontology syntax**: `components/ontology-creator.md`
**Question templates**: `components/critical-perspective.md`
**State dynamics**: `components/cognitive-variability.md`
**Interpretive guidance**: `components/guidance.md`

## Security and State Management

**State persistence**: `conversation_state.json` in skill directory
**State reset**: Delete `conversation_state.json` to start fresh
**Module safety**: All modules validate inputs before processing
**MCP validation**: infranodus_bridge validates all parameters before tool invocation

## Performance Notes

**Programmatic advantages**:
- ~10x faster pattern detection vs manual analysis
- Deterministic state tracking across sessions
- Consistent validation (no human variability in ontology checking)
- Precise energy/dwelling calculations

**Claude advantages**:
- Natural language synthesis and explanation
- Contextual recommendation tailoring
- Creative examples and analogies
- Emotional intelligence in delivery
- MCP tool invocation and integration

## When NOT to Use This Skill

**Skip if**:
- Simple question answering (no reasoning/analysis needed)
- No text analysis, pattern detection, ontology, or cognitive diagnosis requested
- User explicitly requests different skill or approach

**Prefer this skill if**:
- User provides text for analysis/correction
- Cognitive state concerns ("stuck", "obsessing", "scattered")
- Ontology/knowledge graph generation requested
- Strategic content development needed
- InfraNodus integration relevant

## Quick Reference

```bash
# Route detection
python3 scripts/router.py "message"

# Pipeline execution
python3 scripts/coordinator.py <route> "message" "text"

# Check current state
python3 -c "from scripts.state_manager import load_state; print(load_state()['current_state'])"

# Test pattern detection
python3 scripts/pattern_detector.py

# Test ontology validation
python3 scripts/ontology_validator.py

# View module documentation
cat components/guidance.md
```

---

**Remember**: You (Claude) are the interpretive layer. The Python modules provide algorithmic precision; you provide contextual wisdom, natural language synthesis, and user-facing intelligence. Use `components/guidance.md` to ground your interpretations in the philosophical framework.

Overview

This skill is a state-aware cognitive reasoning engine that combines programmatic pattern detection, critical questioning, ontology validation, and gap analysis with InfraNodus integration. It tracks temporal cognitive states and energy levels to produce context-sensitive diagnoses and strategic recommendations. The engine pairs deterministic Python modules for analysis with Claude-style interpretive synthesis for user-facing guidance.

How this skill works

The system detects user intent and selects a pipeline that runs writing-pattern analysis, state evaluation, question generation, gap interpretation, and ontology checks as needed. When graph-based insights are required, it invokes InfraNodus tools, parses results, and merges them with programmatic outputs. Finally, it synthesizes findings into natural-language recommendations adjusted for the current cognitive state and energy metrics.

When to use it

  • You have a text to analyze for writing patterns, grammar, or cognitive signals.
  • You feel stuck, repetitive, or obsessed with a problem and want a cognitive diagnosis.
  • You need ontology validation or a knowledge-graph creation workflow.
  • You want a strategic development pass on long-form content with gap analysis.
  • You need a highly state-aware set of critical questions or intervention suggestions.

Best practices

  • Provide at least a short passage or clear intent to let the router pick an appropriate pipeline.
  • Use keywords like 'analyze', 'stuck', 'ontology', or 'develop' to trigger focused routes.
  • Include context on goals and audience for better gap interpretation and recommendations.
  • Run full-pipeline only for substantive texts (>200 words) to justify graph-based analysis.
  • Review generated diagnostic questions and apply them iteratively to shift cognitive state.

Example use cases

  • Fix grammar and reveal writing patterns for a short paragraph while flagging cognitive signals.
  • Run a cognitive diagnosis when you feel stuck to get state label, energy estimate, and diagnostic questions.
  • Validate an ontology for anti-hierarchy issues, receive corrective suggestions, and create a knowledge graph if valid.
  • Develop an 800-word article strategically: detect patterns, generate research questions, and surface content gaps using InfraNodus.
  • Trigger a critical intervention when energy is low or dwelling time is high to surface 8+ challenge questions and blind spots.

FAQ

Do I always need InfraNodus to get useful results?

No. Pattern detection, state diagnosis, and question generation work without InfraNodus; graph tools are used only when gap or ontology routes request them.

How does the skill decide which pipeline to run?

It analyzes intent and text length, matches keywords and context, and selects the route with the highest confidence before orchestrating the components.