
reasoningbank-intelligence skill


This skill enables adaptive learning and meta-cognitive reasoning to improve self-learning agents, optimize workflows, and continuously refine strategies over time.

This is most likely a fork of the reasoningbank-intelligence skill from chrislemke.
npx playbooks add skill microck/ordinary-claude-skills --skill reasoningbank-intelligence

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md (4.7 KB)
---
name: "ReasoningBank Intelligence"
description: "Implement adaptive learning with ReasoningBank for pattern recognition, strategy optimization, and continuous improvement. Use when building self-learning agents, optimizing workflows, or implementing meta-cognitive systems."
---

# ReasoningBank Intelligence

## What This Skill Does

Implements ReasoningBank's adaptive learning system so AI agents can learn from experience, recognize patterns, and optimize strategies over time. Enables meta-cognitive capabilities and continuous improvement.

## Prerequisites

- agentic-flow v1.5.11+
- AgentDB v1.0.4+ (for persistence)
- Node.js 18+

## Quick Start

```typescript
import { ReasoningBank } from 'agentic-flow/reasoningbank';

// Initialize ReasoningBank
const rb = new ReasoningBank({
  persist: true,
  learningRate: 0.1,
  adapter: 'agentdb' // Use AgentDB for storage
});

// Record task outcome
await rb.recordExperience({
  task: 'code_review',
  approach: 'static_analysis_first',
  outcome: {
    success: true,
    metrics: {
      bugs_found: 5,
      time_taken: 120,
      false_positives: 1
    }
  },
  context: {
    language: 'typescript',
    complexity: 'medium'
  }
});

// Get optimal strategy
const strategy = await rb.recommendStrategy('code_review', {
  language: 'typescript',
  complexity: 'high'
});
```

## Core Features

### 1. Pattern Recognition
```typescript
// Learn patterns from data
await rb.learnPattern({
  pattern: 'api_errors_increase_after_deploy',
  triggers: ['deployment', 'traffic_spike'],
  actions: ['rollback', 'scale_up'],
  confidence: 0.85
});

// Match learned patterns against the current situation
// (currentSituation: whatever contextual signals your agent has gathered)
const matches = await rb.matchPatterns(currentSituation);
```

### 2. Strategy Optimization
```typescript
// Compare strategies
const comparison = await rb.compareStrategies('bug_fixing', [
  'tdd_approach',
  'debug_first',
  'reproduce_then_fix'
]);

// Get best strategy
const best = comparison.strategies[0];
console.log(`Best: ${best.name} (score: ${best.score})`);
```

### 3. Continuous Learning
```typescript
// Enable auto-learning from all tasks
await rb.enableAutoLearning({
  threshold: 0.7,        // Only learn from high-confidence outcomes
  updateFrequency: 100   // Update models every 100 experiences
});
```

## Advanced Usage

### Meta-Learning
```typescript
// Learn about learning
await rb.metaLearn({
  observation: 'parallel_execution_faster_for_independent_tasks',
  confidence: 0.95,
  applicability: {
    task_types: ['batch_processing', 'data_transformation'],
    conditions: ['tasks_independent', 'io_bound']
  }
});
```

### Transfer Learning
```typescript
// Apply knowledge from one domain to another
await rb.transferKnowledge({
  from: 'code_review_javascript',
  to: 'code_review_typescript',
  similarity: 0.8
});
```

### Adaptive Agents
```typescript
// Create self-improving agent
class AdaptiveAgent {
  async execute(task: Task) {
    // Get optimal strategy
    const strategy = await rb.recommendStrategy(task.type, task.context);

    // Execute with strategy
    const result = await this.executeWithStrategy(task, strategy);

    // Learn from outcome
    await rb.recordExperience({
      task: task.type,
      approach: strategy.name,
      outcome: result,
      context: task.context
    });

    return result;
  }
}
```

## Integration with AgentDB

```typescript
// Persist ReasoningBank data
await rb.configure({
  storage: {
    type: 'agentdb',
    options: {
      database: './reasoning-bank.db',
      enableVectorSearch: true
    }
  }
});

// Query learned patterns
const patterns = await rb.query({
  category: 'optimization',
  minConfidence: 0.8,
  timeRange: { last: '30d' }
});
```

## Performance Metrics

```typescript
// Track learning effectiveness
const metrics = await rb.getMetrics();
console.log(`
  Total Experiences: ${metrics.totalExperiences}
  Patterns Learned: ${metrics.patternsLearned}
  Strategy Success Rate: ${metrics.strategySuccessRate}
  Improvement Over Time: ${metrics.improvement}
`);
```

## Best Practices

1. **Record consistently**: Log all task outcomes, not just successes (see the sketch after this list)
2. **Provide context**: Rich context improves pattern matching
3. **Set thresholds**: Filter low-confidence learnings
4. **Review periodically**: Audit learned patterns for quality
5. **Use vector search**: Enable semantic pattern matching
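
A minimal sketch combining practices 1-3, using only the `recordExperience` and `enableAutoLearning` calls shown above; the shape of a failed outcome (`success: false` plus the same metrics object) is an assumption that mirrors the success example:

```typescript
import { ReasoningBank } from 'agentic-flow/reasoningbank';

const rb = new ReasoningBank({ persist: true, learningRate: 0.1, adapter: 'agentdb' });

// Practice 3: ignore low-confidence outcomes so noisy runs don't skew the models
await rb.enableAutoLearning({ threshold: 0.7, updateFrequency: 100 });

// Practices 1-2: record a failed run with the same structure and rich context as a success
await rb.recordExperience({
  task: 'code_review',
  approach: 'debug_first',
  outcome: {
    success: false, // assumed to mirror the success example above
    metrics: { bugs_found: 0, time_taken: 300, false_positives: 4 }
  },
  context: {
    language: 'typescript',
    complexity: 'high',
    repository_size: 'large' // extra context fields are illustrative
  }
});
```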

## Troubleshooting

### Issue: Poor recommendations
**Solution**: Ensure sufficient training data (100+ experiences per task type)

### Issue: Slow pattern matching
**Solution**: Enable vector indexing in AgentDB

### Issue: Memory growing large
**Solution**: Set TTL for old experiences or enable pruning
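
One possible retention setup, sketched on top of the `configure` call shown earlier; the `retention` option and its keys are assumptions, not a documented API, so check agentic-flow/src/reasoningbank/README.md for the real configuration names:

```typescript
// Hypothetical retention settings -- key names are illustrative assumptions
await rb.configure({
  storage: {
    type: 'agentdb',
    options: { database: './reasoning-bank.db', enableVectorSearch: true }
  },
  retention: {
    ttl: '90d',                 // drop raw experiences older than 90 days
    pruneBelowConfidence: 0.3,  // discard learnings that never gained confidence
    maxExperiences: 50_000      // hard cap to bound database size
  }
});
```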

## Learn More

- ReasoningBank Guide: agentic-flow/src/reasoningbank/README.md
- AgentDB Integration: packages/agentdb/docs/reasoningbank.md
- Pattern Learning: docs/reasoning/patterns.md

Overview

This skill implements ReasoningBank's adaptive learning system to give agents pattern recognition, strategy optimization, and continuous improvement. It provides persistent learning, meta-cognitive capabilities, and tools to recommend and refine strategies based on past outcomes. Use it to build self-improving agents and workflows that adapt from real experience.

How this skill works

ReasoningBank records structured experiences (task, approach, outcome, context) and uses those observations to learn patterns and score strategies. It supports pattern matching, strategy comparison, meta-learning, and transfer learning, and can persist knowledge to AgentDB or similar stores. Agents request recommended strategies, execute them, and feed results back for continuous updating.
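
The loop described above, condensed into a sketch built from the `recommendStrategy` and `recordExperience` calls in SKILL.md; `runTask` is a placeholder for whatever actually executes the work in your agent:

```typescript
// One iteration of the recommend -> execute -> record loop
const context = { language: 'typescript', complexity: 'high' };
const strategy = await rb.recommendStrategy('code_review', context);

const result = await runTask(strategy); // placeholder for your own executor

await rb.recordExperience({
  task: 'code_review',
  approach: strategy.name,
  outcome: result, // { success, metrics } as in the Quick Start example
  context
});
```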

When to use it

  • Building self-learning agents that must improve from past runs
  • Optimizing multi-step workflows where strategy choice affects outcomes
  • Implementing meta-cognitive systems that reason about learning itself
  • Transferring insights across related tasks or domains
  • When you need persistent, queryable models of past experiences

Best practices

  • Record all task outcomes consistently, including failures and partial successes
  • Provide rich context (task metadata, environment, complexity) to improve matching
  • Set confidence and learning thresholds to avoid learning from noisy outcomes
  • Enable vector indexing or semantic search in storage for faster and better pattern matches
  • Periodically audit high-impact patterns and strategies to prevent drift

Example use cases

  • A code-review agent that learns which inspection sequence finds the most bugs for each language and complexity
  • An operations agent that recognizes post-deploy error patterns and suggests rollback or scaling actions
  • A data pipeline controller that transfers optimizations from one dataset type to another
  • A debugging assistant that compares multiple repair strategies and recommends the highest-scoring approach
  • A batch processing system that learns when parallel execution outperforms sequential runs

FAQ

How much data is needed for reliable recommendations?

Aim for 100+ experiences per task type for robust strategy scoring; fewer examples may still help, but expect lower confidence.

How do I keep storage from growing without bound?

Use TTL, pruning policies, or summarize older experiences into aggregated patterns to reduce storage while retaining signal.