---
name: neural-training
description: >
  Neural pattern training with SONA (Self-Optimizing Neural Architecture), MoE (Mixture of Experts), and EWC++ for knowledge consolidation.
  Use when: pattern learning, model optimization, knowledge transfer, adaptive routing.
  Skip when: simple tasks, no learning required, one-off operations.
---
# Neural Training Skill
## Purpose
Train and optimize neural patterns using SONA, MoE, and EWC++ systems.
## When to Trigger
- Training new patterns
- Optimizing agent routing
- Knowledge consolidation
- Pattern recognition tasks
## Intelligence Pipeline
1. **RETRIEVE** — Fetch relevant patterns via HNSW (150x-12,500x faster)
2. **JUDGE** — Evaluate outcomes with success/failure verdicts
3. **DISTILL** — Extract key learnings via LoRA
4. **CONSOLIDATE** — Prevent catastrophic forgetting via EWC++
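The four stages can be sketched end to end in a few lines. This is a conceptual, pure-Python illustration with made-up toy patterns and a hard-coded verdict; it is not the claude-flow implementation (which uses HNSW indexing, LoRA adapters, and EWC++ internally):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# RETRIEVE: nearest stored pattern by similarity
# (HNSW replaces this brute-force scan at scale)
patterns = {
    "refactor": [1.0, 0.0, 0.2],
    "test":     [0.1, 1.0, 0.0],
}
query = [0.9, 0.1, 0.1]
best = max(patterns, key=lambda k: cosine(query, patterns[k]))

# JUDGE: attach a success/failure verdict to the outcome
verdict = "success"

# DISTILL: keep only judged-successful patterns for reuse
distilled = {best: patterns[best]} if verdict == "success" else {}

# CONSOLIDATE: merge into the long-term store (EWC++ would
# additionally protect weights important to earlier tasks)
long_term = {}
long_term.update(distilled)
print(best)  # → refactor
```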
## Components
| Component | Purpose | Performance |
|-----------|---------|-------------|
| SONA | Self-optimizing adaptation | <0.05ms |
| MoE | Expert routing | 8 experts |
| HNSW | Pattern search | 150x-12,500x |
| EWC++ | Prevent forgetting | Continuous |
| Flash Attention | Speed | 2.49x-7.47x |
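The MoE routing row above can be illustrated with top-k gating over 8 experts. The router logits here are hypothetical stand-ins (a trained router produces them from the input); this is a sketch of the mechanism, not the skill's actual router:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical router logits for one input across 8 experts
logits = [0.1, 2.0, 0.3, 1.5, -0.5, 0.0, 0.2, 0.4]
gates = softmax(logits)

# Top-2 routing: dispatch the input to the two highest-gate experts
top2 = sorted(range(len(gates)), key=lambda i: gates[i], reverse=True)[:2]
print(top2)  # → [1, 3]
```

Top-k gating keeps inference cheap: only the selected experts run, while the gate values weight their outputs.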
## Commands
### Train Patterns
```bash
npx claude-flow neural train --model-type moe --epochs 10
```
### Check Status
```bash
npx claude-flow neural status
```
### View Patterns
```bash
npx claude-flow neural patterns --type all
```
### Predict
```bash
npx claude-flow neural predict --input "task description"
```
### Optimize
```bash
npx claude-flow neural optimize --target latency
```
## Best Practices
1. Use pretrain hook for batch learning
2. Store successful patterns after completion
3. Consolidate regularly to prevent forgetting
4. Route based on task complexity
## FAQ
**Can I use this for single, one-off tasks?**
Skip this skill for simple one-off tasks; it excels when ongoing learning, routing optimization, or knowledge transfer is required.
**How does EWC++ help in continual learning?**
EWC++ penalizes changes to parameters that were important to previous tasks, allowing new learning while retaining past knowledge and reducing catastrophic forgetting.
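The core of that penalty fits in a few lines. The Fisher-information importance weights and parameter values below are assumed toy numbers chosen for illustration, not output of a real training run:

```python
# EWC-style quadratic penalty:
#   loss += (lambda / 2) * sum_i F_i * (theta_i - theta_star_i)^2
# F_i estimates how important parameter i was to previous tasks,
# so important parameters are expensive to move.
fisher    = [0.9, 0.1, 0.5]   # hypothetical importance weights
theta_old = [1.0, -2.0, 0.3]  # parameters after the previous task
theta_new = [1.2, -1.0, 0.3]  # parameters proposed by new training
lam = 2.0

penalty = 0.5 * lam * sum(
    f * (tn - to) ** 2
    for f, to, tn in zip(fisher, theta_old, theta_new)
)
print(round(penalty, 3))  # → 0.136
```

Note how the large move on the second parameter contributes little (low importance 0.1), while the small move on the first parameter is weighted heavily (importance 0.9).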