
memory-manager skill

/.claude/skills/memory-manager

This skill optimizes agent memory usage, manages context windows, and archives conversations to improve recall and responsiveness.

npx playbooks add skill dexploarer/hyper-forge --skill memory-manager

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.9 KB
---
name: memory-manager
description: Manage elizaOS agent memory, context windows, and conversation history. Triggers on "manage memory", "optimize context", or "handle agent memory"
allowed-tools: [Read, Edit, Bash]
---

# Memory Manager Skill

Optimize agent memory usage, implement pruning strategies, and manage conversation context effectively.

## Capabilities

1. 🧠 Memory pruning and optimization
2. 📊 Context window management
3. 🗂️ Conversation history archiving
4. 🎯 Important memory consolidation
5. 🔄 Memory decay implementation
6. 📈 Memory usage monitoring

## Memory Types

### Short-term Memory
- Current conversation context
- Working memory (default limit of 50 items)
- Cleared per session

### Long-term Memory
- Important facts and information
- Persistent across sessions
- Decay modeling over time

### Knowledge
- Static facts from configuration
- Document-based knowledge
- Dynamically learned information
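
The shape below is a minimal sketch of how these three types can be modeled as metadata on a memory record. The field names (`memoryType`, `importance`, `lastAccessed`) are illustrative assumptions for this skill, not the canonical elizaOS `Memory` type.

```typescript
// Illustrative record shape only; fields beyond content/embedding are
// assumptions used by this skill, not part of the elizaOS core types.
type MemoryImportance = 'low' | 'medium' | 'high';

interface ManagedMemory {
  id: string;
  entityId: string;
  roomId: string;
  content: { text: string; metadata?: Record<string, unknown> };
  embedding?: number[];
  createdAt: number;                // epoch ms; drives time-based pruning and decay
  lastAccessed?: number;            // epoch ms; drives recency scoring
  importance?: MemoryImportance;    // guides pruning and consolidation
  memoryType: 'short_term' | 'long_term' | 'knowledge';
}
```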

## Memory Operations

```typescript
// Create memory
const text = 'Important information';
await runtime.createMemory({
  entityId: userId,
  roomId: conversationId,
  content: {
    text,
    metadata: { importance: 'high' }
  },
  embedding: await generateEmbedding(text)
});

// Retrieve memories
const memories = await runtime.getMemories({
  roomId: conversationId,
  limit: 10,
  unique: true
});

// Search semantically
const results = await runtime.searchMemories(
  'query text',
  {
    roomId: conversationId,
    limit: 5,
    minScore: 0.7
  }
);

// Update memory (e.g. refresh content and access metadata)
await runtime.updateMemory({
  id: memoryId,
  content: { ...updatedContent },
  metadata: { lastAccessed: Date.now() }
});
```

## Pruning Strategies

### Time-based Pruning
```typescript
async function pruneOldMemories(
  runtime: IAgentRuntime,
  daysToKeep: number = 30
): Promise<number> {
  const cutoffDate = Date.now() - (daysToKeep * 24 * 60 * 60 * 1000);

  const oldMemories = await runtime.getMemories({
    createdBefore: cutoffDate,
    importance: 'low'
  });

  for (const memory of oldMemories) {
    await runtime.deleteMemory(memory.id);
  }

  return oldMemories.length;
}
```
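
Pruning is most useful when it runs on a schedule rather than ad hoc. Below is a minimal sketch that reuses `pruneOldMemories` from above in a long-lived Node.js process; the daily interval is an assumption, and a real deployment might prefer a proper job scheduler.

```typescript
// Sketch: run time-based pruning once a day (interval is illustrative).
const DAY_MS = 24 * 60 * 60 * 1000;

function schedulePruning(runtime: IAgentRuntime): NodeJS.Timeout {
  return setInterval(async () => {
    const removed = await pruneOldMemories(runtime, 30);
    console.log(`[memory-manager] pruned ${removed} low-importance memories`);
  }, DAY_MS);
}
```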

### Size-based Pruning
```typescript
async function pruneLargeMemories(
  runtime: IAgentRuntime,
  maxSize: number = 1000
): Promise<void> {
  const memories = await runtime.getMemories({ limit: 10000 });

  if (memories.length > maxSize) {
    // Keep the most important and recent; delete the rest by id
    const keepIds = new Set(
      rankMemoriesByImportance(memories).slice(0, maxSize).map(m => m.id)
    );
    const toDelete = memories.filter(m => !keepIds.has(m.id));

    for (const memory of toDelete) {
      await runtime.deleteMemory(memory.id);
    }
  }
}
```
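
The `rankMemoriesByImportance` helper above is not defined by the skill. One possible sketch, assuming each memory carries `content.metadata.importance` (as in the `createMemory` example) and a `createdAt` timestamp:

```typescript
// Hypothetical helper for pruneLargeMemories: rank by importance metadata
// first, then by recency. Field names mirror the examples above and are
// not guaranteed by the elizaOS Memory type.
function rankMemoriesByImportance(memories: Memory[]): Memory[] {
  const importanceScore = (m: Memory): number => {
    const metadata = (m.content as { metadata?: { importance?: string } }).metadata;
    return metadata?.importance === 'high' ? 2
      : metadata?.importance === 'medium' ? 1
      : 0;
  };

  // Sort by importance, then newest first.
  return [...memories].sort(
    (a, b) =>
      importanceScore(b) - importanceScore(a) ||
      (b.createdAt ?? 0) - (a.createdAt ?? 0)
  );
}
```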

## Best Practices

1. Set appropriate `conversationLength` limits
2. Implement importance scoring
3. Use memory decay for temporal relevance (see the sketch after this list)
4. Archive important conversations
5. Monitor memory growth
6. Prune regularly
7. Use embeddings for semantic search
8. Cache frequently accessed memories
9. Batch memory operations
10. Index memories properly
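
A minimal sketch of practice 3 (memory decay), assuming an exponential half-life applied to an importance score; the half-life and the scoring scale are illustrative, not part of the elizaOS API.

```typescript
// Exponential decay for temporal relevance: a memory's effective score
// halves every `halfLifeDays`. Constants and field names are assumptions.
function decayedScore(
  baseImportance: number,      // e.g. low = 1, medium = 2, high = 3
  createdAt: number,           // epoch ms
  halfLifeDays: number = 14,
  now: number = Date.now()
): number {
  const ageDays = (now - createdAt) / (24 * 60 * 60 * 1000);
  return baseImportance * Math.pow(0.5, ageDays / halfLifeDays);
}

// Example: a 'high' (3) memory from 28 days ago with a 14-day half-life
// scores 3 * 0.25 = 0.75, ranking below a fresh 'low' (1) memory.
```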

Overview

This skill manages elizaOS agent memory, conversation history, and context windows to keep agents responsive and relevant. It provides pruning strategies, context-window tuning, and tools for archiving and consolidating important facts. The goal is to prevent context overload while preserving high-value long-term information.

How this skill works

The skill inspects short-term, long-term, and knowledge memory types and applies configurable operations: create, retrieve, search, update, and delete. It runs pruning routines (time-based and size-based), applies decay models for temporal relevance, and ranks memories by importance and recency. It also exposes monitoring hooks and embedding-based semantic search for efficient retrieval.

When to use it

  • When conversation context grows beyond model limits
  • Before sending prompts to ensure only relevant memories are included
  • To archive or consolidate long-lived user facts
  • When memory store costs or latency increase
  • To enforce data retention or compliance policies

Best practices

  • Define clear short-term vs long-term memory policies and limits
  • Score memories for importance and last-access to guide pruning
  • Use time-based decay to lower relevance of old transient items
  • Batch memory operations to reduce I/O and cost
  • Index and embed memories for semantic search and fast retrieval
  • Monitor memory growth and automate scheduled pruning

Example use cases

  • Trim a game-dev chat history to the most relevant 50 items before generating assets
  • Automatically archive completed design conversations into long-term storage with importance tags
  • Prune low-importance memories older than 30 days to control storage and context window size
  • Consolidate repeated user preferences into a single high-importance long-term memory
  • Run periodic audits that report memory count, average age, and top-k important memories

FAQ

How do time-based and size-based pruning differ?

Time-based pruning removes memories older than a cutoff, often filtered by low importance. Size-based pruning enforces a cap by keeping the most important and recent items and deleting the rest.

How can I preserve critical facts while pruning?

Assign importance metadata and consolidate duplicates into a single canonical memory; mark those as persistent or exempt from automated pruning.
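
As a rough sketch of that consolidation step, reusing the runtime calls shown earlier in this skill (`searchMemories`, `createMemory`, `deleteMemory`); the similarity threshold and the `persistent` flag are assumptions rather than elizaOS-defined options:

```typescript
// Sketch: merge near-duplicate memories about a fact into one canonical,
// high-importance memory. The minScore threshold and `persistent` flag
// are illustrative.
async function consolidateFact(
  runtime: IAgentRuntime,
  entityId: string,
  roomId: string,
  factText: string
): Promise<void> {
  // Find near-duplicates by semantic similarity.
  const duplicates = await runtime.searchMemories(factText, {
    roomId,
    limit: 20,
    minScore: 0.9
  });

  // Write one canonical memory, marked exempt from automated pruning.
  await runtime.createMemory({
    entityId,
    roomId,
    content: {
      text: factText,
      metadata: { importance: 'high', persistent: true }
    },
    embedding: await generateEmbedding(factText)
  });

  // Remove the redundant copies.
  for (const memory of duplicates) {
    await runtime.deleteMemory(memory.id);
  }
}
```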