
memory skill

/atris/skills/memory

This skill helps you recall past work and decisions by querying Atris journal history and surfacing patterns and lessons learned.

npx playbooks add skill atrislabs/atris --skill memory

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.9 KB
---
name: memory
description: Search and reason over Atris journal history. Use when user asks about past work, decisions, history, or patterns. Uses RLM pattern (grep first, reason second).
version: 1.0.0
tags:
  - memory
---

# Atris Memory Skill

Search and reason over Atris journal history using the RLM pattern (grep first, reason second).

## When to Activate

User asks about:
- Past work, decisions, history
- "Remember when...", "How did we...", "Why did we..."
- Patterns, failures, lessons learned
- "What have we tried before?"
- Anything requiring historical context

## Journal Locations

```
atris/logs/YYYY/YYYY-MM-DD.md
```
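
To see which journals exist before searching, list the log tree (a quick sketch, assuming the layout above; dated filenames sort chronologically):

```bash
# List journal files oldest to newest; the last 10 are the most recent
find atris/logs -name "*.md" | sort | tail -10
```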

Structure of each journal:
- `## Inbox` - Raw ideas (I1, I2, ...)
- `## In Progress 🔄` - Active work
- `## Backlog` - Deferred work
- `## Notes` - Session summaries, brainstorms
- `## Completed ✅` - Finished work (C1, C2, ...)
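
To pull a single section out of a journal without reading the whole file, an awk range works (a minimal sketch; the date in the path is illustrative):

```bash
# Print only the "## Completed" section: start at its header,
# stop when the next "## " header begins
awk '/^## Completed/{f=1; print; next} /^## /{f=0} f' atris/logs/2025/2025-01-02.md
```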

## Search Strategy (RLM Pattern)

**Step 1: Grep first (cheap, fast)**
```bash
# Find keyword matches
grep -r "keyword" atris/logs/ --include="*.md"

# With context
grep -r -C 3 "keyword" atris/logs/ --include="*.md"

# Multiple terms
grep -r -E "auth|login|token" atris/logs/ --include="*.md"
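
# Scope to a time window via the path layout (the year directory is illustrative)
grep -r "keyword" atris/logs/2025/ --include="*.md"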
```

**Step 2: If few matches (< 10), read directly**
- Use Read tool on matching files
- Synthesize answer yourself
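
To gauge volume before committing context to reads, count the matching files first (a small sketch):

```bash
# Fewer than ~10 matching files favors reading them directly
grep -rl "keyword" atris/logs/ --include="*.md" | wc -l
```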

**Step 3: If many matches (10+), use a subagent**
```
Task(haiku): "Analyze these journal entries and find patterns related to [query]:
[paste relevant grep results]"
```

**Step 4: For complex synthesis**
- Chunk results by time period or topic
- Spawn multiple haiku subagents
- Aggregate findings
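
One way to produce the chunks is to split the grep output into fixed-size pieces before handing each to a subagent (a sketch; the chunk size and temp paths are arbitrary):

```bash
# Collect matches with context, then split into ~50-line chunks
grep -r -C 3 "keyword" atris/logs/ --include="*.md" > /tmp/matches.txt
split -l 50 /tmp/matches.txt /tmp/chunk-
# Feed each /tmp/chunk-* file to its own Task(haiku), then aggregate
```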

## Example Flows

### Simple: "When did we add feature X?"
```
1. grep -r "feature X" atris/logs/
2. Read the matching file
3. Answer: "Added on 2025-01-02, see C3 in that day's journal"
```

### Medium: "What auth issues have we had?"
```
1. grep -r -E "auth|login|token|credential" atris/logs/
2. Found 15 matches across 8 files
3. Read the 3 most recent matches
4. Task(haiku): "Categorize these auth-related entries: [entries]"
5. Synthesize into answer
```
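
Because journal filenames are dated, a lexical sort of the matching paths gives chronological order, so the most recent matches are one pipe away (a sketch):

```bash
# The three most recent files containing auth-related matches
grep -rl -E "auth|login|token|credential" atris/logs/ --include="*.md" | sort | tail -3
```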

### Complex: "Why do reviews keep failing?"
```
1. grep -r -E "fail|❌|reject|REVIEW" atris/logs/
2. Found 30+ matches
3. Task(haiku): "What are the failure reasons in: [chunk 1]"
4. Task(haiku): "What are the failure reasons in: [chunk 2]"
5. Aggregate: "78% missing tests, 22% outdated MAP.md"
```
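
Before spawning subagents, a rough keyword tally can hint at the dominant failure modes (a heuristic sketch, not a substitute for the semantic pass):

```bash
# Count how often each failure keyword appears across all journals
grep -rhoE "fail|reject|block" atris/logs/ --include="*.md" | sort | uniq -c | sort -rn
```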

## Key Patterns to Search

| Looking for | Grep pattern |
|-------------|--------------|
| Completed work | `Completed\|✅\|C[0-9]+:` |
| Failures | `fail\|❌\|reject\|block` |
| Decisions | `decided\|decision\|chose\|pivot` |
| Ideas | `Inbox\|I[0-9]+:\|idea\|maybe` |
| Technical debt | `debt\|todo\|hack\|fixme\|refactor` |
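
The patterns are extended regexes (the `\|` is markdown table escaping), so use them with `grep -E`. For example, to surface completed work:

```bash
# Completed-work entries across all journals
grep -r -E "Completed|✅|C[0-9]+:" atris/logs/ --include="*.md"
```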

## Cost Efficiency

- Grep: Free, instant
- Read: Counts against context, use sparingly
- Task(haiku): ~$0.001, use for semantic analysis
- Task(sonnet): ~$0.01, use only if haiku insufficient

Always grep first. Only escalate to LLM when you need reasoning, not retrieval.

Overview

This skill searches and reasons over the Atris journal history to answer questions about past work, decisions, and patterns. It uses a cost-conscious RLM pattern: grep first to locate relevant entries, then apply lightweight reasoning or subagents for synthesis. Use it whenever historical context or lessons learned are needed.

How this skill works

First, perform fast local greps across atris/logs/YYYY/YYYY-MM-DD.md to surface candidate entries and context lines. If matches are few, read the files directly and synthesize an answer. If matches are many, spawn small semantic subagents (haiku) to categorize or summarize chunks, then aggregate the results into concise findings and actionable insights.

When to use it

  • User asks “Remember when…”, “How did we…”, or “Why did we…?” about past work
  • Investigating patterns, repeated failures, or lessons learned
  • Checking whether a feature or fix was implemented and when
  • Reviewing past decisions or rationale for pivots
  • Preparing retrospective summaries or action lists from historical logs

Best practices

  • Always grep first to reduce expensive reads and LLM calls
  • If fewer than ~10 matches, read the matching files directly and synthesize
  • For 10+ matches, chunk results by time or topic and use haiku subagents to summarize
  • Prioritize recent entries, then fill gaps with targeted reads of older logs
  • Use well-chosen grep patterns (see patterns for completed work, failures, decisions)

Example use cases

  • Find when feature X was added and link to the completed entry in the daily journal
  • Summarize recurring auth issues across multiple logs and quantify root causes
  • Explain why a particular design decision was made, citing the journal rationale
  • Aggregate notes from a multi-day incident to produce a concise postmortem
  • List previously tried approaches to a problem so the team avoids repeating work

FAQ

What does RLM mean and why use it?

RLM is grep first, then reason. It minimizes costly reads and LLM usage by locating relevant text cheaply before applying semantic analysis.

When should I spawn subagents?

Spawn haiku subagents when grep returns many matches (roughly 10+). Chunk by time or topic and summarize each chunk before aggregating.