
engram skill


This skill enables semantic search over a local Markdown knowledge base, using embeddings to surface contextually relevant results.

npx playbooks add skill openclaw/skills --skill engram

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
---
name: engram
description: Provides semantic search for a local knowledge base using Pinecone and Gemini embeddings.
---

# 🧠 Engram - Semantic Search Skill

This skill enables an AI agent to perform semantic searches on a local folder of Markdown files (e.g., an Obsidian vault). It finds information based on the meaning and context of a query, not just exact keywords.

## Tools

### engram_search

Searches the indexed knowledge base.

-   **`query`** (string, required): The natural language question to ask.
-   **`top_k`** (number, optional): The number of results to return.
-   **`min_score`** (number, optional): The minimum relevance score (0.0 to 1.0) for results.

### engram_index

Builds or updates the search index from the local Markdown files. This tool should be run periodically to keep the search memory synchronized.

## Author

-   **Andrie Wijaya** ([@Anwitch](https://github.com/Anwitch))

Overview

This skill provides semantic search over a local knowledge base of Markdown files using Pinecone and Gemini embeddings. It turns a folder of notes (for example an Obsidian vault) into a searchable memory that finds conceptually relevant passages, not just keyword matches. Use it to surface context, related notes, and concise excerpts for agent reasoning or user queries.

How this skill works

The skill indexes Markdown files by embedding their content with Gemini and storing vectors in Pinecone. It exposes two tools: one to build or update the index from your local folder, and another to run natural-language queries that return top-k results with relevance scores. Results are ranked by semantic similarity, and you can filter by minimum score or limit the number of returned passages.
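The ranking step described above can be illustrated with a minimal sketch. This is not the skill's actual implementation (which delegates embedding to Gemini and vector storage to Pinecone); it only shows, with plain cosine similarity over toy vectors, how `top_k` and `min_score` shape the result list.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank(query_vec, docs, top_k=3, min_score=0.2):
    """docs: list of (text, vector) pairs.
    Returns up to top_k (text, score) pairs with score >= min_score,
    sorted by descending similarity -- mirroring engram_search's options."""
    scored = [(text, cosine(query_vec, vec)) for text, vec in docs]
    scored = [(t, s) for t, s in scored if s >= min_score]
    scored.sort(key=lambda ts: ts[1], reverse=True)
    return scored[:top_k]
```

In the real skill the vectors come from Gemini embeddings and the similarity search runs inside Pinecone, but the filtering semantics are the same: `top_k` caps the result count and `min_score` drops weak matches.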

When to use it

  • You need to search a personal knowledge base or project notes by meaning rather than exact words.
  • You want an agent to reference local documentation, design notes, or archived content during tasks.
  • You want fast retrieval of related passages to support context-aware replies or summarization.
  • You need to keep a searchable backup or archive of Markdown content for offline use.
  • You want to augment an assistant with long-term memory sourced from local files.

Best practices

  • Run the index tool after major edits or periodically to keep embeddings synchronized with file changes.
  • Split long documents into logical chunks (headings or sections) so results are granular and focused.
  • Set a sensible top_k (3–10) to balance relevance and response length for agent consumption.
  • Tune min_score to filter out weak matches; start around 0.2–0.3 and adjust based on quality.
  • Exclude or tag sensitive files before indexing to avoid unintended retrieval.
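The chunking advice above can be sketched as a small helper. This is a hypothetical pre-processing step, not part of the skill itself: it splits a Markdown document at headings so each embedded chunk stays granular and keeps its section title for context.

```python
import re

def chunk_markdown(text):
    """Split a Markdown document into chunks at ATX headings (#, ##, ...).
    Each chunk keeps its heading line so the embedding carries section context."""
    chunks = []
    current = []
    for line in text.splitlines():
        # Start a new chunk whenever a heading begins and we have content buffered.
        if re.match(r"^#{1,6} ", line) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return [c for c in chunks if c]
```

Chunking before indexing means a query returns the relevant section of a long note rather than the whole file.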

Example use cases

  • Ask an agent to locate design decisions across archived project notes and return the supporting passages.
  • Search meeting notes to find action items and relevant context for follow-up tasks.
  • Provide a writer with related research snippets from a personal library of Markdown drafts.
  • Enable a chatbot to cite local policy documents or SOPs when answering compliance questions.
  • Create an offline searchable archive of all versions of notes for audit and recovery purposes.

FAQ

How do I keep the index up to date?

Run the index tool after changes or schedule periodic updates; it builds or refreshes embeddings from the local Markdown folder.

What do top_k and min_score control?

top_k limits the number of returned results; min_score filters results below a relevance threshold to remove weak matches.

Can I exclude files from indexing?

Yes. Remove or tag files you don't want included before running the index tool so they are not embedded or stored.
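One way to implement the tagging approach above is a glob-based exclusion filter applied before indexing. The pattern list and function below are hypothetical illustrations; the skill may use a different exclusion mechanism.

```python
from fnmatch import fnmatch

# Hypothetical exclusion patterns (vault-relative paths).
EXCLUDE_PATTERNS = ["private/*", "*.secret.md", "drafts/*"]

def should_index(path, patterns=EXCLUDE_PATTERNS):
    """Return True if a vault-relative file path should be embedded and stored."""
    return not any(fnmatch(path, pattern) for pattern in patterns)
```

Running such a filter over the file list before calling the index tool keeps sensitive notes out of the vector store entirely.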