Provides local, persistent memory for LLMs with drift detection and source-linked memories to reduce hallucinations.
Configuration
{
  "mcpServers": {
    "arkya-ai-ember-mcp": {
      "command": "ember-mcp",
      "args": ["run"]
    }
  }
}

Ember MCP provides local-first memory for large language models, enabling persistent, context-aware reasoning across sessions and clients while actively managing knowledge freshness to reduce hallucinations. It runs entirely locally, preserving privacy and allowing your AI to reference up-to-date decisions and sources without relying on cloud vector stores.
You integrate Ember MCP with your MCP clients to give your AI a long-term memory that travels across conversations and tools. Ember continuously evaluates the recency and usage of stored memories, penalizing stale information and highlighting the most relevant, current context. When you switch from one MCP client to another, the AI can pick up where you left off without requiring you to re-summarize everything. Use Ember to keep your architecture decisions, design notes, and discussion outcomes coherent across sessions.
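The source does not document how Ember combines relevance and recency internally; the following is a minimal sketch of one common way such temporal scoring can work, where semantic similarity is discounted by an exponential half-life on the memory's last-use time (the function name, half-life value, and weighting are illustrative assumptions, not Ember's API):

```python
import time

def temporal_score(similarity: float, last_used: float,
                   half_life_days: float = 30.0) -> float:
    """Discount semantic similarity by how long a memory has sat unused.

    A memory untouched for one half-life keeps 50% of its score, so at
    equal similarity, fresher context outranks stale context.
    """
    age_days = (time.time() - last_used) / 86400.0
    recency = 0.5 ** (age_days / half_life_days)  # exponential decay
    return similarity * recency

now = time.time()
print(round(temporal_score(0.8, now), 3))               # used today -> ~0.8
print(round(temporal_score(0.8, now - 90 * 86400), 3))  # 90 days idle -> ~0.1
```

With a 30-day half-life, 90 days of disuse cuts the score to one eighth, which is the "penalizing stale information" behavior described above.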
Prerequisites: ensure you have a supported operating system (macOS, Linux, or Windows with WSL) and Python 3.10 or newer installed on your machine.
1) Install Ember MCP using the installer. The installer automatically detects your MCP clients, registers Ember with each one, creates the local storage directory, and downloads the embedding model.
2) Restart your MCP client applications after installation so Ember can establish the memory bridge with each client.
3) If you want to verify the setup, run the status command to see which clients are registered and how many memories are stored.
Manual configuration is possible for MCP clients that aren't auto-detected. Use the following configuration snippet to register Ember as an MCP server for your client.
{
  "mcpServers": {
    "ember": {
      "command": "ember-mcp",
      "args": ["run"]
    }
  }
}

Ember exposes the following tools:
- Save a named memory with an importance level and optional tags
- Semantic search with temporal scoring across all memories
- Recall memories and automatically read the source files behind them
- Auto-capture key information from the conversation (facts, preferences, decisions)
- Mark an outdated memory stale and store a corrected version
- List all stored memories, optionally filtered by tag
- Remove a memory by ID
- View Voronoi cell distribution, statistics, and density
- Auto-retrieve relevant context at conversation start with temporal ranking
- Save session summaries, decisions, and next steps with source linking
- Run drift detection to flag stale memories in shifting knowledge regions
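Ember's drift detection is internal to the tool, but the idea of flagging memories in "shifting knowledge regions" can be sketched with plain cosine geometry: compare each older memory's embedding against the centroid of recently stored embeddings, and flag memories that have fallen far from where current knowledge clusters (function names, the threshold, and the toy vectors are illustrative assumptions, not Ember's implementation):

```python
import math

def centroid(vectors: list[list[float]]) -> list[float]:
    """Component-wise mean of a set of embedding vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def flag_drifted(old_memories, recent_embeddings, threshold=0.5):
    """Return IDs of old memories whose embeddings no longer align
    with the centroid of recently stored knowledge."""
    c = centroid(recent_embeddings)
    return [mem_id for mem_id, emb in old_memories if cosine(emb, c) < threshold]

# Toy 2-D embeddings: recent knowledge clusters near [1, 0];
# memory "m2" points elsewhere, so it gets flagged as stale.
recent = [[1.0, 0.0], [0.9, 0.1]]
old = [("m1", [1.0, 0.05]), ("m2", [0.0, 1.0])]
print(flag_drifted(old, recent))  # ['m2']
```

Real systems would run this over high-dimensional embeddings per region (e.g. per Voronoi cell), but the principle is the same: a memory is suspect when the knowledge around it has moved on.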