Mimir provides persistent, graph-based memory and task management for AI agents, with MCP API access and semantic search.
Configuration
```json
{
  "mcpServers": {
    "orneryd-mimir": {
      "url": "http://localhost:9042/mcp",
      "headers": {
        "MIMIR_LLM_API": "http://copilot-api:4141",
        "NEO4J_PASSWORD": "password",
        "HOST_WORKSPACE_ROOT": "~/src",
        "MIMIR_DEFAULT_MODEL": "gpt-4.1",
        "MIMIR_DEFAULT_PROVIDER": "openai",
        "MIMIR_EMBEDDINGS_MODEL": "bge-m3",
        "MIMIR_EMBEDDINGS_DIMENSIONS": "1024"
      }
    }
  }
}
```

You run an MCP server that gives AI assistants a persistent memory graph and tools to store, retrieve, and manipulate knowledge. It lets your agents remember tasks, relate them to files and concepts, and perform retrieval-augmented actions with a built-in API you can call from your favorite AI client.
You interact with the Mimir MCP server by connecting an AI agent (such as Claude or ChatGPT) to the local MCP endpoint. Start a session, then request actions like creating tasks, adding context, indexing files, or performing semantic searches. The server exposes a structured set of tools: memory operations to manage graph nodes and relationships, file indexing to bring code into the graph, and vector search to locate relevant context by meaning. You can guide the agent to perform multi-step workflows, coordinate between agents, and retrieve context for informed responses.
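Under the hood, MCP tool invocations are JSON-RPC 2.0 `tools/call` requests posted to the endpoint. As a rough sketch (the tool name `create_node` and its arguments below are hypothetical placeholders, not confirmed Mimir tool identifiers), a client request might be assembled like this:

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 envelope for an MCP tools/call invocation."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool name and arguments, for illustration only.
request = build_tool_call("create_node", {"type": "task", "title": "Refactor parser"})
payload = json.dumps(request)  # POST this body to http://localhost:9042/mcp
```

In practice your AI client builds and sends these requests for you; the sketch just shows the wire format a session negotiates.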
```shell
# Prerequisites
# Install Docker Desktop, Node.js 18+, and Git on your machine
# Then follow these steps to deploy the Mimir MCP server locally

# 1. Clone the repository
git clone https://github.com/orneryd/Mimir.git
cd Mimir

# 2. Copy the environment template
cp env.example .env

# 3. Start all services (automatic platform detection)
npm run start

# Or manually using Docker Compose
# docker compose up -d
```

Configure workspace access (the only required setting):

```shell
# Your main source code directory (default: ~/src)
# The tilde (~) expands automatically
HOST_WORKSPACE_ROOT=~/src
```

The runtime environment is controlled via environment variables. At minimum, you typically configure the Neo4j password and your workspace root. You can also tailor the LLM provider, embedding model, and various API endpoints to fit your setup.
To start, bring up all services and then verify the health endpoints. After startup you can open the web UI at http://localhost:9042 and the Neo4j Browser at http://localhost:7474.
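A small polling helper can stand in for manual health checks while services come online. This is an illustrative sketch; it polls the base URL, since a dedicated `/health` path is not documented here:

```python
import time
import urllib.request
import urllib.error

def wait_for_service(url, timeout=60.0, interval=2.0):
    """Poll a URL until it answers with HTTP 200 or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after a short pause
        time.sleep(interval)
    return False

# e.g. wait_for_service("http://localhost:9042")  # web UI
#      wait_for_service("http://localhost:7474")  # Neo4j Browser
```

This is useful in scripts because Neo4j can take noticeably longer than the other containers to accept connections.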
Files from your workspace can be indexed to build a searchable knowledge graph. You can add folders to index, list indexed folders, and remove folders as needed. Embeddings can be enabled to support semantic search.
Indexing respects your .gitignore and processes files into chunks suitable for embedding and graph storage. You can monitor indexing progress in the logs.
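To illustrate the kind of chunking involved (a generic sketch, not Mimir's actual implementation), a file can be split into fixed-size, line-aligned chunks before embedding:

```python
def chunk_lines(text, max_chars=1000):
    """Split text into chunks of at most max_chars, breaking on line boundaries.

    A single line longer than max_chars is kept whole rather than split.
    """
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks

# Each chunk would then be embedded and stored as a node in the graph.
```

Line-aligned chunks keep code statements intact, which tends to produce more useful embeddings than cutting at arbitrary byte offsets.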
The MCP server exposes a family of tools for memory, file indexing, vector search, and Todo management. These tools allow agents to create nodes, link them with relationships, index folders, perform semantic searches, and manage task lists.
Interact with the MCP endpoints from your AI agent by invoking the appropriate tool calls. The system persists conversations and supports multi-provider LLMs for flexibility.
If services fail to start, check that Docker is running, that there are no port conflicts, and inspect the service logs. If Neo4j takes longer to come online, wait a bit and retry health checks. Embeddings may require a running embeddings service such as Ollama or an external endpoint.
The web UI provides a portal for file indexing, an orchestration studio for workflow visualization, and access to the MCP API. The Chat API offers OpenAI-compatible chat completions with built-in MCP tool support and RAG.
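Because the Chat API is OpenAI-compatible, a standard chat-completions request should work against it. This sketch assumes the conventional `/v1/chat/completions` path, which is an assumption rather than a documented route; the request is built but not sent:

```python
import json
import urllib.request

def build_chat_request(base_url, model, messages):
    """Build an OpenAI-style chat completions HTTP request (not sent here)."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",  # conventional OpenAI path (assumed)
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "http://localhost:9042",
    "gpt-4.1",
    [{"role": "user", "content": "Summarize my open tasks."}],
)
```

Any OpenAI-compatible client library pointed at the Mimir base URL should work the same way, with MCP tools and RAG applied server-side.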
Key URLs to remember: Mimir Web UI at http://localhost:9042, MCP API at http://localhost:9042/mcp, and Neo4j Browser at http://localhost:7474.
You can switch between LLM providers at runtime by updating the environment configuration and restarting the MCP server. Embeddings models can be swapped to improve semantic search quality. The system supports code-mode execution for efficient task automation via the PCTX integration.
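A provider switch might look like the following `.env` change. The variable names come from the configuration above; the values shown are illustrative examples, so check env.example for the options your deployment supports:

```shell
# Illustrative .env fragment — values are examples, not the only supported options
MIMIR_DEFAULT_PROVIDER=openai
MIMIR_DEFAULT_MODEL=gpt-4.1
MIMIR_EMBEDDINGS_MODEL=bge-m3
MIMIR_EMBEDDINGS_DIMENSIONS=1024
# After editing, restart the MCP server for the changes to take effect
```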
- Create/read/update memory graph nodes such as tasks, files, and concepts
- Create relationships between graph nodes
- Bulk operations for memory changes
- Coordinate multi-agent actions to prevent conflicts
- Clear memory data with care
- Retrieve filtered context for a given agent or task
- Index a folder into the graph and enable semantic search
- Stop watching and unregister a folder from indexing
- List currently watched/indexed folders
- Perform semantic search over indexed nodes using embeddings
- Return statistics about embeddings and dimensions
- Create or update a single task in the Todo list
- Manage a list of tasks
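As a concrete illustration of how an agent might chain these tools in one workflow, here is a hedged sketch of a two-step request sequence. The tool names `create_relationship` and `semantic_search` are hypothetical placeholders, since the exact tool identifiers are not listed above:

```python
def tool_call(tool_name, arguments, request_id):
    """JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool names and arguments, for illustration only:
# first link a task to a file, then search for related context.
calls = [
    tool_call("create_relationship",
              {"from": "task:42", "to": "file:src/parser.ts", "type": "TOUCHES"},
              request_id=1),
    tool_call("semantic_search",
              {"query": "parser error handling", "limit": 5},
              request_id=2),
]
```

Sequencing the calls this way lets the agent keep the graph and its retrieval context in sync within a single session.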