
Mimir MCP Server

Provides persistent, graph-based memory and task management for AI agents over the MCP API, with semantic search.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "orneryd-mimir": {
      "url": "http://localhost:9042/mcp",
      "headers": {
        "MIMIR_LLM_API": "http://copilot-api:4141",
        "NEO4J_PASSWORD": "password",
        "HOST_WORKSPACE_ROOT": "~/src",
        "MIMIR_DEFAULT_MODEL": "gpt-4.1",
        "MIMIR_DEFAULT_PROVIDER": "openai",
        "MIMIR_EMBEDDINGS_MODEL": "bge-m3",
        "MIMIR_EMBEDDINGS_DIMENSIONS": "1024"
      }
    }
  }
}

You run an MCP server that gives AI assistants a persistent memory graph and tools to store, retrieve, and manipulate knowledge. It lets your agents remember tasks, relate them to files and concepts, and perform retrieval-augmented actions with a built-in API you can call from your favorite AI client.

How to use

You interact with the Mimir MCP server by connecting an AI agent (such as Claude or ChatGPT) to the local MCP endpoint. Start a session, then request actions like creating tasks, adding context, indexing files, or performing semantic searches. The server exposes a structured set of tools: memory operations to manage graph nodes and relationships, file indexing to bring code into the graph, and vector search to locate relevant context by meaning. You can guide the agent to perform multi-step workflows, coordinate between agents, and retrieve context for informed responses.
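MCP tool invocations travel as JSON-RPC 2.0 `tools/call` requests. The sketch below builds such a request body for the `vector_search_nodes` tool; the envelope shape follows the MCP specification, but the argument names (`query`, `limit`) are assumptions, not the server's documented schema.

```python
import json

def mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for an MCP tools/call invocation."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# Hypothetical arguments: run a semantic search over indexed nodes
body = mcp_tool_call("vector_search_nodes", {"query": "auth middleware", "limit": 5})
```

POST the resulting body to the configured endpoint (http://localhost:9042/mcp) with a `Content-Type: application/json` header.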

How to install

# Prerequisites
# Install Docker Desktop, Node.js 18+, and Git on your machine
# Then follow these steps to deploy Mimir MCP server locally

# 1. Clone the repository
git clone https://github.com/orneryd/Mimir.git
cd Mimir

# 2. Copy environment template
cp env.example .env

# 3. Configure workspace access in .env (only required setting)
# Your main source code directory (default: ~/src)
HOST_WORKSPACE_ROOT=~/src  # βœ… Tilde (~) expands automatically

# 4. Start all services (automatic platform detection)
npm run start
# Or manually using Docker Compose
# docker compose up -d

Configuration and startup basics

The runtime environment is controlled via environment variables. At minimum, you typically configure the Neo4j password and your workspace root. You can also tailor the LLM provider, embedding model, and various API endpoints to fit your setup.
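As a sketch, a minimal .env might look like the fragment below. The variable names come from the configuration shown above; the values are illustrative and must match your own Neo4j, provider, and embeddings setup.

```shell
# Minimal .env sketch (illustrative values; adjust to your environment)
NEO4J_PASSWORD=changeme
HOST_WORKSPACE_ROOT=~/src
MIMIR_DEFAULT_PROVIDER=openai
MIMIR_DEFAULT_MODEL=gpt-4.1
MIMIR_EMBEDDINGS_MODEL=bge-m3
MIMIR_EMBEDDINGS_DIMENSIONS=1024
```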

After starting all services, verify the health endpoints. You can then open the web UI at http://localhost:9042 and the Neo4j Browser at http://localhost:7474.

File indexing and browsing

Files from your workspace can be indexed to build a searchable knowledge graph. You can add folders to index, list indexed folders, and remove folders as needed. Embeddings can be enabled to support semantic search.

Indexing respects your .gitignore and processes files into chunks suitable for embedding and graph storage. You can monitor indexing progress in the logs.
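The chunking step can be pictured with a simple sketch. The chunk size and overlap below are illustrative assumptions; Mimir's actual chunker is internal to the server.

```python
def chunk_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping, fixed-size chunks suitable for embedding.

    Overlap keeps context that straddles a chunk boundary retrievable
    from either neighboring chunk.
    """
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```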

Using MCP tools and APIs

The MCP server exposes a family of tools for memory, file indexing, vector search, and Todo management. These tools allow agents to create nodes, link them with relationships, index folders, perform semantic searches, and manage task lists.

Interact with the MCP endpoints from your AI agent by invoking the appropriate tool calls. The system persists conversations and supports multi-provider LLMs for flexibility.

Troubleshooting tips

If services fail to start, check that Docker is running, that there are no port conflicts, and inspect the service logs. If Neo4j takes longer to come online, wait a bit and retry health checks. Embeddings may require a running embeddings service such as Ollama or an external endpoint.

Notes on the UI and endpoints

The web UI provides a portal for file indexing, an orchestration studio for workflow visualization, and access to the MCP API. The Chat API offers OpenAI-compatible chat completions with built-in MCP tool support and RAG.

Key URLs to remember: Mimir Web UI at http://localhost:9042, MCP API at http://localhost:9042/mcp, and Neo4j Browser at http://localhost:7474.

Advanced topics and examples

You can switch between LLM providers at runtime by updating the environment configuration and restarting the MCP server. Embeddings models can be swapped to improve semantic search quality. The system supports code-mode execution for efficient task automation via the PCTX integration.
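A provider switch is sketched below as an .env change. The `ollama` provider and model values are illustrative assumptions (the docs above mention Ollama only as an embeddings service); substitute whatever provider your deployment supports, then restart the server.

```shell
# In .env: switch provider and model (illustrative values), then restart
MIMIR_DEFAULT_PROVIDER=ollama
MIMIR_DEFAULT_MODEL=llama3.1
# docker compose restart
```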

Available tools

memory_node: Create/read/update memory graph nodes such as tasks, files, and concepts

memory_edge: Create relationships between graph nodes

memory_batch: Bulk operations for memory changes

memory_lock: Coordinate multi-agent actions to prevent conflicts

memory_clear: Clear memory data with care

get_task_context: Retrieve filtered context for a given agent or task

index_folder: Index a folder into the graph and enable semantic search

remove_folder: Stop watching and unregister a folder from indexing

list_folders: List currently watched/indexed folders

vector_search_nodes: Perform semantic search over indexed nodes using embeddings

get_embedding_stats: Return statistics about embeddings and dimensions

todo: Create or update a single task in the Todo list

todo_list: Manage a list of tasks
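As a concrete sketch of how an agent might chain these tools, here is the shape of a memory_node call followed by a memory_edge call. The JSON-RPC envelope follows the MCP specification, but every argument name and identifier below is a hypothetical placeholder, not the server's actual schema.

```python
import json

def tool_call(name: str, arguments: dict, request_id: int) -> dict:
    """JSON-RPC 2.0 envelope used for MCP tool invocations."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical arguments: create a task node, then link it to a file node
create_task = tool_call(
    "memory_node",
    {"op": "create", "type": "task", "title": "Fix login bug"},
    request_id=1,
)
link_to_file = tool_call(
    "memory_edge",
    {"from": "task:42", "to": "file:src/login.ts", "type": "RELATES_TO"},
    request_id=2,
)
bodies = [json.dumps(create_task), json.dumps(link_to_file)]
```

Sending the two requests in order first creates the node, then relates it to an existing file node; for many changes at once, memory_batch would be the better fit.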