
Graphiti MCP Server

Provides a scalable MCP server that exposes Graphiti’s knowledge graph capabilities through HTTP and stdio transports with multiple backends and LLM/embedder options.

Python · 20.9k stars
Installation
Add the following to your MCP client configuration file.

Configuration

{
    "mcpServers": {
        "graphiti": {
            "url": "http://localhost:8000/mcp/"
        }
    }
}

Graphiti MCP Server exposes Graphiti’s knowledge graph capabilities through the MCP protocol, enabling AI assistants to interact with episodes, entities, and semantically-driven graph queries over HTTP or via local stdio clients. It supports multiple backends, LLM providers, and embedders, making it practical to build context-aware, memory-enabled AI applications that scale.

How to use

Connect to the Graphiti MCP Server from your MCP-enabled client using either the HTTP endpoint or a local stdio workflow. The HTTP transport is the default and widely supported, exposing the MCP API at /mcp/. You can also run the server locally and pipe MCP messages directly through a standard input/output interface for desktop clients or custom runners.
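For the stdio workflow, you launch the server directly so your client owns its standard streams. A rough sketch follows; the --transport stdio flag is an assumption (only --group-id, --database-provider, and --config appear elsewhere on this page), so verify it against the server's options:

# Launch the server speaking MCP over stdin/stdout
# (--transport stdio is an assumed flag; confirm with --help)
uv run graphiti_mcp_server.py --transport stdio --group-id my_group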

Key actions you can perform include adding and retrieving episodes, querying entities and relationships, searching for facts, and managing the graph state. You can run concurrent episode processing with configurable limits, switch between backends like FalkorDB or Neo4j, and pick from several LLM and embedder providers to suit your deployment needs.
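For illustration, an MCP client invoking the add_episode tool sends a tools/call request whose arguments might look like the sketch below. The argument names (name, episode_body, source, group_id) are assumptions based on the tool descriptions on this page; check the server's tool schema for the exact fields.

{
    "name": "add_episode",
    "arguments": {
        "name": "support-chat-2024-06-01",
        "episode_body": "Alice asked about upgrading her subscription plan.",
        "source": "text",
        "group_id": "my_group"
    }
}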

How to install

Prerequisites you need before starting:

- Docker and Docker Compose
- A valid API key for your chosen LLM provider (OpenAI by default; others supported)
- Python 3.10+ if you plan to run the MCP server standalone against an external FalkorDB instance

Step-by-step setup you can follow locally:

1. Clone the Graphiti repository and navigate to the mcp_server directory
2. Install and prepare the runtime tooling
3. Start the server with your preferred database backend and transport configuration

git clone https://github.com/getzep/graphiti.git
cd graphiti/mcp_server

# Install the runtime tool (uv) and dependencies
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync

# Optional: install extra LLM/embedder providers
uv sync --extra providers

# Start with the default FalkorDB setup (combined container)
docker compose up

# Or run the MCP server directly with a backend of your choice
uv run graphiti_mcp_server.py --group-id my_group

Configuration and management

Configuration is provided via a YAML file, environment variables, or command-line arguments. The default setup uses the HTTP transport at http://localhost:8000/mcp/, FalkorDB as the database, and OpenAI as the LLM provider. You can switch to Neo4j, run Ollama for local LLMs, and customize the group_id to namespace graph data.
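As a sketch, the same choices can be combined on the command line using the flags that appear elsewhere on this page; the API key value is a placeholder, and the config path is the Neo4j example file mentioned in the workflows section:

# LLM credentials are usually provided via the environment
export OPENAI_API_KEY="sk-..."

# Select a config file, database backend, and namespace
uv run graphiti_mcp_server.py \
    --config config/config-docker-neo4j.yaml \
    --database-provider neo4j \
    --group-id my_group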

Key configuration options you’ll encounter include the transport type, LLM provider and model, database provider, and optional environment variables for API keys and endpoints. For local Ollama setups, you can point the LLM at a local endpoint and use a local embedder such as Sentence Transformers.
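A minimal sketch of the local-LLM case: Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1, so, assuming the server honors an OpenAI-compatible base-URL variable, the environment might look like this (the variable name is an assumption, not a confirmed setting):

# Point the OpenAI-compatible client at a local Ollama instance
# (OPENAI_BASE_URL is an assumed variable name; check the docs)
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"   # placeholder; Ollama does not check keys

uv run graphiti_mcp_server.py --group-id my_group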

Integrating with MCP clients

HTTP transport is the primary method for MCP clients. Use the HTTP URL http://localhost:8000/mcp/ for your client configuration.

If you prefer a local stdio workflow (for example, with Claude Desktop or other desktop clients), you can run the MCP server with uv and connect via a stdio-based runner. You can also bridge the HTTP endpoint to Claude Desktop via mcp-remote.

Sample client integration paths include:

- HTTP: point your MCP client to http://localhost:8000/mcp/
- stdio via uv: run uv with the server script and a group-id, then connect your client to the stdio stream (see the sketch below)
- stdio via mcp-remote: run mcp-remote with the HTTP endpoint as the MCP target (see the sketch below)
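Both stdio paths might be configured like this; the server entry names are arbitrary, the script path is a placeholder for your checkout, and the --transport stdio flag is an assumption to verify against the server's options:

{
    "mcpServers": {
        "graphiti-stdio": {
            "command": "uv",
            "args": [
                "run",
                "/path/to/graphiti/mcp_server/graphiti_mcp_server.py",
                "--transport", "stdio",
                "--group-id", "my_group"
            ]
        },
        "graphiti-remote": {
            "command": "npx",
            "args": ["mcp-remote", "http://localhost:8000/mcp/"]
        }
    }
}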

Notes and troubleshooting

If you encounter rate-limit or concurrency issues, tune SEMAPHORE_LIMIT to control the number of concurrently processed episodes. Monitor logs for 429 errors and adjust based on your LLM provider’s quota and performance. You can enable or disable anonymous telemetry by setting GRAPHITI_TELEMETRY_ENABLED.
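Both settings are plain environment variables; for example (the values below are illustrative, not defaults):

# Cap concurrent episode processing; lower this if you see 429 errors
export SEMAPHORE_LIMIT=5

# Opt out of anonymous telemetry
export GRAPHITI_TELEMETRY_ENABLED=false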

Common startup paths include starting the FalkorDB+MCP container with docker compose, or starting standalone server processes with uv and a chosen database provider. When using Neptune or Neo4j, ensure the corresponding docker-compose files or environment variables are configured with the correct URIs and credentials.
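When pointing the standalone server at Neo4j, the connection details are typically supplied as environment variables along these lines; the variable names follow common Neo4j conventions, so confirm them against the repository's docker-compose files:

# Connection details for an external Neo4j instance (names assumed)
export NEO4J_URI="bolt://localhost:7687"
export NEO4J_USER="neo4j"
export NEO4J_PASSWORD="your-password"

uv run graphiti_mcp_server.py --database-provider neo4j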

Examples of common workflows

Start the default FalkorDB setup with HTTP transport:

- docker compose up
- MCP endpoint: http://localhost:8000/mcp/

Run with a Neo4j database using the direct command:

- uv run graphiti_mcp_server.py --database-provider neo4j
- Optional: uv run graphiti_mcp_server.py --config config/config-docker-neo4j.yaml

Run FalkorDB in separate containers:

- docker compose -f docker/docker-compose-falkordb.yml up
- Then start the MCP server with FalkorDB as the backend

For desktop clients such as Claude Desktop, you can expose a stdio path via mcp-remote or connect through a local uv-based runtime and configure the client accordingly.

Available tools

add_episode

Add an episode to the knowledge graph, supporting text, JSON, and message formats

search_nodes

Search the knowledge graph for relevant node summaries

search_facts

Search the knowledge graph for relevant facts (edges between entities)

delete_entity_edge

Delete an entity edge from the knowledge graph

delete_episode

Delete an episode from the knowledge graph

get_entity_edge

Get an entity edge by its UUID

get_episodes

Get the most recent episodes for a specific group

clear_graph

Clear all data from the knowledge graph and rebuild indices

get_status

Get the status of the MCP server and its database connections