
rag_memory MCP Server

Provides a local, memory-backed knowledge graph with vector search and document processing for intelligent retrieval.

Installation
Add the following to your MCP client configuration file.

Configuration

```json
{
  "mcpServers": {
    "ttommyth-rag-memory-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "rag-memory-mcp"
      ],
      "env": {
        "MEMORY_DB_PATH": "YOUR_PATH or use default memory.db"
      }
    }
  }
}
```

rag-memory-mcp is a locally run MCP server that builds a semantic, memory-backed knowledge graph with vector search. It lets you store documents, extract entities, link them into a graph, and perform hybrid searches that blend semantic similarity with graph traversal to retrieve contextually relevant information without leaving your machine.
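To make the idea of "blending semantic similarity with graph traversal" concrete, here is an illustrative JavaScript sketch, not the server's actual code: `cosine`, `hybridRank`, the toy 2-D vectors, and the `graphBoost` weight are all invented here for explanation.

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank items by vector similarity, plus a flat bonus when the item's
// entity is a graph neighbor of something already relevant to the query.
function hybridRank(queryVec, items, neighborIds, graphBoost = 0.2) {
  return items
    .map(item => ({
      ...item,
      score: cosine(queryVec, item.vec) +
        (neighborIds.has(item.entityId) ? graphBoost : 0),
    }))
    .sort((x, y) => y.score - x.score);
}

// Toy example with 2-D "embeddings": "ai" overtakes the exact vector
// match "ml" because the graph connection boosts its score.
const items = [
  { entityId: "ml", vec: [1, 0] },
  { entityId: "ai", vec: [0.9, 0.1] },
  { entityId: "cooking", vec: [0, 1] },
];
const ranked = hybridRank([1, 0], items, new Set(["ai"]));
console.log(ranked.map(r => r.entityId)); // → [ 'ai', 'ml', 'cooking' ]
```

The point of the graph term is that an entity strongly connected to already-matched entities can outrank a slightly better raw vector match, which is what makes the retrieval "contextual" rather than purely semantic.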

How to use

To use rag-memory, run the MCP server locally and connect your MCP clients (for example Claude Desktop or VS Code) to it. You will store documents, chunk and embed them, extract terms, create entities and relations, and then run hybrid searches that leverage both vector similarity and graph structure. This setup enables persistent memory across conversations and intelligent retrieval from a growing knowledge graph.

How to install

Prerequisites: you need Node.js and npm installed on your system.

Install and run the MCP server locally using the package manager and the included CLI host command.

```shell
# Install Node.js and npm if you don’t already have them
# ... install instructions depend on your OS ...

# Run the MCP server locally using npx (no global install required)
npx -y rag-memory-mcp
```

Configure clients to launch the local MCP server via the stdio command shown above. You can also customize the local database path if you want to store the memory in a specific location.

Configuration and environment

The server supports a local, stdio-based MCP configuration that runs in your environment. You can customize the memory database path via an environment variable.

```json
{
  "mcpServers": {
    "rag_memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp"]
    }
  }
}
```

Environment variables you can configure include MEMORY_DB_PATH to set a custom SQLite database location. If not set, the server uses a default memory.db in its directory.
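For example, a configuration with a custom database path might look like the following (the path shown is a placeholder; substitute any writable location on your machine):

```json
{
  "mcpServers": {
    "rag_memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp"],
      "env": {
        "MEMORY_DB_PATH": "/absolute/path/to/memory.db"
      }
    }
  }
}
```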

Usage examples

Typical workflow you can perform with the server:

```javascript
// 1. Store a document
await storeDocument({
  id: "ml_intro",
  content: "Machine learning is a subset of AI...",
  metadata: { type: "educational", topic: "ML" }
});

// 2. Process the document
await chunkDocument({ documentId: "ml_intro" });
await embedChunks({ documentId: "ml_intro" });

// 3. Extract and create entities
const terms = await extractTerms({ documentId: "ml_intro" });
await createEntities({
  entities: [
    {
      name: "Machine Learning",
      entityType: "CONCEPT",
      observations: ["Subset of artificial intelligence", "Learns from data"]
    }
  ]
});

// 4. Search with hybrid approach
const results = await hybridSearch({
  query: "artificial intelligence applications",
  limit: 10,
  useGraph: true
});
```

Troubleshooting and notes

If you encounter issues starting the server, verify Node.js and npm are installed, and ensure you are running the stdio command with the correct arguments. Check that MEMORY_DB_PATH (if used) points to a writable location. Because the server communicates over stdio, make sure your MCP clients are configured to launch it with the command above rather than to connect to a network host and port.

Tools and capabilities overview

The server provides a comprehensive memory management surface through the Model Context Protocol. Key capabilities include document management, knowledge graph construction, and advanced search across stored documents and graph nodes.

System considerations and notes

This server is designed to run locally alongside MCP clients and requires local file system access for the database storage.

Security and privacy

Store only non-sensitive data locally unless you configure appropriate access controls for your environment. Maintain safe handling of documents and embeddings, especially for proprietary content.

Available tools

storeDocument

Store documents with metadata for processing and retrieval.

chunkDocument

Split stored documents into chunks suitable for embedding and retrieval.
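A minimal sketch of what chunking looks like conceptually. The `chunkText` function, the chunk size, and the overlap below are all hypothetical; the server's actual chunking strategy (sizes, token vs. character boundaries) may differ.

```javascript
// Hypothetical fixed-size chunking with overlap between adjacent chunks,
// so that a sentence cut at a boundary still appears whole in one chunk.
function chunkText(text, size = 20, overlap = 5) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

const chunks = chunkText("Machine learning is a subset of AI.", 20, 5);
console.log(chunks);
// → [ 'Machine learning is ', 'g is a subset of AI.' ]
```

The overlap (here the last 5 characters of one chunk repeat at the start of the next) is a common trick to keep context intact across chunk boundaries before embedding.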

embedChunks

Create vector embeddings for document chunks to enable semantic search.

extractTerms

Extract potential entities or terms from stored documents to aid knowledge graph construction.

linkEntitiesToDocument

Create explicit associations between entities and their related documents.

deleteDocuments

Remove documents and any linked data from memory.

listDocuments

List all stored documents with their metadata.

createEntities

Create new entities with types and initial observations.

createRelations

Establish relationships between entities in the knowledge graph.
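Conceptually, relations are the edges that graph-aware search traverses. Here is a toy in-memory sketch with hypothetical `createRelation` and `neighbors` helpers; the real server persists relations to its SQLite database rather than an array.

```javascript
// Hypothetical in-memory relation store.
const relations = [];

function createRelation(from, to, relationType) {
  relations.push({ from, to, relationType });
}

// One-hop traversal: entities directly related to the given one,
// in either direction.
function neighbors(entity) {
  const out = new Set();
  for (const r of relations) {
    if (r.from === entity) out.add(r.to);
    if (r.to === entity) out.add(r.from);
  }
  return [...out];
}

createRelation("Machine Learning", "Artificial Intelligence", "SUBSET_OF");
createRelation("Neural Networks", "Machine Learning", "TECHNIQUE_OF");
console.log(neighbors("Machine Learning"));
// → [ 'Artificial Intelligence', 'Neural Networks' ]
```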

addObservations

Add contextual observations to existing entities.

deleteEntities

Remove entities and their related relations.

deleteRelations

Remove specific relationships between entities.

deleteObservations

Remove specific observations from entities.

hybridSearch

Perform search that combines vector similarity with graph traversal.

searchNodes

Find entities by name, type, or observation content.
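A toy sketch of this style of matching, using hypothetical in-memory data and a simplified case-insensitive substring match; the real tool's matching logic may be more sophisticated.

```javascript
// Hypothetical entity store mirroring the name/type/observations shape
// used elsewhere in this document.
const entities = [
  {
    name: "Machine Learning",
    entityType: "CONCEPT",
    observations: ["Learns from data"],
  },
  {
    name: "Python",
    entityType: "TECHNOLOGY",
    observations: ["Popular for scripting"],
  },
];

// Match if the query appears in the name, the type, or any observation.
function searchNodesSketch(query) {
  const q = query.toLowerCase();
  return entities.filter(e =>
    e.name.toLowerCase().includes(q) ||
    e.entityType.toLowerCase().includes(q) ||
    e.observations.some(o => o.toLowerCase().includes(q))
  );
}

console.log(searchNodesSketch("data").map(e => e.name));
// → [ 'Machine Learning' ]
```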

openNodes

Retrieve an entity and its related nodes and relationships.

readGraph

Get the complete knowledge graph structure.

getKnowledgeGraphStats

Provide statistics about the knowledge base.