rag-memory MCP Server
Provides a local, memory-backed knowledge graph with vector search and document processing for intelligent retrieval.
Configuration
```json
{
  "mcpServers": {
    "ttommyth-rag-memory-mcp": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp"],
      "env": {
        "MEMORY_DB_PATH": "YOUR_PATH or use default memory.db"
      }
    }
  }
}
```

rag-memory is an MCP server that builds a semantic, memory-backed knowledge graph with vector search. It lets you store documents, extract entities, link them into a graph, and perform hybrid searches that blend semantic similarity with graph traversal to retrieve contextually relevant information locally.
To use rag-memory, run the MCP server locally and connect your MCP clients (for example Claude Desktop or VS Code) to it. You will store documents, chunk and embed them, extract terms, create entities and relations, and then run hybrid searches that leverage both vector similarity and graph structure. This setup enables persistent memory across conversations and intelligent retrieval from a growing knowledge graph.
Prerequisites: you need Node.js and npm installed on your system.
Install and run the MCP server locally using the package manager and the included CLI host command.
```bash
# Install Node.js and npm if you don't already have them
# ... install instructions depend on your OS ...

# Run the MCP server locally using npx (no global install required)
npx -y rag-memory-mcp
```

Configure clients to connect to the local MCP server by pointing them at the local process launched above. You can also customize the local database path if you want to store the memory in a specific location.
The server supports a local, stdio-based MCP configuration that runs in your environment. You can customize the memory database path via an environment variable.

```json
{
  "mcpServers": {
    "rag_memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp"]
    }
  }
}
```

Environment variables you can configure include MEMORY_DB_PATH, which sets a custom SQLite database location. If it is not set, the server uses a default memory.db in its own directory.
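For example, a configuration that stores the database at a custom location might look like the following (the path shown is only a placeholder; substitute one that suits your environment):

```json
{
  "mcpServers": {
    "rag_memory": {
      "command": "npx",
      "args": ["-y", "rag-memory-mcp"],
      "env": {
        "MEMORY_DB_PATH": "/path/to/custom/memory.db"
      }
    }
  }
}
```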
Typical workflow you can perform with the server:

```js
// 1. Store a document with metadata
await storeDocument({
  id: "ml_intro",
  content: "Machine learning is a subset of AI...",
  metadata: { type: "educational", topic: "ML" }
});

// 2. Process the document: split it into chunks, then embed them
await chunkDocument({ documentId: "ml_intro" });
await embedChunks({ documentId: "ml_intro" });

// 3. Extract candidate terms and create entities in the knowledge graph
const terms = await extractTerms({ documentId: "ml_intro" });
await createEntities({
  entities: [
    {
      name: "Machine Learning",
      entityType: "CONCEPT",
      observations: ["Subset of artificial intelligence", "Learns from data"]
    }
  ]
});

// 4. Search with the hybrid approach (vector similarity + graph traversal)
const results = await hybridSearch({
  query: "artificial intelligence applications",
  limit: 10,
  useGraph: true
});
```

If you encounter issues starting the server, verify that Node.js and npm are installed, and ensure you are running the stdio command with the correct arguments. Check that MEMORY_DB_PATH (if set) points to a writable location. Because the server communicates over stdio, make sure your MCP clients are configured to launch it with the command and arguments shown above, rather than trying to connect to a network host and port.
The server provides a comprehensive memory management surface through the Model Context Protocol. Key capabilities include document management, knowledge graph construction, and advanced search across stored documents and graph nodes.
This server is designed to run locally alongside MCP clients and requires local file system access for the database storage.
Store only non-sensitive data locally unless you configure appropriate access controls for your environment. Maintain safe handling of documents and embeddings, especially for proprietary content.
Document management:
- Store documents with metadata for processing and retrieval.
- Split stored documents into chunks suitable for embedding and retrieval.
- Create vector embeddings for document chunks to enable semantic search.
- Extract potential entities or terms from stored documents to aid knowledge graph construction.
- Create explicit associations between entities and their related documents.
- Remove documents and any linked data from memory.
- List all stored documents with their metadata.

Knowledge graph construction:
- Create new entities with types and initial observations.
- Establish relationships between entities in the knowledge graph.
- Add contextual observations to existing entities.
- Remove entities and their related relations.
- Remove specific relationships between entities.
- Remove specific observations from entities.

Search and inspection:
- Perform search that combines vector similarity with graph traversal.
- Find entities by name, type, or observation content.
- Retrieve an entity and its related nodes and relationships.
- Get the complete knowledge graph structure.
- Provide statistics about the knowledge base.
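To make the hybrid-search idea concrete, here is a minimal, self-contained sketch of combining vector similarity with a one-hop graph-traversal boost. This is not the server's actual implementation; the toy corpus, the 0.3 neighbor weight, and the function shape are illustrative assumptions:

```js
// Toy corpus: each node has an embedding and edges to related nodes.
const nodes = {
  ml:   { vec: [1, 0, 0],     edges: ["ai"] },
  ai:   { vec: [0.9, 0.1, 0], edges: ["ml", "apps"] },
  apps: { vec: [0, 1, 0],     edges: ["ai"] },
};

// Cosine similarity between two equal-length vectors
function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = v => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Hybrid score: own vector similarity, plus a boost from the
// similarity of graph neighbors (one hop of traversal).
function hybridSearch(queryVec, { limit = 10, useGraph = true } = {}) {
  const sims = {};
  for (const [id, n] of Object.entries(nodes)) sims[id] = cosine(queryVec, n.vec);

  const scores = Object.entries(nodes).map(([id, n]) => {
    let score = sims[id];
    if (useGraph) {
      for (const nb of n.edges) score += 0.3 * sims[nb]; // neighbor boost
    }
    return { id, score };
  });
  return scores.sort((a, b) => b.score - a.score).slice(0, limit);
}

const results = hybridSearch([1, 0, 0], { limit: 3 });
console.log(results.map(r => r.id)); // highest-scoring nodes first
```

With `useGraph: false` the ranking falls back to pure vector similarity; the graph term is what lets a weakly similar but well-connected node rise in the results.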