Vector Memory MCP Server
Provides local, private vector-based memory storage with semantic search and session handoffs for MCP clients.
Configuration
```
{
  "mcpServers": {
    "aeriondyseti-vector-memory-mcp": {
      "command": "bunx",
      "args": [
        "--bun",
        "@aeriondyseti/vector-memory-mcp"
      ],
      "env": {
        "VECTOR_MEMORY_MODEL": "--placeholder--",
        "VECTOR_MEMORY_DB_PATH": "--placeholder--",
        "VECTOR_MEMORY_HTTP_PORT": "3271"
      }
    }
  }
}
```

Vector Memory MCP Server provides local, private semantic memory storage for AI assistants. It stores decisions, patterns, and session context across interactions, enabling fast semantic search with embeddings generated locally and stored in LanceDB.
You work with an MCP client to store, search, and manage memories and session handoffs. This server runs locally and communicates through the MCP protocol, so you can integrate it with your existing MCP-enabled tools. Use store_memories to save memories, search_memories to find relevant items, and store_handoff/get_handoff to save and restore session context between sessions. All actions occur locally, keeping your data private.
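Under the hood these tools are invoked with standard MCP `tools/call` requests over stdio. As an illustration, storing a memory might look like the following sketch; the shape of the `arguments` payload (a `memories` array of objects with a `content` field) is an assumption for illustration, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "store_memories",
    "arguments": {
      "memories": [
        { "content": "Decision: use LanceDB for local vector storage" }
      ]
    }
  }
}
```

In practice your MCP client constructs these requests for you; you only select the tool and supply its arguments.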
Before installing, make sure you have Bun installed and an MCP-compatible client.
```
bun install -g @aeriondyseti/vector-memory-mcp
```
> First install downloads ML models (~90MB). This may take a minute.

Configure your MCP client to connect to the local Vector Memory MCP Server using a stdio-based local runtime. The server exposes an MCP endpoint you can start from your environment.
```
{
  "mcpServers": {
    "vector_memory": {
      "type": "stdio",
      "command": "bunx",
      "args": ["--bun", "@aeriondyseti/vector-memory-mcp"]
    }
  }
}
```

Once the server is running, you can perform the following actions from your MCP client.
Store memories: store_memories (accepts an array of memories).
Search memories: search_memories to retrieve semantically relevant memories.
Session handoffs: store_handoff to save context and get_handoff to restore context for the next session.
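A handoff is just another tool call: at the end of a session you call `store_handoff`, and the next session calls `get_handoff` to pick up where you left off. A sketch of the store side; the `context` argument name is an assumption for illustration, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "store_handoff",
    "arguments": {
      "context": "Refactoring auth module; next step is updating the session tests"
    }
  }
}
```

The next session then issues a `get_handoff` call with no special setup and receives the saved context back.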
Environment variables control storage location, embedding model, and the HTTP port for the server.
```
VECTOR_MEMORY_DB_PATH=.vector-memory/memories.db
VECTOR_MEMORY_MODEL=Xenova/all-MiniLM-L6-v2
VECTOR_MEMORY_HTTP_PORT=3271
```

The server is designed for local-first, private memory storage. It uses LanceDB for fast semantic search and runs entirely on your machine.
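These variables can also be set per-client via the `env` block of the MCP configuration entry, as in the configuration at the top of this page. A minimal stdio entry with explicit overrides:

```json
{
  "mcpServers": {
    "vector_memory": {
      "type": "stdio",
      "command": "bunx",
      "args": ["--bun", "@aeriondyseti/vector-memory-mcp"],
      "env": {
        "VECTOR_MEMORY_DB_PATH": ".vector-memory/memories.db",
        "VECTOR_MEMORY_MODEL": "Xenova/all-MiniLM-L6-v2",
        "VECTOR_MEMORY_HTTP_PORT": "3271"
      }
    }
  }
}
```

Setting the variables here keeps the configuration self-contained, so the server behaves the same regardless of your shell environment.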
- `store_memories`: Save memories; accepts an array of memory objects to persist in local storage.
- `search_memories`: Retrieve memories by semantic similarity to a query, using the embedding model to generate vectors.
- Retrieve memories by their IDs (accepts an array of IDs).
- Update existing memories by ID with new data.
- Remove memories by ID (accepts an array of IDs).
- `store_handoff`: Save session context for a future session, enabling handoffs.
- `get_handoff`: Restore previously saved session context for a new session.
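To round out the picture, a semantic search is also a plain `tools/call` request. The sketch below assumes `search_memories` takes a `query` string; the exact argument schema is an assumption, not the server's documented interface:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_memories",
    "arguments": {
      "query": "decisions about vector storage"
    }
  }
}
```

The server embeds the query locally with the configured model and returns the most semantically similar stored memories.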