
In Memoria MCP Server

Persistent Intelligence Infrastructure for AI Agents

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "pi22by7-in-memoria": {
      "command": "npx",
      "args": [
        "in-memoria",
        "server"
      ],
      "env": {
        "YOUR_ENV_VAR": "value"
      }
    }
  }
}

In Memoria is an on‑machine memory and intelligence layer for your AI coding assistants. It learns from your actual codebase and remembers decisions, patterns, and file routing across sessions, so tools like Claude, Copilot, and similar assistants can query persistent context and provide targeted, context-aware suggestions without re-analyzing your project from scratch.

How to use

Connect an MCP client to the In Memoria server and interact with your AI assistants as you code. Learn a project to seed the memory, start the server, and then ask your AI to perform code tasks. The AI will query In Memoria for project context, architectural decisions, and file routing to serve precise, pattern-aware guidance across sessions.

Typical usage flow:

  • Learn a project: run the learn step to analyze the codebase and populate persistent intelligence.
  • Start the MCP server to enable real-time querying by your AI tools.
  • Ask your AI to perform tasks like adding features, routing to files, or recalling prior decisions; In Memoria will route requests to relevant files and leverage learned patterns.
  • Switch sessions days apart and still get context-aware guidance about where things live and how patterns were implemented previously.
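The first two steps of this flow can be sketched from the command line. The `server` subcommand matches the configuration above; the `learn` subcommand name and its path argument are assumptions based on the "learn a project" step described here, so confirm the exact invocation with `npx in-memoria --help`:

```shell
# One-time: analyze the project to seed persistent intelligence
# (`learn` subcommand assumed from the flow above; verify with --help)
npx in-memoria learn .

# Then: start the MCP server so connected AI tools can query it
npx in-memoria server
```

Once the server is running, your MCP client (configured as shown earlier) connects to it automatically on each session.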

Available tools

analyze_codebase

Analyze files/directories to extract concepts, patterns, and complexity across the codebase.

search_codebase

Perform multi‑mode searches (semantic, text, and pattern-based) within the learned intelligence.

learn_codebase_intelligence

Run deep learning passes that extract patterns and architectural relationships from the codebase.

get_project_blueprint

Provide instant project context with tech stack, entry points, and architecture.

get_semantic_insights

Query learned concepts and relationships to surface meaningful connections.

get_pattern_recommendations

Return suggested patterns with related files to maintain consistency.

predict_coding_approach

Offer implementation guidance with file routing hints based on history.

get_developer_profile

Access coding style and work context to tailor suggestions.

contribute_insights

Record architectural decisions and update the learned model.

auto_learn_if_needed

Smart auto-learning with staleness detection to keep intelligence fresh.

get_system_status

Health check for In Memoria components.

get_intelligence_metrics

Analytics on learned patterns and coverage.

get_performance_status

Performance diagnostics for the In Memoria server.
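
As with any MCP server, clients invoke these tools via a standard `tools/call` request. A sketch for `search_codebase` is below; the `query` and `mode` argument names are assumptions for illustration (this listing does not document the tool's schema), so check the server's `tools/list` response for the actual parameters:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_codebase",
    "arguments": {
      "query": "where is request routing implemented?",
      "mode": "semantic"
    }
  }
}
```

MCP clients such as Claude Desktop issue these requests automatically; you only need this shape when driving the server programmatically.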