Sensei MCP Server
Provides Sensei MCP server integration for multi-persona guidance with 64 personas and workflow tools.
Configuration
```json
{
  "mcpServers": {
    "amarodeabreu-sensei-mcp": {
      "command": "uvx",
      "args": [
        "sensei-mcp"
      ]
    }
  }
}
```
Sensei MCP is an orchestrated multi-persona guidance server that injects engineering standards into reasoning before you code. It coordinates 64 expert personas to provide CTO-level perspectives, maintains session memory of architectural decisions, and integrates with your development workflow to keep decisions consistent across projects.
You use Sensei MCP by connecting your MCP client to the Sensei server and then guiding the collaboration through persona-selected perspectives. Start a session for your project, then request guidance on architecture, API design, security, or cost optimization. Sensei MCP automatically loads relevant standards and pulls in the appropriate personas to synthesize a cohesive recommendation. You can drill into specific personas for targeted input, merge sessions from multiple developers, and export ADR-like summaries for team alignment.
Prerequisites: ensure you have a compatible runtime and an MCP client installed. This server configuration uses a local stdio-based runner that you invoke from your MCP client.
A step-by-step starter flow for running Sensei MCP locally via the stdio command snippet. Run these commands in your terminal.
```shell
# Install the uvx runtime if your environment does not already provide it
# (the install URL below is a placeholder; replace it with your actual environment setup)
# curl -sS https://example.com/install-uvx.sh | bash

# Launch the Sensei MCP server over stdio so your MCP client can connect
uvx sensei-mcp
```
Sensei MCP is configured to run as a stdio server from the command line. The runtime entry is a command and argument list that launches the MCP server so your client can connect and begin querying with persona-aware guidance.
Config snippets shown here illustrate how to wire the server into your client environment. You will typically supply the command and arguments to launch Sensei MCP from your MCP client or development environment.
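If your environment cannot use uvx, an equivalent client entry might invoke the installed executable directly. This variant assumes the package installs a `sensei-mcp` console script on your PATH, which is an assumption about the packaging, not documented behavior:

```json
{
  "mcpServers": {
    "amarodeabreu-sensei-mcp": {
      "command": "sensei-mcp",
      "args": []
    }
  }
}
```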
Sensei MCP exposes a set of tools that help you discover personas, fetch content, manage session memory, and export decisions. These tools are designed to integrate with your existing workflows and CI/CD pipelines to keep engineering decisions traceable and actionable.
Architecture decision flow: request multi-persona guidance, fetch relevant persona SKILL content, synthesize perspectives, and record the final decision with rationale. Crisis workflows marshal an incident response team for fast triage, while standard workflow modes support regular design reviews and optimization tasks.
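The decision flow above can be sketched as a small pipeline: suggest personas for a query, load each persona's SKILL content, then synthesize a recorded decision. Every name here (`suggest_personas`, `fetch_skill`, `synthesize`, the persona IDs, and the keyword heuristic) is an illustrative assumption, not Sensei MCP's actual API:

```python
# Hypothetical sketch of the multi-persona decision flow; names are illustrative.

def suggest_personas(query: str) -> list[str]:
    """Pick personas whose focus keywords appear in the query (toy heuristic)."""
    catalog = {
        "security-architect": ["auth", "security", "secrets"],
        "api-designer": ["api", "endpoint", "rest"],
        "cost-optimizer": ["cost", "budget", "billing"],
    }
    q = query.lower()
    return [p for p, kws in catalog.items() if any(k in q for k in kws)]

def fetch_skill(persona: str) -> str:
    """Stand-in for loading a persona's SKILL.md content."""
    return f"# SKILL: {persona}\nStandards and heuristics for {persona}."

def synthesize(query: str, skills: dict[str, str]) -> dict:
    """Combine persona perspectives into a single recorded decision."""
    return {
        "query": query,
        "personas": sorted(skills),
        "decision": f"Consensus of {len(skills)} personas",
    }

query = "Design auth for the new REST API"
skills = {p: fetch_skill(p) for p in suggest_personas(query)}
record = synthesize(query, skills)
print(record["personas"])  # ['api-designer', 'security-architect']
```

The real server synthesizes far richer guidance; the point is the shape of the flow: selection, content loading, synthesis, and a persisted record with rationale.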
Sensei MCP emphasizes local operation and memory persistence within your workspace. It maintains session memory per project and supports team collaboration while isolating decisions across different projects to prevent cross-project leakage.
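A per-project session memory covering constraints, decisions, and patterns might look roughly like the following. The exact schema is an assumption for illustration, not Sensei MCP's documented format:

```json
{
  "project": "payments-service",
  "constraints": ["PostgreSQL only", "p99 latency < 200ms"],
  "decisions": [
    {
      "title": "Use event sourcing for ledger writes",
      "rationale": "Auditability requirement",
      "personas": ["security-architect", "data-engineer"]
    }
  ],
  "patterns": ["outbox pattern for cross-service events"]
}
```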
If the server does not start as expected, verify that the runtime command and arguments match the configured entry point, and ensure your MCP client can reach the stdio interface. Check for common issues such as missing dependencies, permission errors, or conflicting processes using your system's process manager.
This server is designed to integrate with a broad ecosystem of MCP servers and tools. You can extend your setup by adding other MCP servers and workflows to create a cohesive CTO co-pilot that scales across projects.
Tools
- Returns the full SKILL.md content for a specific persona, enabling the LLM to analyze queries from that persona's perspective.
- Suggests relevant personas for a given query with relevance scores and rationale to guide context selection.
- Fetches the current session memory including constraints, decisions, and patterns in JSON format for context-aware analysis.
- Records consultations after analysis to persist decisions and provide traceability.
- Merges multiple developer sessions with configurable conflict resolution strategies for team collaboration.
- Provides a side-by-side comparison of two sessions to aid reconciliation before merging.
- Generates data-driven insights on persona usage, consultation patterns, and decision velocity.
- Exports a single consultation as a professional report in multiple formats (markdown/json/text).
- Exports comprehensive ADRs and session summaries for team sharing.
- Gets collaborative multi-persona guidance on engineering questions in various modes (orchestrated, quick, crisis, standards).
- Consults a single persona directly for targeted expertise.
- Lists all available personas, with options for detailed or quick formats.
- Loads context and standards for a given operation and files, used in legacy workflows.
- Saves an architectural decision to prevent re-litigation and preserve rationale.
- Performs pre-implementation validation against defined standards.
- Reviews all decisions and constraints for the current project.
- Lists active project sessions to manage multiple work streams.
- Accesses specific sections of the rulebook for targeted guidance.
- Validates proposed changes against past decisions to prevent regressions.
- Automatically infers context from staged changes and suggests personas.
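The session-merge tool above supports configurable conflict resolution. A minimal sketch of how such strategies might behave, assuming a latest-wins and a keep-ours policy over keyed decisions (the function name, fields, and strategies are hypothetical, not Sensei MCP's real API):

```python
# Illustrative session merge with pluggable conflict strategies.
# "ts" is a hypothetical timestamp field used to decide recency.

def merge_sessions(ours: dict, theirs: dict, strategy: str = "latest-wins") -> dict:
    """Merge two developers' session memories, resolving decision conflicts."""
    merged = {"decisions": dict(ours.get("decisions", {}))}
    for title, decision in theirs.get("decisions", {}).items():
        if title not in merged["decisions"]:
            merged["decisions"][title] = decision
        elif strategy == "latest-wins":
            # Keep whichever side recorded the decision more recently.
            if decision["ts"] > merged["decisions"][title]["ts"]:
                merged["decisions"][title] = decision
        elif strategy == "keep-ours":
            pass  # our version is already in place
        else:
            raise ValueError(f"unknown strategy: {strategy}")
    return merged

ours = {"decisions": {"db": {"choice": "PostgreSQL", "ts": 1}}}
theirs = {"decisions": {"db": {"choice": "MySQL", "ts": 2},
                        "cache": {"choice": "Redis", "ts": 2}}}
merged = merge_sessions(ours, theirs, "latest-wins")
print(merged["decisions"]["db"]["choice"])  # MySQL: theirs is newer
print(sorted(merged["decisions"]))          # ['cache', 'db']
```

A side-by-side comparison tool pairs naturally with this: diff the two sessions first, pick a strategy, then merge.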