Provides an MCP server to analyze AI-generated code for vulnerabilities and orchestrate automated security tests.
Configuration
{
"mcpServers": {
"vibeshift": {
"command": "uv",
"args": [
"--directory",
"path/to/cloned_repo",
"run",
"mcp_server.py"
]
}
}
}

VibeShift is an MCP server that analyzes AI-generated code for security vulnerabilities and coordinates automated security checks and tests within your AI-assisted coding workflow. It enables a rapid, shift-left security feedback loop by running static analysis, coordinating test recording and execution, and surfacing actionable remediation guidance to your AI coding assistant so you can ship more secure code faster.
You interact with VibeShift through an MCP-enabled AI coding assistant. When you prompt your AI assistant to analyze code, record tests, or run regression tests, VibeShift is invoked to perform security checks and generate feedback. The server returns vulnerability details, evidence, suggested remediations, and test results back to your AI assistant so you can act on them immediately.
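Outside an assistant, you can also exercise the server directly with the MCP Python SDK as a quick sanity check, for example to confirm which tools it exposes. The sketch below assumes only the cloned repository path used in the configuration examples; everything else is the SDK's standard stdio client API.

# Minimal sketch: connect to the VibeShift server over stdio and list its tools.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="uv",
    args=["--directory", "path/to/cloned_repo", "run", "mcp_server.py"],
)

async def main() -> None:
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())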
Prerequisites include Python 3.10+, a compatible MCP client, and Playwright browsers for UI test automation. Install the MCP client tooling and project dependencies as shown below; the MCP client then launches the server using the configuration that follows.
# Prerequisites
# Python 3.10+
# Install MCP client tooling
pip install mcp[cli]
# Install project dependencies
pip install -r requirements.txt
# Install Playwright browsers (where applicable)
patchright install --with-deps

Configure the MCP client to load the VibeShift server definition so your AI coding assistant can reach it. The server is intended to run locally and be invoked by the MCP client during your AI-assisted development sessions.
{
"mcpServers": {
"vibeshift": {
"type": "stdio",
"command": "uv",
"args": ["--directory","path/to/cloned_repo", "run", "mcp_server.py"]
}
}
}

The server integrates with static analysis (SAST) and dynamic analysis (DAST) tools, supports AI-assisted test recording and deterministic test execution, and provides a feedback loop with the AI assistant. Make sure any required environment variables, such as an LLM API key, are set before starting the server.
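If you launch the server programmatically instead of through a client configuration file, the same SDK lets you forward environment variables to the spawned server process. The variable name below is only a placeholder; use whatever key the project actually reads.

# Sketch: forward an API key to the server process at launch time.
# "LLM_API_KEY" is a placeholder name, not necessarily the key the project reads.
import os

from mcp import StdioServerParameters

server_params = StdioServerParameters(
    command="uv",
    args=["--directory", "path/to/cloned_repo", "run", "mcp_server.py"],
    env={"LLM_API_KEY": os.environ.get("LLM_API_KEY", "")},
)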
The server exposes tools for the following operations:
Triggers a traditional security scan on code snippets generated by the AI assistant, invoking static analysis tools like Semgrep to identify vulnerabilities and report findings.
Records a user-described test flow by controlling the browser (via Playwright) and saving the steps to a JSON file in the output directory.
Executes a specified recorded test JSON file and reports PASS/FAIL status along with any evidence gathered during the run.
Crawls a site to suggest potential test steps for discovered pages, producing recommended actions for AI-assisted test creation.
Lists available recorded web tests by scanning the output directory for JSON files.
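As an illustration of the test-execution flow, the following sketch runs one recorded test file and prints the result. The tool name, argument key, and example path are assumptions made for the example; call list_tools() against the live server (or read mcp_server.py) for the real names and schemas.

# Sketch: invoke a recorded-test tool and print its textual output.
# "run_regression_test", "test_file", and the example path are hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="uv",
    args=["--directory", "path/to/cloned_repo", "run", "mcp_server.py"],
)

async def run_recorded_test(test_file: str) -> None:
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.call_tool(
                "run_regression_test",
                arguments={"test_file": test_file},
            )
            for item in result.content:
                if item.type == "text":
                    print(item.text)

asyncio.run(run_recorded_test("output/example_test.json"))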