
Adversary MCP Server

An Application Security Oriented MCP Server - Hardens your code so you don't have to.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "brettbergin-adversary-mcp-server": {
      "command": "uvx",
      "args": [
        "adversary-mcp-server"
      ],
      "env": {
        "ADVERSARY_LOG_LEVEL": "INFO",
        "ADVERSARY_WORKSPACE_ROOT": "/path/to/workspace"
      }
    }
  }
}

Adversary MCP Server provides a clean architecture, AI-powered vulnerability analysis, and seamless MCP integration to help you securely analyze code across files, folders, and snippets. It combines static analysis with AI validation, stores results locally, and exposes tooling that works with Cursor IDE and Claude Code for fast, repeatable security scanning.

How to use

You connect to the Adversary MCP Server from your MCP client (Cursor IDE or Claude Code) and run scans using the built-in commands. Start with a quick setup, choose your preferred AI provider, and then run scans on files, folders, or code snippets. Review results in the local dashboard and use the provided tools to persist findings in JSON, Markdown, and CSV formats for downstream workflows.
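Exported findings can feed downstream workflows. As a minimal sketch, assuming a hypothetical JSON export shape in which each finding carries `rule`, `severity`, and `file` fields (the server's actual schema may differ), filtering for high-severity results could look like:

```python
import json

# NOTE: this export shape is hypothetical; inspect a real JSON export
# from the server to confirm the actual field names.
sample_export = """
{
  "findings": [
    {"rule": "sql-injection", "severity": "high", "file": "app.py"},
    {"rule": "weak-hash", "severity": "low", "file": "auth.py"}
  ]
}
"""

# Parse the export and keep only high-severity findings.
findings = json.loads(sample_export)["findings"]
high = [f for f in findings if f["severity"] == "high"]

for f in high:
    print(f"{f['severity'].upper()}: {f['rule']} in {f['file']}")
```

The same filter works over a real export file by swapping the inline sample for `json.load(open("adversary-report.json"))`, once the field names are confirmed.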

How to install

Before you begin, make sure the prerequisites below are available on your system.

Install the MCP runtime helper and analysis tools, then install the server package.

Step 1: Install the MCP runtime helper

brew install uv

Step 2: Install the static analysis engine

brew install semgrep

If you are not on macOS or prefer Python tooling, install Semgrep with pip instead.

pip install semgrep

Step 3: Install Adversary MCP Server

uv pip install adversary-mcp-server

Step 4: Verify the installation

adv --version
adv status

Configure security engines and start scanning

Configure your security engines and authentication, then run scans against your codebase. You can enable AI analysis and validation for more accurate results, and target a single file, a directory, or a code snippet.

# Example interactive setup
adv configure setup

# Or configure directly with options
adv configure --llm-provider openai --llm-api-key YOUR_OPENAI_API_KEY
adv configure --llm-provider anthropic --llm-api-key YOUR_ANTHROPIC_API_KEY

# Check current configuration
adv status

Available tools

adv_scan_code

Scan code snippets directly with full analysis, including optional AI validation and semantic checks.

adv_scan_file

Scan a specific file with full analysis, including optional AI components and validation.

adv_scan_folder

Recursively scan a directory for vulnerabilities with optional AI analysis and validation.

adv_get_status

Query the server to report available scan engines and current configuration.

adv_get_version

Return the running server version information.

adv_mark_false_positive

Mark a finding as a false positive to improve future scans.

adv_unmark_false_positive

Remove a previously marked false positive.
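For orientation, these tools are invoked by your MCP client over JSON-RPC. A minimal sketch of the `tools/call` request a client would send for `adv_scan_file` follows; the envelope shape comes from the MCP specification, but the argument name `file_path` is an assumption, not a documented parameter:

```python
import json

# JSON-RPC 2.0 envelope for the MCP "tools/call" method.
# NOTE: the argument name "file_path" is hypothetical; query the
# server's tool schemas (e.g. via "tools/list") for the real names.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "adv_scan_file",
        "arguments": {"file_path": "src/app.py"},
    },
}

print(json.dumps(request, indent=2))
```

In practice your MCP client builds and sends this request for you; the sketch is only useful when debugging the server directly or writing a custom client.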