Experimental MCP server for offloading AI coding tasks to Aider with model selection and local orchestration.
Configuration
```json
{
  "mcpServers": {
    "aider_mcp": {
      "command": "uv",
      "args": [
        "--directory",
        "<path to this project>",
        "run",
        "aider-mcp-server",
        "--editor-model",
        "gpt-4o",
        "--current-working-dir",
        "<path to your project>"
      ],
      "env": {
        "GEMINI_API_KEY": "<your gemini api key>",
        "OPENAI_API_KEY": "<your openai api key>",
        "ANTHROPIC_API_KEY": "<your anthropic api key>"
      }
    }
  }
}
```

You set up and run an MCP server that offloads AI coding work to Aider, enabling you to delegate coding tasks while retaining control and oversight over the results.
Once the MCP server is running, you connect to it with an MCP client to offload AI coding tasks or to list available models. You provide a prompt and the files you want modified; the server delegates the work to Aider and returns a report indicating success or failure. Use the list models capability to discover models that match your needs, then invoke the AI coding tool to apply changes to your project files.
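As a concrete illustration, here is a minimal client sketch using the official `mcp` Python SDK to start the server over stdio, initialize a session, and list the tools it exposes. The paths, model name, and any API keys are placeholders to adapt; the same `session` object is reused in the tool-call sketches further below.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder paths and model name; mirror the values from your .mcp.json.
server_params = StdioServerParameters(
    command="uv",
    args=[
        "--directory", "/path/to/aider-mcp-server",
        "run", "aider-mcp-server",
        "--editor-model", "gpt-4o",
        "--current-working-dir", "/path/to/your-project",
    ],
    # Pass env={"OPENAI_API_KEY": "..."} here if your keys are not in the server's .env file.
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

asyncio.run(main())
```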
Prerequisites: you need Python and the uv package manager installed (the commands below run the server through uv). You also need network access to fetch dependencies and API keys for any models you plan to use.
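A quick sanity check that the tooling is on your PATH (the Python 3.10+ floor is an assumption based on typical MCP server requirements, not a confirmed minimum for this project):

```bash
python --version   # 3.10+ is a reasonable assumption for MCP servers
uv --version
```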
```bash
# Step 1: Clone the MCP server project
git clone https://github.com/disler/aider-mcp-server.git

# Step 2: Install dependencies with uv
uv sync

# Step 3: Create your environment file
cp .env.sample .env

# Step 4: Configure API keys in the .env file (or in the mcpServers env section)
#         for the models you will use. Example keys you may need:
# GEMINI_API_KEY=your_gemini_api_key_here
# OPENAI_API_KEY=your_openai_api_key_here
# ANTHROPIC_API_KEY=your_anthropic_api_key_here

# Step 5: Prepare the MCP configuration for this project
# Copy .mcp.json into the root of your project, fill it out, and update the
# --directory and --current-working-dir values to point to your project
```

The project uses a stdio MCP configuration to run the local server: you start the server through uv and specify the editor model you want to use. A typical configuration snippet connecting the local server to your project is shown in the Configuration section above.
Tests validate the AI coding and model listing functionality. Run the test suite through uv, and make sure valid API keys are present in your environment when testing features that require model access.
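Assuming the test suite uses pytest (not confirmed here), a typical invocation through uv would be:

```bash
# Run the test suite inside the project's uv-managed environment
uv run pytest
```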
You can add the local MCP server to your Claude Code environment using multiple editor models. Each add command wires the local project path to the MCP server and specifies the editor model to use during code refinement and generation.
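A hedged sketch of what those add commands could look like with the Claude Code CLI; the server names, paths, and model identifiers below are placeholders to adapt:

```bash
# Register one server instance per editor model (names and paths are illustrative)
claude mcp add aider-gpt-4o -- \
  uv --directory /path/to/aider-mcp-server run aider-mcp-server \
  --editor-model gpt-4o --current-working-dir /path/to/your-project

claude mcp add aider-sonnet -- \
  uv --directory /path/to/aider-mcp-server run aider-mcp-server \
  --editor-model claude-3-5-sonnet-20241022 --current-working-dir /path/to/your-project
```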
Store API keys securely in your environment file. Do not commit keys to version control. Use placeholder values in templates and substitute your own keys in your deployment environment.
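For example, one simple safeguard is making sure the environment file is ignored by git:

```bash
# Ensure the .env file is never committed
echo ".env" >> .gitignore
```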
Runs the Aider AI coding task to implement changes specified by a prompt on a set of editable and read-only files. Returns a success flag and a diff of changes.
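For illustration only, a call to this tool from the client session sketched earlier might look like the following; the tool name and parameter names are assumptions, so confirm them against the server's own tool listing (`session.list_tools()`).

```python
# Runs inside the ClientSession from the client sketch above.
# Tool and parameter names are assumptions; verify via session.list_tools().
result = await session.call_tool(
    "aider_ai_code",
    arguments={
        "ai_coding_prompt": "Add a --verbose flag to the CLI entry point",
        "relative_editable_files": ["src/cli.py"],
        "relative_readonly_files": ["README.md"],
    },
)
print(result.content)  # expect a success indicator and a diff of the changes
```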
Returns a list of available AI models that match a given substring, helping you discover supported models for your coding tasks.
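Similarly, a sketch of listing models by substring, again assuming the tool and parameter names:

```python
# Runs inside the same ClientSession; tool and parameter names are assumptions.
models = await session.call_tool("list_models", arguments={"substring": "gemini"})
print(models.content)  # names of matching models
```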