Provides an integrated shell and coding MCP server to run, edit, and test code locally with Claude/other MCP clients.
Configuration
{
  "mcpServers": {
    "wcgw": {
      "command": "uvx",
      "args": [
        "wcgw@latest",
        "wcgw_local",
        "--limit",
        "0.1"
      ],
      "env": {
        "OPENAI_API_KEY": "YOUR_API_KEY_PLACEHOLDER",
        "OPENAI_ORG_ID": "YOUR_ORG_ID_PLACEHOLDER",
        "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_PLACEHOLDER"
      }
    }
  }
}

You can run and interact with a tightly integrated shell and code-editing MCP server on your own machine. This server lets you compose, execute, and iterate on code and shell commands from your chat app, with protections and features that help you manage long-running tasks, large files, and interactive commands.
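If you want to test outside of an MCP client, the same environment variables from the env block above can be exported in a terminal before launching the local client described later in this guide. A minimal sketch, using the placeholder values from the configuration (replace them with real keys) and assuming uvx is already on your PATH:
# Hypothetical local test: export the keys the config's "env" block supplies,
# then launch the local client (see the OpenAI/Anthropic steps further down).
export OPENAI_API_KEY="YOUR_API_KEY_PLACEHOLDER"
export OPENAI_ORG_ID="YOUR_ORG_ID_PLACEHOLDER"
uvx wcgw wcgw_local --limit 0.1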
You work with an MCP client to talk to the wcgw server. You can switch between modes to plan, write code, or run unrestricted actions. You can attach to the working terminal to see the exact commands the AI runs, and a multiplexed terminal setup keeps the AI's terminal session accessible from your editor (or via screen). The server can load CLAUDE.md or AGENTS.md automatically when you start a project, and it can save task context so you can resume work later.
Key capabilities include running compiler checks or the test suite repeatedly until all issues are resolved, editing large files through small incremental edits, and getting syntax feedback on edits so the AI can correct its own work. You can run interactive commands that read standard input, and files are protected so the AI only edits a file after it has read it at least once, with large files handled carefully by working on them in chunks.
To control the AI workflow, you can pick from three modes: architect for planning, code-writer for targeted code edits, and wcgw for full access. By default, you operate in wcgw mode, which has no restrictions. If you need planning, switch to architect; for code changes, switch to code-writer and constrain the scope with specified paths.
You can attach to the real terminal the AI uses to run commands. If you have the screen utility, wcgw can run inside a screen session. List sessions with screen -ls and attach with screen -x <session>. If you run a command while attached, you can safely detach with Ctrl+A D and resume later. For better scrolling, keep these lines in your ~/.screenrc: defscrollback 10000 and termcapinfo xterm* ti@:te@.
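For reference, the commands above look like this in practice; the session name below (wcgw.12345) is hypothetical, so use whatever screen -ls reports on your machine:
# List running screen sessions and attach to the one wcgw started.
screen -ls
screen -x wcgw.12345      # hypothetical session name; use the one screen -ls shows
# Detach without stopping anything: press Ctrl+A, then D.
# ~/.screenrc additions for a larger scrollback buffer and usable scrolling:
defscrollback 10000
termcapinfo xterm* ti@:te@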
Optional: install the VS Code extension for wcgw to paste context into Claude and switch focus to the Claude app with a few keystrokes.
Prerequisites: you need a machine that can run uv (and its uvx launcher) along with the MCP client you plan to use. This guide focuses on the wcgw server as shown in the supported configurations.
1) Install uv, the runtime tool used to launch MCP servers on your machine. On macOS or Linux, install it via Homebrew (Homebrew is required so that uv is installed in a global location). Run: brew install uv
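A quick check after installation confirms that uv is installed and that the uvx launcher used in the configurations below is on your PATH (assuming Homebrew placed it there):
brew install uv
# Verify the install; both commands should print output if uv is set up correctly.
uv --version
command -v uvx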
2) Add or update your Claude Desktop configuration so Claude can start the wcgw MCP server. Create or modify your Claude Desktop configuration file with this JSON:
{
  "mcpServers": {
    "wcgw": {
      "command": "uvx",
      "args": ["wcgw@latest"]
    }
  }
}

Optional: force a specific shell for wcgw by adding the --shell argument in the configuration. Example:
{
  "mcpServers": {
    "wcgw": {
      "command": "uvx",
      "args": ["wcgw@latest", "--shell", "/bin/bash"]
    }
  }
}

The server can also be run locally from the terminal using the uvx command. OpenAI and Anthropic users can set environment variables and launch wcgw with a local wrapper:
- OpenAI users: set OPENAI_API_KEY and OPENAI_ORG_ID in your environment, then start with the local shell command:
  uvx wcgw wcgw_local --limit 0.1
- Anthropic users: set ANTHROPIC_API_KEY, then start similarly:
  uvx wcgw wcgw_local --claude

The server exposes tools that let you:
- Reset the shell, set up the workspace, and prepare the environment for a new task.
- Execute shell commands with optional timeouts and interactive input support.
- Read content from one or more files to provide context for the AI.
- Create new files or write to files that are currently empty.
- Edit existing files using search-and-replace blocks, with tolerance for indentation differences and approximate matching.
- Read image files for display or processing within the MCP session.
- Save project context and files for knowledge transfer or task checkpointing.