Scrapybara MCP Server
A Model Context Protocol server for Scrapybara
You can run a Scrapybara-based MCP server that lets clients like Claude Desktop, Cursor, and Windsurf interact with virtual Ubuntu desktops. This MCP server enables actions such as browsing the web and running code through a secure, containerized desktop environment, all accessible from your MCP clients.
You use an MCP client to connect to the Scrapybara MCP server and start virtual Ubuntu instances. Start an instance to serve as a desktop sandbox you can browse or run code in, then watch the live stream URL so you can see the desktop in real time. You can manage multiple instances, stop them when you’re done, and issue commands or actions to control the desktop via the built‑in agent.
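For illustration, once the server is built and configured as described below, a programmatic MCP client could connect to it over stdio roughly as follows. This is a minimal TypeScript sketch that assumes the official @modelcontextprotocol/sdk package; the file path and environment values are placeholders matching the configuration shown later.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the built server as a stdio subprocess, passing the same env vars as the config.
const transport = new StdioClientTransport({
  command: "node",
  args: ["path/to/scrapybara-mcp/dist/index.js"],
  env: {
    SCRAPYBARA_API_KEY: process.env.SCRAPYBARA_API_KEY ?? "",
    ACT_MODEL: "anthropic",
  },
});

const client = new Client({ name: "example-client", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// Discover the tools the server exposes (instance management, bash, agent actions).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));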
Prerequisites: you need Node.js 18+ and pnpm installed on your system. You also need a Scrapybara API key.
Clone the MCP server repository, install dependencies, and build the project.
Run these commands in your terminal:
git clone https://github.com/scrapybara/scrapybara-mcp.git
cd scrapybara-mcp
pnpm install
pnpm build
Configuration
Add the following MCP server configuration to your MCP client so it can launch and interact with the Scrapybara MCP server. The configuration runs the server as a local stdio process, starting the built dist/index.js with Node. Set ACT_MODEL to "anthropic" or "openai"; AUTH_STATE_ID is optional and points to a saved browser authentication state.
{
  "mcpServers": {
    "scrapybara_mcp": {
      "command": "node",
      "args": ["path/to/scrapybara-mcp/dist/index.js"],
      "env": {
        "SCRAPYBARA_API_KEY": "<YOUR_SCRAPYBARA_API_KEY>",
        "ACT_MODEL": "<YOUR_ACT_MODEL>",
        "AUTH_STATE_ID": "<YOUR_AUTH_STATE_ID>"
      }
    }
  }
}
After configuring the MCP client, restart it to apply the new server entry. Use the client's instance management commands to start a new Scrapybara Ubuntu instance, obtain its stream URL, and interact with the desktop. You can fetch all running instances, start new ones, or stop them when you're finished.
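As a hedged illustration of that workflow, the sketch below drives the instance lifecycle through the same MCP client shown earlier. The tool names (start_instance, get_instances, stop_instance) and argument shapes are assumptions for illustration; check the server's listTools output for the names it actually registers.
// Start a desktop sandbox, list running instances, then shut the sandbox down.
// Tool names and argument keys below are assumed, not confirmed by the server docs.
const started = await client.callTool({ name: "start_instance", arguments: {} });
console.log(JSON.stringify(started, null, 2)); // expected to include the instance id and stream URL

const instances = await client.callTool({ name: "get_instances", arguments: {} });
console.log(JSON.stringify(instances, null, 2));

await client.callTool({
  name: "stop_instance",
  arguments: { instance_id: "<INSTANCE_ID_FROM_START>" },
});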
Notes on security and usage: keep your Scrapybara API key confidential. If you set AUTH_STATE_ID, the server reuses that saved browser authentication state when it opens a browser session. If you plan to scale usage, consider creating separate instances for different tasks and monitoring resource utilization.
If an instance starts but you don’t see a live stream, verify that the instance is running and that the stream URL is reachable from your network. If you encounter authentication or agent errors, double‑check the SCRAPYBARA_API_KEY, ACT_MODEL, and AUTH_STATE_ID values you provided in the environment variables.
The server exposes tools for the following actions:
Launch a new Scrapybara Ubuntu instance to act as a desktop sandbox for web access and code execution.
Retrieve a list of all currently running Scrapybara instances.
Stop a running Scrapybara instance to free resources.
Execute a bash command inside a Scrapybara instance.
Issue actions to an agent to control the instance via mouse/keyboard and commands.
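To make the last two tool descriptions concrete, here is a similarly hedged sketch of calling them from the same MCP client. The tool names (bash, act) and argument keys (command, prompt) are assumptions based on the descriptions above, not a confirmed API.
// Run a shell command inside the sandbox (tool name and argument key are assumed).
const bashResult = await client.callTool({
  name: "bash",
  arguments: { command: "python3 --version" },
});
console.log(JSON.stringify(bashResult, null, 2));

// Ask the built-in agent to drive the desktop with mouse/keyboard actions (assumed shape).
const actResult = await client.callTool({
  name: "act",
  arguments: { prompt: "Open Firefox and search for Scrapybara" },
});
console.log(JSON.stringify(actResult, null, 2));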