Hyperbrowser MCP Server

Provides tools to scrape, extract structured data, and crawl webpages via an MCP-compatible server with browser-use agents.

Installation
Add the following to your MCP client configuration file.

Configuration

{
    "mcpServers": {
        "hyperbrowser": {
            "command": "npx",
            "args": [
                "-y",
                "hyperbrowser-mcp"
            ],
            "env": {
                "HYPERBROWSER_API_KEY": "YOUR-API-KEY"
            }
        }
    }
}

Hyperbrowser MCP Server provides tooling to scrape, extract structured data, and crawl web content, while offering access to browser-use agents. You can run it locally or integrate it with MCP clients to enable automated web data extraction and navigation workflows.

How to use

You use the Hyperbrowser MCP Server by connecting it to an MCP client or integration that can load and run an MCP server. The server exposes tools to scrape pages, extract structured data, and crawl through linked content. Use it to build workflows that fetch web data, convert it into structured formats, and drive automated browsing tasks with browser-use agents.
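As an illustration, connecting a client programmatically can be sketched with the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`). This is a minimal sketch, not a confirmed integration: the tool name `scrape_webpage` comes from the tool list below, but its argument shape (a `url` field here) is an assumption — inspect the server's actual tool schemas via `listTools()`.

```typescript
// Sketch only: requires Node.js, network access, and a valid Hyperbrowser API key.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the server the same way the JSON configuration above does.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "hyperbrowser-mcp"],
    env: { HYPERBROWSER_API_KEY: "YOUR-API-KEY" },
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Discover the tools the server exposes (scrape_webpage, crawl_webpages, ...).
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Call one tool; the `url` argument name is assumed, not confirmed.
  const result = await client.callTool({
    name: "scrape_webpage",
    arguments: { url: "https://example.com" },
  });
  console.log(result);

  await client.close();
}

main().catch(console.error);
```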

How to install

Prerequisites: Node.js must be installed, since the server is launched with npx.

Run the Hyperbrowser MCP Server locally with the following command, replacing <YOUR-HYPERBROWSER-API-KEY> with your actual API key:

npx hyperbrowser-mcp <YOUR-HYPERBROWSER-API-KEY>

Configuration notes

When launching the server through a supported MCP client integration, provide the API key via an environment variable. The standard configuration passes the key as HYPERBROWSER_API_KEY.

Client integration examples use a standard npx invocation with the API key supplied through the environment, as in the configuration block above.
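For a manual launch outside a client configuration file, the same environment variable can be set in the shell before invoking npx (YOUR-API-KEY is a placeholder for your real key):

```shell
# Placeholder key; requires Node.js. Substitute your real Hyperbrowser API key.
export HYPERBROWSER_API_KEY="YOUR-API-KEY"
npx -y hyperbrowser-mcp
```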

Available tools

scrape_webpage

Extract formatted content from a webpage, including markdown or screenshots, suitable for feeding into an LLM or downstream processing.

crawl_webpages

Traverse linked pages to collect and format content into LLM-friendly structures for easier processing and analysis.

extract_structured_data

Convert messy HTML into structured JSON, enabling downstream data consumption.

search_with_bing

Query the web using Bing and return results to steer data gathering.

browser_use_agent

Fast, lightweight browser automation using the Browser Use agent for quick interactions.

openai_computer_use_agent

General-purpose browser automation using OpenAI’s Computer Use model.

claude_computer_use_agent

Advanced browser tasks using Claude computer use for complex automation.

create_profile

Create a persistent Hyperbrowser profile to reuse settings across sessions.

delete_profile

Remove an existing persistent Hyperbrowser profile.

list_profiles

List all existing persistent Hyperbrowser profiles.