
Hyperbrowser MCP Server

Hyperbrowser - automatically generated by MCP Factory

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "bach-ai-tools-hyperbrowser": {
      "command": "npx",
      "args": [
        "-y",
        "bach-hyperbrowser-mcp"
      ],
      "env": {
        "HYPERBROWSER_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}

The Hyperbrowser MCP Server gives any MCP client access to structured data extraction and web crawling capabilities. It exposes a set of tools and ready-to-use browser agents, letting you scrape pages, extract structured data, and run automated browser tasks through a scalable, scriptable interface.

How to use

To use the Hyperbrowser MCP Server, start it in your environment and connect your MCP client. Provide an API key if your deployment requires one, then issue requests through your client to tools such as page scraping, data extraction, and web crawling.
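Under the hood, MCP clients talk JSON-RPC 2.0 to the server, so a tool invocation reduces to a `tools/call` request. The sketch below builds such a payload for the `scrape_webpage` tool; the `url` argument name is an illustrative assumption, not the server's published schema, so check the tool's actual input schema before relying on it.

```python
import json

# Build a JSON-RPC 2.0 request for an MCP tool call.
# The "url" argument name is an assumption for illustration;
# consult the server's tool schema for the real parameter names.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "scrape_webpage",
        "arguments": {"url": "https://example.com"},
    },
}

payload = json.dumps(request)
print(payload)
```

In practice your MCP client constructs and sends this message for you; the sketch only shows what crosses the wire when you invoke a tool.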

How to install

Prerequisites: ensure you have Node.js and npm installed on your machine. You will run commands from your terminal to start the MCP server.

Start the MCP server with the following command, which runs the server and passes your API key as an argument.

npx hyperbrowser-mcp <YOUR-HYPERBROWSER-API-KEY>

Configuration and usage notes

Configure the connection in your MCP client by providing the required API key and selecting the Hyperbrowser MCP server as the endpoint for scraping, data extraction, and browser automation tasks.

In client configurations, you typically set the API key as an environment variable and point the client at the Hyperbrowser MCP server command, such as the npx invocation shown above.

Development

If you are developing or contributing, you can run the server directly from source code.

git clone [email protected]:hyperbrowserai/mcp.git hyperbrowser-mcp
cd hyperbrowser-mcp

npm install
npm run build

node dist/server.js
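Once built, you can point your MCP client at the local build instead of the published npx package. This is a sketch assuming the same configuration shape as the example at the top of this page; adjust the path to wherever you cloned the repository.

```json
{
  "mcpServers": {
    "hyperbrowser-dev": {
      "command": "node",
      "args": ["/path/to/hyperbrowser-mcp/dist/server.js"],
      "env": {
        "HYPERBROWSER_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
```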

Available tools

scrape_webpage

Extract formatted content from a single webpage, including markdown and screenshots.

crawl_webpages

Navigate through linked pages and extract content in an LLM-friendly format.

extract_structured_data

Convert messy HTML into clean, structured JSON data.

search_with_bing

Query the web using Bing and return results for further processing.

browser_use_agent

Fast browser automation using the Browser Use agent for lightweight tasks.

openai_computer_use_agent

General-purpose automation using OpenAI’s CUA model for browser-like tasks.

claude_computer_use_agent

Complex browser tasks powered by Claude computer use.

create_profile

Create a persistent Hyperbrowser profile for sessions and state.

delete_profile

Remove an existing Hyperbrowser profile.

list_profiles

List all existing Hyperbrowser profiles.