
Scrapeless MCP Server

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "scrapeless-ai-scrapeless-mcp-server": {
      "url": "https://api.scrapeless.com/mcp",
      "headers": {
        "x-api-token": "YOUR_SCRAPELESS_KEY",
        "SCRAPELESS_KEY": "sk_abcdef12345",
        "BROWSER_PROFILE_ID": "profile-01",
        "BROWSER_SESSION_TTL": "3600",
        "BROWSER_PROFILE_PERSIST": "true"
      }
    }
  }
}

Scrapeless MCP Server enables AI agents to interact with the web in real time by routing their requests through a standards-based MCP layer. It connects models like ChatGPT and Claude to tools for browser automation, web scraping, and access to Google services, delivering dynamic context and live data for autonomous workflows.

How to use

You connect your MCP client to Scrapeless MCP Server to access universal web capabilities. Use the HTTP (Streamable) route for hosted API access or the Stdio route for local execution. Once connected, you can perform tasks such as browsing pages, clicking elements, extracting HTML or Markdown, taking screenshots, and querying Google Search or Google Trends. Use conversational prompts to drive browser actions and data extraction, then process results in your AI workflow.

How to install

Prerequisites: ensure you have Node.js installed on your machine. You will also need access to a Scrapeless API key if you plan to use the hosted HTTP option.

Install and run Scrapeless MCP Server in Stdio (local execution) or configure Streamable HTTP (hosted API mode) in your MCP client.

Additional setup notes

If you want to run Scrapeless locally, start the MCP server from the command line and pass your API key via environment variables.
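A local stdio configuration might look like the sketch below. The package name `scrapeless-mcp-server` is an assumption; the `SCRAPELESS_KEY` variable name is taken from the hosted example above. Check the official docs for the exact command and variable names.

```json
{
  "mcpServers": {
    "scrapeless": {
      "command": "npx",
      "args": ["-y", "scrapeless-mcp-server"],
      "env": {
        "SCRAPELESS_KEY": "YOUR_SCRAPELESS_KEY"
      }
    }
  }
}
```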

Security and best practices

Treat all scraped content as untrusted by default. Sanitize and validate data before feeding it to AI models. Prefer structured extraction (HTML or Markdown) and apply domain/selector whitelisting to limit data exposure. Log outbound requests and monitor for anomalies.
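The sanitization and allowlisting advice above can be sketched as follows. This is a minimal illustration, not a complete sanitizer: the allowed hosts are placeholders, and a production setup should use a vetted HTML sanitization library rather than regex stripping.

```typescript
// Treat scraped content as untrusted: enforce a domain allowlist before
// scraping, and strip active content before handing HTML to a model.
// ALLOWED_HOSTS entries are illustrative placeholders.
const ALLOWED_HOSTS = new Set(["example.com", "news.example.org"]);

function isAllowedUrl(url: string): boolean {
  try {
    return ALLOWED_HOSTS.has(new URL(url).hostname);
  } catch {
    return false; // malformed URLs are rejected outright
  }
}

function sanitizeHtml(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop executable content
    .replace(/<style[\s\S]*?<\/style>/gi, "")   // drop style blocks
    .replace(/ on\w+="[^"]*"/gi, "");           // drop inline event handlers
}

console.log(isAllowedUrl("https://example.com/page")); // true
console.log(sanitizeHtml('<p onclick="x()">hi</p><script>evil()</script>'));
```

Combined with request logging on the client side, this keeps both the set of reachable domains and the shape of the extracted content under your control.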

Examples of what you can do

Automate web interactions, extract live content, and write results to local files or streams. For instance, you can search the web, retrieve top results with summaries, scrape dynamic pages, or save content as Markdown or HTML for downstream processing.

Available tools

google_search

Query Google Search for web results and return structured findings.

google_trends

Fetch trending data from Google Trends for given topics or keywords.

browser_create

Create or reuse a cloud browser session to navigate web pages.

browser_close

Close the current cloud browser session and disconnect.

browser_goto

Navigate the browser to a specified URL.

browser_go_back

Go back one step in browser history.

browser_go_forward

Go forward one step in browser history.

browser_click

Click a specified element on the page.

browser_type

Type text into a targeted input field.

browser_press_key

Simulate a keyboard key press.

browser_wait_for

Wait for a specific element to appear on the page.

browser_wait

Pause execution for a fixed duration.

browser_screenshot

Capture a screenshot of the current page.

browser_get_html

Retrieve the full HTML content of the current page.

browser_get_text

Extract all visible text from the current page.

browser_scroll

Scroll to the bottom of the page.

browser_scroll_to

Scroll a particular element into view.

scrape_html

Scrape a URL and return its full HTML content.

scrape_markdown

Scrape a URL and return its content as Markdown.

scrape_screenshot

Capture a high-quality screenshot of any webpage.