WebSearch MCP Server
[Self-hosted] A Model Context Protocol (MCP) server implementation that provides a web search capability over the stdio transport. The server integrates with a WebSearch Crawler API to retrieve search results.
Configuration
{
  "mcpServers": {
    "mnhlt-websearch-mcp": {
      "command": "npx",
      "args": ["websearch-mcp"],
      "env": {
        "API_URL": "http://localhost:3001",
        "MAX_SEARCH_RESULT": "5"
      }
    }
  }
}
WebSearch-MCP is a Model Context Protocol server that gives AI assistants real-time web search capabilities through a stdio-based interface. It connects with a crawler service to fetch up-to-date results, enabling you to answer questions with current information while keeping interactions lightweight and scriptable.
You use WebSearch-MCP by connecting your MCP client to the local or remote WebSearch-MCP server. Once connected, your client can request web results for a query, control how many results to fetch, and filter by language or domains. The server delegates the actual search work to a crawler service and returns structured results suitable for AI assistants to present to users. Typical workflows include querying for recent news, validating claims with live sources, and gathering multiple perspectives on a topic.
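Under MCP's stdio transport, a client request is a JSON-RPC message. A search request from the workflow above might look like the following sketch; the tool name (`search`) and the argument names (`query`, `numResults`, `language`, `filterSite`) are assumptions, so check your server's advertised tool schema via `tools/list`:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "query": "latest TypeScript release notes",
      "numResults": 5,
      "language": "en",
      "filterSite": "devblogs.microsoft.com"
    }
  }
}
```

If a request omits the result count, the server falls back to the `MAX_SEARCH_RESULT` value from its environment.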
Prerequisites: you need Node.js and npm installed on your system. If you want to run the crawler service locally as described, Docker and Docker Compose are also required.
# Install the MCP server globally
npm install -g websearch-mcp
# Or run directly without installing
npx websearch-mcp
# If you want to install via the Smithery workflow for a specific client
npx -y @smithery/cli install @mnhlt/WebSearch-MCP --client claude
Configure the MCP server to point at your crawler API and adjust its defaults. Environment variables customize behavior, such as the crawler API URL and the maximum number of results returned when a request does not specify one.
API_URL=https://crawler.example.com MAX_SEARCH_RESULT=10 npx websearch-mcp
If you plan to run the crawler service in a container, use Docker Compose to start the crawler and its dependencies. The crawler exposes a health endpoint you can query to verify readiness before connecting your MCP client.
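The container workflow described above might look like the following sketch; the compose file layout and the `/health` path are assumptions to adapt to your deployment:

```shell
# Start the crawler stack from the project's compose file
# (service names and compose layout are assumptions; see the WebSearch-MCP repo)
docker compose up -d

# Poll the crawler's health endpoint until it reports ready
# (the /health path is an assumption; adjust to your deployment)
until curl -fsS http://localhost:3001/health >/dev/null; do
  sleep 2
done
echo "crawler is ready"
```

Once the health check passes, point `API_URL` at the same address and start the MCP server.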
Performs a web search using the configured crawler API and returns a list of results including titles, snippets, URLs, site names, and bylines.
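Based on the fields listed above, a single result entry might look like the following; the exact key names and casing are assumptions, so the tool's actual schema may differ:

```json
{
  "title": "Example article title",
  "snippet": "A short excerpt from the page that matched the query...",
  "url": "https://example.com/article",
  "siteName": "Example News",
  "byline": "Jane Doe"
}
```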