🔍 A Model Context Protocol (MCP) server providing unified access to multiple search engines (Tavily, Brave, Kagi), AI tools (Perplexity, FastGPT), and content processing services (Jina AI, Kagi). Combines search, AI responses, content processing, and enhancement features through a single interface.
Configuration
{
  "mcpServers": {
    "spences10-mcp-omnisearch": {
      "command": "node",
      "args": [
        "/path/to/mcp-omnisearch/dist/index.js"
      ],
      "env": {
        "EXA_API_KEY": "your-exa-key",
        "KAGI_API_KEY": "your-kagi-key",
        "BRAVE_API_KEY": "your-brave-key",
        "GITHUB_API_KEY": "your-github-key",
        "TAVILY_API_KEY": "your-tavily-key",
        "JINA_AI_API_KEY": "your-jina-key",
        "FIRECRAWL_API_KEY": "your-firecrawl-key",
        "FIRECRAWL_BASE_URL": "http://localhost:3002",
        "PERPLEXITY_API_KEY": "your-perplexity-key"
      }
    }
  }
}

You set up MCP Omnisearch to access multiple search providers and AI tools through a single MCP interface. Install it, configure API keys for the providers you want, and then use an MCP client to run searches, get AI responses, and process content from many sources in one place.
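Because providers are enabled individually by their keys, a minimal configuration can list just one. A sketch with only Tavily enabled (the path and key are placeholders, as above):

```json
{
  "mcpServers": {
    "spences10-mcp-omnisearch": {
      "command": "node",
      "args": ["/path/to/mcp-omnisearch/dist/index.js"],
      "env": {
        "TAVILY_API_KEY": "your-tavily-key"
      }
    }
  }
}
```

With this configuration only the Tavily-backed tools are active; the other providers stay disabled until their keys are added.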
Launch the MCP Omnisearch server and connect with your MCP client. You can perform web searches across Tavily, Brave, Kagi, Exa, and GitHub search, and you can request AI responses from Perplexity, Kagi FastGPT, and Exa Answer. You can also process content with Jina AI Reader, Kagi Summarizer, Tavily Extract, and Firecrawl tools, then enrich or verify information with Jina Grounding and Kagi Enrichment.
Practical usage patterns include: performing a targeted domain search with Brave or Kagi operators, extracting and summarizing long articles, and requesting AI-generated explanations with citations. For code-focused work, use GitHub search tools with file, path, or repo qualifiers, then pull the matching snippets into your analysis. If you need structured data from pages, run Firecrawl Extract or Actions to interact with dynamic content before extraction.
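As a concrete sketch of the operator-based pattern above: an MCP client issues a standard JSON-RPC tools/call request carrying the query string, operators included. The tool name shown here (brave_search) and the argument shape are assumptions for illustration, not documented names.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "brave_search",
    "arguments": {
      "query": "site:github.com filetype:md \"model context protocol\""
    }
  }
}
```

The operators travel inside the query string itself; the server forwards them to the provider, which interprets them natively.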
Prerequisites: Node.js to run MCP Omnisearch directly, or a container runtime if you take the Docker route. You also need API keys for each provider you plan to enable.
Option 1: Quick Start with Docker Compose (recommended)
You will run MCP Omnisearch inside containers and supply keys via a .env file.
# Clone the implementation and move into the project
git clone https://github.com/spences10/mcp-omnisearch.git
cd mcp-omnisearch
# Create .env file with your API keys
echo "TAVILY_API_KEY=your-tavily-key" > .env
echo "KAGI_API_KEY=your-kagi-key" >> .env
echo "PERPLEXITY_API_KEY=your-perplexity-key" >> .env
echo "EXA_API_KEY=your-exa-key" >> .env
echo "GITHUB_API_KEY=your-github-key" >> .env
# Start the containers
# Start the containers
docker-compose up -d

Option 2: Run with Docker directly
You can build and run the MCP Omnisearch container, supplying API keys as environment variables.
docker build -t mcp-omnisearch .
docker run -d \
-p 8000:8000 \
-e TAVILY_API_KEY=your-tavily-key \
-e KAGI_API_KEY=your-kagi-key \
-e PERPLEXITY_API_KEY=your-perplexity-key \
-e EXA_API_KEY=your-exa-key \
-e GITHUB_API_KEY=your-github-key \
--name mcp-omnisearch \
mcp-omnisearch

Configure the server with environment variables for each provider you enable. Only the keys you provide will be activated. If you leave a key out, the corresponding provider remains disabled but other providers operate normally.
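Since only the keys you provide are activated, it can help to check which ones your shell actually exports before launching the container. A small sketch (the key names mirror the configuration above):

```shell
# Report which provider API keys are exported in the current shell.
# A missing key simply leaves that provider disabled on the server.
for key in TAVILY_API_KEY KAGI_API_KEY BRAVE_API_KEY PERPLEXITY_API_KEY \
           EXA_API_KEY GITHUB_API_KEY JINA_AI_API_KEY FIRECRAWL_API_KEY; do
  if [ -n "$(printenv "$key")" ]; then
    echo "$key is set (provider enabled)"
  else
    echo "$key is missing (provider disabled)"
  fi
done
```

Note that `printenv` only sees exported variables, so a key assigned without `export` will be reported as missing, just as the container would miss it.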
GitHub search requires an API token; a token with read access to public repositories is enough for searching public code. Set GITHUB_API_KEY to your token. Self-hosted Firecrawl is optional; provide FIRECRAWL_BASE_URL if you run Firecrawl locally.
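For the self-hosted Firecrawl case, the relevant entries in the env block would look like the fragment below (the port matches the local default shown in the configuration at the top; adjust it to your deployment):

```json
{
  "env": {
    "GITHUB_API_KEY": "your-github-key",
    "FIRECRAWL_API_KEY": "your-firecrawl-key",
    "FIRECRAWL_BASE_URL": "http://localhost:3002"
  }
}
```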
If you encounter rate limits, adjust your request frequency or enable additional providers to load-balance requests. Start with one provider, then add others as you acquire API keys.
Security considerations: use public-access tokens for GitHub as described, and keep API keys secure in your environment. You can revoke keys at any time if needed.
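One low-effort way to apply the advice above is to restrict filesystem access to the .env file that holds the keys:

```shell
# Ensure newly created files are owner-only, then lock down the key file.
umask 077
touch .env            # created with mode 600 under this umask
chmod 600 .env        # enforce explicitly in case the file already existed
ls -l .env            # verify: should show -rw-------
```

Also consider writing keys into .env with an editor rather than echoing them on the command line, since echoed values end up in shell history.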
The server exposes a broad set of search, AI response, content processing, and enhancement endpoints. These include Tavily, Brave, Kagi, Exa, and GitHub search; Perplexity AI, Kagi FastGPT, and Exa Answer for AI responses; Jina Reader, Kagi Summarizer, Tavily Extract, and the Firecrawl tools for content processing; and Jina Grounding and Kagi Enrichment for verification and enrichment.
Notes about environment variables: each provider has its own key, and you only enable the ones you have keys for. The server logs will indicate which providers are active at startup. The individual tools cover:
Query Tavily Search API for factual results with citations
Perform Brave Search with full operator support in the query string
Perform Kagi search with full operator support and optional language/no_cache filters
GitHub code search across public repositories with code-focused qualifiers
Discover GitHub repositories with enriched metadata
Find GitHub users and organizations with profile information
AI-powered web search using neural and keyword methods with domain filtering
AI-powered response generation combining live web results with AI models
Quick AI-generated answers with citations via Kagi FastGPT
Direct AI-generated answers via Exa Answer API
Convert URLs to clean text with optional image captioning and PDF support
Summarize content from pages, videos, and podcasts
Extract raw content from web pages with basic or advanced depth
Extract clean data from single URLs with enhanced formatting options
Deep crawling of a website with controllable depth for content extraction
Fast URL collection for site mapping
Structured data extraction with AI prompts
Page interactions before extraction to handle dynamic content
Extract full content from Exa search result IDs
Find semantically similar pages to a given URL
Supplementary content from specialized indexes for enrichment
Real-time fact verification against web knowledge
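The capability list above can also be discovered at runtime: MCP clients send a standard JSON-RPC tools/list request, and the server answers with the tools that are active given your configured keys.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```

Comparing this response before and after adding a provider key is a quick way to confirm that the new provider was picked up.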