Semantic search for Hex documentation, right in your editor ✨
HexDocs MCP provides a TypeScript server that implements the Model Context Protocol (MCP) and coordinates with an Elixir binary to fetch, process, and semantically search Hex package documentation for AI-powered tooling.
Run the MCP server alongside an MCP-compatible client to enable semantic search over Hex package documentation. Start by configuring your client to connect to the HexDocs MCP server, then fetch documentation for the packages you work with and run semantic searches against the resulting embeddings.
The typical workflow is: fetch docs for a package (optionally pinned to a version), generate embeddings, regenerate them after changes, and query them with a natural-language description of what you are looking for. The server handles downloading and preparing the necessary binaries and embeddings so you can focus on documentation discovery and AI-assisted workflows.
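The workflow above maps onto standard MCP tool calls. As an illustrative sketch, a client might issue a `tools/call` request like the one below; the tool name `semantic_search` comes from this page, but the argument names (`query`, `packageName`) are assumptions for illustration — use the server's `tools/list` response to discover the exact input schema.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "semantic_search",
    "arguments": {
      "query": "how do I configure a supervision tree?",
      "packageName": "phoenix"
    }
  }
}
```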
Prerequisites you need before installing and using HexDocs MCP:
- Ollama for embedding generation; pull the recommended model and ensure the service is running before issuing embedding-related commands.
- Elixir 1.16+ and Erlang/OTP 26+ for local development and Elixir tooling.
- Mix, the Elixir build tool (included with every Elixir installation).
- Node.js 22 or later for the MCP server runtime.
Step 1 — Configure your MCP client. Create an MCP JSON configuration that points your client at the HexDocs MCP server; the following snippet connects to the server via npx.
{
"mcpServers": {
"hexdocs-mcp": {
"command": "npx",
"args": [
"-y",
"[email protected]"
]
}
}
}
You can customize where the server stores data and the Mix project paths used for documentation processing.
Environment variables you can set to configure the tool:
export HEXDOCS_MCP_PATH=~/.hexdocs_mcp
export HEXDOCS_MCP_MIX_PROJECT_PATHS="/path/to/project1/mix.exs,/path/to/project2/mix.exs"
Configure the MCP server so your client can start the HexDocs MCP server using the MCP protocol.
{
"mcpServers": {
"hexdocs-mcp": {
"command": "npx",
"args": [
"-y",
"[email protected]"
],
"env": {
"HEXDOCS_MCP_PATH": "/path/to/custom/directory",
"HEXDOCS_MCP_MIX_PROJECT_PATHS": "/path/to/project1/mix.exs,/path/to/project2/mix.exs"
}
}
}
}
The server coordinates with an embedding model provided via Ollama. Ensure you pull the recommended model and have Ollama running before generating embeddings. The default embedding model and the workflow to fetch and search documentation are designed to deliver high-quality semantic results across platforms.
Use the MCP server with any MCP-compatible tooling to fetch documentation, generate embeddings, and run semantic searches. From the Elixir side, the operations you will use most often are fetch_docs for a package (optionally with a specific version, or using configured project paths), semantic_search to query the existing embeddings, and a check for whether embeddings already exist for a given package.
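As a concrete sketch, fetching documentation for a specific package version could look like the `tools/call` request below. The tool name `fetch_docs` comes from this page; the argument names (`packageName`, `version`) are assumptions for illustration — consult the server's `tools/list` response for the real schema.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "fetch_docs",
    "arguments": {
      "packageName": "ecto",
      "version": "3.12.0"
    }
  }
}
```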
This project uses tooling to manage development tasks and ensure consistent environments across contributors. If you are contributing, you can follow the standard setup and build steps to verify changes locally.
The server exposes the following operations:
- fetch_docs — fetches documentation for a package (and optionally a version) and stores it for embedding generation.
- semantic_search — searches the existing embeddings for relevant results based on a user query.
- An embeddings check — reports whether embeddings exist for a given package (and optionally a version).
- A model pull — pulls the recommended embedding model into Ollama for embedding generation.