Foundry MCP Server

Cloud-hosted MCP endpoint for Foundry with tools for models, knowledge, evaluations, and agent workflows.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "microsoft-foundry-mcp-foundry": {
      "command": "uvx",
      "args": [
        "--prerelease=allow",
        "--from",
        "git+https://github.com/azure-ai-foundry/mcp-foundry.git",
        "run-azure-ai-foundry-mcp",
        "--envFile",
        "${workspaceFolder}/.env"
      ],
      "env": {
        "GITHUB_TOKEN": "<GITHUB_TOKEN>",
        "EVAL_DATA_DIR": "<EVAL_DATA_DIR>",
        "AZURE_CLIENT_ID": "<AZURE_CLIENT_ID>",
        "AZURE_TENANT_ID": "<AZURE_TENANT_ID>",
        "AZURE_CLIENT_SECRET": "<AZURE_CLIENT_SECRET>",
        "AZURE_OPENAI_API_KEY": "<AZURE_OPENAI_API_KEY>",
        "AZURE_OPENAI_ENDPOINT": "<AZURE_OPENAI_ENDPOINT>",
        "AZURE_AI_SEARCH_API_KEY": "<AZURE_AI_SEARCH_API_KEY>",
        "AZURE_OPENAI_DEPLOYMENT": "<AZURE_OPENAI_DEPLOYMENT>",
        "AZURE_AI_SEARCH_ENDPOINT": "https://mysearchservice.search.windows.net/",
        "AZURE_OPENAI_API_VERSION": "<AZURE_OPENAI_API_VERSION>",
        "AZURE_AI_PROJECT_ENDPOINT": "<AZURE_AI_PROJECT_ENDPOINT>",
        "AZURE_AI_SEARCH_API_VERSION": "<AZURE_AI_SEARCH_API_VERSION>",
        "SEARCH_AUTHENTICATION_METHOD": "service-principal"
      }
    }
  }
}

You can use the Foundry MCP Server to orchestrate and interact with Azure AI Foundry resources through MCP-compliant clients. It provides tools to work with models, knowledge indexes, evaluations, and agent services in a cloud-hosted, secure environment, enabling multi-agent workflows and on-behalf-of authentication. This guide shows you how to use the server, how to install and run it locally when needed, and important configuration notes to get you started quickly.

How to use

Use an MCP client to discover, invoke, and manage tools that operate on models, knowledge bases, evaluations, and agent services. You can start by launching the server in your development environment and then issuing requests to interact with Azure AI Foundry resources through the supported MCP tools. The workflow supports running local or remote servers, loading environment variables from a file, and using the standard MCP client protocol to perform actions such as listing models, querying indexes, running evaluations, or querying agents.
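As a concrete illustration, MCP clients exchange JSON-RPC 2.0 messages with the server over the stdio transport. The sketch below builds the two request envelopes a client would send to discover tools and then invoke one of them (the tool name used here is taken from the list further down this page):

```python
import json

def mcp_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request as used by the MCP stdio transport."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Discover the tools the server exposes, then invoke one of them.
discover = mcp_request(1, "tools/list")
invoke = mcp_request(2, "tools/call", {
    "name": "list_models_from_model_catalog",
    "arguments": {},
})
print(discover)
print(invoke)
```

In practice an MCP client library handles this framing for you; the point is that every tool listed below is reachable through the same `tools/call` method.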

How to install

Prerequisites: the configurations on this page launch the server with uvx, which ships with the uv Python tool manager, so make sure uv is installed and on your PATH.

1) Install uv (for example, by following the official installation instructions at https://docs.astral.sh/uv/) so that the uvx command is available in your development workspace.
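To verify the setup outside an MCP client, the same invocation that the JSON configuration encodes can be run manually. This assumes uv is installed and a .env file exists in the current directory:

```shell
# Check that the uvx runner from uv is available.
uvx --version

# Launch the server manually, mirroring the args in the MCP configuration.
uvx --prerelease=allow \
  --from git+https://github.com/azure-ai-foundry/mcp-foundry.git \
  run-azure-ai-foundry-mcp --envFile .env
```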

Additional setup and runtime configuration

The following local setup example shows how to configure a standard MCP server in a VS Code workspace. Create the MCP configuration file at .vscode/mcp.json with the stdio server entry below. This configuration uses uvx to run a local MCP server and points to an environment file for sensitive data.

{
  "servers": {
    "mcp_foundry_server": {
      "type": "stdio",
      "command": "uvx",
      "args": [
        "--prerelease=allow",
        "--from",
        "git+https://github.com/azure-ai-foundry/mcp-foundry.git",
        "run-azure-ai-foundry-mcp",
        "--envFile",
        "${workspaceFolder}/.env"
      ]
    }
  }
}

Environment variables you may use

To securely pass information such as API keys and endpoints to the MCP server, you can place environment variables in a .env file in your workspace. The variables shown here illustrate common needs for model discovery, knowledge indexing, and evaluation workflows.

| Category   | Variable                     | Required?                        | Description                                                 |
| ---------- | ---------------------------- | -------------------------------- | ----------------------------------------------------------- |
| Model      | GITHUB_TOKEN                 | No                               | GitHub token for testing models for free, with rate limits. |
| Knowledge  | AZURE_AI_SEARCH_ENDPOINT     | Always                           | Endpoint URL for your Azure AI Search service.              |
|            | AZURE_AI_SEARCH_API_VERSION  | No                               | API version to use. Defaults to 2025-03-01-preview.         |
|            | SEARCH_AUTHENTICATION_METHOD | Always                           | Either service-principal or api-search-key.                 |
|            | AZURE_TENANT_ID              | Yes, with service-principal      | Azure AD tenant ID.                                         |
|            | AZURE_CLIENT_ID              | Yes, with service-principal      | Service principal client ID.                                |
|            | AZURE_CLIENT_SECRET          | Yes, with service-principal      | Service principal client secret.                            |
|            | AZURE_AI_SEARCH_API_KEY      | Yes, with api-search-key         | API key for your Azure AI Search service.                   |
| Evaluation | EVAL_DATA_DIR                | Always                           | Path to the JSONL evaluation dataset.                       |
|            | AZURE_OPENAI_ENDPOINT        | Yes, for text quality evaluators | Endpoint for Azure OpenAI.                                  |
|            | AZURE_OPENAI_API_KEY         | Yes, for text quality evaluators | API key for Azure OpenAI.                                   |
|            | AZURE_OPENAI_DEPLOYMENT      | Yes, for text quality evaluators | Deployment name (e.g., gpt-4o).                             |
|            | AZURE_OPENAI_API_VERSION     | Yes, for text quality evaluators | Version of the OpenAI API.                                  |
|            | AZURE_AI_PROJECT_ENDPOINT    | Yes, for agent services          | Used for Azure AI Agent querying and evaluation.            |
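For example, a .env file for a service-principal setup with text quality evaluation might look like the following. All values are placeholders; substitute your own resource details:

```shell
# Knowledge (Azure AI Search, service-principal authentication)
AZURE_AI_SEARCH_ENDPOINT=https://mysearchservice.search.windows.net/
SEARCH_AUTHENTICATION_METHOD=service-principal
AZURE_TENANT_ID=<your-tenant-id>
AZURE_CLIENT_ID=<your-client-id>
AZURE_CLIENT_SECRET=<your-client-secret>

# Evaluation (text quality evaluators)
EVAL_DATA_DIR=./eval-data
AZURE_OPENAI_ENDPOINT=https://myresource.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_DEPLOYMENT=gpt-4o
AZURE_OPENAI_API_VERSION=<api-version>
```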

Notes on usage and authentication

If you are using agent tools or safety evaluators, ensure your Azure project credentials are valid. If you are only performing text quality evaluation, the Azure OpenAI settings (endpoint, API key, deployment, and API version) are sufficient.
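A small pre-flight check along these lines can catch missing settings before a run. Note that `REQUIRED_VARS` and `check_env` are hypothetical helpers written for this sketch, not part of the server; the variable groupings follow the table above:

```python
import os

# Hypothetical mapping of workflows to the variables each one needs.
REQUIRED_VARS = {
    "text-quality": [
        "AZURE_OPENAI_ENDPOINT",
        "AZURE_OPENAI_API_KEY",
        "AZURE_OPENAI_DEPLOYMENT",
        "AZURE_OPENAI_API_VERSION",
    ],
    "agent": ["AZURE_AI_PROJECT_ENDPOINT"],
}

def check_env(workflow, environ=os.environ):
    """Return the list of variables that are missing for the given workflow."""
    return [v for v in REQUIRED_VARS[workflow] if not environ.get(v)]

# Example: only the API key is set, so three variables are reported missing.
missing = check_env("text-quality", environ={"AZURE_OPENAI_API_KEY": "k"})
print(missing)
```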

Security and access control

The server supports secure, authenticated access via standard MCP client flows. Use environment-based configuration to avoid exposing sensitive data in code or logs.

Troubleshooting

If the server fails to start, verify that the runtime command and arguments match the provided configuration. Ensure the environment file path is correct and all required environment variables are present. Check for missing dependencies and confirm network access to required Azure services.

Available tools

list_models_from_model_catalog

Retrieves a list of supported models from the Azure AI Foundry catalog.

list_azure_ai_foundry_labs_projects

Retrieves a list of state-of-the-art AI models from Microsoft Research available in Azure AI Foundry Labs.

get_model_details_and_code_samples

Retrieves detailed information for a specific model from the Azure AI Foundry catalog.

get_prototyping_instructions_for_github_and_labs

Provides comprehensive instructions and setup guidance for starting to work with models from Azure AI Foundry and Azure AI Foundry Labs.

get_model_quotas

Get model quotas for a specific Azure location.

create_azure_ai_services_account

Creates an Azure AI Services account.

list_deployments_from_azure_ai_services

Retrieves a list of deployments from Azure AI Services.

deploy_model_on_ai_services

Deploys a model on Azure AI Services.

create_foundry_project

Creates a new Azure AI Foundry project.

list_index_names

Retrieves all index names from the AI Search Service.

list_index_schemas

Retrieves all index schemas from the AI Search Service.

retrieve_index_schema

Retrieves the schema for a specific index from the AI Search Service.

create_index

Creates a new index.

modify_index

Modifies the definition of an existing index.

delete_index

Removes an existing index.

add_document

Adds a document to an index.

delete_document

Removes a document from an index.

query_index

Searches a specific index to retrieve matching documents.

get_document_count

Returns the total number of documents in an index.

list_indexers

Retrieves all indexer names from the AI Search Service.

get_indexer

Retrieves the full definition of a specific indexer from the AI Search Service.

create_indexer

Creates a new indexer in the Search Service with the associated skill set, index, and data source.

delete_indexer

Deletes an indexer from the AI Search Service by name.

list_data_sources

Retrieves all data source names from the AI Search Service.

get_data_source

Retrieves the full definition of a specific data source.

list_skill_sets

Retrieves all skill set names from the AI Search Service.

get_skill_set

Retrieves the full definition of a specific skill set.

fk_fetch_local_file_contents

Retrieves the contents of a local file path (sample JSON, document, etc.).

fk_fetch_url_contents

Retrieves the contents of a URL (sample JSON, document, etc.).

list_text_evaluators

List all available text evaluators.

list_agent_evaluators

List all available agent evaluators.

get_text_evaluator_requirements

Show input requirements for each text evaluator.

get_agent_evaluator_requirements

Show input requirements for each agent evaluator.

run_text_eval

Run one or multiple text evaluators on a JSONL file or content.

format_evaluation_report

Convert evaluation output into a readable Markdown report.

agent_query_and_evaluate

Query an agent and evaluate its response using selected evaluators.

run_agent_eval

Evaluate a single agent interaction with specific data (query, response, tool calls, definitions).

list_agents

List all Azure AI Agents available in the configured project.

connect_agent

Send a query to a specified agent.

query_default_agent

Query the default agent defined in environment variables.

fetch_finetuning_status

Retrieves detailed status and metadata for a specific fine-tuning job, including job state, model, creation and finish times, hyperparameters, and any errors.

list_finetuning_jobs

Lists all fine-tuning jobs in the resource, returning job IDs and their current statuses for easy tracking and management.

get_finetuning_job_events

Retrieves a chronological list of all events for a specific fine-tuning job, including timestamps and detailed messages for each training step, evaluation, and completion.

get_finetuning_metrics

Retrieves training and evaluation metrics for a specific fine-tuning job, including loss curves, accuracy, and other relevant performance indicators for monitoring and analysis.

list_finetuning_files

Lists all files available for fine-tuning in Azure OpenAI, including file IDs, names, purposes, and statuses.

execute_dynamic_swagger_action

Executes any tool dynamically generated from the Swagger specification, allowing flexible API calls for advanced scenarios.

list_dynamic_swagger_tools

Lists all dynamically registered tools from the Swagger specification, enabling discovery and automation of available API endpoints.