Azure AI Foundry MCP Server
This MCP server connects to Azure AI Foundry to provide a unified set of tools for models, knowledge, evaluation, and more. It enables you to explore, build, deploy, evaluate, and fine-tune AI models within Azure AI Foundry, all through a consistent MCP interface you can run locally or remotely.

Configuration

```json
{
  "mcpServers": {
    "youssef7788-mcp-foundry": {
      "command": "uvx",
      "args": [
        "--prerelease=allow",
        "--from",
        "git+https://github.com/azure-ai-foundry/mcp-foundry.git",
        "run-azure-ai-foundry-mcp",
        "--envFile",
        "${workspaceFolder}/.env"
      ],
      "env": {
        "GITHUB_TOKEN": "YOUR_GITHUB_TOKEN",
        "EVAL_DATA_DIR": "path/to/eval/data",
        "AZURE_CLIENT_ID": "YOUR_CLIENT_ID",
        "AZURE_TENANT_ID": "YOUR_TENANT_ID",
        "AZURE_CLIENT_SECRET": "YOUR_CLIENT_SECRET",
        "AZURE_OPENAI_API_KEY": "YOUR_AZURE_OPENAI_API_KEY",
        "AZURE_OPENAI_ENDPOINT": "https://<your-openai-endpoint>.cognitiveservices.azure.com/",
        "AZURE_OPENAI_DEPLOYMENT": "YOUR_DEPLOYMENT_NAME",
        "AZURE_OPENAI_API_VERSION": "2023-06-01-preview",
        "AZURE_AI_SEARCH_ENDPOINT": "https://<your-search-service-name>.search.windows.net/",
        "AZURE_AI_SEARCH_API_KEY": "YOUR_SEARCH_API_KEY",
        "AZURE_AI_SEARCH_API_VERSION": "2025-03-01-preview",
        "AZURE_AI_PROJECT_ENDPOINT": "https://<your-ai-project-endpoint>",
        "SEARCH_AUTHENTICATION_METHOD": "service-principal"
      }
    }
  }
}
```
You interact with the MCP server using a compatible MCP client. Start the server locally, then point your MCP client to the local stdio interface or to a remote HTTP endpoint if you deploy it behind a web gateway. You can access tools to explore models, manage indexes and documents, execute evaluations, and handle fine-tuning tasks. Use the client to invoke specific tools by name, pass the required inputs, and receive structured responses that you can display or further process in your application.
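Under the hood, MCP tool invocations are JSON-RPC 2.0 messages: the client sends a `tools/call` request naming the tool and passing its arguments, and receives a structured result. A minimal sketch of what such a request looks like on the wire (the tool name and arguments here are hypothetical, chosen only for illustration):

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, as used by the MCP protocol."""
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(message)

# Hypothetical tool name and arguments for illustration only.
payload = build_tool_call(1, "list_models_from_model_catalog", {"license": "mit"})
print(payload)
```

In practice an MCP client library frames and transports these messages for you (over stdio for a locally launched server); the sketch only shows the request shape.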
Prerequisites: install the runner you will use to launch stdio MCP servers, such as uvx (part of the uv toolchain).
Step 1: Create your workspace and environment file if you plan to use environment variables. Create a file named .env at the root of your workspace and add the necessary credentials and configuration values.
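The .env file is a plain list of KEY=VALUE lines. As a rough sketch of how such a file is interpreted (real loaders such as python-dotenv handle quoting and edge cases; this minimal parser is illustrative only):

```python
def parse_env(text: str) -> dict[str, str]:
    """Minimal .env parser: one KEY=VALUE per line; '#' comments and blanks skipped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # ignore malformed lines that contain no '='
            env[key.strip()] = value.strip().strip('"')
    return env

sample = """
# Azure credentials (placeholders)
AZURE_TENANT_ID=YOUR_TENANT_ID
AZURE_OPENAI_ENDPOINT="https://example.cognitiveservices.azure.com/"
"""
print(parse_env(sample))
```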
Step 2: Create the MCP configuration file in your workspace at .vscode/mcp.json with the stdio server configuration provided below.
Step 3: Start the MCP server from your editor or terminal. Use your editor's Start control for configured MCP servers, or run the configured uvx command directly to launch the server.
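Step 3 amounts to running the same uvx command that the configuration encodes. A sketch of assembling and launching that command from a script (the launch is guarded so the command assembly can be inspected even where uvx is not installed):

```python
import shutil
import subprocess

def build_launch_command(env_file: str) -> list[str]:
    """Assemble the uvx command encoded in the MCP configuration."""
    return [
        "uvx",
        "--prerelease=allow",
        "--from", "git+https://github.com/azure-ai-foundry/mcp-foundry.git",
        "run-azure-ai-foundry-mcp",
        "--envFile", env_file,
    ]

if __name__ == "__main__":
    cmd = build_launch_command(".env")
    if shutil.which("uvx"):  # only launch when uvx is actually available
        # Starts the server on stdio; an MCP client attaches to this process.
        subprocess.run(cmd)
    else:
        print("uvx not found; command would be:", " ".join(cmd))
```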
```json
{
  "servers": {
    "foundry": {
      "type": "stdio",
      "command": "uvx",
      "args": [
        "--prerelease=allow",
        "--from",
        "git+https://github.com/azure-ai-foundry/mcp-foundry.git",
        "run-azure-ai-foundry-mcp",
        "--envFile",
        "${workspaceFolder}/.env"
      ]
    }
  }
}
```

Environment variables (examples shown; replace with your actual values) can be provided in a .env file or in your environment when launching the server. These variables control access to Azure AI Search, Azure OpenAI, evaluation datasets, and other service endpoints. See the env block in the configuration above for the full list.
Environment variables you may configure include tokens, endpoints, and API keys for your Azure services. Ensure you provide the correct values for authentication methods and endpoints used by search, OpenAI, and evaluation components.
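Missing or misspelled variables are a common failure mode, so a small pre-flight check before starting the server can save debugging time. The variable names below mirror the configuration shown earlier; which of them are truly required depends on the components you use (search, OpenAI, evaluation, service-principal authentication):

```python
import os

# Variables from the configuration above; which are required depends on
# the features you use (search, OpenAI, evaluation, service-principal auth).
EXPECTED_VARS = [
    "AZURE_CLIENT_ID",
    "AZURE_TENANT_ID",
    "AZURE_CLIENT_SECRET",
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_DEPLOYMENT",
    "AZURE_AI_SEARCH_ENDPOINT",
    "AZURE_AI_SEARCH_API_KEY",
]

def missing_vars(environ: dict, expected=EXPECTED_VARS) -> list[str]:
    """Return expected variables that are absent or empty in the given mapping."""
    return [name for name in expected if not environ.get(name)]

if __name__ == "__main__":
    gaps = missing_vars(dict(os.environ))
    if gaps:
        print("Missing or empty:", ", ".join(gaps))
```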
Tools

Model catalog and deployment:
- Retrieves a list of supported models from the Azure AI Foundry catalog.
- Retrieves a list of state-of-the-art AI models from Microsoft Research available in Azure AI Foundry Labs.
- Retrieves detailed information for a specific model from the Azure AI Foundry catalog.
- Provides comprehensive instructions and setup guidance for getting started with models from Azure AI Foundry and Azure AI Foundry Labs.
- Gets model quotas for a specific Azure location.
- Creates an Azure AI Services account.
- Retrieves a list of deployments from Azure AI Services.
- Deploys a model on Azure AI Services.
- Creates a new Azure AI Foundry project.

Azure AI Search:
- Retrieves the names of all indexes from the AI Search service.
- Retrieves all index schemas from the AI Search service.
- Retrieves the schema for a specific index from the AI Search service.
- Creates a new index.
- Modifies the index definition of an existing index.
- Removes an existing index.
- Adds a document to an index.
- Removes a document from an index.
- Searches a specific index to retrieve matching documents.
- Returns the total number of documents in an index.
- Retrieves the names of all indexers from the AI Search service.
- Retrieves the full definition of a specific indexer from the AI Search service.
- Creates a new indexer in the AI Search service with the skill set, index, and data source.
- Deletes an indexer from the AI Search service by name.
- Retrieves the names of all data sources from the AI Search service.
- Retrieves the full definition of a specific data source.
- Retrieves the names of all skill sets from the AI Search service.
- Retrieves the full definition of a specific skill set.

Utilities:
- Retrieves the contents of a local file path (sample JSON, document, etc.).
- Retrieves the contents of a URL (sample JSON, document, etc.).

Evaluation:
- Lists all available text evaluators.
- Lists all available agent evaluators.
- Shows input requirements for each text evaluator.
- Shows input requirements for each agent evaluator.
- Runs one or more text evaluators on a JSONL file or on inline content.
- Converts evaluation output into a readable Markdown report.
- Queries an agent and evaluates its response using selected evaluators (end-to-end agent evaluation).
- Evaluates a single agent interaction with specific data (query, response, tool calls, definitions).

Agents:
- Lists all Azure AI Agents available in the configured project.
- Sends a query to a specified agent.
- Queries the default agent defined in environment variables.

Fine-tuning:
- Retrieves detailed status and metadata for a specific fine-tuning job, including job state, model, creation and finish times, hyperparameters, and any errors.
- Lists all fine-tuning jobs in the resource, returning job IDs and their current statuses for easy tracking and management.
- Retrieves a chronological list of all events for a specific fine-tuning job, including timestamps and detailed messages for each training step, evaluation, and completion.
- Retrieves training and evaluation metrics for a specific fine-tuning job, including loss curves, accuracy, and other relevant performance indicators for monitoring and analysis.
- Lists all files available for fine-tuning in Azure OpenAI, including file IDs, names, purposes, and statuses.

Dynamic (Swagger-based):
- Executes any tool dynamically generated from the Swagger specification, allowing flexible API calls for advanced scenarios.
- Lists all dynamically registered tools from the Swagger specification, enabling discovery and automation of available API endpoints.
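The evaluation tools above consume JSONL data, one example per line. The exact field names depend on the evaluator you select; the query/response/ground_truth fields below are an illustrative assumption, not the server's documented schema:

```python
import json

# Hypothetical evaluation rows; field names depend on the chosen evaluator.
rows = [
    {"query": "What is Azure AI Foundry?",
     "response": "A platform for building and deploying AI models on Azure.",
     "ground_truth": "Azure AI Foundry is Microsoft's platform for building AI apps."},
    {"query": "What does MCP stand for?",
     "response": "Model Context Protocol.",
     "ground_truth": "Model Context Protocol."},
]

# JSONL: one JSON object per line, no enclosing array.
jsonl = "\n".join(json.dumps(row) for row in rows)
with open("eval_data.jsonl", "w", encoding="utf-8") as f:
    f.write(jsonl + "\n")
print(jsonl.splitlines()[0])
```

Point EVAL_DATA_DIR (or the evaluation tool's file argument) at the directory containing files like this one.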