fal.ai MCP Server
Provides an MCP server to interact with fal.ai models and services, including listing models, searching, schemas, generation, queue management, and CDN uploads.
Configuration
{
"mcpServers": {
"am0y-mcp-fal": {
"command": "fastmcp",
"args": [
"dev",
"main.py"
],
"env": {
"FAL_KEY": "YOUR_FAL_API_KEY_HERE"
}
}
}
}

You set up and run a fal.ai MCP server to interact with fal.ai models and services, manage model execution (direct or queued), upload files to the CDN, and query or control ongoing tasks. This guide walks through practical usage, installation steps, and important notes so you can get up and running quickly.
Run the MCP server using the standard CLI workflow to start a local server that exposes model endpoints, queue management, and file uploads. You can drive the server from an MCP client to list models, search by keywords, fetch model schemas, generate content, and monitor or cancel queued tasks.
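The `mcpServers` configuration shown above can also be built programmatically, which is handy when scripting client setup. A minimal stdlib-only sketch (the server name `am0y-mcp-fal`, command, and args mirror the example config; the function name is illustrative):

```python
import json

def fal_mcp_config(fal_key: str) -> str:
    """Build the mcpServers entry from the example config as a JSON string."""
    config = {
        "mcpServers": {
            "am0y-mcp-fal": {
                "command": "fastmcp",
                "args": ["dev", "main.py"],
                "env": {"FAL_KEY": fal_key},
            }
        }
    }
    return json.dumps(config, indent=2)

if __name__ == "__main__":
    # Prints a config block you can paste into your MCP client settings.
    print(fal_mcp_config("YOUR_FAL_API_KEY_HERE"))
```

Keeping the key a function argument (rather than hard-coding it) makes it easy to pull from a secrets store instead of committing it to disk.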
Prerequisites you need before installing:

- Python 3.10+
- fastmcp
- httpx
- aiofiles
- A fal.ai API key

To install and prepare the server, follow these steps:
# Step 1: Prepare your environment
# (If you already have Python 3.10+ and pip, you can skip to Step 2)
# No code here, this is just a prerequisite note
# Step 2: Install required Python packages
pip install fastmcp httpx aiofiles
# Step 3: Set your Fal API key as an environment variable
export FAL_KEY="YOUR_FAL_API_KEY_HERE"
# Step 4: Run the server in development mode
fastmcp dev main.py

Notes for installation in Claude Desktop or other environments: you can optionally install with environment variables in place to enable secure access.
Server start options and environment configuration are shown above. The server can be started in development mode or installed for use within client environments.
Security: Keep your fal.ai API key secure. Do not expose FAL_KEY in public scripts or logs. Use environment-variable injection in production setups where possible.
Notes: If you plan to integrate this server with a desktop app such as Claude Desktop, you can pass environment variables at install time from the CLI; for example, the install command may accept an explicit API key flag.
The server exposes the following capabilities:

- List available fal.ai models, with optional pagination to browse all options.
- Search for models by keyword to filter results based on your needs.
- Retrieve the OpenAPI schema for a specific model to understand its inputs, outputs, and constraints.
- Generate content with a specified fal.ai model, with optional queue support for asynchronous processing.
- Fetch the result of a previously queued request using its URL.
- Check the current status of a queued request by URL.
- Cancel a previously queued request by URL to free up resources.
- Upload files to the fal.ai CDN for use with models or assets.
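The status, result, and cancel capabilities pair naturally with a small client-side polling loop. A hedged sketch of the decision logic only (the status strings `IN_QUEUE`, `IN_PROGRESS`, `COMPLETED` follow fal.ai's queue API; the backoff parameters are arbitrary choices, not values from this server):

```python
def is_terminal(status: str) -> bool:
    """Whether a queued request has finished and polling should stop.

    Treats anything other than the known in-flight states as terminal,
    so a poll loop cannot spin forever on an unexpected status.
    """
    return status not in {"IN_QUEUE", "IN_PROGRESS"}

def next_poll_delay(attempt: int, base: float = 0.5, cap: float = 10.0) -> float:
    """Exponential backoff between status checks, capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))
```

A client would check the status URL, sleep for `next_poll_delay(attempt)`, and fetch the result URL once `is_terminal` returns True (or call cancel to free up resources).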