
fal.ai MCP Server

Provides an MCP server to interact with fal.ai models and services, including listing models, searching, schemas, generation, queue management, and CDN uploads.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "am0y-mcp-fal": {
      "command": "fastmcp",
      "args": [
        "dev",
        "main.py"
      ],
      "env": {
        "FAL_KEY": "YOUR_FAL_API_KEY_HERE"
      }
    }
  }
}

The fal.ai MCP Server lets you interact with fal.ai models and services: run models directly or through the queue, upload files to the CDN, and query or cancel in-flight requests. This guide walks through installation, practical usage, and important notes so you can get up and running quickly.

How to use

Run the MCP server using the standard CLI workflow to start a local server that exposes model endpoints, queue management, and file uploads. You can drive the server from an MCP client to list models, search by keywords, fetch model schemas, generate content, and monitor or cancel queued tasks.
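Servers built with fastmcp expose each capability as a named tool, and an MCP client drives them by tool name. The toy registry below is a stand-in for that pattern (it is not the real fastmcp API; the handler bodies are placeholders), just to illustrate the name-to-handler mapping a client sees:

```python
# Toy stand-in for the tool-registration pattern a FastMCP server such as
# main.py typically uses. "tool" here is a minimal registry decorator, not
# the real fastmcp decorator; handler bodies are placeholders.
TOOLS = {}

def tool(fn):
    """Register a function under its name so a client can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def models(page: int = 1):
    """List available fal.ai models (placeholder body)."""
    return {"page": page, "models": []}

@tool
def search(keywords: str):
    """Search models by keyword (placeholder body)."""
    return {"query": keywords, "results": []}
```

An MCP client would then invoke, say, `search` with `{"keywords": "flux"}` and receive the handler's structured result.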

How to install

Prerequisites:

- Python 3.10+
- fastmcp
- httpx
- aiofiles
- A fal.ai API key

To install and prepare the server, follow these steps:

# Step 1: Prepare your environment
# (If you already have Python 3.10+ and pip, you can skip to Step 2)
# No code here, this is just a prerequisite note

# Step 2: Install required Python packages
pip install fastmcp httpx aiofiles

# Step 3: Set your Fal API key as an environment variable
export FAL_KEY="YOUR_FAL_API_KEY_HERE"

# Step 4: Run the server in development mode
fastmcp dev main.py

Installation notes

For Claude Desktop and similar environments, you can install the server with the required environment variables already in place so the API key is supplied securely rather than typed into scripts.

Server start options

The server can be started in development mode (via fastmcp dev) or installed for use within client environments; in either case, environment configuration is supplied through the FAL_KEY variable.

Security

Keep your fal.ai API key secure: do not expose FAL_KEY in public scripts or logs, and use environment-variable injection in production setups where possible.
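A minimal sketch of reading the key from the environment at startup instead of hardcoding it; `load_fal_key` is a hypothetical helper name, not part of this server's actual code:

```python
import os

def load_fal_key() -> str:
    """Read the fal.ai API key from the environment rather than hardcoding it.

    Failing fast with a clear message beats a confusing auth error later.
    """
    key = os.environ.get("FAL_KEY")
    if not key:
        raise RuntimeError("FAL_KEY is not set; export it before starting the server")
    return key
```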

Notes

If you plan to integrate this server with a desktop app such as Claude Desktop, you can install it with environment variables set from the CLI; for example, the install command may accept an explicit API-key flag.
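As a sketch of such an install command with the key passed as a flag — the exact flag spelling varies across fastmcp versions, so check `fastmcp install --help` before relying on it:

```shell
# Install into Claude Desktop with the key injected at install time.
# -e/--env-var is the flag name in some fastmcp releases; verify with
# `fastmcp install --help` for your version.
fastmcp install main.py -e FAL_KEY="YOUR_FAL_API_KEY_HERE"
```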

Available tools

models

List available fal.ai models with optional pagination to browse all options.

search

Search for models by keywords to filter results based on your needs.

schema

Retrieve the OpenAPI schema for a specific model to understand inputs, outputs, and constraints.

generate

Generate content using a specified fal.ai model, with optional queue support for asynchronous processing.
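As a rough sketch of what a generate call maps to on the wire, the helper below builds (but does not send) an HTTP request. The https://fal.run and https://queue.fal.run URL shapes and the `Authorization: Key ...` header follow fal.ai's public HTTP API, but treat them as assumptions and verify against the current fal.ai docs:

```python
import json
import urllib.request

def build_generate_request(model_id: str, arguments: dict, api_key: str,
                           queued: bool = False) -> urllib.request.Request:
    """Build (without sending) the HTTP request a generate call maps to.

    Direct runs are assumed to go to https://fal.run/<model-id>; queued runs
    to https://queue.fal.run/<model-id>, authorized via a "Key" header.
    """
    base = "https://queue.fal.run" if queued else "https://fal.run"
    return urllib.request.Request(
        url=f"{base}/{model_id}",
        data=json.dumps(arguments).encode("utf-8"),
        headers={
            "Authorization": f"Key {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Setting `queued=True` corresponds to the tool's asynchronous queue support: the submission returns immediately and the result is fetched later.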

result

Fetch the result of a previously queued request using its URL.

status

Check the current status of a queued request by URL.

cancel

Cancel a previously queued request by URL to free up resources.
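The result, status, and cancel tools above all operate on per-request URLs. Assuming fal.ai's queue API returns `status_url`, `response_url`, and `cancel_url` fields on submission (an assumption to verify against the current docs), a small helper to pull them out might look like:

```python
def queue_urls(submit_response: dict) -> dict:
    """Extract the per-request URLs from a queue submission response.

    Field names assume fal.ai's queue API; these are the URLs the status,
    result, and cancel tools would then be called with.
    """
    return {
        "status": submit_response["status_url"],
        "result": submit_response["response_url"],
        "cancel": submit_response["cancel_url"],
    }
```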

upload

Upload files to the fal.ai CDN for use with models or assets.