fal.ai MCP server

Bridges AI systems with fal.ai's machine learning models and services, enabling image generation, media processing, and specialized AI capabilities through direct or queued execution modes with authentication and file management support.
Provider: am0y
Release date: Mar 16, 2025
Language: Python
Stats: 38 stars

The fal.ai MCP Server is a Model Context Protocol server that allows you to interact with fal.ai models and services. It provides a convenient way to access, search, and use various AI models available on the fal.ai platform.

Requirements

  • Python 3.10+
  • fastmcp
  • httpx
  • aiofiles
  • A fal.ai API key

Installation

To get started with the fal.ai MCP server:

  1. Clone the repository:
git clone https://github.com/am0y/mcp-fal.git
cd mcp-fal
  2. Install the required packages:
pip install fastmcp httpx aiofiles
  3. Set your fal.ai API key as an environment variable:
export FAL_KEY="YOUR_FAL_API_KEY_HERE"
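
A quick sanity check before moving on (assumes a POSIX shell; the import names match the packages installed above):

python -c "import fastmcp, httpx, aiofiles; print('dependencies OK')"
test -n "$FAL_KEY" && echo "FAL_KEY is set"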

Using the Server

Running in Development Mode

To run the server in development mode with the MCP Inspector web interface:

fastmcp dev main.py

This launches an interactive web interface where you can test all available tools.

Installing in Claude Desktop

To integrate the server with Claude Desktop:

fastmcp install main.py -e FAL_KEY="YOUR_FAL_API_KEY_HERE"

This makes all server capabilities available directly within the Claude Desktop application.
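
If you would rather configure Claude Desktop by hand, an mcpServers entry of roughly this shape in claude_desktop_config.json does the same job. This is a sketch, not the exact output of fastmcp install; the path is a placeholder for wherever you cloned the repository:

{
    "mcpServers": {
        "fal": {
            "command": "python",
            "args": ["/path/to/mcp-fal/main.py"],
            "env": {
                "FAL_KEY": "YOUR_FAL_API_KEY_HERE"
            }
        }
    }
}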

Running Directly

For a more traditional approach, you can run the server directly:

python main.py

Available Tools

Browsing and Searching Models

  • List all models:

    models(page=None, total=None)  # Optional pagination parameters
    
  • Search for specific models:

    search(keywords)  # Find models matching specific keywords
    
  • Get model schema:

    schema(model_id)  # Retrieve the OpenAPI schema for a specific model
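
To poke at these tools outside of an agent, you can drive the server from a small script. A minimal sketch, assuming the Client API from fastmcp 2.x (check the version you installed); the search keywords and model ID are just illustrations:

import asyncio
from fastmcp import Client

async def main():
    # Spawns `python main.py` over stdio and connects to it.
    # FAL_KEY must be visible to that spawned process.
    async with Client("main.py") as client:
        print([t.name for t in await client.list_tools()])

        # Search for models by keyword, then fetch one model's schema
        print(await client.call_tool("search", {"keywords": "flux"}))
        print(await client.call_tool("schema", {"model_id": "fal-ai/flux/dev"}))

asyncio.run(main())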
    

Using Models

  • Generate content:
    generate(model, parameters, queue=False)  # Run a model with parameters
    
    Set queue=True for queued execution of longer-running models.

Queue Management

  • Check request status:

    status(url)  # Check the current status of a queued request
    
  • Retrieve results:

    result(url)  # Get results from a completed queued request
    
  • Cancel requests:

    cancel(url)  # Cancel an in-progress queued request
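
Combining generate(queue=True) with the queue tools gives the usual submit, poll, and fetch flow. A rough sketch under the same fastmcp 2.x Client assumption as above; the model ID and prompt are examples, and where exactly the status/response URLs appear in the submission output depends on this server, so inspect it before wiring up the follow-up calls:

import asyncio
from fastmcp import Client

async def main():
    async with Client("main.py") as client:
        # Submit a longer-running job to the fal.ai queue
        submitted = await client.call_tool("generate", {
            "model": "fal-ai/flux/dev",
            "parameters": {"prompt": "a watercolor fox in the snow"},
            "queue": True,
        })
        print(submitted)  # the queued request's URLs should appear in this output

        # Fill in the URL reported above (placeholder shown here)
        request_url = "https://queue.fal.run/..."
        print(await client.call_tool("status", {"url": request_url}))
        print(await client.call_tool("result", {"url": request_url}))

asyncio.run(main())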
    

File Management

  • Upload files:
    upload(path)  # Upload a file to the fal.ai CDN for use with models
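
A common pattern is to upload a local file and then reference the returned CDN URL in a model's parameters (for example, as an image input). A minimal sketch, same fastmcp 2.x Client assumption as above, with a placeholder file path:

import asyncio
from fastmcp import Client

async def main():
    async with Client("main.py") as client:
        # Upload a local file; the tool should return a fal.ai CDN URL for it
        print(await client.call_tool("upload", {"path": "/path/to/photo.jpg"}))

asyncio.run(main())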
    

How to add this MCP server to Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server go to Cursor Settings > MCP and click "Add new global MCP server".

When you click that button the ~/.cursor/mcp.json file will be opened and you can add your server like this:

{
    "mcpServers": {
        "cursor-rules-mcp": {
            "command": "npx",
            "args": [
                "-y",
                "cursor-rules-mcp"
            ]
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.
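
For the fal.ai server specifically, an entry along these lines should work in either file. It is a sketch: the path is a placeholder for your clone of mcp-fal, and you can swap the command for whatever launches main.py in your environment:

{
    "mcpServers": {
        "fal": {
            "command": "python",
            "args": ["/path/to/mcp-fal/main.py"],
            "env": {
                "FAL_KEY": "YOUR_FAL_API_KEY_HERE"
            }
        }
    }
}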

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then be able to see the tools the MCP server provides and will call them when it needs to.

You can also explicitly ask the agent to use a tool by mentioning the tool name and describing what it does.
