A TypeScript implementation of a Model Context Protocol (MCP) server that integrates with PiAPI's API. PiAPI lets users generate media content with Midjourney/Flux/Kling/LumaLabs/Udio/Chirp/Trellis directly from Claude or any other MCP-compatible app.
Configuration
```json
{
  "mcpServers": {
    "apinetwork-piapi-mcp-server": {
      "command": "node",
      "args": [
        "/absolute/path/to/piapi-mcp-server/dist/index.js"
      ],
      "env": {
        "PIAPI_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}
```

You run a TypeScript MCP server that connects to PiAPI so you can generate media content from Claude or other MCP-compatible apps. The server acts as a bridge between your MCP client and PiAPI's generation tools, letting you trigger image, video, and audio pipelines from your prompts.
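If you manage several MCP servers, it can be handy to generate this config entry programmatically instead of editing JSON by hand. A minimal sketch (the `buildPiapiEntry` helper is illustrative, not part of the project):

```typescript
// Sketch: build the mcpServers config entry for this server programmatically.
// buildPiapiEntry is an illustrative helper, not part of piapi-mcp-server.
interface McpServerEntry {
  command: string;
  args: string[];
  env: Record<string, string>;
}

function buildPiapiEntry(distPath: string, apiKey: string): McpServerEntry {
  return {
    command: "node",
    args: [distPath],
    env: { PIAPI_API_KEY: apiKey },
  };
}

const config = {
  mcpServers: {
    "apinetwork-piapi-mcp-server": buildPiapiEntry(
      "/absolute/path/to/piapi-mcp-server/dist/index.js",
      "YOUR_API_KEY_HERE"
    ),
  },
};

// Emit the JSON ready to paste into your client's MCP configuration.
console.log(JSON.stringify(config, null, 2));
```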
Connect your MCP client to the PiAPI MCP server over a local stdio transport: the client launches the server process and communicates with it directly, so prompts you issue translate into PiAPI-powered content generation.
Prerequisites you need before installation:
- Node.js (with npm) installed
- A PiAPI API key
The steps below install the MCP server locally; copy each command exactly as written and run it in your terminal. To install automatically via Smithery:

```bash
npx -y @smithery/cli install piapi-mcp-server --client claude
```

If you prefer to install manually, follow these steps to clone the repository, install dependencies, and build the project.
```bash
git clone https://github.com/apinetwork/piapi-mcp-server
cd piapi-mcp-server
```
```bash
npm install
```
```bash
npm run build
```

If you want to test the MCP server locally, create an environment file with your API key and launch the inspector to verify the integration.
```bash
# in project root
PIAPI_API_KEY=your_api_key_here
```
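A missing or empty key is a common failure mode, so it is worth failing fast at startup instead of surfacing opaque API errors later. A minimal sketch using only standard Node APIs (the `requireApiKey` helper is illustrative, not part of the project):

```typescript
// Fail fast if PIAPI_API_KEY is missing, instead of hitting PiAPI with no credentials.
// requireApiKey is an illustrative helper, not part of piapi-mcp-server.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.PIAPI_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error(
      "PIAPI_API_KEY is not set; add it to your env file or to the env block of your MCP config"
    );
  }
  return key;
}

// Example: validate the real environment at startup.
// const apiKey = requireApiKey(process.env);
```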
```bash
npm run inspect
```

Open the inspector interface at http://localhost:5173 to test functions, inspect requests, and adjust timeouts for time-consuming tasks such as image or video generation.
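Because generation tasks can run for minutes, any client-side call benefits from an explicit timeout. A generic sketch of a timeout wrapper you could place around long-running tool calls (this helper is ours, not part of the server's API):

```typescript
// Reject a promise if it does not settle within `ms` milliseconds.
// Illustrative helper for wrapping long-running calls such as video generation.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`operation timed out after ${ms} ms`)),
      ms
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); }
    );
  });
}

// Example: allow up to 5 minutes for a hypothetical generation call.
// const result = await withTimeout(client.callTool(...), 5 * 60 * 1000);
```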
Available tools:
- Base toolkit for generating images from prompts using PiAPI capabilities.
- Base toolkit for generating videos from prompts or image prompts.
- Image generation from text or image prompts via Flux integration.
- Video generation from text or image prompts via Hunyuan integration.
- Video creation from image prompts using Skyreels capabilities.
- Video generation from text or image prompts via Wan integration.
- Music generation derived from video assets.
- Zero-shot voice synthesis for TTS within MCP workflows.
- Video generation and effects composition using Kling.
- Video generation using Luma Dream Machine capabilities.
- Music generation using Suno tools.
- 3D model generation from image prompts via Trellis.
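Each tool returns an MCP tool result whose `content` array holds text and/or media items; a client can pull out the text payloads (typically URLs or task information) with a small helper. A sketch assuming only the standard MCP content-array shape (the `extractTextItems` helper is illustrative):

```typescript
// MCP tool results carry a `content` array of typed items.
// extractTextItems is an illustrative helper, not part of piapi-mcp-server.
interface ContentItem {
  type: string;
  text?: string;
}

function extractTextItems(result: { content: ContentItem[] }): string[] {
  return result.content
    .filter((item) => item.type === "text" && typeof item.text === "string")
    .map((item) => item.text as string);
}

// Example: a result mixing a text item (e.g. an asset URL) and an image item.
const urls = extractTextItems({
  content: [
    { type: "text", text: "https://example.com/generated.png" },
    { type: "image" },
  ],
});
console.log(urls);
```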