Llama3 Terminal MCP server

Integrates with Ollama's Llama 3 model to provide a terminal-based chat interface with persistent context management and YouTube search capabilities, built on Node.js and Socket.IO.
Provider
Alessandro Rumampuk
Release date
Mar 25, 2025
Language
TypeScript

This Model Context Protocol (MCP) server provides a simple way to run Llama 3 models locally with a browser-based terminal interface built on Xterm.js. It lets you interact with the model through a familiar command-line experience while preserving the context of your conversation.

Installation

Prerequisites

  • Node.js (v18 or higher)
  • npm (v9 or higher)
  • A Llama 3 model file

Basic Setup

  1. Clone the repository to your local machine:
git clone https://github.com/username/mcp-server.git
cd mcp-server
  2. Install the required dependencies:
npm install
  3. Set up your environment variables by creating a .env file in the project root:
MODEL_PATH=/path/to/your/llama3/model.gguf
PORT=3000
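The repository's actual startup code isn't shown here, but the variables above would typically be read into a typed config object at launch. A minimal sketch (the interface name `ServerConfig` and function `loadConfig` are illustrative, not taken from the project):

```typescript
// Sketch: reading the server configuration from environment variables,
// falling back to the defaults this guide describes. The key names
// mirror the .env file; the real project may structure this differently.
interface ServerConfig {
  modelPath: string;
  port: number;
  maxTokens: number;
  temperature: number;
}

function loadConfig(env: Record<string, string | undefined>): ServerConfig {
  return {
    modelPath: env.MODEL_PATH ?? "",
    port: Number(env.PORT ?? 3000),
    maxTokens: Number(env.MAX_TOKENS ?? 2048),
    temperature: Number(env.TEMPERATURE ?? 0.7),
  };
}
```

Unset variables fall back to the documented defaults, so a .env file only needs the values you want to override.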

Running the Server

Start the MCP server with the following command:

npm start

The server will initialize and load the specified model. Once running, you'll see output indicating the server is listening on the configured port (default: 3000).

Accessing the Terminal Interface

After starting the server:

  1. Open your web browser
  2. Navigate to http://localhost:3000
  3. You'll see an Xterm.js terminal interface where you can interact with the Llama 3 model

Using the Terminal Interface

Basic Commands

  • Type your questions or prompts directly into the terminal
  • Press Enter to send your message to the model
  • The model's response will appear in the terminal

Special Commands

The terminal interface supports several special commands:

/clear      # Clears the terminal screen
/reset      # Resets the conversation context
/help       # Shows available commands
/exit       # Closes the terminal session
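Input handling therefore needs to distinguish slash-prefixed commands from ordinary prompts. A sketch of how that split might look, assuming the convention above (the type and function names are illustrative):

```typescript
// Sketch: classifying terminal input. Lines starting with "/" are
// treated as special commands (optionally with an argument); anything
// else is forwarded to the model as a prompt.
type Parsed =
  | { kind: "command"; name: string; arg?: string }
  | { kind: "prompt"; text: string };

function parseInput(line: string): Parsed {
  const trimmed = line.trim();
  if (trimmed.startsWith("/")) {
    const [name, ...rest] = trimmed.slice(1).split(/\s+/);
    return { kind: "command", name, arg: rest.join(" ") || undefined };
  }
  return { kind: "prompt", text: trimmed };
}
```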

Conversation Context

The MCP server maintains context between messages, allowing for more coherent multi-turn conversations. The context window is determined by your model's capabilities and configuration.

Configuration Options

Server Configuration

Edit the .env file to customize your server:

MODEL_PATH=/path/to/your/model.gguf  # Path to your Llama 3 model
PORT=3000                            # Server port
MAX_TOKENS=2048                      # Maximum tokens per response
TEMPERATURE=0.7                      # Response randomness (0.0-1.0)
TOP_P=0.9                            # Nucleus sampling parameter

Model Parameters

You can adjust model parameters for each session by using special commands:

/temp 0.8   # Set temperature to 0.8
/tokens 512 # Limit response to 512 tokens
/top_p 0.95 # Change top_p sampling parameter
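Applying these commands amounts to updating a per-session parameter record, clamping values to the ranges documented above. A sketch under those assumptions (the names `SessionParams` and `applyParamCommand` are illustrative, and how the project actually stores session state is not specified):

```typescript
// Sketch: applying per-session parameter overrides. Command names match
// the list above; out-of-range values are clamped, invalid numbers ignored.
interface SessionParams { temperature: number; maxTokens: number; topP: number }

function applyParamCommand(p: SessionParams, name: string, arg: string): SessionParams {
  const clamp = (v: number, lo: number, hi: number) => Math.min(hi, Math.max(lo, v));
  const n = Number(arg);
  if (Number.isNaN(n)) return p;
  switch (name) {
    case "temp":   return { ...p, temperature: clamp(n, 0, 1) };
    case "tokens": return { ...p, maxTokens: Math.max(1, Math.floor(n)) };
    case "top_p":  return { ...p, topP: clamp(n, 0, 1) };
    default:       return p; // unknown command: leave parameters unchanged
  }
}
```

Returning a new object rather than mutating in place keeps each session's state easy to reason about and to reset.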

Troubleshooting

Common Issues

  • Model loading error: Ensure the path in MODEL_PATH is correct and the model file exists
  • Out of memory: Try using a smaller model or reduce the context window size
  • Slow responses: Llama 3 performance depends on your hardware capabilities

How to install this MCP server

For Claude Code

To add this MCP server to Claude Code, run this command in your terminal:

claude mcp add-json "llama3-terminal" '{"command":"npm","args":["start"]}'

See the official Claude Code MCP documentation for more details.

For Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server, go to Cursor Settings > Tools & Integrations and click "New MCP Server".

When you click that button the ~/.cursor/mcp.json file will be opened and you can add your server like this:

{
    "mcpServers": {
        "llama3-terminal": {
            "command": "npm",
            "args": [
                "start"
            ]
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then see the tools the added MCP server exposes and call them when it needs to.

You can also explicitly ask the agent to use the tool by mentioning the tool name and describing what the function does.

For Claude Desktop

To add this MCP server to Claude Desktop:

1. Find your configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

2. Add this to your configuration file:

{
    "mcpServers": {
        "llama3-terminal": {
            "command": "npm",
            "args": [
                "start"
            ]
        }
    }
}

3. Restart Claude Desktop for the changes to take effect
