Semantic Scholar MCP Server

Integrates with Semantic Scholar API to enable academic literature search, citation analysis, and paper recommendations at scale.
Provider: YU Zongmin
Release date: Dec 25, 2024
Language: Python
Stats: 35 stars

Semantic Scholar MCP Server provides comprehensive access to academic paper data, author information, and citation networks through a FastMCP implementation. It enables advanced academic research capabilities including paper search, citation analysis, and author information retrieval through the Semantic Scholar API.

Installation

Installing via Smithery

The easiest way to install the Semantic Scholar MCP Server for Claude Desktop is automatically via Smithery:

npx -y @smithery/cli install semantic-scholar-fastmcp-mcp-server --client claude

Manual Installation

  1. Clone the repository:

git clone https://github.com/YUZongmin/semantic-scholar-fastmcp-mcp-server.git
cd semantic-scholar-fastmcp-mcp-server

  2. Install FastMCP and the other dependencies by following the instructions at https://github.com/jlowin/fastmcp

  3. Configure FastMCP:

For Claude Desktop users, add the following to your configuration file (claude_desktop_config.json, typically in ~/Library/Application Support/Claude/ on macOS or %APPDATA%\Claude\ on Windows):

{
  "mcpServers": {
    "Semantic Scholar Server": {
      "command": "/path/to/your/venv/bin/fastmcp",
      "args": [
        "run",
        "/path/to/your/semantic-scholar-server/run.py"
      ],
      "env": {
        "SEMANTIC_SCHOLAR_API_KEY": "your-api-key-here"
      }
    }
  }
}

Be sure to:

  • Replace the placeholder paths with the actual paths on your system
  • Add your API key if you have one, or remove the env section if not (JSON does not allow comments, so the key must be a real value or omitted)

  4. Start using the server:

Claude Desktop will automatically start and manage the server process when needed.

API Key (Optional)

For higher rate limits and better performance:

  1. Get an API key from the Semantic Scholar API (https://www.semanticscholar.org/product/api)
  2. Add it to your FastMCP configuration as shown above

Configuration

Environment Variables

  • SEMANTIC_SCHOLAR_API_KEY: Your Semantic Scholar API key (optional)

Rate Limits

With API Key:

  • Search, batch and recommendation endpoints: 1 request per second
  • Other endpoints: 10 requests per second

Without API Key:

  • All endpoints: 100 requests per 5 minutes
  • Requests are given longer timeouts to accommodate the stricter throttling
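
If you drive these tools from a script, a small client-side throttle helps you stay under the keyed 1-request-per-second search limit. A minimal sketch (this helper is hypothetical and not part of the server):

import asyncio
import time

class Throttle:
    # Allow at most one call per `interval` seconds across concurrent tasks
    def __init__(self, interval: float = 1.0):
        self.interval = interval
        self._last = 0.0
        self._lock = asyncio.Lock()

    async def wait(self):
        async with self._lock:
            delay = self.interval - (time.monotonic() - self._last)
            if delay > 0:
                await asyncio.sleep(delay)
            self._last = time.monotonic()

Create one shared Throttle instance and await its wait() before each search, batch, or recommendation call.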

Available MCP Tools

Paper Search Tools

  • paper_relevance_search: Search for papers using relevance ranking
  • paper_bulk_search: Bulk paper search with sorting options
  • paper_title_search: Find papers by exact title match
  • paper_details: Get comprehensive details about a specific paper
  • paper_batch_details: Efficiently retrieve details for multiple papers
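
As a quick illustration, a title lookup might look like the sketch below (parameter names are assumptions modeled on the Usage Examples later on this page; check each tool's actual signature):

# Hypothetical call shape; see Usage Examples for confirmed patterns
paper = await paper_title_search(
    context,
    query="Attention Is All You Need",
    fields=["title", "year", "abstract"]
)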

Citation Tools

  • paper_citations: Get papers that cite a specific paper
  • paper_references: Get papers referenced by a specific paper
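
Walking a paper's citation graph in both directions might look like this sketch (the paper ID is the one reused in the Usage Examples below; exact parameter names may differ):

# Papers that cite the given paper
citing = await paper_citations(
    context,
    paper_id="649def34f8be52c8b66281af98ae884c09aef38b",
    fields="title,year"
)

# Papers the given paper references
referenced = await paper_references(
    context,
    paper_id="649def34f8be52c8b66281af98ae884c09aef38b",
    fields="title,year"
)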

Author Tools

  • author_search: Search for authors by name
  • author_details: Get detailed information about an author
  • author_papers: Get papers written by an author
  • author_batch_details: Get details for multiple authors
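
A typical author workflow chains a name search into a paper lookup. A minimal sketch (the query value is illustrative; the author ID is taken from the Batch Operations example below):

# Find candidate author records by name
matches = await author_search(
    context,
    query="Jane Doe",
    fields="name,affiliations"
)

# Fetch papers for a chosen author ID
papers = await author_papers(
    context,
    author_id="1741101",
    fields="title,year,citationCount"
)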

Recommendation Tools

  • paper_recommendations_single: Get recommendations based on a single paper
  • paper_recommendations_multi: Get recommendations based on multiple papers

Usage Examples

The examples below assume they run inside a FastMCP tool handler, where context is the Context object that FastMCP passes to each tool.

Basic Paper Search

results = await paper_relevance_search(
    context,
    query="machine learning",
    year="2020-2024",
    min_citation_count=50,
    fields=["title", "abstract", "authors"]
)

Paper Recommendations

# Single paper recommendation
recommendations = await paper_recommendations_single(
    context,
    paper_id="649def34f8be52c8b66281af98ae884c09aef38b",
    fields="title,authors,year"
)

# Multi-paper recommendation
recommendations = await paper_recommendations_multi(
    context,
    positive_paper_ids=["649def34f8be52c8b66281af98ae884c09aef38b", "ARXIV:2106.15928"],
    negative_paper_ids=["ArXiv:1805.02262"],
    fields="title,abstract,authors"
)

Batch Operations

# Get details for multiple papers
papers = await paper_batch_details(
    context,
    paper_ids=["649def34f8be52c8b66281af98ae884c09aef38b", "ARXIV:2106.15928"],
    fields="title,authors,year,citations"
)

# Get details for multiple authors
authors = await author_batch_details(
    context,
    author_ids=["1741101", "1780531"],
    fields="name,hIndex,citationCount,paperCount"
)

Error Handling

The server provides standardized error responses:

{
    "error": {
        "type": "error_type",  # one of: rate_limit, api_error, validation, timeout
        "message": "Error description",
        "details": {
            # Additional context
            "authenticated": false  # Whether the request was authenticated
        }
    }
}
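
On the client side you can branch on the type field. A minimal sketch, assuming a tool call returns the error shape above on failure:

result = await paper_relevance_search(context, query="machine learning")

if isinstance(result, dict) and "error" in result:
    err = result["error"]
    if err["type"] == "rate_limit":
        # Back off and retry; limits are stricter without an API key
        authenticated = err["details"].get("authenticated", False)
        print(f"Rate limited (authenticated={authenticated}); retrying later")
    else:
        print(f"{err['type']}: {err['message']}")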

How to add this MCP server to Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to that project instead by creating (or adding to) a .cursor/mcp.json file in the project root.

Adding an MCP server to Cursor globally

To add a global MCP server go to Cursor Settings > MCP and click "Add new global MCP server".

When you click that button the ~/.cursor/mcp.json file will be opened and you can add your server like this:

{
    "mcpServers": {
        "cursor-rules-mcp": {
            "command": "npx",
            "args": [
                "-y",
                "cursor-rules-mcp"
            ]
        }
    }
}
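
Adapted for this server, the entry might look like the following (a sketch based on the Claude Desktop configuration above; replace the placeholder paths with your own and drop the env block if you have no API key):

{
    "mcpServers": {
        "semantic-scholar": {
            "command": "/path/to/your/venv/bin/fastmcp",
            "args": [
                "run",
                "/path/to/your/semantic-scholar-server/run.py"
            ],
            "env": {
                "SEMANTIC_SCHOLAR_API_KEY": "your-api-key-here"
            }
        }
    }
}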

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then be able to see the tools the added MCP server exposes and will call them when it needs to.

You can also explicitly ask the agent to use a tool by mentioning the tool name and describing what it does.
