
tavily-web skill


This skill enables web search, content extraction, and crawling via the Tavily API to enhance research and information gathering.

npx playbooks add skill xfstudio/skills --skill tavily-web

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (907 B)
---
name: tavily-web
description: Web search, content extraction, crawling, and research capabilities using Tavily API
---

# tavily-web

## Overview
Web search, content extraction, crawling, and research capabilities using Tavily API

## When to Use
- When you need to search the web for current information
- When extracting content from URLs
- When crawling websites

## Installation
```bash
npx skills add -g BenedictKing/tavily-web
```

## Step-by-Step Guide
1. Install the skill using the command above
2. Configure Tavily API key
3. Use naturally in Claude Code conversations

## Examples
See [GitHub Repository](https://github.com/BenedictKing/tavily-web) for examples.

## Best Practices
- Configure API keys via environment variables

## Troubleshooting
See the GitHub repository for troubleshooting guides.

## Related Skills
- context7-auto-research, exa-search, firecrawl-scraper, codex-review

Overview

This skill provides web search, content extraction, crawling, and research features by integrating with the Tavily API. It lets an AI agent or CLI tool fetch current web results, extract page text and metadata, and crawl sites for structured research. It is aimed at developers and researchers who need automated, programmable web access from Python-based agents.

How this skill works

The skill sends queries to the Tavily API to perform live web searches and to retrieve page content. It can extract article text, links, headings, and metadata from URLs, and follow links to crawl a site within configured limits. Results are returned in structured form so downstream code or agents can summarize, index, or analyze the extracted data.
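As a rough sketch of how downstream code might consume those structured results: the `tavily-python` package exposes a `TavilyClient` whose `search()` returns a dict with a `results` list (field names here are based on the public Tavily docs; confirm against the SDK version you install). The helper below normalizes such a response into the minimal fields an agent typically needs.

```python
import os

def normalize_results(response: dict) -> list[dict]:
    """Keep only the fields downstream code needs from a Tavily-style response."""
    return [
        {
            "title": r.get("title", ""),
            "url": r.get("url", ""),
            "snippet": r.get("content", ""),
        }
        for r in response.get("results", [])
    ]

if __name__ == "__main__":
    # Live usage (requires TAVILY_API_KEY and the tavily-python package):
    #   from tavily import TavilyClient
    #   client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
    #   response = client.search("latest Tavily API changes", max_results=5)
    sample = {"results": [{"title": "Docs", "url": "https://docs.tavily.com", "content": "API docs"}]}
    print(normalize_results(sample))
```

Normalizing early keeps the rest of the pipeline independent of the exact response schema, which makes SDK upgrades less disruptive.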

When to use it

  • Gathering up-to-date facts or news that a local model does not know
  • Extracting article text and metadata from a list of URLs
  • Crawling a website to collect pages for research or indexing
  • Automating web research as part of a larger agent workflow
  • Preprocessing web content for summarization or classification

Best practices

  • Store the Tavily API key in environment variables and never hard-code it
  • Limit crawl depth and request rate to respect site resources and robots.txt
  • Validate and normalize extracted text before feeding it to downstream models
  • Apply filters to avoid fetching large binary files and non-HTML resources
  • Cache repeated requests to reduce API usage and speed up workflows

Example use cases

  • Collecting recent press coverage and extracting article bodies for a weekly briefing
  • Crawling a docs site to build a searchable knowledge index for an agent
  • Extracting product details and prices from a set of retailer pages for comparison
  • Feeding cleaned web content into a summarization pipeline for quick research notes
  • Automating link discovery across a domain to map site structure and key pages
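The link-discovery use case above amounts to a bounded breadth-first crawl. This is an illustrative sketch, with `fetch_links` a hypothetical stand-in for whatever extract/crawl call actually returns a page's outgoing links; the depth and page caps mirror the crawl limits discussed in the best practices.

```python
from collections import deque

def crawl(start: str, fetch_links, max_depth: int = 2, max_pages: int = 50) -> list[str]:
    """Visit pages breadth-first from `start`, honoring depth and page caps."""
    seen = {start}
    order = []                     # pages in the order they were visited
    queue = deque([(start, 0)])    # (url, depth) pairs awaiting a visit
    while queue and len(order) < max_pages:
        url, depth = queue.popleft()
        order.append(url)
        if depth >= max_depth:
            continue               # do not expand links past the depth cap
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return order
```

A real crawler would also filter links to the target domain and add a delay between requests, per the rate-limiting guidance above.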

FAQ

How do I provide the Tavily API key?

Set the API key as an environment variable and configure the skill to read it; never hard-code the key in source or config files you commit.
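A minimal sketch of reading the key from the environment, assuming the conventional `TAVILY_API_KEY` variable name (confirm the exact name against the skill's configuration docs):

```python
import os

def get_api_key() -> str:
    """Read the Tavily key from the environment, failing loudly if unset."""
    key = os.environ.get("TAVILY_API_KEY")
    if not key:
        raise RuntimeError("Set the TAVILY_API_KEY environment variable")
    return key
```

Failing with a clear message at startup is friendlier than letting an unauthenticated request fail deep inside a workflow.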

Can I control crawl limits?

Yes. Configure crawl depth, page limits, and request rate to match your needs and to respect site policies.

What output formats are available?

The skill returns structured JSON containing text, metadata, links, and status fields suitable for indexing or analysis.