
tavily-web skill

This skill helps you perform web searches, extract content, and crawl websites using the Tavily API for up-to-date information.

This is most likely a fork of the tavily-web skill from xfstudio.
npx playbooks add skill sickn33/antigravity-awesome-skills --skill tavily-web

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
907 B
---
name: tavily-web
description: Web search, content extraction, crawling, and research capabilities using the Tavily API
---

# tavily-web

## Overview
Web search, content extraction, crawling, and research capabilities using the Tavily API

## When to Use
- When you need to search the web for current information
- When extracting content from URLs
- When crawling websites

## Installation
```bash
npx skills add -g BenedictKing/tavily-web
```

## Step-by-Step Guide
1. Install the skill using the command above
2. Configure your Tavily API key (see the sketch below)
3. Use naturally in Claude Code conversations
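
A minimal sketch of steps 2 and 3, assuming the official `tavily-python` client (`pip install tavily-python`); the environment variable name and the query are placeholders:

```python
import os

from tavily import TavilyClient

# Step 2: read the API key from the environment rather than hard-coding it.
client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

# Step 3: a first search call; max_results caps how many links come back.
response = client.search("latest developments in agentic AI", max_results=5)
for result in response["results"]:
    print(result["title"], result["url"])
```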

## Examples
See [GitHub Repository](https://github.com/BenedictKing/tavily-web) for examples.

## Best Practices
- Configure API keys via environment variables

## Troubleshooting
See the GitHub repository for troubleshooting guides.

## Related Skills
- context7-auto-research, exa-search, firecrawl-scraper, codex-review

Overview

This skill provides web search, content extraction, crawling, and research capabilities via the Tavily API. It integrates with agent workflows to fetch up-to-date web results, pull structured content from URLs, and perform targeted site crawls. The skill is designed for high-throughput research and content ingestion in agentic pipelines.

How this skill works

The skill sends search queries to the Tavily API, aggregates results, and returns ranked links and snippets. For content extraction it fetches pages and parses text, metadata, and key elements (headlines, paragraphs, images, structured data). The crawler mode respects robots.txt, follows configured depth and rate limits, and yields items for downstream processing or storage.
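
As a concrete sketch of that search-then-extract flow, again assuming the tavily-python client (the response fields results, url, and raw_content follow the Tavily API as documented, but treat the details as illustrative):

```python
import os

from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

# Search: returns ranked results with titles, URLs, and text snippets.
search = client.search("tavily api rate limits", max_results=3)
urls = [r["url"] for r in search["results"]]

# Extraction: fetch and parse the pages behind the top results.
extraction = client.extract(urls=urls)
for page in extraction["results"]:
    print(page["url"], "->", len(page["raw_content"]), "chars extracted")
```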

When to use it

  • You need current web facts or news beyond the agent’s training cut-off.
  • You want to extract clean text, metadata, or structured data from specific URLs.
  • You must crawl a site to build an index, dataset, or research corpus.
  • You need to automate research tasks across multiple pages or domains.
  • You want to enrich agent context with live web content.

Best practices

  • Store the Tavily API key in environment variables or a secrets manager; avoid hard-coding keys.
  • Limit crawl depth and apply rate limits to avoid overloading target sites and to respect terms of service.
  • Filter and normalize extracted content (remove navigation chrome, deduplicate, and trim boilerplate).
  • Validate extracted data types (dates, prices, emails) before using them in pipelines.
  • Combine search and extraction steps: search to find relevant pages, then extract and summarize content for agent context (see the sketch after this list).
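
A sketch tying several of these practices together: search, extract, normalize, deduplicate, and validate. The clean() helper and its 40-character threshold are illustrative choices, not part of the skill:

```python
import datetime
import os
import re

from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

def clean(text: str) -> str:
    # Heuristic boilerplate trim: drop short lines (menus, footers, buttons).
    lines = [ln.strip() for ln in text.splitlines()]
    return "\n".join(ln for ln in lines if len(ln) > 40)

def is_valid_date(s: str) -> bool:
    try:
        datetime.date.fromisoformat(s)
        return True
    except ValueError:
        return False

search = client.search("competitor pricing pages", max_results=5)
pages = client.extract(urls=[r["url"] for r in search["results"]])

seen = set()
for page in pages["results"]:
    body = clean(page["raw_content"])
    if body in seen:  # deduplicate identical extractions
        continue
    seen.add(body)
    # Validate extracted data types before downstream use, e.g. ISO dates.
    dates = [d for d in re.findall(r"\b\d{4}-\d{2}-\d{2}\b", body) if is_valid_date(d)]
    print(page["url"], "dates found:", dates)
```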

Example use cases

  • Rapid market research: run queries for competitors, extract product pages, and summarize feature differences.
  • Content ingestion: crawl blogs and documentation to build a searchable knowledge base for an agent.
  • Fact-checking: fetch authoritative sources and extract quoted passages for verification.
  • Automated monitoring: run recurring searches for brand mentions or regulatory updates and extract supporting links (sketched below).
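
For instance, the monitoring case might look like the sketch below. The brand query is hypothetical, and the topic="news" option reflects Tavily's search parameters as best understood, so verify it against the current API reference:

```python
import os
import time

from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
QUERY = '"Acme Corp" regulatory update'  # hypothetical monitoring query

def poll_once() -> list[dict]:
    # topic="news" biases results toward recent coverage (assumed option).
    resp = client.search(QUERY, topic="news", max_results=10)
    return [{"title": r["title"], "url": r["url"]} for r in resp["results"]]

while True:
    for hit in poll_once():
        print(hit["title"], "->", hit["url"])
    time.sleep(3600)  # hourly; a real monitor would persist already-seen URLs
```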

FAQ

Do I need a Tavily account?

Yes. You must configure a Tavily API key to authenticate requests.

How does the crawler respect site rules?

The crawler reads robots.txt and honors configured rate limits and depth settings to reduce impact.
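
A bounded crawl might look like this sketch; the max_depth and limit parameter names reflect Tavily's crawl options as best understood and should be checked against the current docs:

```python
import os

from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

# Keep the crawl shallow and capped so the target site sees minimal load.
crawl = client.crawl("https://docs.example.com", max_depth=1, limit=20)
for page in crawl["results"]:
    print(page["url"])
```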

Can it extract structured data like JSON-LD?

Yes. The extractor parses common structured formats (JSON-LD, Microdata, Open Graph) when present.
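
For illustration, here is one way to pull JSON-LD out of fetched HTML yourself; this is a generic parsing sketch (using requests and a regex), not the skill's internal implementation:

```python
import json
import re

import requests  # generic fetch, standing in for the skill's extractor

html = requests.get("https://example.com/article", timeout=10).text

# JSON-LD lives in <script type="application/ld+json"> blocks.
pattern = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)
for block in pattern.findall(html):
    try:
        data = json.loads(block)
        if isinstance(data, dict):  # JSON-LD may also be a list of objects
            print(data.get("@type"), data.get("headline"))
    except json.JSONDecodeError:
        pass  # malformed or templated JSON-LD; skip it
```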