---
name: tavily-web
description: Web search, content extraction, crawling, and research capabilities using Tavily API
---
# tavily-web
## Overview
Web search, content extraction, website crawling, and research capabilities using the Tavily API.
## When to Use
- When you need to search the web for current information
- When extracting content from URLs
- When crawling websites
## Installation
```bash
npx skills add -g BenedictKing/tavily-web
```
## Step-by-Step Guide
1. Install the skill using the command above.
2. Configure your Tavily API key (a minimal sketch follows this list).
3. Invoke the skill naturally in Claude Code conversations.
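A minimal sketch of steps 2 and 3, assuming the official `tavily-python` package and an exported `TAVILY_API_KEY` environment variable; parameter names such as `search_depth` and `max_results` reflect Tavily's documented search options but should be checked against the current docs.

```python
# Minimal search sketch, assuming `pip install tavily-python` and an
# exported TAVILY_API_KEY; parameter names may differ across SDK versions.
import os

from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

# search_depth and max_results are optional tuning knobs.
response = client.search(
    "latest stable Python release",
    search_depth="basic",
    max_results=5,
)

for result in response.get("results", []):
    print(result["title"], result["url"])
```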
## Examples
See [GitHub Repository](https://github.com/BenedictKing/tavily-web) for examples.
## Best Practices
- Configure API keys via environment variables rather than hard-coding them in scripts or config files (see the sketch below)
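To make that practice concrete, here is a small sketch of loading the key defensively; the `tvly-` prefix shown in the error message is an assumption based on Tavily's key format.

```python
# Sketch: read the API key from the environment and fail fast with a
# clear message instead of hard-coding the key in source or config.
import os

def get_tavily_key() -> str:
    key = os.environ.get("TAVILY_API_KEY")
    if not key:
        raise RuntimeError(
            "TAVILY_API_KEY is not set; export it first, e.g. "
            "`export TAVILY_API_KEY=tvly-...`"
        )
    return key
```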
## Troubleshooting
See the GitHub repository for troubleshooting guides.
## Related Skills
- context7-auto-research
- exa-search
- firecrawl-scraper
- codex-review
## How It Works
This skill integrates with agent workflows to fetch up-to-date web results, pull structured content from URLs, and perform targeted site crawls. It is designed for high-throughput research and content ingestion in agentic pipelines.

For search, the skill sends queries to the Tavily API, aggregates results, and returns ranked links and snippets. For content extraction, it fetches pages and parses text, metadata, and key elements (headlines, paragraphs, images, structured data). Crawler mode respects robots.txt, honors configured depth and rate limits, and yields items for downstream processing or storage.
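A hedged sketch of extraction and crawling with the same client; the `extract()` and `crawl()` methods and their parameters (`urls`, `max_depth`, `limit`) are assumptions based on recent `tavily-python` releases and may differ in your version.

```python
# Hedged sketch of extract and crawl calls; method and parameter names
# are assumptions based on recent tavily-python versions.
import os

from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

# Pull parsed content for one or more URLs.
extracted = client.extract(urls=["https://example.com/article"])
for page in extracted.get("results", []):
    print(page["url"], len(page.get("raw_content", "")))

# Crawl a site with conservative depth and page limits to stay polite.
crawl = client.crawl("https://example.com", max_depth=1, limit=10)
for item in crawl.get("results", []):
    print(item.get("url"))
```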
## FAQ
**Do I need a Tavily account?**
Yes. You must configure a Tavily API key to authenticate requests.

**How does the crawler respect site rules?**
The crawler reads robots.txt and honors configured rate limits and depth settings to reduce impact.

**Can it extract structured data like JSON-LD?**
Yes. The extractor parses common structured formats (JSON-LD, Microdata, Open Graph) when present.
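For illustration only (this is not the skill's internal code), here is a sketch of how JSON-LD blocks can be pulled out of raw HTML returned by an extraction step, assuming `beautifulsoup4` is installed:

```python
# Illustrative sketch: collect JSON-LD <script> blocks from raw HTML,
# e.g. from an extract() result. Assumes `pip install beautifulsoup4`.
import json

from bs4 import BeautifulSoup

def json_ld_blocks(html: str) -> list:
    soup = BeautifulSoup(html, "html.parser")
    blocks = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            blocks.append(json.loads(tag.string or ""))
        except json.JSONDecodeError:
            continue  # skip malformed blocks rather than failing the page
    return blocks
```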