This skill helps you scrape legally by enforcing robots.txt compliance, rate limits, and GDPR-aware handling of public data.
Add this skill to your agents: `npx playbooks add skill openclaw/skills --skill scrape`
---
name: Scrape
description: Legal web scraping with robots.txt compliance, rate limiting, and GDPR/CCPA-aware data handling.
---
## Pre-Scrape Compliance Checklist
Before writing any scraping code:
1. **robots.txt** — Fetch `{domain}/robots.txt`, check if target path is disallowed. If yes, stop.
2. **Terms of Service** — Check `/terms`, `/tos`, `/legal`. Explicit scraping prohibition = need permission.
3. **Data type** — Public factual data (prices, listings) is safer. Personal data triggers GDPR/CCPA.
4. **Authentication** — Data behind login is off-limits without authorization. Never scrape protected content.
5. **API available?** — If the site offers an API, use it. Always. Scraping when an API exists often violates ToS.
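The robots.txt check in step 1 can be sketched with Python's standard-library parser. The helper name `path_allowed` is illustrative, not part of this skill's API:

```python
from urllib.robotparser import RobotFileParser

def path_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given robots.txt body permits `user_agent` to fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# A robots.txt that disallows /private/ for every agent.
robots = "User-agent: *\nDisallow: /private/\n"
```

In practice you would fetch `{domain}/robots.txt` first and stop before writing any scraping code if the target path is disallowed.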
## Legal Boundaries
- **Public data, no login** — Generally legal (hiQ v. LinkedIn 2022)
- **Bypassing barriers** — CFAA violation risk (Van Buren v. US 2021)
- **Ignoring robots.txt** — Gray area, often breaches ToS (Meta v. Bright Data 2024)
- **Personal data without consent** — GDPR/CCPA violation
- **Republishing copyrighted content** — Copyright infringement
## Request Discipline
- **Rate limit**: Minimum 2-3 seconds between requests. Faster = server strain = legal exposure.
- **User-Agent**: Real browser string + contact email: `Mozilla/5.0 ... (contact: [email protected])`
- **Respect 429**: Exponential backoff. Ignoring 429s shows intent to harm.
- **Session reuse**: Keep connections open to reduce server load.
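The pacing and backoff rules above can be sketched as follows. `RateLimiter` and `backoff_delay` are hypothetical helpers, and the backoff is shown without jitter for clarity; production code would add it:

```python
import time

class RateLimiter:
    """Enforce a minimum interval between requests to the same host."""

    def __init__(self, min_interval: float = 2.5, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval
        self._clock = clock   # injectable for testing
        self._sleep = sleep
        self._last = None

    def wait(self):
        """Block until at least `min_interval` seconds since the previous call."""
        now = self._clock()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                self._sleep(remaining)
        self._last = self._clock()

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Seconds to wait before retry `attempt` after a 429: exponential, capped."""
    return min(cap, base * (2 ** attempt))
```

Call `limiter.wait()` before every request, and on a 429 response sleep for `backoff_delay(attempt)` (honoring any `Retry-After` header the server sends) before retrying.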
## Data Handling
- **Strip PII immediately** — Don't collect names, emails, phones unless legally justified.
- **No fingerprinting** — Don't combine data to identify individuals indirectly.
- **Minimize storage** — Cache only what you need, delete what you don't.
- **Audit trail** — Log what, when, where. Evidence of good faith if challenged.
For code patterns and a robots.txt parser, see `code.md`.
## Overview

This skill provides a legal-first approach to web scraping with built-in robots.txt compliance, rate limiting, and privacy-aware data handling. It helps developers collect public data while minimizing legal and ethical risk. The skill emphasizes use of official APIs, consent-aware collection, and audit trails for accountability.

The skill inspects target domains for robots.txt and checks Terms of Service pages before fetching any content. It enforces configurable request pacing, respectful User-Agent identification with contact information, and exponential backoff on 429 responses. Collected content is filtered to strip or avoid personal data, logged with an audit trail, and stored minimally to reduce exposure under GDPR/CCPA.
## FAQ

**What happens if robots.txt disallows a path?**
The skill stops and flags the path as disallowed; scraping should not proceed without explicit permission.

**Can I collect email addresses or phone numbers?**
Avoid collecting PII unless you have a clear lawful basis; strip personal identifiers immediately and minimize retention.