
This skill helps you scrape legally by enforcing robots.txt rules, rate limits, and GDPR-aware handling of public data.

```
npx playbooks add skill openclaw/skills --skill scrape
```

Review the files below or copy the command above to add this skill to your agents.

Files (3)
SKILL.md
---
name: Scrape
description: Legal web scraping with robots.txt compliance, rate limiting, and GDPR/CCPA-aware data handling.
---

## Pre-Scrape Compliance Checklist

Before writing any scraping code:

1. **robots.txt** — Fetch `{domain}/robots.txt`, check if target path is disallowed. If yes, stop.
2. **Terms of Service** — Check `/terms`, `/tos`, `/legal`. Explicit scraping prohibition = need permission.
3. **Data type** — Public factual data (prices, listings) is safer. Personal data triggers GDPR/CCPA.
4. **Authentication** — Data behind login is off-limits without authorization. Never scrape protected content.
5. **API available?** — If the site offers an API, use it. Always. Scraping when an API exists often violates the ToS.
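The robots.txt check in step 1 can be sketched with Python's standard-library `urllib.robotparser`; the domain, bot name, and paths below are illustrative:

```python
from urllib.robotparser import RobotFileParser

def is_path_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Parse a robots.txt body and report whether `url` may be fetched."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical robots.txt that disallows /private/ for all agents
robots = """User-agent: *
Disallow: /private/
"""
is_path_allowed(robots, "MyBot", "https://example.com/private/page")  # disallowed -> stop
is_path_allowed(robots, "MyBot", "https://example.com/products")      # allowed
```

In production you would fetch `{domain}/robots.txt` over HTTP first; parsing an in-memory copy here keeps the check testable offline.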

## Legal Boundaries

- **Public data, no login** — Generally legal (hiQ v. LinkedIn 2022)
- **Bypassing barriers** — CFAA violation risk (Van Buren v. US 2021)
- **Ignoring robots.txt** — Gray area, often breaches ToS (Meta v. Bright Data 2024)
- **Personal data without consent** — GDPR/CCPA violation
- **Republishing copyrighted content** — Copyright infringement

## Request Discipline

- **Rate limit**: Minimum 2–3 seconds between requests. Faster = server strain = legal exposure.
- **User-Agent**: Real browser string + contact email: `Mozilla/5.0 ... (contact: [email protected])`
- **Respect 429**: Exponential backoff. Ignoring 429s shows intent to harm.
- **Session reuse**: Keep connections open to reduce server load.
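The backoff and User-Agent rules above can be sketched as a small retry helper. `fetch` is a hypothetical callable returning `(status, body)`, the contact address is a placeholder, and the jitter keeps concurrent scrapers from retrying in lockstep:

```python
import random
import time

# Contact-tagged User-Agent per the guideline above (address is a placeholder)
HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; MyScraper/1.0; contact: [email protected])"}

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Exponential backoff with jitter: base * 2^attempt, capped, then randomized."""
    return min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)

def polite_fetch(fetch, url: str, max_retries: int = 5, sleep=time.sleep):
    """Call fetch(url); on HTTP 429/503, wait with exponential backoff and retry."""
    for attempt in range(max_retries):
        status, body = fetch(url)
        if status not in (429, 503):
            return status, body
        sleep(backoff_delay(attempt))
    return status, body
```

The `sleep` hook is injectable only so the retry logic can be verified without real waiting; by default it blocks the calling thread.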

## Data Handling

- **Strip PII immediately** — Don't collect names, emails, or phone numbers unless legally justified.
- **No fingerprinting** — Don't combine data to identify individuals indirectly.
- **Minimize storage** — Cache only what you need, delete what you don't.
- **Audit trail** — Log what, when, where. Evidence of good faith if challenged.

For code patterns and a robots.txt parser, see `code.md`.

Overview

This skill provides a legal-first approach to web scraping with built-in robots.txt compliance, rate limiting, and privacy-aware data handling. It helps developers collect public data while minimizing legal and ethical risk. The skill emphasizes use of official APIs, consent-aware collection, and audit trails for accountability.

How this skill works

The skill inspects target domains for robots.txt and checks Terms of Service pages before fetching any content. It enforces configurable request pacing, respectful User-Agent identification with contact information, and exponential backoff on 429 responses. Collected content is filtered to strip or avoid personal data, logged with an audit trail, and stored minimally to reduce exposure under GDPR/CCPA.

When to use it

  • Backing up or archiving publicly available site content where no API is provided.
  • Collecting aggregate, non-personal data such as product prices, listings, or public metadata.
  • Testing or monitoring site changes while needing proof of compliant behavior.
  • Prototyping crawlers for research where auditability and low impact are required.
  • Situations requiring demonstrable rate limiting and refusal of disallowed paths.

Best practices

  • Always fetch and respect /robots.txt and explicit Terms of Service before scraping.
  • Prefer official APIs when available; use scraping only when permitted or necessary.
  • Use a clear User-Agent including a contact email and enforce 2–3+ second request spacing.
  • Implement exponential backoff on HTTP 429/503 and reuse sessions to reduce load.
  • Strip or avoid collecting PII, keep minimal retention, and maintain an audit log of actions.
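The request-spacing practice above can be enforced with a tiny throttle; the `clock` and `sleep` hooks are injectable only so the behavior can be verified without real waiting:

```python
import time

class Throttle:
    """Enforce a minimum interval between consecutive requests."""

    def __init__(self, min_interval: float = 2.0, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval
        self.clock = clock
        self.sleep = sleep
        self._last = None

    def wait(self) -> None:
        """Block until at least min_interval has passed since the last call."""
        if self._last is not None:
            remaining = self.min_interval - (self.clock() - self._last)
            if remaining > 0:
                self.sleep(remaining)
        self._last = self.clock()
```

Call `wait()` immediately before each request; the default 2-second interval matches the 2–3+ second guidance above.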

Example use cases

  • Crawling a public product catalog for price trend analysis while honoring robots.txt.
  • Archiving public blog posts for preservation, with automated stripping of commenter PII.
  • Monitoring public directory listings for changes and logging access for compliance review.
  • Building a research dataset of non-identifying public metadata with documented consent checks.

FAQ

What happens if robots.txt disallows a path?

The skill stops and flags the path as disallowed; scraping should not proceed without explicit permission.

Can I collect email addresses or phone numbers?

Avoid collecting PII unless you have a clear lawful basis; strip personal identifiers immediately and minimize retention.