
chrome-devtools skill


This skill automates browser tasks with Puppeteer: capturing screenshots, scraping data, filling forms, and profiling performance for reliable web testing.

npx playbooks add skill samhvw8/dotfiles --skill chrome-devtools

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: chrome-devtools
description: "Browser automation via Puppeteer CLI scripts (JSON output). Capabilities: screenshots, PDF generation, web scraping, form automation, network monitoring, performance profiling, JavaScript debugging, headless browsing. Actions: screenshot, scrape, automate, test, profile, monitor, debug browser. Keywords: Puppeteer, headless Chrome, screenshot, PDF, web scraping, form fill, click, navigate, network traffic, performance audit, Lighthouse, console logs, DOM manipulation, element selector, wait, scroll, automation script. Use when: taking screenshots, generating PDFs from web, scraping websites, automating form submissions, monitoring network requests, profiling page performance, debugging JavaScript, testing web UIs."
license: Apache-2.0
---

# Chrome DevTools Agent Skill

Browser automation via executable Puppeteer scripts. All scripts output JSON for easy parsing.

## Quick Start

### Installation

#### Step 1: Install System Dependencies (Linux/WSL only)

On Linux/WSL, Chrome requires system libraries. Install them first:

```bash
cd .claude/skills/chrome-devtools/scripts
./install-deps.sh  # Auto-detects OS and installs required libs
```

Supports: Ubuntu, Debian, Fedora, RHEL, CentOS, Arch, Manjaro

**macOS/Windows**: Skip this step (dependencies bundled with Chrome)

#### Step 2: Install Node Dependencies

```bash
npm install  # Installs puppeteer, debug, yargs
```

### Test
```bash
node navigate.js --url https://example.com
# Output: {"success": true, "url": "https://example.com", "title": "Example Domain"}
```

## Available Scripts

All scripts are in `.claude/skills/chrome-devtools/scripts/`

### Script Usage
- See `./scripts/README.md` for per-script options and examples.

### Core Automation
- `navigate.js` - Navigate to URLs
- `screenshot.js` - Capture screenshots (full page or element)
- `click.js` - Click elements
- `fill.js` - Fill form fields
- `evaluate.js` - Execute JavaScript in page context

### Analysis & Monitoring
- `snapshot.js` - Extract interactive elements with metadata
- `console.js` - Monitor console messages/errors
- `network.js` - Track HTTP requests/responses
- `performance.js` - Measure Core Web Vitals + record traces

## Usage Patterns

### Single Command
```bash
cd .claude/skills/chrome-devtools/scripts
node screenshot.js --url https://example.com --output ./docs/screenshots/page.png
```
**Important**: Always save screenshots to the `./docs/screenshots` directory.

### Chain Commands (reuse browser)
```bash
# Keep browser open with --close false
node navigate.js --url https://example.com/login --close false
node fill.js --selector "#email" --value "[email protected]" --close false
node fill.js --selector "#password" --value "secret" --close false
node click.js --selector "button[type=submit]"
```

### Parse JSON Output
```bash
# Extract specific fields with jq
node performance.js --url https://example.com | jq '.vitals.LCP'

# Save to file
node network.js --url https://example.com --output /tmp/requests.json
```

## Common Workflows

### Web Scraping
```bash
node evaluate.js --url https://example.com --script "
  Array.from(document.querySelectorAll('.item')).map(el => ({
    title: el.querySelector('h2')?.textContent,
    link: el.querySelector('a')?.href
  }))
" | jq '.result'
```

### Performance Testing
```bash
PERF=$(node performance.js --url https://example.com)
LCP=$(echo "$PERF" | jq '.vitals.LCP')
if (( $(echo "$LCP < 2500" | bc -l) )); then
  echo "✓ LCP passed: ${LCP}ms"
else
  echo "✗ LCP failed: ${LCP}ms"
fi
```

### Form Automation
```bash
node fill.js --url https://example.com --selector "#search" --value "query" --close false
node click.js --selector "button[type=submit]"
```

### Error Monitoring
```bash
node console.js --url https://example.com --types error,warn --duration 5000 | jq '.messageCount'
```

## Script Options

All scripts support:
- `--headless false` - Show browser window
- `--close false` - Keep browser open for chaining
- `--timeout 30000` - Set timeout (milliseconds)
- `--wait-until networkidle2` - Wait strategy

See `./scripts/README.md` for complete options.
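These combine with each script's own flags. For example, a sketch of a debugging run that watches the browser while it captures a slow page (output path per the screenshot convention above):

```bash
# Show the browser window, allow a slow page a full minute,
# and wait for network activity to settle before capturing
node screenshot.js --url https://example.com \
  --headless false \
  --timeout 60000 \
  --wait-until networkidle2 \
  --output ./docs/screenshots/debug.png
```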

## Output Format

All scripts output JSON to stdout:
```json
{
  "success": true,
  "url": "https://example.com",
  ... // script-specific data
}
```

Errors go to stderr:
```json
{
  "success": false,
  "error": "Error message"
}
```
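A minimal shell pattern for consuming both streams, assuming a failed run leaves stdout empty (the scripts' docs do not state this, so treat it as an assumption):

```bash
# Capture the result JSON from stdout and any error JSON from stderr
OUT=$(node navigate.js --url https://example.com 2>/tmp/nav-error.json)
if [ "$(echo "$OUT" | jq -r '.success')" = "true" ]; then
  echo "$OUT" | jq -r '.title'
else
  # Assumption: on failure the error payload lands in stderr as JSON
  jq -r '.error' /tmp/nav-error.json
fi
```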

## Finding Elements

Use `snapshot.js` to discover selectors:
```bash
node snapshot.js --url https://example.com | jq '.elements[] | {tagName, text, selector}'
```
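To narrow the output to one kind of control, filter on the returned fields. This sketch assumes `tagName` carries the uppercase DOM value and that the field names match the query above:

```bash
# List only buttons, with their visible text and a usable selector
node snapshot.js --url https://example.com \
  | jq '.elements[] | select(.tagName == "BUTTON") | {text, selector}'
```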

## Troubleshooting

### Common Errors

**"Cannot find package 'puppeteer'"**
- Run: `npm install` in the scripts directory

**"error while loading shared libraries: libnss3.so"** (Linux/WSL)
- Missing system dependencies
- Fix: Run `./install-deps.sh` in scripts directory
- Manual install: `sudo apt-get install -y libnss3 libnspr4 libasound2t64 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1`

**"Failed to launch the browser process"**
- Check system dependencies installed (Linux/WSL)
- Verify Chrome downloaded: `ls ~/.cache/puppeteer`
- Try: `npm rebuild` then `npm install`

**Chrome not found**
- Puppeteer auto-downloads Chrome during `npm install`
- If failed, manually trigger: `npx puppeteer browsers install chrome`

### Script Issues

**Element not found**
- Get snapshot first to find correct selector: `node snapshot.js --url <url>`

**Script hangs**
- Increase timeout: `--timeout 60000`
- Change wait strategy: `--wait-until load` or `--wait-until domcontentloaded`

**Blank screenshot**
- Wait for page load: `--wait-until networkidle2`
- Increase timeout: `--timeout 30000`

**Permission denied on scripts**
- Make executable: `chmod +x *.sh`

## Reference Documentation

Detailed guides available in `./references/`:
- [CDP Domains Reference](./references/cdp-domains.md) - 47 Chrome DevTools Protocol domains
- [Puppeteer Quick Reference](./references/puppeteer-reference.md) - Complete Puppeteer API patterns
- [Performance Analysis Guide](./references/performance-guide.md) - Core Web Vitals optimization

## Advanced Usage

### Custom Scripts
Create custom scripts using the shared library:
```javascript
// Shared helpers from the skill's lib/ directory (the names suggest they
// manage browser launch/reuse and the JSON output contract)
import { getBrowser, getPage, closeBrowser, outputJSON } from './lib/browser.js';
// Your automation logic
```

### Direct CDP Access
```javascript
// Open a raw CDP session on the current Puppeteer page
const client = await page.createCDPSession();
// Example: throttle the CPU 4x to approximate a slower device
await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });
```

See reference documentation for advanced patterns and complete API coverage.

## External Resources

- [Puppeteer Documentation](https://pptr.dev/)
- [Chrome DevTools Protocol](https://chromedevtools.github.io/devtools-protocol/)
- [Scripts README](./scripts/README.md)

Overview

This skill provides browser automation using Puppeteer CLI scripts that emit structured JSON. It supports screenshots, PDF generation, scraping, form automation, network monitoring, performance profiling, and headless debugging for repeatable, scriptable workflows. Use outputs for pipelines, tests, and monitoring where machine-readable results are required.

How this skill works

Scripts drive Chromium/Chrome via Puppeteer to open pages, interact with the DOM, capture assets, and record network and performance telemetry. Each script returns a JSON payload on stdout with success state and script-specific data; errors are written to stderr as JSON. Commands can run headless or with a visible browser and can keep a browser instance open to chain multiple actions efficiently.

When to use it

  • Capture full-page or element screenshots for visual regression or documentation.
  • Generate PDFs from dynamic pages or print-friendly views.
  • Scrape structured content where JavaScript rendering is required.
  • Automate form submissions, clicks, and UX flows for testing or data entry.
  • Monitor network traffic and console errors for production debugging and error hunting.
  • Profile Core Web Vitals and collect performance traces for optimization work.

Best practices

  • Save artifacts (screenshots, PDFs, traces) to a dedicated output directory for consistent pipelines.
  • Keep the browser open (--close false) across related commands to avoid repeated startup cost.
  • Use snapshot or element-discovery scripts to find stable selectors before automating clicks or fills.
  • Set appropriate timeouts and wait strategies (--timeout, --wait-until) for pages with async resources.
  • Parse the JSON stdout programmatically (jq, Python/Node) to feed results into CI or monitoring systems; a combined sketch follows this list.
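A sketch that ties these practices together. The search page and the #q selector are hypothetical, and it assumes screenshot.js, like fill.js and click.js, reuses the open page when --url is omitted:

```bash
# 1. Discover selectors up front, keeping the browser open for reuse
node snapshot.js --url https://example.com/search --close false \
  | jq '.elements[] | {text, selector}'
# 2. Drive the flow with the discovered selectors (hypothetical #q field)
node fill.js --selector "#q" --value "query" --close false
node click.js --selector "button[type=submit]" --close false
# 3. Save the artifact to the dedicated directory
node screenshot.js --output ./docs/screenshots/search-results.png
```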

Example use cases

  • Automated visual capture: snapshot landing pages nightly and compare for regressions.
  • Headless scraping: extract lists and links from JS-heavy sites and emit JSON for downstream processing.
  • Form automation: prefill and submit repetitive forms during QA or data-entry tasks.
  • Performance testing: collect LCP, FID, CLS and traces as part of CI performance gates.
  • Monitoring: stream console errors and network failures from a URL to a logging endpoint for real-time alerts (see the sketch after this list).
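As a sketch of the monitoring case (the logging endpoint is hypothetical):

```bash
# Collect ten seconds of console errors, then forward the JSON payload
node console.js --url https://example.com --types error --duration 10000 \
  | curl -s -X POST -H 'Content-Type: application/json' -d @- https://logs.example.com/ingest
```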

FAQ

How do scripts return results?

All scripts print JSON to stdout with a success field and script-specific data; failures are JSON on stderr.

Can I reuse a browser session across commands?

Yes. Use --close false to keep the browser open and chain multiple scripts without relaunching.