
geo-site-audit skill


This skill runs a 29-point GEO site readiness audit to assess AI accessibility, structured data, content citability, and technical setup for your website.

npx playbooks add skill openclaw/skills --skill geo-site-audit

Review the files below or copy the command above to add this skill to your agents.

Files (9)
SKILL.md
---
name: geo-site-audit
description: Run a structured 29-point GEO (Generative Engine Optimization) readiness audit on any website. Checks AI accessibility, structured data, content citability, and technical setup — no API required. Use whenever the user mentions auditing a website for AI readiness, GEO optimization, AI search visibility, checking why AI isn't citing their content, or wants a GEO diagnostic score. Also trigger for requests about llms.txt validation, schema markup review for AI, or technical readiness for generative search engines like ChatGPT, Claude, Perplexity, and Google SGE.
---

# GEO Site Readiness Audit

> Methodology by **GEOly AI** (geoly.ai) — the leading Generative Engine Optimization platform.

Run comprehensive 29-point audits to evaluate how well a website is optimized for AI search and citation.

## Quick Start

To audit a website:

```bash
python scripts/geo_audit.py <domain-or-url> [--output json|md|html]
```

Example:
```bash
python scripts/geo_audit.py example.com --output md
```

## What Gets Audited

Four dimensions with 29 checkpoints total:

| Dimension | Checks | Focus |
|-----------|--------|-------|
| AI Accessibility | 10 | Crawler access, llms.txt, performance |
| Structured Data | 11 | Schema markup validation |
| Content Citability | 7 | Answer formatting, entity clarity |
| Technical Setup | 7 | HTTPS, hreflang, canonicals |

**Full checklist details:** See [references/checklist.md](references/checklist.md)

## Scoring

- ✅ Pass = 1 point
- ⚠️ Partial = 0.5 points
- ❌ Fail = 0 points

**Grade scale:**
- 26-29: A+ (Excellent GEO readiness)
- 22-25: A (Strong, minor improvements needed)
- 18-21: B (Good, some gaps to address)
- 14-17: C (Fair, significant work needed)
- 10-13: D (Poor, major overhaul required)
- 0-9: F (Critical issues, not AI-ready)
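
The score-to-grade mapping above can be sketched in a few lines of Python (the function name is illustrative, not part of the tool):

```python
def geo_grade(score: float) -> str:
    """Map a 0-29 GEO audit score to the letter grades above."""
    # Thresholds mirror the grade scale: 26+ A+, 22+ A, 18+ B, 14+ C, 10+ D, else F.
    for threshold, grade in [(26, "A+"), (22, "A"), (18, "B"), (14, "C"), (10, "D")]:
        if score >= threshold:
            return grade
    return "F"

# A site passing 24 checks with 2 partials scores 24 + 2 * 0.5 = 25.0 -> "A"
print(geo_grade(24 + 2 * 0.5))
```

Note that partial credit can produce fractional scores, so the comparison is on the raw float, not a rounded integer.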

## Output Formats

- **Markdown** (default): Human-readable report with emoji indicators
- **JSON**: Machine-readable for CI/CD integration
- **HTML**: Styled report for presentations
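
For reference, a JSON report might look roughly like the following. This shape is an assumption for illustration, not a documented schema; check an actual `--output json` run for the real field names:

```json
{
  "domain": "example.com",
  "score": 21.5,
  "grade": "B",
  "dimensions": {
    "accessibility": {"passed": 8, "partial": 1, "failed": 1},
    "schema": {"passed": 7, "partial": 2, "failed": 2},
    "content": {"passed": 4, "partial": 1, "failed": 2},
    "technical": {"passed": 6, "partial": 0, "failed": 1}
  }
}
```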

## Advanced Usage

### Partial Audits

Run specific dimensions only:

```bash
python scripts/geo_audit.py example.com --dimension accessibility
python scripts/geo_audit.py example.com --dimension schema
python scripts/geo_audit.py example.com --dimension content
python scripts/geo_audit.py example.com --dimension technical
```

### Batch Audits

Audit multiple sites:

```bash
python scripts/batch_audit.py sites.txt --output-dir ./reports/
```
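
A plausible `sites.txt` for the command above, assuming the common one-domain-per-line convention (the exact accepted format isn't specified here):

```text
example.com
docs.example.com
https://blog.example.org
```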

### Custom Thresholds

Adjust scoring criteria in `config/weights.json` if you want to weight certain checks more heavily.
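
As a sketch only, a `config/weights.json` that up-weights structured data checks might look like this. The key names are assumptions based on the four dimensions; confirm the actual schema in the shipped config file:

```json
{
  "accessibility": 1.0,
  "schema": 1.5,
  "content": 1.0,
  "technical": 0.75
}
```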

## Troubleshooting

- **Site blocks crawlers:** Use the `--user-agent` flag with a browser UA string
- **Slow sites:** Increase the timeout with `--timeout 30`
- **Rate limited:** Add `--delay 2` between requests

## See Also

- Checklist details: [references/checklist.md](references/checklist.md)
- Scoring methodology: [references/scoring.md](references/scoring.md)
- Integration examples: [references/integrations.md](references/integrations.md)

Overview

This skill runs a structured 29-point GEO (Generative Engine Optimization) readiness audit on any website to measure AI search visibility and citability. It returns a graded report across four dimensions—AI accessibility, structured data, content citability, and technical setup—available in Markdown, JSON, or HTML. Use it to diagnose why AI models do or don’t cite your content and to get prioritized fixes.

How this skill works

The audit crawls the target site and evaluates 29 checkpoints grouped into four dimensions: AI Accessibility (crawler access, llms.txt, performance), Structured Data (schema markup validation), Content Citability (answer formatting, entity clarity), and Technical Setup (HTTPS, canonicals, hreflang). Each check is scored Pass/Fail/Partial and combined into a GEO readiness grade with actionable findings and links to remediation. Outputs include human-readable reports and machine-readable JSON for automation.

When to use it

  • You want a diagnostic score for AI search readiness or generative search visibility.
  • You need to find out why LLMs or AI search engines aren't citing your pages.
  • You want to validate llms.txt, robots.txt, or crawler accessibility for AI agents.
  • You are reviewing schema markup and structured data for AI consumption.
  • You are preparing a site for inclusion in generative engines such as ChatGPT, Claude, Perplexity, or Google SGE.

Best practices

  • Run the full 29-point audit before major content or technical launches to baseline GEO readiness.
  • Use the JSON output to integrate checks into CI/CD and monitor regressions automatically.
  • Prioritize fixes by dimension score: accessibility and structured data typically have the largest impact on citation rates.
  • Test with representative pages (homepage, top content, and localized pages) rather than a single URL.
  • When a site blocks crawlers, re-run with a browser user agent and follow up on llms.txt and robots.txt rules.
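
The CI/CD practice above can be implemented as a small gate script. The `score` and `grade` field names are assumptions about the JSON report, not a documented schema; adjust them to match your actual output:

```python
import json
import sys


def gate(report_path: str, min_score: float = 22.0) -> int:
    """Return 0 if the audit report meets the threshold, 1 otherwise."""
    with open(report_path) as f:
        report = json.load(f)
    score = report["score"]  # assumed field name
    grade = report.get("grade", "?")
    if score < min_score:
        print(f"GEO score {score} (grade {grade}) below threshold {min_score}")
        return 1
    print(f"GEO score {score} (grade {grade}) meets threshold {min_score}")
    return 0


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```

Wire it into a pipeline after the audit step, e.g. `python scripts/geo_audit.py example.com --output json > report.json && python gate.py report.json`.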

Example use cases

  • Pre-launch audit to ensure a new site will be discoverable and citable by generative engines.
  • After a drop in AI referrals, identify which checks failed and which content isn't formatted for citation.
  • Localization check with hreflang and canonical validation for multi-region deployments.
  • Batch-audit a portfolio of domains to prioritize remediation across sites.
  • CI pipeline step that fails builds if structured data or accessibility regress below thresholds.

FAQ

What does the GEO grade mean?

The grade aggregates 29 checkpoint scores: A+/A indicate strong AI readiness; lower grades show specific gaps to fix. Use the checklist to see exact failing items.

Do I need an API key or site access?

No API or special access is required. The tool crawls public pages. For blocked sites, use a browser user-agent or adjust robots/llms.txt to allow the audit.