
competitor-scan skill


This skill helps you benchmark competitors using Browser MCP and WebSearch to extract UI patterns and design insights.

npx playbooks add skill wellapp-ai/well --skill competitor-scan


---
name: competitor-scan
description: Research best-in-class products using Browser MCP and WebSearch
---

# Competitor Scan Skill

Research how best-in-class products solve similar problems using Browser MCP for screenshots and WebSearch for teardowns.

## When to Use

- At the start of DIVERGE Loop (L1)
- When exploring new UI patterns
- When benchmarking against industry standards

## Instructions

### Phase 1: Identify Competitors

Use the domain competitor table:

| Domain | Products to Study |
|--------|-------------------|
| Workspaces/Collaboration | Notion, Linear, Slack, Figma, Attio |
| Data Tables | Airtable, Retool, Rows, Grist |
| AI Chat | ChatGPT, Claude, Gemini, Perplexity |
| Onboarding/Flows | Stripe, Plaid, Mercury, Ramp |
| Settings/Admin | Vercel, Railway, PlanetScale |
| Invitations/Team | Slack, Notion, Linear, Figma |
| Billing/Subscriptions | Stripe, Paddle, Chargebee |

### Phase 2: Screenshot Key Flows (Browser MCP)

For each relevant competitor:

```
1. browser_navigate to the product URL or relevant page
2. browser_snapshot to understand the page structure
3. browser_take_screenshot to capture the UI
4. browser_click / browser_type to navigate through flows
```

**Capture:**
- Entry points (how users start the flow)
- Key screens (main interactions)
- Edge cases (empty states, errors)
- Micro-interactions (hover states, transitions)
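The steps above can be sketched as an ordered call plan. This is a hypothetical illustration, not a real MCP client: the tool names (`browser_navigate`, `browser_snapshot`, `browser_take_screenshot`, `browser_click`) come from the steps above, but the plan structure, the `capture_plan` helper, and the example URL and selectors are assumptions for the sketch.

```python
# Hypothetical sketch: build an ordered plan of Browser MCP tool calls
# for walking one competitor flow. An agent would execute each entry
# in sequence; here we only construct the plan as plain data.

def capture_plan(url: str, flow_steps: list[dict]) -> list[dict]:
    """Return the sequence of tool calls to capture one flow."""
    plan = [
        {"tool": "browser_navigate", "args": {"url": url}},
        {"tool": "browser_snapshot", "args": {}},         # page structure
        {"tool": "browser_take_screenshot", "args": {}},  # entry point
    ]
    for step in flow_steps:
        # Advance through the flow, screenshotting after each interaction.
        plan.append({"tool": "browser_click", "args": {"selector": step["selector"]}})
        plan.append({"tool": "browser_take_screenshot", "args": {}})
    return plan

# Example: a two-step signup flow (URL and selectors are invented).
plan = capture_plan(
    "https://linear.app/signup",
    [{"selector": "#continue"}, {"selector": "#invite-team"}],
)
```

Keeping the plan as data before executing it makes the scan reviewable: you can see every page the agent will touch before any browsing happens.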

### Phase 3: Research Teardowns (WebSearch)

Search for existing analysis:

```
WebSearch "[Product] UI teardown [feature]"
WebSearch "[Product] UX case study [feature]"
WebSearch "[Feature] best practices design patterns"
```
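The three templates above can be expanded mechanically per product/feature pair. A minimal sketch (the `teardown_queries` helper is an assumption, not part of the skill):

```python
def teardown_queries(product: str, feature: str) -> list[str]:
    """Expand the WebSearch templates above for one product/feature pair."""
    return [
        f"{product} UI teardown {feature}",
        f"{product} UX case study {feature}",
        f"{feature} best practices design patterns",
    ]

# Example expansion for one competitor/feature pair.
queries = teardown_queries("Stripe", "checkout")
```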

### Phase 4: Extract Patterns

For each competitor, note:

| Aspect | Pattern |
|--------|---------|
| **Layout** | How is content organized? |
| **Navigation** | How do users move between states? |
| **Actions** | How are primary/secondary actions presented? |
| **Feedback** | How is success/error communicated? |
| **Copy** | What language/tone is used? |
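One way to record these observations consistently is a small data structure that renders directly into the pattern table used in the output format. A sketch, assuming nothing beyond the table columns above (the `Observation` class and the example values are invented):

```python
from dataclasses import dataclass

# Consistent headings under which patterns are extracted (from the table above).
ASPECTS = ("Layout", "Navigation", "Actions", "Feedback", "Copy")

@dataclass
class Observation:
    """One pattern noted for one competitor."""
    pattern: str
    product: str
    description: str

    def to_row(self) -> str:
        """Render as a row of the 'Key Patterns Observed' markdown table."""
        return f"| {self.pattern} | {self.product} | {self.description} |"

# Example observation (values are illustrative).
obs = Observation(
    "Inline empty state", "Notion",
    "Empty tables show a one-click template picker",
)
```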

## Output Format

After running this skill, output:

```markdown
## Competitor Scan

### Products Analyzed
1. [Product A] - [URL or feature]
2. [Product B] - [URL or feature]
3. [Product C] - [URL or feature]

### Key Patterns Observed

| Pattern | Product | Description |
|---------|---------|-------------|
| [Pattern] | [Product] | [How they do it] |

### Insights for Our Design
- [Insight 1]: [How to apply]
- [Insight 2]: [How to apply]

### Screenshots Captured
- [Description of screenshot 1]
- [Description of screenshot 2]
```
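Filling the template above can also be done programmatically once findings are collected. A hedged sketch (the `render_scan` helper and its inputs are assumptions for illustration; the headings match the template):

```python
def render_scan(products, patterns, insights):
    """Fill the Competitor Scan template with collected findings."""
    lines = ["## Competitor Scan", "", "### Products Analyzed"]
    lines += [f"{i}. {name} - {ref}" for i, (name, ref) in enumerate(products, 1)]
    lines += ["", "### Key Patterns Observed", "",
              "| Pattern | Product | Description |",
              "|---------|---------|-------------|"]
    lines += [f"| {p} | {prod} | {desc} |" for p, prod, desc in patterns]
    lines += ["", "### Insights for Our Design"]
    lines += [f"- {insight}: {how} " .rstrip() for insight, how in insights]
    return "\n".join(lines)

# Example report from one scanned flow (values are illustrative).
report = render_scan(
    [("Linear", "team invite flow")],
    [("Magic-link invite", "Linear", "Single shareable link with role presets")],
    [("Default to link invites", "Reduce per-email friction")],
)
```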

## Invocation

Invoke manually with "use competitor-scan skill", or follow the Ask mode DIVERGE loop, which references this skill's phases.

## Related Skills

- `problem-framing` - Define what problem to research
- `design-context` - Compare external patterns with internal

Overview

This skill researches best-in-class products to surface UI/UX patterns, flows, and teardown insights using Browser MCP for screenshots and WebSearch for analysis. It targets competitive domains relevant to FinOps, invoices, onboarding, and collaboration so teams can make informed design and product decisions. The output is a concise competitor scan with captured screenshots, pattern tables, and actionable design recommendations.

How this skill works

The skill identifies target competitors from a domain table, then walks key flows in-browser to capture entry points, screens, edge cases, and micro-interactions via Browser MCP. It augments screenshots with WebSearch-driven teardowns, case studies, and pattern research to verify why designs work. Finally, it extracts layout, navigation, action, feedback, and copy patterns and emits a structured competitor scan for product and design teams.

When to use it

  • At the start of the DIVERGE Loop (L1) to set benchmarks
  • When evaluating new UI or onboarding patterns for invoices and billing
  • Before major redesigns to avoid reinventing common solutions
  • When creating product requirements or UX specs
  • To validate micro-interactions and error handling approaches

Best practices

  • Pick 5–8 direct competitors across relevant domains and prioritize flows that mirror your product goals
  • Capture full flow context: entry point, happy path, empty/error states, and edge cases
  • Combine screenshots with external teardowns to understand intent and tradeoffs
  • Extract patterns under consistent headings: Layout, Navigation, Actions, Feedback, Copy
  • Record short notes with each screenshot: why it works, potential pitfalls, and relevance to our product

Example use cases

  • Benchmarking billing subscription flows against Stripe and Chargebee to improve conversion
  • Studying onboarding flows (Plaid, Ramp) to reduce activation time for finance users
  • Comparing invoice extraction UIs and copy across tools to improve perceived accuracy and user trust
  • Capturing micro-interactions in collaboration apps (Notion, Slack) to inform notification and invitation designs
  • Auditing admin/settings pages (Vercel, PlanetScale) to simplify access controls and billing visibility

FAQ

How many competitors should I scan for a typical pass?

Scan 5–8 competitors focusing on closest feature matches and one or two aspirational products for broader ideas.

What screenshots are essential?

Always capture entry points, the main interaction screens, empty/error states, and any notable micro-interactions or transitions.