---
name: competitor-scan
description: Research best-in-class products using Browser MCP and WebSearch
---
# Competitor Scan Skill
Research how best-in-class products solve similar problems using Browser MCP for screenshots and WebSearch for teardowns.
## When to Use
- At the start of DIVERGE Loop (L1)
- When exploring new UI patterns
- When benchmarking against industry standards
## Instructions
### Phase 1: Identify Competitors
Match the problem domain to products worth studying:
| Domain | Products to Study |
|--------|-------------------|
| Workspaces/Collaboration | Notion, Linear, Slack, Figma, Attio |
| Data Tables | Airtable, Retool, Rows, Grist |
| AI Chat | ChatGPT, Claude, Gemini, Perplexity |
| Onboarding/Flows | Stripe, Plaid, Mercury, Ramp |
| Settings/Admin | Vercel, Railway, PlanetScale |
| Invitations/Team | Slack, Notion, Linear, Figma |
| Billing/Subscriptions | Stripe, Paddle, Chargebee |
### Phase 2: Screenshot Key Flows (Browser MCP)
For each relevant competitor:
```
1. browser_navigate to the product URL or relevant page
2. browser_snapshot to understand the page structure
3. browser_take_screenshot to capture the UI
4. browser_click / browser_type to navigate through flows
```
**Capture:**
- Entry points (how users start the flow)
- Key screens (main interactions)
- Edge cases (empty states, errors)
- Micro-interactions (hover states, transitions)
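The Phase 2 walk can be sketched as an ordered plan of Browser MCP tool calls. The tool names (`browser_navigate`, `browser_snapshot`, etc.) come from this skill; the `plan_flow_walk` helper and the argument shapes are illustrative assumptions, not a real MCP client.

```python
# Sketch of a Phase 2 flow walk as an ordered list of Browser MCP tool calls.
# Tool names come from the skill; plan_flow_walk() and the argument shapes
# are hypothetical, for planning purposes only.

def plan_flow_walk(url: str, interactions: list[dict]) -> list[dict]:
    """Build the ordered tool-call plan for one competitor flow."""
    calls = [
        {"tool": "browser_navigate", "args": {"url": url}},
        {"tool": "browser_snapshot", "args": {}},         # understand structure
        {"tool": "browser_take_screenshot", "args": {}},  # capture the UI
    ]
    calls.extend(interactions)  # browser_click / browser_type steps
    return calls

plan = plan_flow_walk(
    "https://linear.app/signup",
    [{"tool": "browser_click", "args": {"element": "Continue with email"}}],
)
```

Repeat the `browser_take_screenshot` step after each interaction so edge cases and micro-interactions from the capture list above are recorded, not just the entry point.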
### Phase 3: Research Teardowns (WebSearch)
Search for existing analysis:
```
WebSearch "[Product] UI teardown [feature]"
WebSearch "[Product] UX case study [feature]"
WebSearch "[Feature] best practices design patterns"
```
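The three search templates above expand mechanically per product/feature pair; a minimal sketch (the `teardown_queries` helper is an assumption, not part of the skill):

```python
def teardown_queries(product: str, feature: str) -> list[str]:
    """Expand the Phase 3 WebSearch templates for one product/feature pair."""
    return [
        f"{product} UI teardown {feature}",
        f"{product} UX case study {feature}",
        f"{feature} best practices design patterns",
    ]

queries = teardown_queries("Stripe", "onboarding")
# e.g. "Stripe UI teardown onboarding"
```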
### Phase 4: Extract Patterns
For each competitor, note:
| Aspect | Pattern |
|--------|---------|
| **Layout** | How is content organized? |
| **Navigation** | How do users move between states? |
| **Actions** | How are primary/secondary actions presented? |
| **Feedback** | How is success/error communicated? |
| **Copy** | What language/tone is used? |
## Output Format
After running this skill, output:
```markdown
## Competitor Scan
### Products Analyzed
1. [Product A] - [URL or feature]
2. [Product B] - [URL or feature]
3. [Product C] - [URL or feature]
### Key Patterns Observed
| Pattern | Product | Description |
|---------|---------|-------------|
| [Pattern] | [Product] | [How they do it] |
### Insights for Our Design
- [Insight 1]: [How to apply]
- [Insight 2]: [How to apply]
### Screenshots Captured
- [Description of screenshot 1]
- [Description of screenshot 2]
```
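If the scan's findings are collected as structured data first, the markdown above can be assembled programmatically. A sketch, assuming simple tuple/list inputs (the `render_scan` helper is illustrative, not part of the skill):

```python
def render_scan(products, patterns, insights, screenshots) -> str:
    """Assemble the competitor-scan markdown from collected findings."""
    lines = ["## Competitor Scan", "", "### Products Analyzed"]
    lines += [f"{n}. {name} - {ref}" for n, (name, ref) in enumerate(products, 1)]
    lines += ["", "### Key Patterns Observed",
              "| Pattern | Product | Description |",
              "|---------|---------|-------------|"]
    lines += [f"| {pat} | {prod} | {desc} |" for pat, prod, desc in patterns]
    lines += ["", "### Insights for Our Design"]
    lines += [f"- {insight}: {how} " .strip() for insight, how in insights]
    lines += ["", "### Screenshots Captured"]
    lines += [f"- {shot}" for shot in screenshots]
    return "\n".join(lines)
```

Keeping findings structured until the final render also makes the pattern table easy to diff across repeated scans.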
## Invocation
Invoke manually with "use competitor-scan skill", or follow the Ask-mode DIVERGE loop, which references this skill's phases.
## Related Skills
- `problem-framing` - Define what problem to research
- `design-context` - Compare external patterns with internal
## FAQ

**How many competitors should I scan in a typical pass?**
Scan 5–8 competitors, focusing on the closest feature matches plus one or two aspirational products for broader ideas.

**Which screenshots are essential?**
Always capture entry points, the main interaction screens, empty/error states, and any notable micro-interactions or transitions.