
i-critique skill

/ai/skills/i-critique

This skill helps evaluate interface design from a UX perspective, offering actionable critiques on hierarchy, IA, emotion, and usability.

npx playbooks add skill steveclarke/dotfiles --skill i-critique

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (5.4 KB)
---
name: i-critique
description: Evaluate design effectiveness from a UX perspective. Assesses visual hierarchy, information architecture, emotional resonance, and overall design quality with actionable feedback.
user-invokable: true
args:
  - name: area
    description: The feature or area to critique (optional)
    required: false
---
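For illustration, the optional `area` argument scopes the critique to a single feature. The exact invocation syntax depends on your agent runtime, and `checkout-flow` is a hypothetical area name, not one defined by this skill:

```
/i-critique                  # critique the whole interface
/i-critique checkout-flow    # critique only the named feature or area
```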

## MANDATORY PREPARATION

Use the i-frontend-design skill; it contains design principles, anti-patterns, and the **Context Gathering Protocol**. Follow the protocol before proceeding: if no design context exists yet, you MUST run teach-impeccable first. In addition, gather what the interface is trying to accomplish.

---

Conduct a holistic design critique, evaluating whether the interface actually works—not just technically, but as a designed experience. Think like a design director giving feedback.

## Design Critique

Evaluate the interface across these dimensions:

### 1. AI Slop Detection (CRITICAL)

**This is the most important check.** Does this look like every other AI-generated interface from 2024-2025?

Review the design against ALL the **DON'T** guidelines in the i-frontend-design skill—they are the fingerprints of AI-generated work. Check for the AI color palette, gradient text, dark mode with glowing accents, glassmorphism, hero metric layouts, identical card grids, generic fonts, and all other tells.

**The test**: If you showed this to someone and said "AI made this," would they believe you immediately? If yes, that's the problem.

### 2. Visual Hierarchy
- Does the eye flow to the most important element first?
- Is there a clear primary action? Can you spot it in 2 seconds?
- Do size, color, and position communicate importance correctly?
- Is there visual competition between elements that should have different weights?

### 3. Information Architecture
- Is the structure intuitive? Would a new user understand the organization?
- Is related content grouped logically?
- Are there too many choices at once? (cognitive overload)
- Is the navigation clear and predictable?

### 4. Emotional Resonance
- What emotion does this interface evoke? Is that intentional?
- Does it match the brand personality?
- Does it feel trustworthy, approachable, premium, playful—whatever it should feel?
- Would the target user feel "this is for me"?

### 5. Discoverability & Affordance
- Are interactive elements obviously interactive?
- Would a user know what to do without instructions?
- Are hover/focus states providing useful feedback?
- Are there hidden features that should be more visible?

### 6. Composition & Balance
- Does the layout feel balanced or uncomfortably weighted?
- Is whitespace used intentionally or just leftover?
- Is there visual rhythm in spacing and repetition?
- Does asymmetry feel designed or accidental?

### 7. Typography as Communication
- Does the type hierarchy clearly signal what to read first, second, third?
- Is body text comfortable to read? (line length, spacing, size)
- Do font choices reinforce the brand/tone?
- Is there enough contrast between heading levels?

### 8. Color with Purpose
- Is color used to communicate, not just decorate?
- Does the palette feel cohesive?
- Are accent colors drawing attention to the right things?
- Does it work for colorblind users? (not just technically—does meaning still come through?)

### 9. States & Edge Cases
- Empty states: Do they guide users toward action, or just say "nothing here"?
- Loading states: Do they reduce perceived wait time?
- Error states: Are they helpful and non-blaming?
- Success states: Do they confirm and guide next steps?

### 10. Microcopy & Voice
- Is the writing clear and concise?
- Does it sound like a human (the right human for this brand)?
- Are labels and buttons unambiguous?
- Does error copy help users fix the problem?

## Generate Critique Report

Structure your feedback as a design director would:

### Anti-Patterns Verdict
**Start here.** Pass/fail: Does this look AI-generated? List specific tells from the skill's Anti-Patterns section. Be brutally honest.

### Overall Impression
A brief gut reaction—what works, what doesn't, and the single biggest opportunity.

### What's Working
Highlight 2-3 things done well. Be specific about why they work.

### Priority Issues
The 3-5 most impactful design problems, ordered by importance:

For each issue:
- **What**: Name the problem clearly
- **Why it matters**: How this hurts users or undermines goals
- **Fix**: What to do about it (be concrete)
- **Command**: Which command to use (prefer: /i-extract, /i-distill, /i-arrange, /i-harden, /i-clarify, /i-critique, /i-delight, /i-onboard, /i-colorize, /i-animate, /i-audit, /i-quieter, /i-bolder, /i-typeset, /i-polish, /i-normalize, /i-overdrive, /i-adapt, /i-optimize — or other installed skills you're sure exist)
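As an illustration only (the buttons, the issue, and the chosen command here are hypothetical), a single priority-issue entry following the format above might look like:

```markdown
- **What**: "Save" and "Cancel" have identical visual weight.
- **Why it matters**: Users can't spot the primary action at a glance,
  which slows every form submission and invites mis-clicks.
- **Fix**: Style "Save" as the solid primary button; demote "Cancel"
  to a text link.
- **Command**: /i-clarify
```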

### Minor Observations
Quick notes on smaller issues worth addressing.

### Questions to Consider
Provocative questions that might unlock better solutions:
- "What if the primary action were more prominent?"
- "Does this need to feel this complex?"
- "What would a confident version of this look like?"

**Remember**:
- Be direct—vague feedback wastes everyone's time
- Be specific—"the submit button" not "some elements"
- Say what's wrong AND why it matters to users
- Give concrete suggestions, not just "consider exploring..."
- Prioritize ruthlessly—if everything is important, nothing is
- Don't soften criticism—developers need honest feedback to ship great design

Overview

This skill evaluates product interfaces from a UX and design-director perspective, producing a prioritized, actionable critique. It focuses on whether an interface actually works for real people—visual hierarchy, information architecture, emotional resonance, microcopy, states, and the deadly signs of generic AI output.

How this skill works

Follow the i-frontend-design Context Gathering Protocol before any review; if no context exists, run teach-impeccable first to collect goals and user context. The skill runs a ten-point inspection (including an AI Slop Detection check) and generates a structured critique with an Anti-Patterns verdict, overall impression, what’s working, top priority issues with concrete fixes and commands, minor observations, and provocative questions to unblock design decisions.

When to use it

  • Before a launch to catch high-impact UX risks
  • During design reviews to benchmark quality and consistency
  • When refactoring or redesigning a feature to prioritize work
  • To audit product pages for trust and conversion issues
  • When suspecting the design feels generic or AI-generated

Best practices

  • Always run the Context Gathering Protocol (i-frontend-design) first to anchor critiques in goals
  • Start with the AI Slop Detection check—if it fails, prioritize originality and clarity
  • Give 3–5 prioritized fixes, each with a why, a concrete fix, and a command to execute
  • Favor observable outcomes (faster task completion, clearer primary action) over vague aesthetics
  • Use short, direct language and call out exact elements (e.g., ‘primary CTA’, ‘hero metric panel’)

Example use cases

  • Reviewing a signup flow that has low conversion and unclear primary action
  • Auditing a dashboard suspected of overwhelming users with metrics and cards
  • Checking a marketing landing page for trust, emotional fit, and CTA prominence
  • Evaluating mobile app screens for discoverability and affordance issues
  • Spotting AI-generated visual patterns and recommending authentic alternatives

FAQ

Do I need to provide design files or a link?

Yes—supply prototype links, screenshots, or files and the Context Gathering answers so the critique is grounded in intent and constraints.

What is AI Slop Detection?

A focused check for visual fingerprints of generic AI-generated interfaces (color palettes, gradients, glassmorphism, identical grids). If users would assume ‘AI made this’, that’s the highest-priority problem.