This skill improves AI reliability by identifying specific bottlenecks, addressing them with targeted fixes, and tuning the system quantitatively through fast feedback loops.
```
npx playbooks add skill coowoolf/insighthunt-skills --skill unstuck-scaling
```
---
name: unstuck-scaling
description: Use when AI agents frequently hit dead ends, when reliability is the main constraint on scaling utility, or when general model improvements don't solve specific blockers
---
# The Unstuck Scaling Framework
## Overview
A systematic approach to improving AI reliability by treating **"getting stuck"** as the primary bottleneck. Instead of pursuing broad improvements, identify specific failure modes and create tight feedback loops around them.
**Core principle:** Address specific bottlenecks, not general intelligence.
## The Cycle
```
┌─────────────────────────────────────────────────────────────────┐
│ │
│ ┌───────────────────┐ │
│ │ IDENTIFY │ │
│ │ 'Stuck' Points │ │
│ │ (auth, payments) │ │
│ └─────────┬─────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────┐ │
│ │ ADDRESS │ │
│ │ Specific │ │
│ │ Bottlenecks │ │
│ └─────────┬─────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────┐ │
│ │ QUANTITATIVELY │ │
│ │ Tune System │ │
│ │ (pass/fail rate) │ │
│ └─────────┬─────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────┐ │
│ │ FAST FEEDBACK │─────────────────────────┐ │
│ │ Loop │ │ │
│ └───────────────────┘ │ │
│ ▲ │ │
│ └───────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Key Principles
| Principle | Description |
|-----------|-------------|
| **Specific blockers** | Identify exact points where AI fails |
| **Quantitative tuning** | Measure stuck rates, not vibes |
| **Fast feedback** | Rapid iteration on fixes |
| **Bottleneck focus** | Specific roadblocks > general intelligence |
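"Measure stuck rates, not vibes" can be reduced to two small functions. A minimal sketch, assuming a hypothetical session-log format where each record carries an outcome and, when stuck, the category of the blocker (the field names and data here are illustrative, not part of the framework):

```python
from collections import Counter

# Hypothetical session records; field names are illustrative.
sessions = [
    {"outcome": "pass"},
    {"outcome": "stuck", "category": "auth"},
    {"outcome": "stuck", "category": "payments"},
    {"outcome": "pass"},
    {"outcome": "stuck", "category": "auth"},
]

def stuck_rate(sessions):
    """Fraction of sessions that ended stuck -- a number, not a vibe."""
    if not sessions:
        return 0.0
    return sum(s["outcome"] == "stuck" for s in sessions) / len(sessions)

def stuck_by_category(sessions):
    """Tally each distinct stuck point so fixes target the biggest blocker."""
    return Counter(s["category"] for s in sessions if s["outcome"] == "stuck")

print(stuck_rate(sessions))         # 0.6
print(stuck_by_category(sessions))  # Counter({'auth': 2, 'payments': 1})
```

Tracking the same two numbers before and after a fix is what turns "address specific bottlenecks" into a verifiable claim.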
## Common Mistakes
- Focusing on general model improvements instead of specific blockers
- Failing to measure "stuck" rates quantitatively
- Tolerating slow feedback loops that prevent rapid iteration
---
*Source: Anton Osika (Lovable, GPT Engineer) via Lenny's Podcast*
This skill applies a practical framework for scaling AI systems by treating ‘getting stuck’ as the main reliability bottleneck. It helps teams find exact failure points, fix them with targeted interventions, and drive improvements with tight measurement and fast iteration. Use it when reliability limits the product’s usefulness more than raw model capability.
The skill inspects execution logs, user interactions, and automated tests to identify concrete stuck points (e.g., auth flows, payment validation, API timeouts). It translates those failures into pass/fail metrics, prioritizes the highest-impact blockers, and prescribes focused fixes. Finally, it establishes short feedback cycles to quantitatively tune the system and verify that changes reduce the stuck rate.
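The prioritization step above can be sketched as a simple ranking. This is one plausible scoring scheme, not the skill's prescribed one: each stuck point gets a frequency count and an impact weight (both values below are made up for illustration), and blockers are fixed in descending score order:

```python
# Hypothetical stuck-point tallies; frequencies and impact weights are illustrative.
stuck_points = {
    "auth_flow":        {"frequency": 40, "impact": 3},
    "payment_validate": {"frequency": 10, "impact": 5},
    "api_timeout":      {"frequency": 25, "impact": 1},
}

def prioritize(points):
    """Rank blockers by frequency * impact so effort goes where it pays off most."""
    return sorted(
        points,
        key=lambda name: points[name]["frequency"] * points[name]["impact"],
        reverse=True,
    )

print(prioritize(stuck_points))
# ['auth_flow', 'payment_validate', 'api_timeout']
```

The scoring function is the part teams should adapt: a rare blocker that ends the whole session may deserve a higher impact weight than a frequent but recoverable one.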
**How is this different from general model improvement?**
This skill targets specific operational bottlenecks that cause agents to stop progressing, rather than making broad model changes that may not affect the measured failure modes.

**What metrics should I track first?**
Start with simple pass/fail rates for core tasks, mean time to recovery for stuck sessions, and the frequency of each distinct stuck point. These are actionable and easy to measure.
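The three starter metrics can be computed from a handful of stuck-session records. A minimal sketch under assumed data (record fields and recovery times are hypothetical):

```python
# Hypothetical stuck-session records; recovery times are in minutes.
stuck_sessions = [
    {"stuck_point": "auth", "recovery_min": 4},
    {"stuck_point": "payments", "recovery_min": 12},
    {"stuck_point": "auth", "recovery_min": 8},
]
total_sessions = 20  # all sessions in the measurement window, stuck or not

# 1. Pass rate: sessions that never got stuck.
pass_rate = 1 - len(stuck_sessions) / total_sessions

# 2. Mean time to recovery for stuck sessions.
mttr = sum(s["recovery_min"] for s in stuck_sessions) / len(stuck_sessions)

# 3. Frequency of each distinct stuck point.
freq = {}
for s in stuck_sessions:
    freq[s["stuck_point"]] = freq.get(s["stuck_point"], 0) + 1

print(pass_rate)  # 0.85
print(mttr)       # 8.0
print(freq)       # {'auth': 2, 'payments': 1}
```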