
This skill helps you scale systems and organizations by identifying bottlenecks, selecting small high-leverage moves, and validating with guardrails.

```text
npx playbooks add skill openclaw/skills --skill scale
```

Review the files below or copy the command above to add this skill to your agents.

Files (9)

`SKILL.md` (5.7 KB)
---
name: Scale Frameworks
slug: scale
version: 1.0.0
homepage: https://clawic.com/skills/scale
description: Scale systems, software architecture, and companies with bottleneck mapping, staged leverage plans, and risk-aware execution loops.
changelog: Initial release with cross-domain scaling frameworks, bottleneck diagnostics, and execution cadence playbooks.
metadata: {"clawdbot":{"emoji":"CHART","requires":{"bins":[],"config":["~/scale/"]},"os":["linux","darwin","win32"],"configPaths":["~/scale/"]}}
---

## Setup

On first use, read `setup.md` for integration and activation guidance.

## When to Use

Use this skill when the user wants to scale something with real constraints: technical systems, software architecture, organizations, operations, or go-to-market capacity.

The skill applies the same core logic across domains: find the bottleneck, select the smallest high-leverage move, and verify with explicit guardrails before expanding.

This skill is advisory and planning-focused. It does not run infrastructure changes, reorganize teams, or execute live migrations without user confirmation and domain tooling.

## Architecture

Memory lives in `~/scale/`. See `memory-template.md` for structure and status fields.

```text
~/scale/
|- memory.md                  # Durable scaling context and activation preferences
|- bottleneck-map.md          # Active constraints and bottleneck hypotheses
|- leverage-backlog.md        # Candidate changes ranked by impact and effort
`- experiment-log.md          # Outcomes, regressions, and rollout notes
```

## Quick Reference

Use the smallest relevant file for the current scaling problem.

| Topic | File |
|-------|------|
| Setup and integration | `setup.md` |
| Memory structure and states | `memory-template.md` |
| Universal intake and bottleneck diagnosis | `scale-diagnostic.md` |
| Infrastructure and platform scaling | `system-scale-framework.md` |
| Software architecture scaling | `architecture-scale-framework.md` |
| Team and business scaling | `company-scale-framework.md` |
| Cadence, metrics, and rollout control | `execution-cadence.md` |

## Core Rules

### 1. Define Scale Target Before Solutions
Always lock these inputs first:
- What must scale: throughput, reliability, team output, revenue, or customer base
- Time horizon: immediate, quarter, or year
- Non-negotiable constraints: budget, compliance, headcount, latency, quality

No target, no valid scaling plan.

### 2. Work the BOLT Loop
For every scaling request, apply BOLT in order:
- Bottleneck: identify the dominant limiting factor now
- Objective: define measurable win condition
- Levers: list 3 to 5 candidate interventions
- Test: run staged validation with rollback criteria

Do not skip directly from symptoms to large transformations.
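The loop above can be sketched as a simple plan structure. This is an illustrative sketch only; the class and field names are assumptions, not part of the skill's files:

```python
from dataclasses import dataclass, field


@dataclass
class BoltPlan:
    """One pass through the BOLT loop (all names are illustrative)."""
    bottleneck: str                                    # dominant limiting factor now
    objective: str                                     # measurable win condition
    levers: list[str] = field(default_factory=list)    # 3 to 5 candidate interventions
    rollback_criteria: list[str] = field(default_factory=list)

    def is_ready_to_test(self) -> bool:
        # A plan reaches the Test stage only when every earlier stage is filled in.
        return (
            bool(self.bottleneck)
            and bool(self.objective)
            and 3 <= len(self.levers) <= 5
            and bool(self.rollback_criteria)
        )


plan = BoltPlan(
    bottleneck="CI queue saturates at peak, blocking merges",
    objective="Median merge-to-deploy time under 30 minutes",
    levers=["add CI runners", "cache dependencies", "split slow test suite"],
    rollback_criteria=["change failure rate rises above 5%"],
)
```

A plan that skips a stage (no bottleneck named, no rollback criteria) fails the readiness check, which is the point: symptoms never jump straight to a large transformation.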

### 3. Prioritize Smallest Effective Change
Default to interventions that unlock capacity fast with bounded risk:
- Remove queueing friction before adding complexity
- Improve interfaces and ownership before splitting services
- Standardize repeated work before hiring aggressively

Big rewrites are last resort, not default strategy.
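One way to apply this rule to a leverage backlog is a crude impact-per-effort score discounted by risk. The scoring formula, scales, and entries below are assumptions for illustration, not a prescribed method:

```python
# Hypothetical leverage backlog: each candidate scored 1-10 on
# impact, effort, and risk. Field names and scales are assumptions.
candidates = [
    {"name": "remove queueing friction", "impact": 7, "effort": 2, "risk": 1},
    {"name": "split the monolith",       "impact": 9, "effort": 9, "risk": 8},
    {"name": "standardize intake form",  "impact": 5, "effort": 1, "risk": 1},
]


def leverage(c):
    # Impact per unit of effort, discounted as risk grows. This biases
    # the ranking toward small, bounded changes over big rewrites.
    return c["impact"] / (c["effort"] * (1 + c["risk"] / 10))


ranked = sorted(candidates, key=leverage, reverse=True)
```

Under this scoring, the cheap bounded changes rank first and the monolith split ranks last, matching the rule: big rewrites are the last resort, not the default.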

### 4. Price Second-Order Effects Explicitly
Each recommendation must include likely side effects:
- New failure modes
- Cost and operational overhead growth
- Coordination load across teams
- Risk of local optimization hurting global performance

If the second-order risk is unknown, mark it as a hypothesis and constrain the rollout.

### 5. Pair Every KPI with a Guardrail
Never scale on a single growth metric. Pair it with guardrails:
- Throughput with error rate
- Deploy velocity with change failure rate
- Sales growth with gross margin and support load

If guardrails degrade, pause expansion and stabilize.
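A minimal guardrail gate can make this rule mechanical. The metric names and thresholds here are illustrative assumptions:

```python
# Hypothetical KPI/guardrail pairing: throughput is the growth metric
# being pushed; each guardrail is (current value, max allowed).
kpi = {"throughput_rps": 1200}          # shown for context only
guardrails = {
    "error_rate": (0.012, 0.02),
    "p99_latency_ms": (310, 400),
}


def may_expand(guardrails):
    # Expansion continues only while every paired guardrail stays
    # inside its bound; one degraded guardrail pauses the rollout.
    return all(current <= limit for current, limit in guardrails.values())


decision = "expand" if may_expand(guardrails) else "pause and stabilize"
```

The gate is deliberately all-or-nothing: a single degraded guardrail flips the decision to "pause and stabilize" regardless of how well the growth metric is doing.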

### 6. Separate Temporary Boosts from Durable Capacity
Label every action as one of two types:
- Temporary boost: overtime, manual review, tactical exceptions
- Durable capacity: automation, architecture simplification, reusable process

Use temporary boosts only to buy time for durable capacity.

### 7. Institutionalize What Works
After each successful change:
- Capture trigger conditions
- Document operating playbook and owner
- Add review cadence and retirement criteria

Scaling compounds only when wins become repeatable systems.

## Common Traps

- Hiring before workflow clarity -> headcount increases coordination drag.
- Splitting monoliths before interface discipline -> distributed outages with slower delivery.
- Scaling traffic without SLO guardrails -> growth hides reliability collapse.
- Copying big-company org charts too early -> decision latency and ownership gaps.
- Optimizing one bottleneck in isolation -> next bottleneck shifts and total flow does not improve.
- Confusing activity with throughput -> teams look busy while output stagnates.

## Security & Privacy

**Data that leaves your machine:**
- None by default from this skill itself.

**Data that stays local:**
- Scaling context and learned operating patterns under `~/scale/`.

**This skill does NOT:**
- Execute undeclared network requests automatically.
- Apply irreversible technical or organizational changes without explicit user approval.
- Store secrets, credentials, or payment data in local memory files.
- Modify files outside `~/scale/` for memory storage.

## Related Skills
Install with `clawhub install <slug>` if the user confirms:
- `architecture` - Architectural fundamentals and constraints that shape scaling decisions.
- `systems-architect` - Reliability, infrastructure, and platform tradeoff patterns.
- `startup` - Stage-aware startup execution and prioritization logic.
- `growth` - Demand generation and growth loops once capacity is ready.
- `strategy` - Strategic framing and tradeoff analysis across long horizons.

## Feedback

- If useful: `clawhub star scale`
- Stay updated: `clawhub sync`

## Overview

This skill helps scale systems, software architecture, and organizations by mapping bottlenecks, proposing staged high-leverage moves, and running risk-aware execution loops. It focuses on safe, measurable growth: define targets, find the dominant constraint, pick the smallest effective intervention, and verify with guardrails before expanding.

## How This Skill Works

The skill stores durable scaling context and hypotheses in a local memory directory and uses a repeatable BOLT loop: identify the Bottleneck, set an Objective, list Levers, and run Tests with rollback criteria. It produces prioritized candidate changes, documents second-order effects, and logs experiments and outcomes so wins can be institutionalized. It is advisory only and does not perform live changes without explicit user approval.

## When to Use It

- You need to increase throughput, reliability, team output, or revenue under real constraints.
- You are diagnosing why recent scaling efforts did not improve overall flow.
- You are planning staged rollouts where safety and rollback rules matter.
- You are choosing between architecture changes, process fixes, or headcount to add capacity.
- You are designing KPIs with paired guardrails before expanding quickly.

## Best Practices

- Always define what must scale, the time horizon, and hard constraints before proposing solutions.
- Apply the BOLT loop in order and avoid jumping from symptoms to big rewrites.
- Prefer the smallest effective change that unlocks capacity with bounded risk.
- Explicitly call out second-order effects and treat unknown effects as hypotheses.
- Pair every growth metric with guardrails and stop expansion if guardrails degrade.
- Record triggers, owners, and playbooks so successful changes become repeatable.

## Example Use Cases

- Diagnose a product team's delivery slowdown and choose between tooling, process, or hiring.
- Reduce request queueing in a platform by identifying the dominant latency source and testing a small cache or batching change.
- Scale a sales pipeline by pairing lead volume targets with support capacity and margin guardrails.
- Plan a service split only after strengthening interfaces and ownership to avoid distributed outages.
- Run a staged capacity increase with rollback criteria logged and automated metrics checks.

## FAQ

**Does the skill make live infrastructure changes?**

No. The skill provides plans and staged validation steps; it will not execute changes without explicit user approval and domain tooling.

**Where is state stored?**

All durable scaling context, bottleneck maps, leverage backlogs, and experiment logs are kept locally under `~/scale/`.