This skill helps you tune MPC horizons and cost matrices for tension control, balancing tracking performance and actuator effort.
```
npx playbooks add skill benchflow-ai/skillsbench --skill mpc-horizon-tuning
```
Review the files below or copy the command above to add this skill to your agents.
---
name: mpc-horizon-tuning
description: Selecting MPC prediction horizon and cost matrices for web handling.
---
# MPC Tuning for Tension Control
## Prediction Horizon Selection
**Horizon N** affects performance and computation:
- Too short (N < 5): Poor disturbance rejection
- Too long (N > 20): Excessive computation
- Rule of thumb: N ≈ 2-3× settling time / dt
For R2R systems with dt = 0.01 s, **N = 5-15** is typical.
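As a quick sanity check, here is a minimal sketch of the rule of thumb above; the 0.05 s settling time is an assumed value for the dominant tension dynamics, not something the skill prescribes:

```python
import math

t_settle = 0.05   # assumed closed-loop settling time of the tension loop [s]
dt = 0.01         # sample period [s]

# Rule of thumb: N ~ 2-3 x settling time / dt, clamped to a practical range
N = math.ceil(2.5 * t_settle / dt)
N = min(max(N, 5), 20)
print(N)   # -> 13
```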
## Cost Matrix Design
**State cost Q**: Emphasize tension tracking
```python
import numpy as np

# Normalize weights by the reference values so the units are comparable
Q_tension = 100.0 / T_ref**2    # high weight on tension states
Q_velocity = 0.1 / v_ref**2     # lower weight on velocity states
Q = np.diag([Q_tension] * 6 + [Q_velocity] * 6)   # 6 tension + 6 velocity states
```
**Control cost R**: Penalize actuator effort
```python
R = 0.05 * np.eye(n_u)   # typical scaling 0.01-0.1; smaller = more aggressive control
```
## Trade-offs
| Change | Benefit | Cost |
|--------|---------|------|
| Higher Q | Faster tracking, lower steady-state error | More control effort, more aggressive transients |
| Higher R | Smoother control, less actuator wear | Slower response, higher tracking error |
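To see the trade-off numerically, the sketch below sweeps the R scaling on a toy 2-state discrete model (an assumption standing in for a single tension/velocity pair, not the full R2R plant) and uses the unconstrained discrete LQR as a proxy for MPC behaviour: larger R shrinks the feedback gain (less actuator effort) and pushes the closed-loop spectral radius toward 1 (slower response).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy 2-state discrete model (assumed values for illustration only)
A = np.array([[1.0, 0.01],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.01]])
Q = np.diag([100.0, 0.1])      # tension weighted far above velocity, as above

for r in (0.01, 0.1, 1.0):
    R = r * np.eye(1)
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # discrete LQR gain
    rho = max(abs(np.linalg.eigvals(A - B @ K)))          # closed-loop spectral radius
    print(f"R scale {r:>4}: gain norm {np.linalg.norm(K):7.1f}, spectral radius {rho:.3f}")
```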
## Terminal Cost
Use the LQR Riccati solution as the terminal cost to help guarantee closed-loop stability:
```python
from scipy.linalg import solve_continuous_are
P = solve_continuous_are(A, B, Q, R)   # for a discrete-time prediction model, use solve_discrete_are instead
```
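To show where the horizon N, the weights Q and R, and the terminal cost P come together, here is a minimal MPC sketch in CVXPY, reusing the toy 2-state model from the sweep above; all numerical values are assumptions for illustration, not part of the skill:

```python
import cvxpy as cp
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy discrete-time model and tuning (assumed values, matching the sweep above)
A = np.array([[1.0, 0.01], [0.0, 1.0]])
B = np.array([[0.0], [0.01]])
Q = np.diag([100.0, 0.1])
R = 0.05 * np.eye(1)
P = solve_discrete_are(A, B, Q, R)
P = (P + P.T) / 2                     # symmetrize the terminal cost for the QP
N = 10
x0 = np.array([0.5, 0.0])             # assumed initial tension error

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)   # stage cost
    constr.append(x[:, k + 1] == A @ x[:, k] + B @ u[:, k])       # dynamics
cost += cp.quad_form(x[:, N], P)                                  # LQR terminal cost
cp.Problem(cp.Minimize(cost), constr).solve()
print(u.value[:, 0])   # first optimal control move
```

The final `quad_form(x[:, N], P)` term is where the Riccati solution enters; in the unconstrained case it makes the finite-horizon cost match the infinite-horizon LQR cost-to-go.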
This skill helps select an MPC prediction horizon and design state/control cost matrices specifically for web-handling tension control. It provides practical rules of thumb for horizon sizing, concrete Q/R scaling suggestions, and guidance on terminal cost selection to ensure stability. The guidance is tuned for roll-to-roll (R2R) systems and fast sampling rates.
The skill evaluates the system sampling time and typical settling behavior to recommend a prediction horizon that balances disturbance rejection and computation. It prescribes how to scale the state cost Q to prioritize tension tracking and how to choose the control cost R to trade off aggressiveness against actuator wear. It also recommends computing an LQR-based terminal cost (P) to help guarantee stability beyond the prediction horizon.
## FAQ

**How do I choose the initial prediction horizon N?**
Start with N ≈ 2-3 × (settling time / dt). For typical R2R systems with dt = 0.01 s, use N = 5-15 and increase it if disturbance rejection is insufficient.

**What if actuators saturate with my chosen Q?**
Increase R to penalize actuator effort more, or reduce Q on less critical states; you can also grow the horizon more gradually or add explicit actuator input constraints, then re-tune.