
simpo skill

/06-post-training/simpo

This skill helps you align models with SimPO, a faster, reference-free preference optimization method that drops the separate reference model required by DPO.

npx playbooks add skill orchestra-research/ai-research-skills --skill simpo

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
5.8 KB
---
name: simpo-training
description: Simple Preference Optimization for LLM alignment. Reference-free alternative to DPO with stronger reported results (up to +6.4 points over DPO on AlpacaEval 2.0) and no reference model to keep in memory, so training is more efficient than DPO. Use for preference alignment when you want simpler, faster training than DPO/PPO.
version: 1.0.0
author: Orchestra Research
license: MIT
tags: [Post-Training, SimPO, Preference Optimization, Alignment, DPO Alternative, Reference-Free, LLM Alignment, Efficient Training]
dependencies: [torch, transformers, datasets, trl, accelerate]
---

# SimPO - Simple Preference Optimization

## Quick start

SimPO is a reference-free preference optimization method: it trains directly on preference pairs and outperforms DPO without loading a second (reference) model.
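
The underlying objective (from the SimPO paper linked under Resources, reproduced here for orientation) scores each response by its length-normalized log-likelihood under the policy itself and enforces a target margin γ = β × `gamma_beta_ratio` between chosen and rejected responses:

```latex
% SimPO objective, sigmoid variant; gamma = beta * gamma_beta_ratio (config keys below)
\mathcal{L}_{\text{SimPO}}(\pi_\theta) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim \mathcal{D}}\left[
    \log \sigma\!\left(
      \frac{\beta}{|y_w|}\log \pi_\theta(y_w \mid x)
      \;-\; \frac{\beta}{|y_l|}\log \pi_\theta(y_l \mid x)
      \;-\; \gamma
    \right)
  \right]
```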

**Installation**:
```bash
# Create environment
conda create -n simpo python=3.10 && conda activate simpo

# Install PyTorch 2.2.2
# Visit: https://pytorch.org/get-started/locally/

# Install alignment-handbook
git clone https://github.com/huggingface/alignment-handbook.git
cd alignment-handbook
python -m pip install .

# Install Flash Attention 2
python -m pip install flash-attn --no-build-isolation
```

**Training** (Mistral 7B):
```bash
ACCELERATE_LOG_LEVEL=info accelerate launch \
  --config_file accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_simpo.py \
  training_configs/mistral-7b-base-simpo.yaml
```

## Common workflows

### Workflow 1: Train from base model (Mistral 7B)

**Config** (`mistral-7b-base-simpo.yaml`):
```yaml
# Model
model_name_or_path: mistralai/Mistral-7B-v0.1
torch_dtype: bfloat16

# Dataset
dataset_mixer:
  HuggingFaceH4/ultrafeedback_binarized: 1.0
dataset_splits:
  - train_prefs
  - test_prefs

# SimPO hyperparameters
beta: 2.0                  # Reward scaling (2.0-10.0)
gamma_beta_ratio: 0.5       # Target margin as a fraction of beta (0-1)
loss_type: sigmoid          # sigmoid or hinge
sft_weight: 0.0             # Optional SFT regularization

# Training
learning_rate: 5e-7         # Critical: 3e-7 to 1e-6
num_train_epochs: 1
per_device_train_batch_size: 1
gradient_accumulation_steps: 8

# Output
output_dir: ./outputs/mistral-7b-simpo
```

**Launch training**:
```bash
accelerate launch --config_file accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_simpo.py training_configs/mistral-7b-base-simpo.yaml
```

### Workflow 2: Fine-tune instruct model (Llama 3 8B)

**Config** (`llama3-8b-instruct-simpo.yaml`):
```yaml
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

dataset_mixer:
  argilla/ultrafeedback-binarized-preferences-cleaned: 1.0

beta: 2.5
gamma_beta_ratio: 0.5
learning_rate: 5e-7
sft_weight: 0.1             # Add SFT loss to preserve capabilities

num_train_epochs: 1
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
output_dir: ./outputs/llama3-8b-simpo
```

**Launch**:
```bash
accelerate launch --config_file accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_simpo.py training_configs/llama3-8b-instruct-simpo.yaml
```

### Workflow 3: Reasoning-intensive tasks (lower LR)

**For math/code tasks**:
```yaml
model_name_or_path: deepseek-ai/deepseek-math-7b-base

dataset_mixer:
  argilla/distilabel-math-preference-dpo: 1.0

beta: 5.0                   # Higher for stronger signal
gamma_beta_ratio: 0.7       # Larger margin
learning_rate: 3e-7         # Lower LR for reasoning
sft_weight: 0.0

num_train_epochs: 1
per_device_train_batch_size: 1
gradient_accumulation_steps: 16
```

## When to use vs alternatives

**Use SimPO when**:
- Want simpler training than DPO (no reference model)
- Have preference data (chosen/rejected pairs)
- Need better performance than DPO
- Have limited compute resources
- Only need single-node training

**Algorithm selection**:
- **SimPO**: Simplest, best performance, no reference model
- **DPO**: Need reference model baseline, more conservative
- **PPO**: Maximum control, need reward model, complex setup
- **GRPO**: Memory-efficient RL, no critic

**Use alternatives instead**:
- **OpenRLHF**: Multi-node distributed training, PPO/GRPO
- **TRL**: Need multiple methods in one framework
- **DPO**: Established baseline comparison

## Common issues

**Issue: Loss divergence**

Reduce learning rate:
```yaml
learning_rate: 3e-7  # Reduce from 5e-7
```

Reduce beta:
```yaml
beta: 1.0  # Reduce from 2.0
```

**Issue: Model forgets capabilities**

Add SFT regularization:
```yaml
sft_weight: 0.1  # Add SFT loss component
```

**Issue: Poor preference separation**

Increase beta and margin:
```yaml
beta: 5.0            # Increase from 2.0
gamma_beta_ratio: 0.8  # Increase from 0.5
```

**Issue: OOM during training**

Reduce batch size:
```yaml
per_device_train_batch_size: 1
gradient_accumulation_steps: 16  # Maintain effective batch size
```

Enable gradient checkpointing:
```yaml
gradient_checkpointing: true
```

## Advanced topics

**Loss functions**: See [references/loss-functions.md](references/loss-functions.md) for sigmoid vs hinge loss, mathematical formulations, and when to use each.
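
Before reading the reference, the following minimal PyTorch sketch shows how the two variants differ once you have length-normalized log-probabilities for the chosen and rejected responses. It is illustrative only (names and the hinge form are assumptions, not the skill's trainer code):

```python
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps, beta=2.0,
               gamma_beta_ratio=0.5, loss_type="sigmoid"):
    """Illustrative SimPO loss over length-normalized log-probs.

    chosen_logps / rejected_logps: (batch,) average per-token log-probability
    of the chosen / rejected response under the policy (no reference model).
    """
    # Reward margin, shifted by the target-margin-to-beta ratio (gamma / beta)
    logits = (chosen_logps - rejected_logps) - gamma_beta_ratio
    if loss_type == "sigmoid":
        # Smooth logistic loss: -log sigmoid(beta*chosen - beta*rejected - gamma)
        losses = -F.logsigmoid(beta * logits)
    elif loss_type == "hinge":
        # A common hinge variant: only penalize pairs with beta * logits < 1
        losses = torch.relu(1 - beta * logits)
    else:
        raise ValueError(f"unknown loss_type: {loss_type}")
    return losses.mean()
```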

**Hyperparameter tuning**: See [references/hyperparameters.md](references/hyperparameters.md) for beta, gamma, learning rate selection guide, and model-size-specific recommendations.
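
Note that the target reward margin γ is never set directly: it is implied by β × `gamma_beta_ratio`. A quick worked check of the margins implied by the three workflow configs above (illustrative arithmetic only):

```python
# Target margin implied by each workflow config: gamma = beta * gamma_beta_ratio
workflows = {
    "mistral-7b-base":    {"beta": 2.0, "gamma_beta_ratio": 0.5},  # gamma = 1.0
    "llama3-8b-instruct": {"beta": 2.5, "gamma_beta_ratio": 0.5},  # gamma = 1.25
    "deepseek-math-7b":   {"beta": 5.0, "gamma_beta_ratio": 0.7},  # gamma = 3.5
}
for name, cfg in workflows.items():
    print(f"{name}: gamma = {cfg['beta'] * cfg['gamma_beta_ratio']}")
```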

**Dataset preparation**: See [references/datasets.md](references/datasets.md) for preference data formats, quality filtering, and custom dataset creation.
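
As a quick sanity check of the expected data shape, here is a minimal sketch that loads the preference dataset from Workflow 1. The column names and chat-message format shown are what this particular dataset currently exposes, so verify them for your own data:

```python
from datasets import load_dataset

# Preference pairs used in Workflow 1 (chosen/rejected are chat-message lists)
ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

ex = ds[0]
print(ex.keys())                            # includes "prompt", "chosen", "rejected"
print(ex["prompt"][:200])                   # the user prompt
print(ex["chosen"][-1]["content"][:200])    # preferred assistant response
print(ex["rejected"][-1]["content"][:200])  # dispreferred assistant response
```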

## Hardware requirements

- **GPU**: NVIDIA A100/H100 recommended
- **VRAM**:
  - 7B model: 1× A100 40GB (DeepSpeed ZeRO-3)
  - 8B model: 2× A100 40GB
  - 70B model: 8× A100 80GB
- **Single-node**: DeepSpeed ZeRO-3 sufficient
- **Mixed precision**: BF16 recommended

**Memory optimization**:
- DeepSpeed ZeRO-3 (default config)
- Gradient checkpointing
- Flash Attention 2
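
DeepSpeed ZeRO-3 is enabled through the accelerate config file; the other two optimizations map roughly to the following `transformers` calls when loading the model (a sketch assuming a recent `transformers` release, not the skill's launch script):

```python
import torch
from transformers import AutoModelForCausalLM

# BF16 weights and FlashAttention-2 kernels at load time
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # needs flash-attn installed
)

# Recompute activations in the backward pass to cut peak memory
model.gradient_checkpointing_enable()
```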

## Resources

- Paper: https://arxiv.org/abs/2405.14734 (NeurIPS 2024)
- GitHub: https://github.com/princeton-nlp/SimPO
- Models: https://huggingface.co/princeton-nlp
- Alignment Handbook: https://github.com/huggingface/alignment-handbook



Overview

This skill implements Simple Preference Optimization (SimPO) for aligning large language models using preference data. It provides a reference-free, efficient alternative to DPO with reported improvements on AlpacaEval 2.0 and lower compute overhead. The package includes training recipes, configs for common models (7B–70B), and practical guidance for hyperparameters and memory optimizations.

How this skill works

SimPO trains directly from preference pairs (chosen vs rejected) without a separate reference model, using a scaled margin loss (sigmoid or hinge) and optional SFT regularization to preserve capabilities. It exposes tunable hyperparameters like beta (reward scaling), gamma_beta_ratio (target margin), learning rate, and sft_weight, and integrates with DeepSpeed, FlashAttention, and accelerate for single-node training. The skill bundles example configs and scripts for training from base models, fine-tuning instruct models, and tuning for reasoning tasks.
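
Concretely, "reference-free" means the implicit reward is simply the policy's own length-averaged log-likelihood of a response, so no second model is ever loaded. A rough sketch of that reward computation from model outputs (shapes and masking conventions here are assumptions, not the skill's exact code):

```python
import torch

def avg_logprob(logits, labels, mask):
    """Length-normalized log-probability (1/|y|) * log pi(y|x) of a response.

    logits: (batch, seq, vocab) policy outputs, already aligned with labels
    labels: (batch, seq) target token ids (valid ids everywhere; the prompt
            is excluded via `mask`, not via -100 sentinels)
    mask:   (batch, seq) 1.0 on response tokens, 0.0 on prompt/padding
    """
    logps = torch.log_softmax(logits, dim=-1)
    # Pick out the log-prob of each target token
    token_logps = torch.gather(logps, 2, labels.unsqueeze(-1)).squeeze(-1)
    # Average over response tokens only -> the implicit, reference-free reward
    return (token_logps * mask).sum(-1) / mask.sum(-1)
```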

When to use it

  • You have preference data (chosen/rejected pairs) and want a simple alignment pipeline.
  • You need better empirical performance than DPO without maintaining a reference model.
  • You want faster, lower-overhead training than DPO or PPO on a single node.
  • You have limited compute and prefer single-node DeepSpeed ZeRO-3 setups.
  • You want straightforward hyperparameter recipes for 7B–70B model families.

Best practices

  • Start with beta in the 2.0–5.0 range and gamma_beta_ratio around 0.5; tune beta up for stronger preference separation and down if the loss diverges.
  • Use learning rates between 3e-7 and 1e-6; reduce to ~3e-7 for reasoning-intensive tasks.
  • Add sft_weight (e.g., 0.1) if the model forgets base capabilities after preference tuning.
  • Use DeepSpeed ZeRO-3, gradient checkpointing, and FlashAttention to reduce memory; adjust batch size and gradient accumulation to avoid OOM.
  • Prefer BF16 (bfloat16) mixed precision and the recommended GPU types (A100/H100) for stable training.

Example use cases

  • Train a Mistral 7B base model on binarized UltraFeedback preference data to improve helpfulness and alignment.
  • Fine-tune an instruct-tuned Llama 3 8B with SFT regularization to retain capabilities while improving preferences.
  • Optimize a math/code model with lower learning rate and higher beta for reasoning-heavy preferences.
  • Quickly iterate on preference hyperparameters on a single-node DeepSpeed setup before scaling distributed runs.

FAQ

Do I need a reference model for SimPO?

No. SimPO is reference-free and intentionally removes the need for a separate reference model used by DPO.

What loss should I choose: sigmoid or hinge?

Both work. Sigmoid is the common default and gives smooth gradients; hinge enforces a hard margin and can separate preferences more aggressively. Test both on held-out preference data and follow the loss guidance in the references.

How do I stop the model from forgetting capabilities?

Add an SFT regularization term (sft_weight, e.g., 0.1) to preserve base-model behaviors during preference training.