
This skill helps you optimize alignment training with SimPO by providing a simpler, reference-free preference optimization workflow for faster training and better results.

This is most likely a fork of the SimPO skill from orchestra-research.
npx playbooks add skill davila7/claude-code-templates --skill post-training-simpo

Review the files below or copy the command above to add this skill to your agents.

Files (4): SKILL.md (5.8 KB)
---
name: simpo-training
description: Simple Preference Optimization for LLM alignment. Reference-free alternative to DPO with better performance (+6.4 points on AlpacaEval 2.0). No reference model needed and more efficient than DPO. Use for preference alignment when you want simpler, faster training than DPO/PPO.
version: 1.0.0
author: Orchestra Research
license: MIT
tags: [Post-Training, SimPO, Preference Optimization, Alignment, DPO Alternative, Reference-Free, LLM Alignment, Efficient Training]
dependencies: [torch, transformers, datasets, trl, accelerate]
---

# SimPO - Simple Preference Optimization

## Quick start

SimPO is a reference-free preference optimization method that outperforms DPO without needing a reference model.
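
For intuition, SimPO scores each response by its length-normalized log-probability under the policy, scales that score by beta, and asks the chosen response to beat the rejected one by a target margin gamma (expressed in the configs below as gamma_beta_ratio = gamma / beta). Below is a minimal PyTorch sketch of the default sigmoid loss, assuming summed response log-probabilities and token counts have already been gathered; function and argument names are illustrative, not the handbook's API.

```python
import torch
import torch.nn.functional as F

def simpo_sigmoid_loss(chosen_logp_sum, chosen_len, rejected_logp_sum, rejected_len,
                       beta=2.0, gamma_beta_ratio=0.5):
    """Sigmoid SimPO loss from summed response log-probs and response token counts."""
    # Length-normalized "implicit rewards" -- no reference model anywhere.
    pi_chosen = chosen_logp_sum / chosen_len
    pi_rejected = rejected_logp_sum / rejected_len

    # The configs specify the margin relative to beta: gamma = beta * gamma_beta_ratio.
    gamma = beta * gamma_beta_ratio
    logits = beta * (pi_chosen - pi_rejected) - gamma

    # Bradley-Terry style penalty on the margin-shifted reward difference.
    return -F.logsigmoid(logits).mean()

# Toy usage with fake statistics for a batch of two pairs:
loss = simpo_sigmoid_loss(torch.tensor([-40.0, -55.0]), torch.tensor([20.0, 25.0]),
                          torch.tensor([-60.0, -70.0]), torch.tensor([30.0, 28.0]))
```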

**Installation**:
```bash
# Create environment
conda create -n simpo python=3.10 && conda activate simpo

# Install PyTorch 2.2.2
# Visit: https://pytorch.org/get-started/locally/

# Install alignment-handbook
git clone https://github.com/huggingface/alignment-handbook.git
cd alignment-handbook
python -m pip install .

# Install Flash Attention 2
python -m pip install flash-attn --no-build-isolation
```

**Training** (Mistral 7B):
```bash
ACCELERATE_LOG_LEVEL=info accelerate launch \
  --config_file accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_simpo.py \
  training_configs/mistral-7b-base-simpo.yaml
```

## Common workflows

### Workflow 1: Train from base model (Mistral 7B)

**Config** (`mistral-7b-base-simpo.yaml`):
```yaml
# Model
model_name_or_path: mistralai/Mistral-7B-v0.1
torch_dtype: bfloat16

# Dataset
dataset_mixer:
  HuggingFaceH4/ultrafeedback_binarized: 1.0
dataset_splits:
  - train_prefs
  - test_prefs

# SimPO hyperparameters
beta: 2.0                  # Reward scaling (2.0-10.0)
gamma_beta_ratio: 0.5       # Target reward margin relative to beta (0-1)
loss_type: sigmoid          # sigmoid or hinge
sft_weight: 0.0             # Optional SFT regularization

# Training
learning_rate: 5e-7         # Critical: 3e-7 to 1e-6
num_train_epochs: 1
per_device_train_batch_size: 1
gradient_accumulation_steps: 8

# Output
output_dir: ./outputs/mistral-7b-simpo
```

**Launch training**:
```bash
accelerate launch --config_file accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_simpo.py training_configs/mistral-7b-base-simpo.yaml
```
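
Before launching, it can help to confirm that the preference data really looks like chosen/rejected pairs. The sketch below inspects the split named in the config with the datasets library; column names follow the HuggingFaceH4/ultrafeedback_binarized dataset card, so adjust if your dataset_mixer points elsewhere.

```python
from datasets import load_dataset

# Same dataset and split referenced in dataset_mixer / dataset_splits above.
ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

example = ds[0]
print(list(example.keys()))            # expect a prompt plus chosen/rejected fields
print(example["prompt"][:200])
# "chosen" and "rejected" are chat-style lists of {"role", "content"} messages;
# the final assistant turn is the response being compared.
print(example["chosen"][-1]["content"][:200])
print(example["rejected"][-1]["content"][:200])
```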

### Workflow 2: Fine-tune instruct model (Llama 3 8B)

**Config** (`llama3-8b-instruct-simpo.yaml`):
```yaml
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

dataset_mixer:
  argilla/ultrafeedback-binarized-preferences-cleaned: 1.0

beta: 2.5
gamma_beta_ratio: 0.5
learning_rate: 5e-7
sft_weight: 0.1             # Add SFT loss to preserve capabilities

num_train_epochs: 1
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
output_dir: ./outputs/llama3-8b-simpo
```

**Launch**:
```bash
accelerate launch --config_file accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_simpo.py training_configs/llama3-8b-instruct-simpo.yaml
```
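
The sft_weight setting above is what preserves capabilities: it mixes a standard next-token (SFT) loss on the chosen responses into the preference objective. A rough sketch of how the two terms combine, with illustrative names (the training script's exact weighting and masking may differ):

```python
def simpo_with_sft(preference_loss, chosen_token_logps, chosen_mask, sft_weight=0.1):
    """Preference loss plus an optional SFT (negative log-likelihood) term.

    chosen_token_logps: (batch, seq) per-token log-probs of the chosen response.
    chosen_mask: (batch, seq) 1.0 for response tokens, 0.0 for prompt/padding tokens.
    """
    # Average NLL of the chosen response over its unmasked tokens.
    sft_nll = -(chosen_token_logps * chosen_mask).sum() / chosen_mask.sum()
    # sft_weight = 0.0 recovers plain SimPO; 0.1 is the value used in this config.
    return preference_loss + sft_weight * sft_nll
```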

### Workflow 3: Reasoning-intensive tasks (lower LR)

**For math/code tasks**:
```yaml
model_name_or_path: deepseek-ai/deepseek-math-7b-base

dataset_mixer:
  argilla/distilabel-math-preference-dpo: 1.0

beta: 5.0                   # Higher for stronger signal
gamma_beta_ratio: 0.7       # Larger margin
learning_rate: 3e-7         # Lower LR for reasoning
sft_weight: 0.0

num_train_epochs: 1
per_device_train_batch_size: 1
gradient_accumulation_steps: 16
```

## When to use vs alternatives

**Use SimPO when**:
- You want simpler training than DPO (no reference model)
- You have preference data (chosen/rejected pairs)
- You need better performance than DPO
- Compute resources are limited
- Single-node training is sufficient

**Algorithm selection**:
- **SimPO**: Simplest setup, no reference model, outperforms DPO on AlpacaEval 2.0
- **DPO**: Need reference model baseline, more conservative
- **PPO**: Maximum control, need reward model, complex setup
- **GRPO**: Memory-efficient RL, no critic

**Use alternatives instead**:
- **OpenRLHF**: for multi-node distributed training with PPO/GRPO
- **TRL**: when you need multiple methods in one framework
- **DPO**: when you need an established baseline for comparison

## Common issues

**Issue: Loss divergence**

Reduce learning rate:
```yaml
learning_rate: 3e-7  # Reduce from 5e-7
```

Reduce beta:
```yaml
beta: 1.0  # Reduce from 2.0
```

**Issue: Model forgets capabilities**

Add SFT regularization:
```yaml
sft_weight: 0.1  # Add SFT loss component
```

**Issue: Poor preference separation**

Increase beta and margin:
```yaml
beta: 5.0            # Increase from 2.0
gamma_beta_ratio: 0.8  # Increase from 0.5
```
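
Before raising beta, it is worth checking how well the current checkpoint already separates the pairs. A small diagnostic sketch (illustrative helper; it reuses the length-normalized log-probs that the loss is computed from):

```python
def preference_separation(chosen_avg_logps, rejected_avg_logps, beta=2.0):
    """Return (reward accuracy, mean reward margin) over a batch of pairs."""
    margins = beta * (chosen_avg_logps - rejected_avg_logps)
    accuracy = (margins > 0).float().mean().item()
    return accuracy, margins.mean().item()
```

Accuracy near 0.5 or margins hovering around zero indicate weak separation, which is when raising beta and gamma_beta_ratio as shown above tends to help.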

**Issue: OOM during training**

Reduce batch size:
```yaml
per_device_train_batch_size: 1
gradient_accumulation_steps: 16  # Maintain effective batch
```

Enable gradient checkpointing:
```yaml
gradient_checkpointing: true
```
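
If you load the model in your own script instead of through the handbook configs, the same memory savers map to standard transformers options. A sketch (flash-attn must already be installed as in the Quick start; the model name is just the Workflow 1 example):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,               # BF16 mixed precision
    attn_implementation="flash_attention_2",  # Flash Attention 2
)
# Trades extra compute for lower activation memory; combines with DeepSpeed ZeRO-3.
model.gradient_checkpointing_enable()
```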

## Advanced topics

**Loss functions**: See [references/loss-functions.md](references/loss-functions.md) for sigmoid vs hinge loss, mathematical formulations, and when to use each.

**Hyperparameter tuning**: See [references/hyperparameters.md](references/hyperparameters.md) for beta, gamma, learning rate selection guide, and model-size-specific recommendations.

**Dataset preparation**: See [references/datasets.md](references/datasets.md) for preference data formats, quality filtering, and custom dataset creation.

## Hardware requirements

- **GPU**: NVIDIA A100/H100 recommended
- **VRAM**:
  - 7B model: 1× A100 40GB (DeepSpeed ZeRO-3)
  - 8B model: 2× A100 40GB
  - 70B model: 8× A100 80GB
- **Single-node**: DeepSpeed ZeRO-3 sufficient
- **Mixed precision**: BF16 recommended

**Memory optimization**:
- DeepSpeed ZeRO-3 (default config)
- Gradient checkpointing
- Flash Attention 2

## Resources

- Paper: https://arxiv.org/abs/2405.14734 (NeurIPS 2024)
- GitHub: https://github.com/princeton-nlp/SimPO
- Models: https://huggingface.co/princeton-nlp
- Alignment Handbook: https://github.com/huggingface/alignment-handbook



Overview

This skill implements Simple Preference Optimization (SimPO), a reference-free method for aligning LLMs with human preferences. It provides a lightweight, fast CLI workflow and training configs to fine-tune models using preference pairs, delivering better empirical performance than DPO while avoiding a reference model.

How this skill works

The tool trains models directly from chosen/rejected response pairs using a reward-scaled preference loss (sigmoid or hinge) controlled by beta and margin hyperparameters. It integrates with Accelerate/DeepSpeed and supports SFT regularization, gradient checkpointing, and Flash Attention to optimize memory and speed on single-node GPU setups.
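
The hinge option mentioned here (loss_type: hinge) swaps the log-sigmoid penalty sketched earlier for a margin hinge. A hedged sketch of that variant, using the same length-normalized rewards:

```python
import torch

def simpo_hinge_loss(pi_chosen, pi_rejected, beta=2.0, gamma_beta_ratio=0.5):
    """Hinge variant: zero loss once the scaled, margin-shifted gap clears 1."""
    logits = beta * (pi_chosen - pi_rejected) - beta * gamma_beta_ratio
    return torch.relu(1 - logits).mean()
```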

When to use it

  • You have preference data (chosen/rejected pairs) and want to align a model without a reference model.
  • You need a simpler and faster alternative to DPO or PPO for single-node training.
  • You want better empirical alignment performance than DPO on instruction-following tasks.
  • You are constrained by compute and prefer memory-optimized training (DeepSpeed ZeRO-3).
  • You want to fine-tune instruct models (e.g., Llama 3) while preserving capabilities with SFT regularization.

Best practices

  • Start with learning_rate in 3e-7–1e-6 and lower it if loss diverges; 3e-7 is safe for reasoning models.
  • Tune beta (reward scale) and gamma_beta_ratio (margin) together; increase both if preference separation is poor.
  • Use sft_weight > 0 to preserve base capabilities when fine-tuning instruct models.
  • Enable DeepSpeed ZeRO-3, gradient checkpointing, and Flash Attention to avoid OOM on limited VRAM.
  • Use small per-device batch sizes with gradient_accumulation_steps to keep effective batch size stable.

Example use cases

  • Fine-tune a 7B model on human preference data to improve instruction following without a reference model.
  • Adapt an instruct model (Llama 3 8B) while preserving capabilities by adding SFT regularization.
  • Train math or code-specialized models with lower learning rates and higher beta for stronger signals.
  • Quickly iterate on preference alignment on a single A100 GPU using DeepSpeed ZeRO-3.
  • Compare SimPO against DPO or PPO when evaluating alignment efficiency and final performance.

FAQ

Do I need a reference model to run SimPO?

No. SimPO is reference-free and does not require a separate reference model.

What should I change if training diverges or the model forgets skills?

Reduce the learning rate or beta for divergence. Add SFT regularization (sft_weight) if the model forgets capabilities.