This skill helps you build and run evaluators for AI/LLM applications with Phoenix, combining deterministic code checks with LLM judges and human validation.

`npx playbooks add skill arize-ai/phoenix --skill phoenix-evals`
---
name: phoenix-evals
description: Build and run evaluators for AI/LLM applications using Phoenix.
license: Apache-2.0
metadata:
  author: [email protected]
  version: "1.0.0"
languages: Python, TypeScript
---
# Phoenix Evals
Build evaluators for AI/LLM applications. Code first, LLM for nuance, validate against humans.
## Quick Reference
| Task | Files |
| ---- | ----- |
| Setup | `setup-python`, `setup-typescript` |
| Build code evaluator | `evaluators-code-{python\|typescript}` (sketch below) |
| Build LLM evaluator | `evaluators-llm-{python\|typescript}`, `evaluators-custom-templates` |
| Run experiment | `experiments-running-{python\|typescript}` |
| Create dataset | `experiments-datasets-{python\|typescript}` |
| Validate evaluator | `validation`, `validation-calibration-{python\|typescript}` |
| Analyze errors | `error-analysis`, `axial-coding` |
| RAG evals | `evaluators-rag` |
| Production | `production-overview`, `production-guardrails` |
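As a taste of the code-first path from the table above, here is a minimal sketch of a deterministic evaluator. The function name and the JSON-validity rule are illustrative, and the returned dict shape is an assumption; the skill's `evaluators-code-*` files define the exact contract.

```python
import json

def json_is_valid(output: str) -> dict:
    """Deterministic check: does the model output parse as JSON?

    Binary pass/fail with no LLM involved; run checks like this
    before reaching for an LLM judge.
    """
    try:
        json.loads(output)
        return {"score": 1, "label": "pass", "explanation": "valid JSON"}
    except json.JSONDecodeError as err:
        return {"score": 0, "label": "fail", "explanation": f"invalid JSON: {err}"}
```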
## Workflows
**Starting Fresh:**
`observe-tracing-setup` → `error-analysis` → `axial-coding` → `evaluators-overview`
**Building Evaluator:**
`fundamentals` → `evaluators-{code\|llm}-{python\|typescript}` → `validation-calibration-{python\|typescript}`
**RAG Systems:**
`evaluators-rag` → `evaluators-code-*` (retrieval) → `evaluators-llm-*` (faithfulness)
**Production:**
`production-overview` → `production-guardrails` → `production-continuous`
## Rule Categories
| Prefix | Description |
| ------ | ----------- |
| `fundamentals-*` | Types, scores, anti-patterns |
| `observe-*` | Tracing, sampling |
| `error-analysis-*` | Finding failures |
| `axial-coding-*` | Categorizing failures |
| `evaluators-*` | Code, LLM, RAG evaluators |
| `experiments-*` | Datasets, running experiments |
| `validation-*` | Calibrating judges |
| `production-*` | CI/CD, monitoring |
## Key Principles
| Principle | Action |
| --------- | ------ |
| Error analysis first | Can't automate what you haven't observed |
| Custom > generic | Build from your failures |
| Code first | Deterministic before LLM |
| Validate judges | >80% TPR and TNR vs human labels (sketch below) |
| Binary > Likert | Pass/fail, not 1-5 |
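To make the "Validate judges" row concrete, here is one way to compute TPR and TNR for a binary judge against human labels. This is plain Python with no Phoenix API assumed, and it presumes both classes appear in the human labels.

```python
def tpr_tnr(human: list[int], judge: list[int]) -> tuple[float, float]:
    """True-positive and true-negative rates of a binary judge,
    measured against human pass(1)/fail(0) labels."""
    tp = sum(h == 1 and j == 1 for h, j in zip(human, judge))
    tn = sum(h == 0 and j == 0 for h, j in zip(human, judge))
    positives = sum(human)
    negatives = len(human) - positives
    return tp / positives, tn / negatives

# Target: both rates above 0.8 before trusting the judge unattended.
tpr, tnr = tpr_tnr(human=[1, 1, 0, 0, 1], judge=[1, 0, 0, 0, 1])
print(f"TPR={tpr:.2f} TNR={tnr:.2f}")  # TPR=0.67 TNR=1.00
```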
This skill helps you build, run, and validate evaluators for AI/LLM applications using the Phoenix approach. It emphasizes code-first evaluation, LLM judges for nuanced criteria, and human validation to create reliable automated judges. The goal is to turn observed failures into targeted, production-ready evaluators and monitoring pipelines.
The skill provides modular workflows: setup, evaluator construction (code and LLM), dataset creation, experiment runs, validation/calibration, and error analysis. It guides you from tracing and sampling failures through axial coding to build custom evaluators, then validates those evaluators against human labels to reach robust decision thresholds. It includes patterns for RAG systems, production guardrails, and continuous monitoring.
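As a rough sketch of the dataset-to-experiment loop described above: exact imports and signatures vary by Phoenix version, and `my_app` is a hypothetical stand-in for the application under test, so treat this as an outline and defer to the `experiments-*` files.

```python
import pandas as pd
import phoenix as px
from phoenix.experiments import run_experiment

def my_app(question: str) -> str:
    # Placeholder for the application under test.
    return "Use the reset link on the login page."

# Upload a small dataset of inputs and reference outputs.
client = px.Client()
dataset = client.upload_dataset(
    dataset_name="support-questions",
    dataframe=pd.DataFrame({
        "question": ["How do I reset my password?"],
        "expected": ["Use the reset link on the login page."],
    }),
    input_keys=["question"],
    output_keys=["expected"],
)

def task(example):
    # Run the app on each dataset example's input.
    return my_app(example.input["question"])

def exact_match(output, expected) -> bool:
    # Binary, deterministic evaluator: the strictest possible check.
    return output == expected["expected"]

run_experiment(dataset, task, evaluators=[exact_match])
```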
**Why start with error analysis?**
Observing real failures ensures evaluators target actual issues rather than hypothetical edge cases, making automation effective.

**When should I use an LLM judge versus code checks?**
Use deterministic code checks for clear, rule-based failures; use LLM judges for subjective or contextual judgments that code can't capture.
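Here is a hedged sketch of that split, using `llm_classify` from `phoenix.evals` for the judge side. The template, rails, and model choice are illustrative, not the skill's own; the `evaluators-llm-*` and `evaluators-custom-templates` files carry the vetted versions.

```python
import pandas as pd
from phoenix.evals import OpenAIModel, llm_classify

# Code check: rule-based and deterministic, so run it first.
def within_length_limit(output: str, limit: int = 500) -> bool:
    return len(output) <= limit

# LLM judge: for judgments code can't capture, such as tone.
TEMPLATE = """Does the RESPONSE answer the QUESTION politely and helpfully?
QUESTION: {question}
RESPONSE: {response}
Answer with a single word, "pass" or "fail"."""

results = llm_classify(
    dataframe=pd.DataFrame({
        "question": ["How do I reset my password?"],
        "response": ["Click 'Forgot password' on the login page."],
    }),
    model=OpenAIModel(model="gpt-4o-mini"),
    template=TEMPLATE,
    rails=["pass", "fail"],
    provide_explanation=True,
)
```

Note the binary rails: pass/fail rather than a 1-5 scale, matching the Binary > Likert principle above.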