
phoenix-evals skill


This skill helps you build and run evaluators for AI/LLM applications with Phoenix: deterministic code checks first, LLM judges for nuance, and validation against human labels.

npx playbooks add skill arize-ai/phoenix --skill phoenix-evals

Review the files below or copy the command above to add this skill to your agents.

Files (33)
SKILL.md
2.1 KB
---
name: phoenix-evals
description: Build and run evaluators for AI/LLM applications using Phoenix.
license: Apache-2.0
metadata:
  author: [email protected]
  version: "1.0.0"
  languages: Python, TypeScript
---

# Phoenix Evals

Build evaluators for AI/LLM applications. Code first, LLM for nuance, validate against humans.

## Quick Reference

| Task | Files |
| ---- | ----- |
| Setup | `setup-python`, `setup-typescript` |
| Build code evaluator | `evaluators-code-{python\|typescript}` |
| Build LLM evaluator | `evaluators-llm-{python\|typescript}`, `evaluators-custom-templates` |
| Run experiment | `experiments-running-{python\|typescript}` |
| Create dataset | `experiments-datasets-{python\|typescript}` |
| Validate evaluator | `validation`, `validation-calibration-{python\|typescript}` |
| Analyze errors | `error-analysis`, `axial-coding` |
| RAG evals | `evaluators-rag` |
| Production | `production-overview`, `production-guardrails` |

## Workflows

**Starting Fresh:**
`observe-tracing-setup` → `error-analysis` → `axial-coding` → `evaluators-overview`

**Building Evaluator:**
`fundamentals` → `evaluators-{code\|llm}-{python\|typescript}` → `validation-calibration-{python\|typescript}`

**RAG Systems:**
`evaluators-rag` → `evaluators-code-*` (retrieval) → `evaluators-llm-*` (faithfulness)

**Production:**
`production-overview` → `production-guardrails` → `production-continuous`

## Rule Categories

| Prefix | Description |
| ------ | ----------- |
| `fundamentals-*` | Types, scores, anti-patterns |
| `observe-*` | Tracing, sampling |
| `error-analysis-*` | Finding failures |
| `axial-coding-*` | Categorizing failures |
| `evaluators-*` | Code, LLM, RAG evaluators |
| `experiments-*` | Datasets, running experiments |
| `validation-*` | Calibrating judges |
| `production-*` | CI/CD, monitoring |

## Key Principles

| Principle | Action |
| --------- | ------ |
| Error analysis first | Can't automate what you haven't observed |
| Custom > generic | Build from your failures |
| Code first | Deterministic before LLM |
| Validate judges | >80% TPR/TNR |
| Binary > Likert | Pass/fail, not 1-5 |

Overview

This skill helps you build, run, and validate evaluators for AI/LLM applications using the Phoenix approach. It emphasizes code-first evaluation, LLM nuance checks, and human validation to create reliable automated judges. The goal is to turn observed failures into targeted, production-ready evaluators and monitoring pipelines.

How this skill works

The skill provides modular workflows: setup, evaluator construction (code and LLM), dataset creation, experiment runs, validation/calibration, and error analysis. It guides you from tracing and sampling failures through axial coding to build custom evaluators, then validates those evaluators against human labels to reach robust decision thresholds. It includes patterns for RAG systems, production guardrails, and continuous monitoring.
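For concreteness, here is a minimal Python sketch of that loop (dataset → task → evaluator → experiment), assuming the `arize-phoenix` client and its `phoenix.experiments` API; the dataset name, columns, application stub, and check are illustrative, not part of the skill.

```python
import pandas as pd
import phoenix as px
from phoenix.experiments import run_experiment

# Hypothetical failure cases sampled during error analysis.
df = pd.DataFrame(
    {
        "question": ["What is your refund policy?"],
        "expected": ["Refunds are available within 30 days."],
    }
)

# Upload labeled examples as a Phoenix dataset (assumes a reachable
# Phoenix server).
dataset = px.Client().upload_dataset(
    dataset_name="support-failures",  # illustrative name
    dataframe=df,
    input_keys=["question"],
    output_keys=["expected"],
)

def task(example):
    # Stand-in for calling the application under test with example.input.
    return "Refunds are available within 30 days."

def mentions_refund_window(output) -> bool:
    # Deterministic code check: binary pass/fail, no LLM involved.
    return "30 days" in output

run_experiment(dataset, task=task, evaluators=[mentions_refund_window])
```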

When to use it

  • You need reproducible, deterministic checks before adding LLM-based nuance (see the code-evaluator sketch after this list).
  • You want to convert observed errors into automated evaluators and monitoring rules.
  • You must validate evaluators against human labels to ensure reliability in production.
  • You are evaluating retrieval-augmented generation (RAG) systems for faithfulness and relevance.
  • You are designing CI/CD and continuous observability for LLM-driven features.
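
As an example of such a deterministic check, here is a self-contained code evaluator that verifies an API response parses as JSON and carries the expected keys; the required keys (`status`, `data`) are hypothetical placeholders for your API's contract.

```python
import json

def valid_json_with_required_keys(output: str) -> bool:
    """Binary pass/fail: output parses as JSON and has the expected keys."""
    try:
        payload = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(payload, dict) and {"status", "data"} <= payload.keys()

# Edge cases are covered deterministically, with no LLM in the loop.
assert valid_json_with_required_keys('{"status": "ok", "data": []}')
assert not valid_json_with_required_keys("not json")
assert not valid_json_with_required_keys('{"status": "ok"}')
```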

Best practices

  • Start with error analysis and sampling before writing any automated judge.
  • Prefer code-based, deterministic checks; add LLM evaluators only for nuanced cases.
  • Validate evaluators to target >80% true positive and true negative rates (a calibration sketch follows this list).
  • Favor binary pass/fail judgments over Likert scales for clearer automated actions.
  • Calibrate and periodically revalidate judges with fresh human-labeled datasets.
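
A minimal calibration sketch in plain Python: compare a judge's binary verdicts against human labels and compute the true positive and true negative rates. The sample labels are made up for illustration.

```python
def tpr_tnr(judge_labels, human_labels):
    """Compare a judge's binary verdicts against human ground truth."""
    tp = sum(j and h for j, h in zip(judge_labels, human_labels))
    tn = sum(not j and not h for j, h in zip(judge_labels, human_labels))
    positives = sum(human_labels)
    negatives = len(human_labels) - positives
    return tp / positives, tn / negatives

# Hypothetical labels: True = pass, False = fail.
judge = [True, True, False, False, True]
human = [True, True, False, True, True]
tpr, tnr = tpr_tnr(judge, human)
print(f"TPR={tpr:.2f}, TNR={tnr:.2f}")  # ship only if both exceed 0.80
```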

Example use cases

  • Create a code evaluator that verifies API responses and edge-case handling deterministically.
  • Build an LLM evaluator for nuanced content-quality aspects like tone or politeness (sketched after this list).
  • Run experiments to compare evaluators across datasets and measure calibration.
  • Evaluate a RAG pipeline for hallucination and retrieval relevance using combined code and LLM checks.
  • Implement production guardrails and monitoring to trigger alerts on evaluator regressions.
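
For the tone use case, here is a hedged sketch using the `phoenix.evals` helpers `llm_classify` and `OpenAIModel`; the template, model choice, and `pass`/`fail` rails are illustrative, and argument names follow the documented Python API at the time of writing.

```python
import pandas as pd
from phoenix.evals import OpenAIModel, llm_classify

# Hypothetical custom template; the {output} placeholder is filled from
# the dataframe column of the same name.
TONE_TEMPLATE = """You are checking customer-support replies for tone.
Reply: {output}
Is the reply polite and professional? Answer with exactly one word:
"pass" or "fail"."""

df = pd.DataFrame({"output": ["Thanks for reaching out! Happy to help."]})

results = llm_classify(
    dataframe=df,
    template=TONE_TEMPLATE,
    model=OpenAIModel(model="gpt-4o-mini"),  # model choice is illustrative
    rails=["pass", "fail"],  # constrain the judge to a binary verdict
    provide_explanation=True,
)
print(results["label"])
```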

FAQ

Why start with error analysis?

Observing real failures ensures evaluators target actual issues rather than hypothetical edge cases, making automation effective.

When should I use an LLM judge versus code checks?

Use deterministic code checks for clear, rule-based failures; use LLM judges for subjective or contextual judgments that code can't capture.
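
A hypothetical helper makes that division of labor concrete: deterministic gates run first because they are cheap, reproducible, and easy to debug, and only the subjective judgment is delegated to an LLM judge.

```python
def evaluate_reply(output: str, llm_judge) -> str:
    # Deterministic gates first: unambiguous failures need no LLM call.
    if not output.strip():
        return "fail"  # empty reply
    if "lorem ipsum" in output.lower():
        return "fail"  # placeholder text leaked through
    # Only nuanced, contextual criteria go to the LLM judge.
    return llm_judge(output)  # expected to return "pass" or "fail"

# Usage with a stubbed judge for illustration.
verdict = evaluate_reply("Happy to help!", llm_judge=lambda _: "pass")
assert verdict == "pass"
```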