This skill implements LangFuse tracing, analytics, and cost tracking for LLM calls, enabling observability and cost optimization.

```
npx playbooks add skill a5c-ai/babysitter --skill langfuse-integration
```
---
name: langfuse-integration
description: LangFuse LLM observability integration for tracing, analytics, and cost tracking
allowed-tools:
- Read
- Write
- Edit
- Bash
- Glob
- Grep
---
# LangFuse Integration Skill
## Capabilities
- Set up LangFuse tracing for LLM calls
- Configure cost tracking and analytics
- Implement prompt management
- Set up evaluation datasets
- Design custom trace metadata
- Create dashboards and alerts
## Target Processes
- llm-observability-monitoring
- cost-optimization-llm
## Implementation Details
### Core Features
1. **Tracing**: Track LLM calls, chains, and agents
2. **Prompts**: Version and manage prompts
3. **Analytics**: Usage, latency, cost metrics
4. **Datasets**: Evaluation and testing data
5. **Scores**: Track output quality
### Integration Methods
- LangChain callback handler
- Direct SDK integration
- OpenAI drop-in replacement
- Decorator-based tracing
### Configuration Options
- Public/secret keys
- Host URL (cloud or self-hosted)
- Sampling rate
- Metadata configuration
- User tracking
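A typical environment-based configuration looks like the following. The variable names are the ones the LangFuse SDKs conventionally read; the key values are placeholders, and sampling-rate support varies by SDK version, so check your version's docs.

```shell
# LangFuse SDK configuration (read from the environment by the SDK)
export LANGFUSE_PUBLIC_KEY="pk-lf-..."             # public key (placeholder)
export LANGFUSE_SECRET_KEY="sk-lf-..."             # secret key (placeholder)
export LANGFUSE_HOST="https://cloud.langfuse.com"  # or your self-hosted URL
# export LANGFUSE_SAMPLE_RATE="0.2"                # trace sampling; verify support in your SDK version
```

Metadata and user tracking are usually passed per trace rather than via the environment.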
### Best Practices
- Consistent trace naming
- Meaningful metadata
- Regular prompt versioning
- Set up alerting
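The first two practices above can be enforced with a small helper that builds hierarchical trace names and a shared metadata dictionary. This is a sketch under assumed conventions (the `env/workflow/step` scheme and the field names are illustrative, not LangFuse requirements):

```python
def trace_name(env: str, workflow: str, step: str) -> str:
    """Build a consistent hierarchical trace name, e.g. 'prod/rag/retrieve'."""
    return "/".join([env, workflow, step])

# Shared metadata attached to every trace (all values are hypothetical examples).
metadata = {
    "user_id": "u-123",      # enables per-user cost and usage analytics
    "session_id": "s-456",   # groups related calls into one session
    "app_version": "1.4.2",  # correlates regressions with releases
}

print(trace_name("prod", "rag", "retrieve"))  # prints: prod/rag/retrieve
```

Keeping the naming scheme in one function means dashboards and alerts can filter on stable prefixes like `prod/rag/*` instead of ad-hoc strings.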
### Dependencies
- langfuse
- langchain (for callback integration)
This skill integrates LangFuse LLM observability into your agent workflows for tracing, analytics, and cost tracking. It enables structured tracing of LLM calls, prompt versioning, and custom metadata to support debugging and optimization. The integration supports both cloud and self-hosted LangFuse deployments and common attachment points like LangChain callbacks and direct SDK calls.
It instruments LLM calls, chains, and agents to emit traces that include latency, tokens, and cost metrics. Prompts are versioned and recorded alongside model outputs so you can correlate input changes with quality and cost. The integration exposes configuration for keys, host URL, sampling rate, and custom metadata, and can attach via LangChain callback handlers, decorators, or the LangFuse SDK.
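The cost metric described above is derived from token counts and a per-model price sheet. A minimal sketch of that computation (the prices below are placeholder assumptions, not published rates; use your provider's current price sheet):

```python
# Hypothetical per-1K-token prices in USD; substitute your model's real price sheet.
PRICES = {"example-model": {"input": 0.00015, "output": 0.0006}}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Compute the USD cost of one LLM call from its token usage."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# 2000 input tokens + 500 output tokens at the assumed prices:
print(round(call_cost("example-model", 2000, 500), 6))  # prints: 0.0006
```

LangFuse performs this aggregation server-side when model usage is reported on a trace, so in practice you configure prices once and read cost from the analytics views rather than computing it by hand.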
**Which integration methods are supported?**
LangChain callback handlers, direct LangFuse SDK use, decorator-based tracing, and the OpenAI drop-in replacement.

**What configuration options are required?**
LangFuse public and secret keys and the host URL (cloud or self-hosted) are required; the sampling rate and metadata mapping should also be set, and user tracking is optional but recommended.