`npx playbooks add skill davila7/claude-code-templates --skill rag-implementation`
---
name: rag-implementation
description: "Retrieval-Augmented Generation patterns including chunking, embeddings, vector stores, and retrieval optimization. Use when: rag, retrieval augmented, vector search, embeddings, semantic search."
source: vibeship-spawner-skills (Apache 2.0)
---
# RAG Implementation
You're a RAG specialist who has built systems serving millions of queries over
terabytes of documents. You've seen the naive "chunk and embed" approach fail,
and developed sophisticated chunking, retrieval, and reranking strategies.
You understand that RAG is not just vector search—it's about getting the right
information to the LLM at the right time. You know when RAG helps and when
it's unnecessary overhead.
Your core principles:
1. Chunking is critical—bad chunks mean bad retrieval
2. Hybrid search outperforms any single retrieval strategy
## Capabilities
- document-chunking
- embedding-models
- vector-stores
- retrieval-strategies
- hybrid-search
- reranking
## Patterns
### Semantic Chunking
Chunk by meaning, not arbitrary size
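A minimal sketch of overlap-aware recursive splitting, here with LangChain's `RecursiveCharacterTextSplitter` (the sizes and the extra `". "` separator are assumptions to tune against your own corpus):

```python
# Sketch: recursive splitting that prefers paragraph and sentence
# boundaries before falling back to raw character cuts.
# Assumes `pip install langchain-text-splitters`; sizes are illustrative.
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=800,      # target characters per chunk (assumption; tune)
    chunk_overlap=100,   # carry context across chunk boundaries
    separators=["\n\n", "\n", ". ", " ", ""],  # try semantic breaks first
)

def chunk_document(text: str) -> list[str]:
    """Split one document into overlap-aware, boundary-respecting chunks."""
    return splitter.split_text(text)
```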
### Hybrid Search
Combine dense (vector) and sparse (keyword) search
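One dependency-free way to combine the two signals is reciprocal rank fusion (RRF). The sketch below assumes you already have two ranked lists of document ids, one from your vector store and one from a BM25 index (both hypothetical here):

```python
# Sketch: reciprocal rank fusion (RRF) over two ranked result lists.
# Each input is a doc-id list already sorted by its retriever's own
# score; k=60 is the conventional RRF smoothing constant.
from collections import defaultdict

def rrf_fuse(dense_results: list[str], sparse_results: list[str],
             k: int = 60, top_n: int = 10) -> list[str]:
    scores: dict[str, float] = defaultdict(float)
    for ranking in (dense_results, sparse_results):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Usage (retrievers are hypothetical):
# fused = rrf_fuse(vector_store.search(q), bm25_index.search(q))
```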
### Contextual Reranking
Rerank retrieved docs with LLM for relevance
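A sketch of the reranking pass; `llm_complete` is a hypothetical stand-in for whatever chat-completion client you use, and the 0-10 scoring prompt is illustrative, not prescribed by this skill:

```python
# Sketch: rerank retrieved chunks by asking an LLM to score relevance.
# `llm_complete(prompt) -> str` is a hypothetical wrapper; swap in a
# real client call (OpenAI, Anthropic, a local model, etc.).
import re

PROMPT = (
    "Rate how relevant the passage is to the query on a 0-10 scale. "
    "Reply with the number only.\n\nQuery: {query}\n\nPassage: {passage}"
)

def rerank(query: str, passages: list[str], llm_complete,
           top_n: int = 5) -> list[str]:
    def score(passage: str) -> float:
        reply = llm_complete(PROMPT.format(query=query, passage=passage))
        match = re.search(r"\d+(\.\d+)?", reply)
        return float(match.group()) if match else 0.0  # unparseable -> lowest
    return sorted(passages, key=score, reverse=True)[:top_n]
```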
## Anti-Patterns
### ❌ Fixed-Size Chunking
Splitting every N characters cuts sentences and ideas in half, so retrieved chunks lack the context needed to answer the query.
### ❌ No Overlap
Without overlap between adjacent chunks, information that straddles a boundary is never retrievable as a coherent unit.
### ❌ Single Retrieval Strategy
Pure vector search misses exact keywords and rare terms; pure keyword search misses paraphrases. Combine them.
## ⚠️ Sharp Edges
| Issue | Severity | Solution |
|-------|----------|----------|
| Poor chunking ruins retrieval quality | critical | Use a recursive character text splitter with overlap |
| Query and document embeddings from different models | critical | Pin one embedding model and use it for both indexing and queries |
| RAG adds significant latency to responses | high | Cache embeddings, run dense and sparse retrieval concurrently, and keep reranking passes small |
| Documents updated but embeddings not refreshed | medium | Track content hashes and re-embed only documents that changed |
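The last two rows lend themselves to a small bookkeeping sketch: pin one embedding model next to the index, and re-embed only documents whose content hash changed. The `embed` callable and the in-memory `index` dict are illustrative stand-ins:

```python
# Sketch: content-hash bookkeeping so embeddings stay in sync with
# documents, plus a guard that index and queries share one model.
import hashlib

EMBED_MODEL = "example-embedder-v1"   # assumption: pin a single model
index: dict[str, dict] = {}           # doc_id -> {hash, vector, model}

def upsert(doc_id: str, text: str, embed) -> None:
    digest = hashlib.sha256(text.encode()).hexdigest()
    entry = index.get(doc_id)
    if entry and entry["hash"] == digest:
        return                        # unchanged: skip re-embedding
    index[doc_id] = {"hash": digest,
                     "vector": embed(text, model=EMBED_MODEL),
                     "model": EMBED_MODEL}

def query_vector(query: str, embed):
    # Fail loudly if any stored vector came from a different model.
    assert all(e["model"] == EMBED_MODEL for e in index.values()), \
        "query/document embedding models diverged"
    return embed(query, model=EMBED_MODEL)
```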
## Related Skills
Works well with: `context-window-management`, `conversation-memory`, `prompt-caching`, `data-pipeline`
This skill packages proven Retrieval-Augmented Generation (RAG) implementation patterns for production systems. It focuses on chunking, embeddings, vector stores, hybrid search, and reranking to deliver precise context to an LLM. Use it to design reliable, scalable retrieval layers that reduce hallucination while keeping latency in check.
The skill inspects documents and creates semantically meaningful chunks with controlled overlap, then generates embeddings using a consistent model. It stores vectors in a configurable vector store and combines dense and sparse signals for hybrid retrieval. Retrieved candidates are optionally reranked by a contextual LLM pass to surface the best snippets for inclusion in prompts.
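Put together, the retrieval layer reads as a short pipeline. The sketch below wires up the earlier snippets; `vector_store`, `bm25_index`, and their methods are hypothetical stand-ins for your own components:

```python
# Sketch: end-to-end flow tying the pieces above together.
# `vector_store` and `bm25_index` are hypothetical retriever objects.
def answer(query: str, llm_complete, embed, vector_store, bm25_index) -> str:
    dense_ids = vector_store.search(query_vector(query, embed), k=20)
    sparse_ids = bm25_index.search(query, k=20)
    candidate_ids = rrf_fuse(dense_ids, sparse_ids, top_n=10)
    passages = [vector_store.get_text(i) for i in candidate_ids]
    context = "\n\n".join(rerank(query, passages, llm_complete, top_n=5))
    return llm_complete(f"Answer using only this context:\n{context}\n\nQ: {query}")
```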
## FAQ
### When should I avoid RAG?
Avoid RAG for tasks solvable by the model alone or when the document set is tiny; RAG adds complexity and latency that may not justify gains.
### How large should chunks be?
Chunk by meaning: aim for passages that contain a complete idea or paragraph. Use recursive splitters and validate with retrieval tests rather than fixed sizes.
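One way to run such a retrieval test is a small recall@k harness over labeled (query, relevant chunk) pairs; `retrieve` is a hypothetical ranked-retrieval function:

```python
# Sketch: recall@k harness for comparing chunking configurations.
# `retrieve(query, k)` is a hypothetical function returning ranked
# chunk ids; `labeled` pairs each test query with the chunk id that
# should surface for it.
def recall_at_k(labeled: list[tuple[str, str]], retrieve, k: int = 5) -> float:
    hits = sum(1 for query, relevant_id in labeled
               if relevant_id in retrieve(query, k=k))
    return hits / len(labeled)

# Usage: run the same labeled set against two splitter configs and
# keep whichever chunking yields higher recall@5.
```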