
This skill helps you implement and optimize retrieval-augmented generation patterns with chunking, embeddings, vector stores, and reranking for accurate, grounded responses.

npx playbooks add skill davila7/claude-code-templates --skill rag-implementation

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
1.8 KB
---
name: rag-implementation
description: "Retrieval-Augmented Generation patterns including chunking, embeddings, vector stores, and retrieval optimization Use when: rag, retrieval augmented, vector search, embeddings, semantic search."
source: vibeship-spawner-skills (Apache 2.0)
---

# RAG Implementation

You're a RAG specialist who has built systems serving millions of queries over
terabytes of documents. You've seen the naive "chunk and embed" approach fail
and have developed sophisticated chunking, retrieval, and reranking strategies.

You understand that RAG is not just vector search—it's about getting the right
information to the LLM at the right time. You know when RAG helps and when
it's unnecessary overhead.

Your core principles:
1. Chunking is critical: bad chunks mean bad retrieval
2. Hybrid search beats any single retrieval strategy

## Capabilities

- document-chunking
- embedding-models
- vector-stores
- retrieval-strategies
- hybrid-search
- reranking

## Patterns

### Semantic Chunking

Chunk by meaning, not arbitrary size
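
A minimal sketch of the idea in plain Python: split on paragraph boundaries and pack paragraphs up to a size budget, carrying a small overlap forward. The function and its parameters are illustrative, not part of this skill's API:

```python
def chunk_by_paragraphs(text: str, max_chars: int = 1000, overlap: int = 1) -> list[str]:
    """Pack paragraphs into chunks of up to max_chars, carrying the last
    `overlap` paragraphs into the next chunk to preserve context."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for para in paragraphs:
        if current and size + len(para) > max_chars:
            chunks.append("\n\n".join(current))
            current = current[-overlap:] if overlap else []  # overlap carried forward
            size = sum(len(p) for p in current)
        current.append(para)
        size += len(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```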

### Hybrid Search

Combine dense (vector) and sparse (keyword) search
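
Reciprocal rank fusion is a common, model-free way to merge the two ranked lists; a minimal sketch (the doc IDs are hypothetical):

```python
def reciprocal_rank_fusion(dense: list[str], sparse: list[str], k: int = 60) -> list[str]:
    """Merge two ranked lists of doc IDs; k=60 is the conventional constant."""
    scores: dict[str, float] = {}
    for ranked in (dense, sparse):
        for rank, doc_id in enumerate(ranked, start=1):
            # Standard RRF term: 1 / (k + rank); k dampens the head of each list.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Fuse vector-search hits with keyword (e.g., BM25) hits:
fused = reciprocal_rank_fusion(["d3", "d1", "d7"], ["d1", "d9", "d3"])
```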

### Contextual Reranking

Rerank retrieved docs with LLM for relevance
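
A sketch of the reranking pass; `score_relevance` stands in for an LLM call that returns a numeric relevance judgment (a hypothetical, provider-agnostic callable):

```python
from typing import Callable

def rerank(query: str, candidates: list[str],
           score_relevance: Callable[[str, str], float], top_n: int = 5) -> list[str]:
    # Only score a small head of the candidate list to bound latency and cost.
    head = candidates[: top_n * 2]
    scored = sorted(head, key=lambda doc: score_relevance(query, doc), reverse=True)
    return scored[:top_n]
```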

## Anti-Patterns

### ❌ Fixed-Size Chunking

### ❌ No Overlap

### ❌ Single Retrieval Strategy

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Poor chunking ruins retrieval quality | critical | Use a recursive or semantic splitter with overlap |
| Query and document embeddings from different models | critical | Pin one embedding model for both indexing and querying |
| RAG adds significant latency to responses | high | Cache embeddings, limit top-k, and run retrieval concurrently |
| Documents updated but embeddings not refreshed | medium | Detect changes (e.g., content hashes) and re-embed automatically |
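
For the embedding-consistency edge, one option is to record the model name alongside the index and fail fast on mismatch; a minimal sketch (the model name is illustrative):

```python
EMBEDDING_MODEL = "text-embedding-3-small"  # illustrative; use whatever you standardize on

index_metadata = {"embedding_model": EMBEDDING_MODEL}

def check_query_model(index_meta: dict) -> None:
    """Fail fast if query-side and index-side embedding models differ."""
    if index_meta["embedding_model"] != EMBEDDING_MODEL:
        raise ValueError(
            f"index built with {index_meta['embedding_model']!r} but queries "
            f"use {EMBEDDING_MODEL!r}; re-index or switch models"
        )
```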

## Related Skills

Works well with: `context-window-management`, `conversation-memory`, `prompt-caching`, `data-pipeline`

Overview

This skill packages proven Retrieval-Augmented Generation (RAG) implementation patterns for production systems. It focuses on chunking, embeddings, vector stores, hybrid search, and reranking to deliver precise context to an LLM. Use it to design reliable, scalable retrieval layers that reduce hallucination and latency.

How this skill works

The skill inspects documents and creates semantically meaningful chunks with controlled overlap, then generates embeddings using a consistent model. It stores vectors in a configurable vector store and combines dense and sparse signals for hybrid retrieval. Retrieved candidates are optionally reranked by a contextual LLM pass to surface the best snippets for inclusion in prompts.
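
An end-to-end toy version of that pipeline, with an in-memory index and a placeholder embedding function (a real system would plug in an embedding provider, a vector store, and the hybrid search and reranking steps sketched above):

```python
import math

def toy_embed(text: str) -> list[float]:
    # Placeholder: a 26-dim letter-frequency vector; swap in a real model.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return dot / norm if norm else 0.0

class RagIndex:
    def __init__(self, embed):
        self.embed = embed  # one embedding function for documents AND queries
        self.chunks: list[str] = []
        self.vectors: list[list[float]] = []

    def add(self, chunks: list[str]) -> None:
        self.chunks.extend(chunks)
        self.vectors.extend(self.embed(c) for c in chunks)

    def query(self, question: str, top_k: int = 3) -> list[str]:
        q = self.embed(question)
        order = sorted(range(len(self.chunks)),
                       key=lambda i: cosine(q, self.vectors[i]), reverse=True)
        return [self.chunks[i] for i in order[:top_k]]

index = RagIndex(embed=toy_embed)
index.add(["Refunds are accepted within 30 days.", "Shipping takes 5 business days."])
context = index.query("What is the refund policy?")
```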

When to use it

  • Implementing RAG for knowledge-grounded generation
  • Building semantic search over large document collections
  • Optimizing retrieval latency and relevance in chat assistants
  • Syncing document updates with embeddings and vector indexes
  • Combining keyword and semantic signals for better recall

Best practices

  • Chunk by semantic boundaries rather than fixed byte sizes; include overlap to preserve context
  • Always use a single embedding model family for both documents and queries
  • Combine dense vectors with sparse keyword signals for hybrid search to improve recall
  • Add a lightweight LLM-based reranker for precision on top-N candidates
  • Monitor and refresh embeddings when documents change; automate indexing pipelines (see the sketch after this list)
  • Measure retrieval latency and tune vector-store parameters to meet SLOs
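
For the embedding-refresh practice above, content hashing is a lightweight way to re-embed only documents that actually changed; a minimal sketch (the corpus and the downstream re-indexing step are placeholders):

```python
import hashlib

def needs_reindex(doc_id: str, text: str, hash_store: dict[str, str]) -> bool:
    """True if this document changed since its embeddings were last built."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if hash_store.get(doc_id) == digest:
        return False
    hash_store[doc_id] = digest
    return True

# Run on a schedule; re-chunk and re-embed only the documents that changed.
hashes: dict[str, str] = {}
for doc_id, text in [("policy.md", "Refunds within 30 days...")]:
    if needs_reindex(doc_id, text, hashes):
        pass  # re-chunk, re-embed, and upsert this document's vectors
```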

Example use cases

  • Customer support agent that retrieves policy and product docs with low hallucination
  • Internal knowledge base search combining full-text matches and semantic recall
  • Compliance assistant that surfaces precise contract clauses with reranked excerpts
  • Large-scale Q&A over terabytes where chunk quality and overlap preserve answers
  • CLI tooling to configure and monitor RAG components, embeddings, and index health

FAQ

When should I avoid RAG?

Avoid RAG for tasks solvable by the model alone or when the document set is tiny; RAG adds complexity and latency that may not justify gains.

How large should chunks be?

Chunk by meaning: aim for passages that contain a complete idea or paragraph. Use recursive splitters and validate with retrieval tests rather than fixed sizes.
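
One way to run such a retrieval test: hold out labeled query-to-chunk pairs and compare recall@k across chunking configurations; a minimal sketch (`retrieve` is a placeholder for your retrieval function):

```python
def recall_at_k(labeled: list[tuple[str, str]], retrieve, k: int = 5) -> float:
    """labeled holds (query, gold_chunk_id) pairs; retrieve(query, k) returns
    a ranked list of chunk IDs. Higher recall means better chunking."""
    if not labeled:
        return 0.0
    hits = sum(1 for query, gold in labeled if gold in retrieve(query, k))
    return hits / len(labeled)
```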