---
name: llamaindex-agent
description: LlamaIndex agent and query engine setup for RAG-powered agents
allowed-tools:
- Read
- Write
- Edit
- Bash
- Glob
- Grep
---
# LlamaIndex Agent Skill
## Capabilities
- Set up LlamaIndex query engines
- Configure ReAct agents with tools
- Implement OpenAI function calling agents
- Design sub-question query engines
- Set up multi-document agents
- Implement chat engines with memory
## Target Processes
- rag-pipeline-implementation
- knowledge-base-qa
## Implementation Details
### Agent Types
1. **ReActAgent**: Reasoning and acting agent
2. **OpenAIAgent**: Function calling agent
3. **StructuredPlannerAgent**: Plan-and-execute style
4. **SubQuestionQueryEngine**: Complex query decomposition
### Query Engine Types
- VectorStoreIndex query engine
- Summary index query engine
- Knowledge graph query engine
- SQL query engine
### Configuration Options
- LLM selection
- Tool definitions
- Memory configuration
- Verbose/debug settings
- Query transform modules
### Best Practices
- Choose the index type that matches the data and the query pattern
- Write clear, specific tool descriptions so the agent routes queries correctly
- Enable memory for multi-turn conversations
- Monitor query latency and retrieval quality
### Dependencies
- llama-index
- llama-index-agent-openai
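The two packages above can be installed with pip (versions unpinned here; pin them in a real project):

```shell
pip install llama-index llama-index-agent-openai
```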
## Overview
This skill sets up a LlamaIndex-powered agent and query engine for retrieval-augmented generation (RAG) workflows. It wires LLMs, vector and summary indexes, and agent types into deterministic, resumable orchestration. The result is a reusable agent template for knowledge-base QA and multi-turn conversational tasks.
The skill configures query engines (vector, summary, graph, SQL) and mounts them as tools for different agent classes such as ReAct, OpenAI function-calling, and StructuredPlanner. It also supports sub-question decomposition engines and chat engines with memory, so multi-step queries are split, executed, and stitched back into a final response. Configuration options let you pick LLMs, define tools, set memory and verbosity, and plug in query transform modules for pre- or post-processing.
## FAQ
**Which agent type should I choose for planning vs. function calling?**
Use `StructuredPlannerAgent` for plan-and-execute workflows, `ReActAgent` for interleaved reasoning and tool use, and `OpenAIAgent` when you need native OpenAI function calling.
**What indexes are recommended for large document sets?**
Start with a vector store index for semantic coverage; add a summary index for long documents, and a knowledge-graph or SQL index for structured relationships and precise lookups.