---
name: senior-prompt-engineer
description: World-class prompt engineering skill for LLM optimization, prompt patterns, structured outputs, and AI product development. Expertise in Claude, GPT-4, prompt design patterns, few-shot learning, chain-of-thought, and AI evaluation. Includes RAG optimization, agent design, and LLM system architecture. Use when building AI products, optimizing LLM performance, designing agentic systems, or implementing advanced prompting techniques.
---

# Senior Prompt Engineer

World-class senior prompt engineer skill for production-grade AI/ML/Data systems.

## Quick Start

### Main Capabilities

```bash
# Optimize prompts against a dataset
python scripts/prompt_optimizer.py --input data/ --output results/

# Analyze and evaluate a RAG pipeline
python scripts/rag_evaluator.py --target project/ --analyze

# Deploy an agent orchestration workflow
python scripts/agent_orchestrator.py --config config.yaml --deploy
```

## Core Expertise

This skill covers world-class capabilities in:

- Advanced production patterns and architectures
- Scalable system design and implementation
- Performance optimization at scale
- MLOps and DataOps best practices
- Real-time processing and inference
- Distributed computing frameworks
- Model deployment and monitoring
- Security and compliance
- Cost optimization
- Team leadership and mentoring

## Tech Stack

**Languages:** Python, SQL, R, Scala, Go
**ML Frameworks:** PyTorch, TensorFlow, Scikit-learn, XGBoost
**Data Tools:** Spark, Airflow, dbt, Kafka, Databricks
**LLM Frameworks:** LangChain, LlamaIndex, DSPy
**Deployment:** Docker, Kubernetes, AWS/GCP/Azure
**Monitoring:** MLflow, Weights & Biases, Prometheus
**Databases:** PostgreSQL, BigQuery, Snowflake, Pinecone

## Reference Documentation

### 1. Prompt Engineering Patterns

Comprehensive guide available in `references/prompt_engineering_patterns.md` covering:

- Advanced patterns and best practices
- Production implementation strategies
- Performance optimization techniques
- Scalability considerations
- Security and compliance
- Real-world case studies

### 2. LLM Evaluation Frameworks

Complete workflow documentation in `references/llm_evaluation_frameworks.md` including:

- Step-by-step processes
- Architecture design patterns
- Tool integration guides
- Performance tuning strategies
- Troubleshooting procedures

### 3. Agentic System Design

Technical reference guide in `references/agentic_system_design.md` with:

- System design principles
- Implementation examples
- Configuration best practices
- Deployment strategies
- Monitoring and observability

## Production Patterns

### Pattern 1: Scalable Data Processing

Enterprise-scale data processing with distributed computing:

- Horizontal scaling architecture
- Fault-tolerant design
- Real-time and batch processing
- Data quality validation
- Performance monitoring
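
The fault-tolerance and data-quality aspects of this pattern can be sketched in a few lines. The helper below is illustrative only (not one of this skill's scripts): it gates records on a validation check, retries transient failures, and quarantines records that never succeed.

```python
def process_with_retries(records, transform, validate, max_retries=3):
    """Fault-tolerant batch loop: gate records on a data-quality check,
    retry failed transforms, and quarantine records that never succeed."""
    results, quarantined = [], []
    for record in records:
        if not validate(record):                 # data quality validation
            quarantined.append(record)
            continue
        for attempt in range(max_retries):
            try:
                results.append(transform(record))
                break
            except Exception:
                if attempt == max_retries - 1:   # exhausted retries
                    quarantined.append(record)
    return results, quarantined

# Square valid integer records; everything else is quarantined.
ok, bad = process_with_retries(
    [1, 2, "oops", 3],
    transform=lambda x: x * x,
    validate=lambda x: isinstance(x, int),
)
print(ok, bad)
```

In a real deployment the quarantine list would feed a dead-letter queue and the loop would run per-partition under a framework like Spark or Kafka consumers.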

### Pattern 2: ML Model Deployment

Production ML system with high availability:

- Model serving with low latency
- A/B testing infrastructure
- Feature store integration
- Model monitoring and drift detection
- Automated retraining pipelines
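
Drift detection can be sketched as a simple mean-shift check against the training-time feature distribution; production systems typically use richer statistics (PSI, KS tests), but the shape of the check is the same. The data values here are made up for illustration.

```python
import statistics

def detect_drift(baseline, live, threshold=2.0):
    """Flag drift when the live feature mean moves more than `threshold`
    baseline standard deviations away from the training-time mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(live) - mu) / sigma
    return shift > threshold, shift

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # training-time feature values
drifted, score = detect_drift(baseline, [14.0, 15.0, 14.5, 15.5])
print(drifted, round(score, 2))
```

A drift flag like this would typically trigger the automated retraining pipeline listed above rather than fail the request path.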

### Pattern 3: Real-Time Inference

High-throughput inference system:

- Batching and caching strategies
- Load balancing
- Auto-scaling
- Latency optimization
- Cost optimization
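
The batching and caching strategies combine naturally: cached prompts never reach the backend, and the rest are grouped into fewer, larger calls. `cached_infer` below is a placeholder stub for the real model call, not an actual backend client.

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def cached_infer(prompt: str) -> str:
    # Placeholder for the real model call; identical prompts hit the cache.
    return f"response:{prompt}"

def batched_infer(prompts, batch_size=8):
    """Group requests into fixed-size batches so the backend sees fewer,
    larger calls; repeated prompts are served from the LRU cache."""
    results = []
    for i in range(0, len(prompts), batch_size):
        batch = prompts[i:i + batch_size]
        results.extend(cached_infer(p) for p in batch)
    return results

out = batched_infer(["a", "b", "a"], batch_size=2)
print(out, cached_infer.cache_info().hits)
```

A production serving layer would add request coalescing across concurrent users and a TTL on cache entries; this sketch shows only the core idea.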

## Best Practices

### Development

- Test-driven development
- Code reviews and pair programming
- Documentation as code
- Version control everything
- Continuous integration

### Production

- Monitor everything critical
- Automate deployments
- Feature flags for releases
- Canary deployments
- Comprehensive logging

### Team Leadership

- Mentor junior engineers
- Drive technical decisions
- Establish coding standards
- Foster learning culture
- Cross-functional collaboration

## Performance Targets

**Latency:**
- P50: < 50ms
- P95: < 100ms
- P99: < 200ms

**Throughput:**
- Requests/second: > 1000
- Concurrent users: > 10,000

**Availability:**
- Uptime: 99.9%
- Error rate: < 0.1%
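
These targets can be checked directly from raw latency samples. A minimal sketch using only the standard library (the sample data is synthetic):

```python
import statistics

def latency_report(samples_ms):
    """Summarize request latencies as P50/P95/P99 for comparison
    against the targets above."""
    # quantiles(n=100) returns 99 cut points; index 49 -> P50,
    # index 94 -> P95, index 98 -> P99.
    q = statistics.quantiles(samples_ms, n=100)
    return {"p50": q[49], "p95": q[94], "p99": q[98]}

# Synthetic sample: 100 latency measurements between 10 and 100 ms.
report = latency_report([10, 20, 30, 40, 50, 60, 70, 80, 90, 100] * 10)
print(report)
```

In CI, a report like this can gate deployment by asserting each percentile stays under its target.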

## Security & Compliance

- Authentication & authorization
- Data encryption (at rest & in transit)
- PII handling and anonymization
- GDPR/CCPA compliance
- Regular security audits
- Vulnerability management

## Common Commands

```bash
# Development
python -m pytest tests/ -v --cov
python -m black src/
python -m pylint src/

# Training
python scripts/train.py --config prod.yaml
python scripts/evaluate.py --model best.pth

# Deployment
docker build -t service:v1 .
kubectl apply -f k8s/
helm upgrade service ./charts/

# Monitoring
kubectl logs -f deployment/service
python scripts/health_check.py
```

## Resources

- Advanced Patterns: `references/prompt_engineering_patterns.md`
- Implementation Guide: `references/llm_evaluation_frameworks.md`
- Technical Reference: `references/agentic_system_design.md`
- Automation Scripts: `scripts/` directory

## Senior-Level Responsibilities

As a world-class senior professional:

1. **Technical Leadership**
   - Drive architectural decisions
   - Mentor team members
   - Establish best practices
   - Ensure code quality

2. **Strategic Thinking**
   - Align with business goals
   - Evaluate trade-offs
   - Plan for scale
   - Manage technical debt

3. **Collaboration**
   - Work across teams
   - Communicate effectively
   - Build consensus
   - Share knowledge

4. **Innovation**
   - Stay current with research
   - Experiment with new approaches
   - Contribute to community
   - Drive continuous improvement

5. **Production Excellence**
   - Ensure high availability
   - Monitor proactively
   - Optimize performance
   - Respond to incidents

## Overview

This skill delivers world-class prompt engineering and LLM system design guidance for production AI products. It combines advanced prompt patterns, few-shot and chain-of-thought techniques, RAG optimization, agent orchestration, and evaluation frameworks. Use it to optimize LLM performance, design agentic systems, and scale inference in real-world environments.

## How This Skill Works

The skill inspects prompts, model interactions, retrieval pipelines, and system architecture to propose concrete optimizations and structured output templates. It analyzes few-shot examples, chain-of-thought strategies, and RAG configurations, then recommends changes to prompts, data flows, and deployment settings. It also provides scripts and commands to run prompt optimization, RAG evaluation, and agent orchestration workflows in a production pipeline. Results include measurable performance targets, monitoring recommendations, and deployment-safe patterns.

## When to Use It

- Designing or refactoring prompts for Claude, GPT-4, or similar LLMs
- Building retrieval-augmented generation (RAG) pipelines or improving retrieval quality
- Designing agentic systems or orchestrating multiple LLMs and tools
- Optimizing latency, throughput, and cost for production inference
- Establishing evaluation frameworks and automated testing for LLM behavior

## Best Practices

- Use few-shot examples and explicit output schemas to enforce structured outputs
- Instrument models and pipelines with monitoring (latency, error rates, drift) before rollout
- Adopt TDD and CI for prompt logic and evaluation scripts to prevent regressions
- Isolate sensitive data, apply PII anonymization, and enforce encryption in transit and at rest
- Start with canary or feature-flagged releases and ramp traffic after validating metrics
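
The structured-output practice above can be sketched as a prompt template with an embedded schema and few-shot example, plus a validator that rejects non-conforming replies before they reach downstream code. The template, field names, and example reply here are illustrative assumptions, not a fixed API.

```python
import json

# Hypothetical template: the schema and a few-shot example are embedded
# directly in the instructions so the model sees the exact target shape.
PROMPT_TEMPLATE = """Extract the order details as JSON matching this schema:
{{"product": string, "quantity": integer}}

Example:
Input: "Two widgets please"
Output: {{"product": "widget", "quantity": 2}}

Input: "{user_text}"
Output:"""

REQUIRED_FIELDS = {"product": str, "quantity": int}

def validate_output(raw: str) -> dict:
    """Parse a model reply and enforce the declared schema before use."""
    data = json.loads(raw)
    for field, typ in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"field {field!r} missing or wrong type")
    return data

prompt = PROMPT_TEMPLATE.format(user_text="Three gizmos")
# Simulated model reply; a malformed reply would raise before reaching callers.
parsed = validate_output('{"product": "gizmo", "quantity": 3}')
print(parsed)
```

Pairing every structured prompt with a validator like this is what makes regressions catchable in CI rather than in production.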

## Example Use Cases

- Optimize customer-support completion prompts to reduce hallucinations and lower response latency
- Design an agent orchestrator that composes retrieval, tool calls, and multi-step reasoning for automation tasks
- Implement RAG with vector stores and fine-tuned retriever for domain-specific QA
- Set up evaluation pipelines to measure P50/P95 latency, throughput, and answer quality during CI runs
- Mentor engineering teams on prompt patterns, production architecture, and model governance
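
The RAG use case can be illustrated with a toy cosine-similarity retriever over hand-made embeddings; a real system would use an embedding model and a vector store such as Pinecone (listed in the tech stack above), but the ranking logic is the same.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_vec, corpus, top_k=2):
    """Rank document vectors by similarity to the query embedding."""
    scored = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Hand-made 3-d "embeddings" standing in for real model output.
corpus = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-faq":  [0.1, 0.9, 0.1],
    "api-reference": [0.0, 0.2, 0.9],
}
print(retrieve([0.8, 0.2, 0.0], corpus, top_k=1))
```

The retrieved document IDs would then be expanded into passages and injected into the generation prompt.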

## FAQ

**Which models and frameworks does this skill target?**

Primary focus is on Claude and GPT-family models, with guidance for LangChain, LlamaIndex, and common deployment frameworks.

**Will this provide deployment-ready code?**

The skill includes production-oriented scripts and patterns for optimization, testing, and deployment; adapt them to your infrastructure and security constraints.