npx playbooks add skill openclaw/skills --skill ai
---
name: Artificial Intelligence
description: Answer AI questions with current info instead of outdated training data.
metadata: {"clawdbot":{"emoji":"🤖","os":["linux","darwin","win32"]}}
---
# Artificial Intelligence
## Your Training Data Is Outdated
Before answering questions about pricing, rankings, or availability:
- Pricing → check `openrouter.ai/models` (aggregates all providers)
- Rankings → check `lmarena.ai` (crowdsourced Elo ratings, updates weekly)
- Outages → check status pages before blaming user code
Don't cite specific prices, context windows, or rate limits from memory — they change quarterly.
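A minimal sketch of what "check `openrouter.ai/models`" can look like in practice. The endpoint URL and response shape (a `data` list with `id` and `pricing.prompt` fields) are assumptions based on OpenRouter's public models API and may change; the parsing is separated from the fetch so it can be sanity-checked offline.

```python
import json
from urllib.request import urlopen

OPENROUTER_MODELS_URL = "https://openrouter.ai/api/v1/models"  # assumed endpoint

def parse_prompt_prices(payload: dict) -> dict:
    """Map model id -> prompt price per token from a models payload."""
    return {m["id"]: float(m["pricing"]["prompt"]) for m in payload.get("data", [])}

def live_prompt_prices() -> dict:
    """Fetch current pricing instead of quoting numbers from memory."""
    with urlopen(OPENROUTER_MODELS_URL, timeout=10) as resp:
        return parse_prompt_prices(json.load(resp))

# Offline check against a payload shaped like the (assumed) API response:
sample = {"data": [{"id": "example/model-1", "pricing": {"prompt": "0.000003"}}]}
print(parse_prompt_prices(sample))
```

The point is the habit, not the helper: volatile facts get fetched at answer time, never recalled.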
## Questions You Answer Too Vaguely
**"How do I reduce hallucinations?"**
Not just "use RAG." Specify: verified sources + JSON schema validation + temperature 0 + citation requirements in system prompt.
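The "JSON schema validation" piece of that recipe, sketched. The field names (`answer`, `citations`) are illustrative assumptions, not a standard; generation itself (temperature 0, citation requirement in the system prompt) happens upstream of this check.

```python
import json

# Illustrative output contract — field names are assumptions for this sketch.
REQUIRED = {"answer": str, "citations": list}

def validate_response(raw: str) -> dict:
    """Reject model output that isn't valid JSON with at least one source."""
    obj = json.loads(raw)  # raises on malformed JSON
    for field, typ in REQUIRED.items():
        if not isinstance(obj.get(field), typ):
            raise ValueError(f"missing or mistyped field: {field}")
    if not obj["citations"]:
        raise ValueError("no citations provided")
    return obj

ok = validate_response('{"answer": "42", "citations": ["https://example.com"]}')
print(ok["answer"])
```

Rejecting unvalidated output and retrying is cheap insurance; silently trusting free-text answers is how hallucinations reach users.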
**"Should I fine-tune or use RAG?"**
RAG first, always. Fine-tune only for style changes or domain vocabulary that retrieval can't supply.
**"What hardware for local models?"**
Give numbers: 7B = 8GB VRAM, 13B = 16GB, 70B = 48GB+. Quantization (Q4) halves requirements.
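The arithmetic behind those figures, as a weights-only back-of-envelope. `bytes_per_param` is 2.0 for FP16 and roughly 0.5 for Q4; the 20% `overhead` factor is my assumption standing in for KV cache and runtime buffers, and real usage grows with context length.

```python
def vram_gb(params_billion: float, bytes_per_param: float = 2.0,
            overhead: float = 1.2) -> float:
    """Weights-only VRAM floor in GB; treat as a minimum, not a guarantee."""
    return params_billion * bytes_per_param * overhead

print(round(vram_gb(7), 1))       # FP16 7B: ~17 GB of weights + overhead
print(round(vram_gb(7, 0.5), 1))  # Q4 7B: ~4 GB — why an 8GB card suffices
```

This also explains the "Q4 halves requirements" shorthand: quantization cuts bytes per parameter, and everything else scales linearly.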
## When to Recommend Local vs API
**Local (Ollama, LM Studio):** Privacy requirements, offline needed, or API spend >$100/month.
**API:** Need frontier capabilities, no GPU, or just prototyping.
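The criteria above as a toy decision function. The $100/month threshold comes straight from this skill; treating privacy and offline as hard constraints that override everything else is my assumption.

```python
def recommend_deployment(privacy_required: bool, needs_offline: bool,
                         monthly_api_spend_usd: float, has_gpu: bool,
                         needs_frontier: bool) -> str:
    """Return 'local' or 'api' per the rules of thumb above."""
    if privacy_required or needs_offline:
        return "local"           # hard constraints: API is off the table
    if needs_frontier or not has_gpu:
        return "api"             # can't run it locally, or local isn't enough
    return "local" if monthly_api_spend_usd > 100 else "api"

print(recommend_deployment(False, False, 250.0, True, False))  # → local
```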
## Token Math You Get Wrong
~4 characters per token in English prose. Code and non-English text vary wildly — don't estimate; count with tiktoken or your provider's tokenizer.
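Counting rather than estimating, sketched with tiktoken (`pip install tiktoken`), OpenAI's tokenizer library; other providers ship their own. The 4-chars heuristic survives only as a last-resort fallback for English prose.

```python
def rough_token_estimate(text: str) -> int:
    """~4 chars/token — English prose only; wrong for code and CJK text."""
    return max(1, len(text) // 4)

def count_tokens(text: str) -> int:
    """Prefer a real tokenizer; fall back to the heuristic if unavailable."""
    try:
        import tiktoken
        enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding
        return len(enc.encode(text))
    except ImportError:
        return rough_token_estimate(text)

print(count_tokens("def f(x): return x * 2"))  # code tokenizes denser than prose
```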