
model-intel-pro skill

/skills/aiwithabidi/model-intel-pro

This skill provides up-to-date OpenRouter model pricing, capabilities, and comparisons to help you pick the best model for any use case.

npx playbooks add skill openclaw/skills --skill model-intel-pro

Review the files below or copy the command above to add this skill to your agents.

Files (3): SKILL.md (2.2 KB)
---
name: model-intel
version: 1.0.0
description: >
  Live LLM model pricing and capabilities from OpenRouter. List top models, search by name,
  compare side-by-side, find best model for a use case, check pricing. Always up-to-date
  from the OpenRouter API. Triggers: model pricing, compare models, best model for,
  cheapest model, model cost, LLM comparison, what models are available.
license: MIT
compatibility:
  openclaw: ">=0.10"
metadata:
  openclaw:
    requires:
      bins: ["python3"]
      env: ["OPENROUTER_API_KEY"]
---

# Model Intel šŸ§ šŸ’°

Live LLM model intelligence — pricing, capabilities, and comparisons from OpenRouter.

## When to Use

- Finding the best model for a specific task (coding, reasoning, creative, fast, cheap)
- Comparing model pricing and capabilities
- Checking current model availability and context lengths
- Answering "what's the cheapest model that can do X?"

## Usage

```bash
# List top models by provider
python3 {baseDir}/scripts/model_intel.py list

# Search by name
python3 {baseDir}/scripts/model_intel.py search "claude"

# Side-by-side comparison
python3 {baseDir}/scripts/model_intel.py compare "claude-opus" "gpt-4o"

# Best model for a use case
python3 {baseDir}/scripts/model_intel.py best fast
python3 {baseDir}/scripts/model_intel.py best code
python3 {baseDir}/scripts/model_intel.py best reasoning
python3 {baseDir}/scripts/model_intel.py best cheap
python3 {baseDir}/scripts/model_intel.py best vision

# Pricing details
python3 {baseDir}/scripts/model_intel.py price "gemini-flash"
```

## Use Cases

| Command | When |
|---------|------|
| `best fast` | Need lowest latency |
| `best cheap` | Budget-constrained |
| `best code` | Programming tasks |
| `best reasoning` | Complex logic/math |
| `best vision` | Image understanding |
| `best long-context` | Large document processing |

## Credits

Built by [M. Abidi](https://www.linkedin.com/in/mohammad-ali-abidi) | [agxntsix.ai](https://www.agxntsix.ai)
[YouTube](https://youtube.com/@aiwithabidi) | [GitHub](https://github.com/aiwithabidi)
Part of the **AgxntSix Skill Suite** for OpenClaw agents.

šŸ“… **Need help setting up OpenClaw for your business?** [Book a free consultation](https://cal.com/agxntsix/abidi-openclaw)

## Overview

This skill provides live LLM model intelligence sourced from the OpenRouter API, including pricing, capabilities, and availability. It lists top models, supports search by name and side-by-side comparison, and recommends the best model for a given task. Because data is fetched from OpenRouter on every query, context lengths and cost estimates are always current.

## How this skill works

The skill queries the OpenRouter API in real time to retrieve model metadata, pricing per token, context window sizes, and capability tags. It can filter and rank models by criteria like latency, cost, capability (code, reasoning, vision), and maximum context. Comparison outputs show side-by-side specs and cost tradeoffs to help choose the right model quickly.
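The shape of that metadata can be sketched as follows. The sample payload below is illustrative (hard-coded, not live data), but it mirrors the fields the OpenRouter `GET /api/v1/models` endpoint returns for each model (`id`, `context_length`, and per-token `pricing`); the exact values shown are made up:

```python
import json

# Illustrative sample mirroring the shape of the OpenRouter
# GET https://openrouter.ai/api/v1/models response; the values are made up.
SAMPLE_RESPONSE = json.dumps({
    "data": [
        {"id": "anthropic/claude-opus-4", "context_length": 200000,
         "pricing": {"prompt": "0.000015", "completion": "0.000075"}},
        {"id": "google/gemini-flash-1.5", "context_length": 1000000,
         "pricing": {"prompt": "0.000000075", "completion": "0.0000003"}},
    ]
})

def extract_models(raw: str) -> list[dict]:
    """Parse a models payload into {id, context, prompt_price, completion_price}."""
    models = []
    for m in json.loads(raw)["data"]:
        models.append({
            "id": m["id"],
            "context": m["context_length"],
            # OpenRouter reports prices as strings in USD per token
            "prompt_price": float(m["pricing"]["prompt"]),
            "completion_price": float(m["pricing"]["completion"]),
        })
    return models

for m in extract_models(SAMPLE_RESPONSE):
    print(f"{m['id']}: {m['context']:,} ctx, "
          f"${m['prompt_price'] * 1_000_000:.2f}/M prompt tokens")
```

A real lookup would fetch the same JSON over HTTPS before parsing; the rest of the pipeline is unchanged.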

## When to use it

- When you need the cheapest model capable of a task
- When comparing capabilities and token pricing across models
- When selecting a model for coding, reasoning, or vision tasks
- When you need current context length and availability info
- When evaluating tradeoffs between latency, cost, and quality

## Best practices

- Define the primary objective (cost, speed, capability) before searching
- Use `best` filters (`cheap`, `fast`, `code`, `reasoning`, `vision`) to narrow choices quickly
- Check context window limits for long-document use cases
- Weigh capability tags against per-token pricing to estimate real cost
- Re-run lookups before deployment to capture recent price or availability changes
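Turning per-token prices into a real cost estimate is simple arithmetic. The prices in this sketch are placeholders, not current OpenRouter values; re-check live pricing before relying on any estimate:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_price: float, completion_price: float) -> float:
    """Estimate the USD cost of one request from per-token prices."""
    return prompt_tokens * prompt_price + completion_tokens * completion_price

# Placeholder prices in USD per token -- not live OpenRouter values.
cost = estimate_cost(prompt_tokens=2_000, completion_tokens=500,
                     prompt_price=3e-06, completion_price=1.5e-05)
print(f"~${cost:.4f} per request")
```

Multiply by expected request volume to get a rough monthly budget figure.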

## Example use cases

- Find the cheapest model that handles code-completion reliably
- Compare two candidate models side-by-side for pricing and context length
- Choose the best low-latency model for interactive chat
- Select a model with large context for document summarization
- Verify current pricing before estimating production inference costs
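A side-by-side comparison like the second use case amounts to tabulating the same fields for two models. The specs below are illustrative stand-ins, not live OpenRouter data:

```python
# Illustrative specs; real values come from the OpenRouter API at query time.
MODELS = {
    "claude-opus": {"context": 200_000, "prompt_price": 1.5e-05, "completion_price": 7.5e-05},
    "gpt-4o":      {"context": 128_000, "prompt_price": 2.5e-06, "completion_price": 1e-05},
}

def compare(a: str, b: str) -> list[str]:
    """Build simple side-by-side comparison rows for two models."""
    rows = [f"{'field':<18}{a:<16}{b:<16}"]
    for field in ("context", "prompt_price", "completion_price"):
        rows.append(f"{field:<18}{MODELS[a][field]:<16}{MODELS[b][field]:<16}")
    return rows

for row in compare("claude-opus", "gpt-4o"):
    print(row)
```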

## FAQ

**How fresh is the pricing and capability data?**

Data is pulled live from the OpenRouter API, so pricing, availability, and context sizes reflect current values at the time of the query.

**Can it recommend a model for a mixed task like code plus long context?**

Yes. The `best` command accepts task labels and ranks models by combined suitability (capabilities, context length, and cost), so you get a balanced recommendation.
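A combined ranking of this kind can be sketched as a filter-then-score pass: require the needed capabilities and a minimum context window, then prefer cheaper models. The candidate models, capability tags, and scoring below are assumptions for illustration, not the skill's actual algorithm:

```python
# Illustrative candidates; capability tags and prices are assumed, not real.
CANDIDATES = [
    {"id": "model-a", "caps": {"code"},           "context": 128_000,   "prompt_price": 2.5e-06},
    {"id": "model-b", "caps": {"code", "vision"}, "context": 1_000_000, "prompt_price": 7.5e-08},
    {"id": "model-c", "caps": {"reasoning"},      "context": 200_000,   "prompt_price": 1.5e-05},
]

def suitability(model: dict, needed_caps: set[str], min_context: int) -> float:
    """Score 0 if requirements are unmet; otherwise higher for cheaper models."""
    if not needed_caps <= model["caps"] or model["context"] < min_context:
        return 0.0
    # Price in USD per million tokens; +1 keeps the score bounded near zero cost.
    return 1.0 / (model["prompt_price"] * 1_000_000 + 1.0)

best = max(CANDIDATES, key=lambda m: suitability(m, {"code"}, 100_000))
print(best["id"])
```

Weights for latency or quality tags could be folded into the same score in the same way.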