
cc-usage skill

/.claude/skills/cc-usage

This skill analyzes local Claude Code logs to summarize token usage by date and model, returning readable Markdown tables.

npx playbooks add skill aster110/mycc --skill cc-usage


Files (2)

SKILL.md
---
name: cc-usage
description: View Claude Code token usage statistics, broken down by date × model, with filters for day count and project. Trigger phrases: "/cc-usage", "look at usage", "token consumption", "usage stats"
---

# cc-usage — Token usage statistics

Scans local Claude Code logs (`~/.claude/projects/`) and aggregates token consumption and API-equivalent cost by **date × model**.

Pure Python 3 script: no dependencies to install, cross-platform (Mac / Linux / Windows).

## Trigger phrases

- "/cc-usage"
- "look at usage"
- "token consumption"
- "usage stats"

## Steps

1. Determine the parameters from the user's request (day count, project, output format)
2. Run the analysis script
3. Return the results to the user as an **easy-to-read Markdown table**

## Script location

```
.claude/skills/cc-usage/scripts/analyzer.py
```

## Usage

```bash
# Default: full history, all projects
python3 .claude/skills/cc-usage/scripts/analyzer.py

# Last N days
python3 .claude/skills/cc-usage/scripts/analyzer.py --days 7

# One project only (fuzzy match on the directory name)
python3 .claude/skills/cc-usage/scripts/analyzer.py --project mylife

# CSV output (importable into Excel)
python3 .claude/skills/cc-usage/scripts/analyzer.py --csv

# Per-model summary only
python3 .claude/skills/cc-usage/scripts/analyzer.py --summary
```

## Default behavior

If the user does not specify a day count, run with `--days 7` (last 7 days) by default.

## Output requirements

After the script runs, the AI should:
1. Organize the key data into a Markdown table (day × model)
2. Provide daily subtotals and a grand total
3. Append a per-model summary (which model costs the most)
4. Proactively point out anomalies (e.g. a sudden one-day spike)
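One simple spike heuristic for point 4, sketched below, is to flag any day whose total is well above the median day. This is an illustrative suggestion, not the script's actual logic; the function name and threshold are made up here.

```python
from statistics import median

def flag_spikes(daily_totals, factor=3.0):
    """Return dates whose token total exceeds factor x the median daily total.

    daily_totals: {"YYYY-MM-DD": total_tokens}. Heuristic and threshold
    are illustrative, not taken from analyzer.py.
    """
    if len(daily_totals) < 3:
        return []  # too little data to call anything a spike
    med = median(daily_totals.values())
    return [day for day, total in sorted(daily_totals.items())
            if med and total > factor * med]

# flag_spikes({"2024-05-01": 1000, "2024-05-02": 1200, "2024-05-03": 9000})
# -> ["2024-05-03"]
```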

## Cross-platform notes

- Paths: `os.path.expanduser('~')` adapts automatically
- Timezone: `datetime.astimezone()` detects the system's local timezone
- Dependencies: Python 3 standard library only, no pip install needed
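The two portability points above boil down to a few standard-library calls, roughly like this (a minimal illustration, not the script's actual code):

```python
import os
from datetime import datetime, timezone

# Resolve the log directory relative to the user's home on any OS.
log_root = os.path.join(os.path.expanduser('~'), '.claude', 'projects')

# Convert a UTC timestamp from a log entry into the system's local date;
# astimezone() with no argument uses the local timezone automatically.
utc_ts = datetime(2024, 5, 1, 23, 30, tzinfo=timezone.utc)
local_date = utc_ts.astimezone().date()
```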

## Maintenance note

When new models ship, update the `MODEL_SHORT` and `PRICING` dictionaries in the script.
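The actual contents of those dictionaries live in analyzer.py; their shape is presumably along these lines. The model IDs, short names, and prices below are placeholders for illustration, not the real entries:

```python
# Hypothetical shapes for the two maintenance dictionaries.
# Check analyzer.py for the real keys and prices before editing.
MODEL_SHORT = {
    "claude-3-5-sonnet-20241022": "sonnet-3.5",
    "claude-3-opus-20240229": "opus-3",
}

# USD per million tokens (input / output) -- illustrative numbers only.
PRICING = {
    "sonnet-3.5": {"input": 3.00, "output": 15.00},
    "opus-3": {"input": 15.00, "output": 75.00},
}

def cost_usd(short_name, input_tokens, output_tokens):
    """API-equivalent cost for one usage record."""
    p = PRICING[short_name]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```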

Overview

This skill analyzes local Claude Code logs to produce token usage and API-equivalent cost statistics broken down by date and model. It runs a standalone Python script that scans ~/.claude/projects/, aggregates usage, and returns readable summaries and tables. The output highlights daily subtotals, totals, model cost rankings, and flags unusual spikes.

How this skill works

The skill runs a pure Python 3 analyzer that reads Claude Code log files under the user's home directory and groups tokens by date and model. It converts token counts into API-equivalent costs using configured pricing, computes daily and overall totals, and detects anomalies such as sudden usage spikes. Results are formatted as a Markdown table and can be exported as CSV or a model-only summary.
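That date × model aggregation can be sketched as follows. The JSONL layout assumed here (a `timestamp` field plus `message.model` and `message.usage` token counts) is a guess at the Claude Code log schema, not taken from analyzer.py:

```python
import json
import os
from collections import defaultdict
from datetime import datetime

def aggregate(log_root):
    """Sum tokens per (local date, model) across .jsonl logs under log_root."""
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for dirpath, _dirs, files in os.walk(log_root):
        for name in files:
            if not name.endswith(".jsonl"):
                continue
            with open(os.path.join(dirpath, name), encoding="utf-8") as f:
                for line in f:
                    try:
                        entry = json.loads(line)
                    except json.JSONDecodeError:
                        continue  # skip malformed lines
                    usage = entry.get("message", {}).get("usage")
                    if not usage:
                        continue
                    # Parse the UTC timestamp and bucket by local calendar day.
                    ts = datetime.fromisoformat(
                        entry["timestamp"].replace("Z", "+00:00"))
                    day = ts.astimezone().date().isoformat()
                    model = entry["message"].get("model", "unknown")
                    totals[(day, model)]["input"] += usage.get("input_tokens", 0)
                    totals[(day, model)]["output"] += usage.get("output_tokens", 0)
    return totals
```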

When to use it

  • When you want a quick breakdown of Claude Code token consumption by date and model
  • To review recent usage for the last N days (default: 7 days)
  • When auditing cost drivers to identify the most expensive models
  • Before adjusting usage policies or debugging unexpected billing increases
  • When you need CSV output to analyze further in Excel

Best practices

  • Run the analyzer from an account that owns the ~/.claude/projects/ logs to ensure full access
  • Use the --days option to limit scope for faster results and focused troubleshooting
  • Filter by --project with a fuzzy name to isolate specific project usage
  • Keep MODEL_SHORT and PRICING dictionaries updated when new models are added
  • Check flagged anomalies promptly—large one-day spikes often indicate a runaway job

Example use cases

  • Get a 7-day token and cost report to share with your team after a development sprint
  • Filter usage to a single project to attribute costs for internal chargebacks
  • Export CSV for finance to import into a cost-tracking spreadsheet
  • Run a summary to quickly identify which model is driving the highest API-equivalent spend
  • Detect an unexpected spike on a specific date and trace it back to recent commits or jobs

FAQ

What triggers start the skill?

Invoke the analyzer with phrases like "/cc-usage", "look at usage", "token consumption", or "usage stats".

Do I need to install dependencies?

No. The script uses only the Python 3 standard library and is cross-platform (macOS, Linux, Windows).