
logophile skill

/codex/skills/logophile

This skill tightens wording and compresses text while preserving meaning, boosting clarity and scannability across prompts, docs, and emails.

npx playbooks add skill tkersey/dotfiles --skill logophile

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.5 KB
---
name: logophile
description: "Editing mode for clarity and semantic density. Trigger cues/keywords: `$logophile`, tighten wording, rewrite for brevity, sharpen prompts/docs/specs/emails, compress verbose text, improve naming/title/label options, and keep intent/tone while making copy faster to scan."
---

# Logophile

## Intent
Maximize semantic density: fewer tokens, same meaning.

## Use when
- User asks to tighten/clarify/compress.
- Text is verbose/repetitive or can't be scanned in <30s.
- Names/titles/labels/headings feel weak.

## Motto (say once)
Precision through sophistication, brevity through vocabulary, clarity through structure.

## Output (default: fast)
- fast: revised text only (no preamble, no recap).
- annotated: revised text + `Edits:` (lexical; structural).
- delta: only if asked or reduction > 40% (words/chars).

## Loop (Distill -> Densify -> Shape -> Verify)
- Distill: 1-sentence intent; must-keep tokens (numbers/keywords/quotes/code).
- Densify: delete filler/hedges; verbify nouns; precision ladder (axis -> exact verb/property); reuse terms.
- Shape: lead with action; keep sentences atomic; parallelize lists; punctuation > scaffolding.
- Verify: intent/obligations/risks/scope unchanged; required tokens + format preserved.

## Guardrails
- Don't change intent.
- Don't compress away obligations, risks, or scope.
- Don't "upgrade" vocabulary if it changes meaning or increases tokens without adding precision.
- Don't add meta-text ("here's", "I think", "note that") unless requested.

## Constraints (ask only if blocked)
Fields: must_keep; must_not_change; tone; length_target; format; keywords_include; keywords_avoid.
Defaults: must_keep=all facts/numbers/quotes/code; tone=original; format=preserve; length_target=min safe.

## Elevation (precision ladder)
- Step: vague word -> axis (what changes?) -> exact verb/property -> object.
- Axis: correctness | speed | cost | reliability | security | UX | scope | consistency.
- Swaps (only if true): improve->simplify/stabilize/accelerate/harden; handle->parse/validate/normalize/route/retry/throttle; robust->bounded/idempotent/deterministic/fail-closed.
- Term-of-art rule: use 1 domain term if it replaces >=3 tokens and fits audience; else keep phrase.

## High-ROI compressions
- in order to -> to; due to the fact that -> because; is able to -> can.
- there is/are -> concrete subject + verb.
- nominalization -> verb (conduct an analysis -> analyze).
- delete: throat-clearing, apologies, self-reference, empty intensifiers.
- token-aware: prefer short, common words; digits + contractions if tone allows.

Overview

This skill edits copy for clarity and semantic density, tightening wording while preserving intent and tone. It compresses verbose text, sharpens names and labels, and produces faster-to-scan output. Default mode returns the revised text only; optional modes add inline edits or reduction deltas.
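
For example (an illustrative sketch, not output captured from the skill), given a verbose input each mode would return:

  Input: In order to be able to deploy the service, you will first need to conduct a review of the configuration file.
  fast: To deploy the service, first review the configuration file.
  annotated: the same revision, followed by "Edits: lexical (in order to be able to -> to; conduct a review of -> review); structural (led with the action)."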

How this skill works

On request, it distills the source into a one-sentence intent and preserves required tokens (numbers, code, quotes). It removes filler, verbifies nominalizations, and applies a precision ladder to replace vague words with exact actions or properties. The output is reshaped to lead with actions, keep sentences atomic, and parallelize lists; verification ensures obligations, risks, and facts remain unchanged.
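
A walk through the loop on a single sentence (the staging below is illustrative, not a transcript of the skill):

  Source: There is a problem with the way that we are currently handling errors, due to the fact that the retry logic is not able to recover from timeouts.
  Distill: intent = error handling breaks because retries don't recover from timeouts; must-keep tokens: retry, timeouts.
  Densify: there is -> concrete subject + verb; due to the fact that -> because; is not able to -> can't; drop "currently".
  Shape: lead with the failure; keep one atomic sentence.
  Result: Error handling breaks because the retry logic can't recover from timeouts.
  Verify: intent, risk, and the required tokens are unchanged, at about 60% fewer words.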

When to use it

  • Tighten long emails, docs, prompts, or specs that are slow to scan
  • Compress repetitive or filler-heavy paragraphs without losing meaning
  • Refine names, titles, labels, or headings for clarity and impact
  • Sharpen prompts for LLMs to increase signal per token
  • Produce compact versions for UIs with space constraints

Best practices

  • Provide must_keep tokens (numbers, code, exact phrasing) when critical; see the sample request after this list
  • Specify desired tone or length target if different from the source
  • Use the default fast mode for quick rewrites, annotated for review
  • Avoid asking for creative rewording that would change technical meaning
  • Request format constraints (bullets, table, single sentence) up front
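
A hypothetical request that follows these practices (the field names mirror the Constraints section of SKILL.md; the invocation syntax itself is an assumption, not something the skill prescribes):

  $logophile: tighten the paragraph below.
  must_keep: the March 14 deadline; the $42.50 refund amount
  tone: formal
  length_target: under 80 words
  format: bullets
  [paragraph to rewrite]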

Example use cases

  • Shrink a three-paragraph feature spec into a one-paragraph summary while keeping acceptance criteria
  • Rewrite onboarding UI labels and headings to fit mobile space without losing clarity
  • Condense a long support email into a concise response that preserves required steps and deadlines
  • Compress a verbose prompt into a token-efficient version for cost-sensitive model calls
  • Generate multiple short title options from a long article headline

FAQ

Will this change technical facts or obligations?

No. The process verifies that facts, obligations, risks, and required tokens remain intact; it only trims wording and reshapes structure.

How much reduction can I expect?

Typical reductions vary: short, already-tight copy shrinks modestly, while filler-heavy text compresses substantially. Request delta mode or set a length_target if you need an explicit threshold.
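
As a rough illustration (the numbers are hypothetical): trimming a 120-word email to 70 words is a reduction of about 42%, which crosses the 40% mark at which the skill reports a delta even without being asked.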