This skill helps you build effective fuzzing dictionaries to boost fuzzer performance and uncover edge-case inputs.
```
npx playbooks add skill plurigrid/asi --skill fuzzing-dictionary
```
---
name: fuzzing-dictionary
description: Building effective fuzzing dictionaries for improved fuzzer performance.
category: testing-handbook-skills
author: Trail of Bits
source: trailofbits/skills
license: AGPL-3.0
trit: -1
trit_label: MINUS
verified: true
featured: false
---
# Fuzzing Dictionary Skill
**Trit**: -1 (MINUS)
**Category**: testing-handbook-skills
**Author**: Trail of Bits
**Source**: trailofbits/skills
**License**: AGPL-3.0
## Description
Building effective fuzzing dictionaries for improved fuzzer performance.
## When to Use
This is a Trail of Bits security skill. Refer to the original repository for detailed usage guidelines and examples.
See: https://github.com/trailofbits/skills
## Related Skills
- audit-context-building
- codeql
- semgrep
- variant-analysis
## SDF Interleaving
This skill connects to **Software Design for Flexibility** (Hanson & Sussman, 2021):
### Primary Chapter: 3. Variations on an Arithmetic Theme
**Concepts**: generic arithmetic, coercion, symbolic, numeric
### GF(3) Balanced Triad
```
fuzzing-dictionary (−) + SDF.Ch3 (○) + [balancer] (+) = 0
```
**Skill Trit**: -1 (MINUS)
### Connection Pattern
Generic arithmetic crosses type boundaries; this skill likewise handles heterogeneous data, mixing text tokens and binary fragments in a single dictionary.
This skill builds effective fuzzing dictionaries to improve coverage and bug-finding performance of input-based fuzzers. It focuses on generating, organizing, and prioritizing tokens and structured input fragments that target parser logic and common failure modes. The goal is to make fuzzers reach deeper program states faster and produce higher-quality crashes.
The skill analyzes target input formats and runtime feedback to generate candidate tokens and input fragments. It ranks and prunes entries based on frequency, uniqueness, and observed impact on coverage, and emits dictionaries compatible with common fuzzers. It can incorporate domain-specific patterns and type-aware mutations to increase the likelihood of exercising semantic checks.
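As a minimal sketch of that pipeline, the following extracts candidate tokens from sample inputs, ranks them by frequency, and emits them in the `keyword="value"` line format that AFL++ dictionaries and libFuzzer's `-dict=` option accept. The extraction heuristic, the `tok_N` names, and the sample corpus are illustrative assumptions, not part of the skill itself:

```python
import re
from collections import Counter

def extract_tokens(samples, min_len=3, max_len=32, top_n=50):
    """Collect candidate dictionary tokens from sample inputs.

    Pulls ASCII keywords and identifiers (e.g. format tags, field names),
    then ranks them by frequency so rare noise can be pruned later.
    """
    counts = Counter()
    for data in samples:
        for tok in re.findall(r"[A-Za-z_][A-Za-z0-9_-]{2,31}", data):
            if min_len <= len(tok) <= max_len:
                counts[tok] += 1
    return [tok for tok, _ in counts.most_common(top_n)]

def to_afl_dict(tokens):
    """Emit tokens as keyword="value" lines, the format accepted by AFL++
    and libFuzzer's -dict= option; the entry names are arbitrary."""
    lines = []
    for i, tok in enumerate(tokens):
        escaped = tok.replace("\\", "\\\\").replace('"', '\\"')
        lines.append(f'tok_{i}="{escaped}"')
    return "\n".join(lines)

# Hypothetical corpus of sample inputs for the target format
samples = [
    '{"method": "GET", "path": "/index"}',
    '{"method": "POST", "path": "/admin"}',
]
tokens = extract_tokens(samples)
print(to_afl_dict(tokens))
```

In practice the corpus would come from real seed inputs, and non-printable bytes in extracted tokens would need `\xNN` escapes in the emitted entries.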
## FAQ
**Will a larger dictionary always improve fuzzing?**
No. Larger dictionaries can add noise and slow useful mutations; prioritize impact and prune ineffective entries.
**How do I measure dictionary effectiveness?**
Compare coverage growth and unique crashes across runs with and without the dictionary, and track contribution per token.
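A with/without comparison reduces to set arithmetic over covered edges. This sketch assumes you can export covered edge IDs from both runs and attribute first-reached edges to the token that seeded the mutation; all data here is hypothetical:

```python
def dict_effectiveness(baseline_edges, dict_edges, per_token_edges):
    """Summarize a with/without-dictionary comparison.

    baseline_edges / dict_edges: sets of covered edge IDs from each run.
    per_token_edges: edges first reached by mutations seeded from each token.
    """
    gained = dict_edges - baseline_edges
    contribution = {
        tok: len(edges & gained) for tok, edges in per_token_edges.items()
    }
    return {"new_edges": len(gained), "per_token": contribution}

baseline = {1, 2, 3}
with_dict = {1, 2, 3, 7, 8, 9}
per_token = {"GET": {7, 8}, "POST": {9}, "zzzz": set()}
print(dict_effectiveness(baseline, with_dict, per_token))
```

Unique-crash counts deserve the same paired comparison, since a dictionary that grows coverage but never reaches new crash states may not be worth its mutation-budget cost.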
**Can this handle binary and text formats?**
Yes. Use type-aware tokens (byte sequences, encodings, numeric ranges) and organize entries by format role.
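Binary tokens need byte escaping to survive in a text dictionary file. AFL++-style dictionaries represent non-printable bytes as `\xNN` inside quoted values; the helper below applies that convention, using the PNG file signature as an example of a magic-byte entry:

```python
def escape_entry(raw: bytes) -> str:
    """Render a binary token in the \\xNN escape syntax used by AFL++-style
    dictionaries, keeping printable ASCII readable."""
    out = []
    for b in raw:
        # Keep printable ASCII literal, except '"' and '\' which need escaping
        if 32 <= b <= 126 and b not in (0x22, 0x5C):
            out.append(chr(b))
        else:
            out.append(f"\\x{b:02x}")
    return '"' + "".join(out) + '"'

# PNG file signature: a classic magic-byte dictionary candidate
print(escape_entry(b"\x89PNG\r\n\x1a\n"))
```

Organizing such entries by role (magic bytes, length fields, delimiters, keywords) keeps the dictionary auditable as it grows.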