---
name: acm-icpc-problem-setting
description: Use when preparing algorithm competition problems for ACM-ICPC, CCPC, Codeforces, or similar contests, including problem creation, statement writing, test data generation, and contest organization
---
# ACM-ICPC Problem Setting Best Practices
## Overview
Comprehensive guidance for creating high-quality algorithm competition problems, covering the full workflow from idea conception to publication.
## When to Use
- Creating problems for ACM-ICPC, CCPC, Codeforces, or similar contests
- Writing problem statements with LaTeX
- Generating test data using testlib
- Setting up validators and checkers
- Organizing algorithm competitions
## Quick Reference
| Topic | Detailed Rules |
|-------|----------------|
| Problem conception | [rules/problem-conception.md](rules/problem-conception.md) |
| Statement writing | [rules/statement-writing.md](rules/statement-writing.md) |
| Test data generation | [rules/test-data-generation.md](rules/test-data-generation.md) |
| Special Judge | [rules/spj-checker.md](rules/spj-checker.md) |
| Time/memory limits | [rules/limits-subtasks.md](rules/limits-subtasks.md) |
| Contest organization | [rules/contest-organization.md](rules/contest-organization.md) |
## Core Principles
### Problem Quality
- **Original idea** - No duplicates or trivial enhancements
- **Clear statement** - Every concept defined, no ambiguity
- **Complete constraints** - All variable ranges specified
- **Strong samples** - Catch wrong interpretations
### Data Quality
- **Edge cases** - Min/max values, boundary conditions
- **Diverse constructions** - Random + handcrafted
- **Format compliance** - Linux line endings, no trailing whitespace
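Format compliance can be machine-checked before upload. This standalone sketch flags carriage returns and trailing whitespace; in a real project you would enforce input format with a testlib validator (`registerValidation`), so `lint_test_data` here is an illustrative helper, not a testlib API:

```cpp
#include <sstream>
#include <string>

// Minimal format lint, sketching what a validator should enforce:
// reject carriage returns (Windows line endings) and trailing blanks.
// Returns an empty string when the data is clean, else a diagnostic.
std::string lint_test_data(const std::string& data) {
    if (data.find('\r') != std::string::npos)
        return "CR found: use Linux line endings";
    std::istringstream in(data);
    std::string line;
    int lineno = 0;
    while (std::getline(in, line)) {
        ++lineno;
        if (!line.empty() && (line.back() == ' ' || line.back() == '\t'))
            return "trailing whitespace on line " + std::to_string(lineno);
    }
    return "";
}
```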
### Platform-Specific
- **Polygon** - Integrated workflow for teams
- **Codeforces** - Requires 5-25 rated contests
- **Luogu** - Requires competition awards
## Red Flags - STOP
| Anti-pattern | Fix |
|--------------|-----|
| Undefined terms in statement | Add definitions in problem description |
| Inconsistent terminology | Use same word for same concept |
| Weak samples | Include edge cases and wrong interpretations |
| Incomplete data ranges | Specify ALL variables' ranges |
| Wrong time limit | Set at least 2× the reference solution's runtime |
| Poor subtask design | Use clear structure, avoid percentages |
## Essential Code
### testlib Generator
```cpp
#include "testlib.h"
using namespace std;

int main(int argc, char* argv[]) {
    registerGen(argc, argv, 1);
    int n = opt<int>("n");            // read -n from the command line
    vector<int> a(n);
    for (int i = 0; i < n; i++)
        a[i] = rnd.next(1, 1000000);  // reproducible, seed-based randomness
    println(n);
    println(a);                       // space-separated values on one line
}
```
### Basic Checker
```cpp
#include "testlib.h"

int main(int argc, char* argv[]) {
    registerTestlibCmd(argc, argv);
    int jans = ans.readInt();   // judge's answer file
    int pans = ouf.readInt();   // participant's output
    if (pans == jans) quitf(_ok, "%d", pans);
    else quitf(_wa, "expected %d, found %d", jans, pans);
}
```
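When multiple outputs are valid, a Special Judge must verify the contestant's answer against the input rather than compare it to a single judge answer. A standalone sketch of that logic for a hypothetical problem "output any pair of indices `i < j` with `a[i] + a[j] == target`" (in a real checker this would read through testlib's `inf` and `ouf` streams):

```cpp
#include <vector>

// Special-judge logic sketch: accept any pair that satisfies the
// problem's condition, instead of matching one fixed judge answer.
bool accept_pair(const std::vector<long long>& a, long long target,
                 int i, int j) {
    int n = (int)a.size();
    if (i < 0 || j < 0 || i >= n || j >= n || i >= j)
        return false;                  // malformed or out-of-range pair
    return a[i] + a[j] == target;      // any valid pair is accepted
}
```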
## References
- [OI Wiki - Problem Setting](https://oi-wiki.org/contest/problemsetting/)
- [testlib Documentation](https://github.com/MikeMirzayanov/testlib)
- [Polygon Platform](https://polygon.codeforces.com/)
## FAQ
**How do I choose time limits reliably?**

Benchmark a correct reference implementation on the judge machine or a comparable environment, then set the limit to roughly twice the observed run time to allow for language and I/O variance.
**What makes a sample set strong?**

Include normal cases, boundary values, and examples that expose common misinterpretations. Add at least one sample that would fail naive or greedy approaches.