
impl-standards skill

/plugins/itp/skills/impl-standards

This skill enforces implementation standards for error handling, constants management, and progress logging to improve code quality and maintainability.

npx playbooks add skill terrylica/cc-skills --skill impl-standards


SKILL.md
---
name: impl-standards
description: Core engineering standards for implementation. TRIGGERS - error handling, constants management, progress logging, code quality.
---

# Implementation Standards

Apply these standards during implementation to ensure consistent, maintainable code.

## When to Use This Skill

- During `/itp:go` Phase 1
- When writing new production code
- User mentions "error handling", "constants", "magic numbers", "progress logging"
- Before release to verify code quality

## Quick Reference

| Standard         | Rule                                                                     |
| ---------------- | ------------------------------------------------------------------------ |
| **Errors**       | Raise + propagate; no fallback/default/retry/silent                      |
| **Constants**    | Abstract magic numbers into semantic, version-agnostic dynamic constants |
| **Dependencies** | Prefer OSS libs over custom code; no backward-compatibility needed       |
| **Progress**     | Operations >1min: log status every 15-60s                                |
| **Logs**         | `logs/{adr-id}-YYYYMMDD_HHMMSS.log` (nohup)                              |
| **Metadata**     | Optional: `catalog-info.yaml` for service discovery                      |

---

## Error Handling

**Core Rule**: Raise + propagate; no fallback/default/retry/silent

```python
# ✅ Correct - raise with context
import requests

class APIError(Exception):
    """Raised when an upstream API call fails."""

def fetch_data(url: str) -> dict:
    response = requests.get(url)
    if response.status_code != 200:
        raise APIError(f"Failed to fetch {url}: {response.status_code}")
    return response.json()

# ❌ Wrong - silent catch
try:
    result = fetch_data(url)
except Exception:
    pass  # Error hidden; caller cannot tell failure from success
```

See [Error Handling Reference](./references/error-handling.md) for detailed patterns.
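When a caller does need to handle the error, catch the specific exception type, add context, and re-raise. A minimal sketch (the stubbed `fetch_data` and `load_config` names are illustrative, not part of the skill):

```python
class APIError(Exception):
    """Raised when an upstream API call fails."""

def fetch_data(url: str) -> dict:
    # Stub standing in for the HTTP call in the snippet above.
    raise APIError(f"Failed to fetch {url}: 503")

def load_config(url: str) -> dict:
    try:
        return fetch_data(url)
    except APIError as exc:
        # Re-raise with added context; 'from' chains the original traceback.
        raise RuntimeError(f"Config load failed for {url}") from exc
```

The `raise ... from exc` chain preserves the full cause for debugging while still propagating, which satisfies the no-silent rule.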

---

## Constants Management

**Core Rule**: Abstract magic numbers into semantic constants

```python
# ✅ Correct - named constant
DEFAULT_API_TIMEOUT_SECONDS = 30
response = requests.get(url, timeout=DEFAULT_API_TIMEOUT_SECONDS)

# ❌ Wrong - magic number
response = requests.get(url, timeout=30)
```

See [Constants Management Reference](./references/constants-management.md) for patterns.
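To keep a constant "dynamic" as the quick reference suggests, one option is an environment override with the named constant as the default. A sketch, assuming the `API_TIMEOUT_SECONDS` variable name is a project choice rather than an established convention:

```python
import os

# Semantic defaults in one place; names describe meaning, not the raw value.
DEFAULT_API_TIMEOUT_SECONDS = 30
MAX_RETRY_BACKOFF_SECONDS = 120

# Environment override keeps the value tunable without a code change.
API_TIMEOUT_SECONDS = float(
    os.environ.get("API_TIMEOUT_SECONDS", DEFAULT_API_TIMEOUT_SECONDS)
)
```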

---

## Progress Logging

For operations taking more than 1 minute, log status every 15-60 seconds:

```python
import logging
from datetime import datetime

logger = logging.getLogger(__name__)

def long_operation(items: list) -> None:
    total = len(items)
    last_log = datetime.now()

    for i, item in enumerate(items):
        process(item)

        # Log every 30 seconds
        if (datetime.now() - last_log).total_seconds() >= 30:
            logger.info(f"Progress: {i+1}/{total} ({100*(i+1)//total}%)")
            last_log = datetime.now()

    logger.info(f"Completed: {total} items processed")
```

---

## Log File Convention

Save logs to: `logs/{adr-id}-YYYYMMDD_HHMMSS.log`

```bash
# Running with nohup
nohup python script.py > logs/2025-12-01-my-feature-20251201_143022.log 2>&1 &
```
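The naming convention can also be applied from Python when a script creates its own log file. A small helper sketch (`build_log_path` is a hypothetical name, not part of the skill):

```python
from datetime import datetime
from pathlib import Path

def build_log_path(adr_id: str, logs_dir: str = "logs") -> Path:
    """Build a path following logs/{adr-id}-YYYYMMDD_HHMMSS.log."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return Path(logs_dir) / f"{adr_id}-{stamp}.log"
```

For example, `build_log_path("2025-12-01-my-feature")` yields a path like the one in the nohup command above, with the timestamp taken at call time.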

---

## Data Processing

**Core Rule**: Prefer Polars over Pandas for dataframe operations.

| Scenario           | Recommendation                     |
| ------------------ | ---------------------------------- |
| New data pipelines | Use Polars (30x faster, lazy eval) |
| ML feature eng     | Polars → Arrow → NumPy (zero-copy) |
| MLflow logging     | Pandas OK (add exception comment)  |
| Legacy code fixes  | Keep existing library              |

**Exception mechanism**: Add at file top:

```python
# polars-exception: MLflow requires Pandas DataFrames
import pandas as pd
```

See [ml-data-pipeline-architecture](/plugins/devops-tools/skills/ml-data-pipeline-architecture/SKILL.md) for decision tree and benchmarks.

---

## Related Skills

| Skill                                                                                                  | Purpose                                   |
| ------------------------------------------------------------------------------------------------------ | ----------------------------------------- |
| [`adr-code-traceability`](../adr-code-traceability/SKILL.md)                                           | Add ADR references to code                |
| [`code-hardcode-audit`](../code-hardcode-audit/SKILL.md)                                               | Detect hardcoded values before release    |
| [`semantic-release`](../semantic-release/SKILL.md)                                                     | Version management and release automation |
| [`ml-data-pipeline-architecture`](/plugins/devops-tools/skills/ml-data-pipeline-architecture/SKILL.md) | Polars/Arrow efficiency patterns          |

---

## Reference Documentation

- [Error Handling](./references/error-handling.md) - Raise + propagate patterns
- [Constants Management](./references/constants-management.md) - Magic number abstraction

---

## Troubleshooting

| Issue                  | Cause                | Solution                                   |
| ---------------------- | -------------------- | ------------------------------------------ |
| Silent failures        | Bare except blocks   | Catch specific exceptions, log or re-raise |
| Magic numbers in code  | Missing constants    | Extract to named constants with context    |
| Error swallowed        | except: pass pattern | Log error before continuing or re-raise    |
| Type errors at runtime | Missing validation   | Add input validation at boundaries         |
| Config not loading     | Hardcoded paths      | Use environment variables with defaults    |
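For the last two rows, a single boundary function can resolve configuration from the environment with a default and validate the input before use. A sketch; `CONFIG_PATH` and the default location are illustrative names, not a project convention:

```python
import os
from pathlib import Path

def load_settings_path() -> Path:
    """Resolve the config path from the environment and validate at the boundary."""
    raw = os.environ.get("CONFIG_PATH", "config/settings.yaml")
    path = Path(raw)
    if path.suffix not in {".yaml", ".yml"}:
        # Fail fast with context rather than letting a bad path surface later.
        raise ValueError(f"CONFIG_PATH must be a YAML file, got: {raw}")
    return path
```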

## Overview

This skill codifies core implementation standards to keep production services consistent, observable, and maintainable. It focuses on error handling, constants management, progress logging, data-processing choices, and log conventions used across releases. Use it to enforce predictable behavior and simplify code reviews and operational troubleshooting.

## How this skill works

The skill prescribes concrete rules: always raise and propagate errors rather than silently swallowing them; replace magic numbers with named, version-agnostic constants; log long-running operations at regular intervals; and follow a standardized log file naming convention. It also recommends preferred libraries and exceptions for data pipelines and gives simple patterns developers can copy into production code. Apply the rules during implementation and before release to verify compliance.

## When to use it

- When starting Phase 1 of a feature (`/itp:go`) or writing new production code
- If a PR touches error handling, constants, or progress logging
- Before release to audit code quality and operational readiness
- When adding or modifying data pipelines or dependencies
- When a service needs standardized logs for debugging or automation

## Best practices

- Raise errors with contextual messages and let callers decide handling; avoid silent catches or default-swallowing
- Extract magic numbers into well-named constants with clear semantics and version-agnostic names
- For operations longer than 1 minute, emit progress logs every 15-60 seconds to surface liveness and percent complete
- Prefer robust OSS libraries for common tasks; add documented exceptions where legacy or ecosystem constraints require alternatives
- Write logs to `logs/{adr-id}-YYYYMMDD_HHMMSS.log` when running background jobs (nohup style) for traceability

## Example use cases

- A data ingestion job that must report progress and fail fast on upstream API errors
- Refactoring a module to remove magic numbers and centralize timeouts and thresholds
- Choosing Polars for a new ETL pipeline while documenting any Pandas exceptions required by downstream tooling
- Auditing a release branch to ensure no silent error handling or hidden retries remain
- Standardizing log filenames for automated collection and long-term retention

## FAQ

**What if a transient error should be retried automatically?**

Implement retries in a clearly defined retry layer with explicit backoff and typed retry conditions; do not hide errors at lower layers. Raise with context and let the retry policy decide.
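Such a retry layer can be sketched as a small wrapper; the `with_retry` name and its defaults are illustrative, not prescribed by the skill:

```python
import time

def with_retry(fn, *, attempts=3, base_delay=0.01, retry_on=(ConnectionError,)):
    """Retry only the listed transient errors, with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == attempts:
                raise  # policy exhausted: propagate, never swallow
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1x, 2x, 4x, ...
```

Lower layers still raise with context; only this one layer decides whether an error class is transient enough to retry, so nothing is hidden.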

**When is it acceptable to keep legacy libraries like Pandas?**

Keep existing libraries for legacy code unless you are building a new pipeline. If an ecosystem tool requires Pandas, add a file-level exception comment and document why.