
qe-quality-assessment skill

/v3/assets/skills/qe-quality-assessment

This skill scans code quality, enforces quality gates, and delivers actionable reports that support reliable deployments and continuous improvement.

npx playbooks add skill proffesor-for-testing/agentic-qe --skill qe-quality-assessment

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
4.5 KB
---
name: "QE Quality Assessment"
description: "Comprehensive quality gates, metrics analysis, and deployment readiness assessment for continuous quality assurance."
---

# QE Quality Assessment

## Purpose

Guide the use of v3's quality assessment capabilities including automated quality gates, metrics aggregation, trend analysis, and deployment readiness evaluation.

## Activation

- When evaluating code quality
- When setting up quality gates
- When assessing deployment readiness
- When tracking quality metrics
- When generating quality reports

## Quick Start

```bash
# Run quality assessment
aqe quality assess --scope src/ --gates all

# Check deployment readiness
aqe quality deploy-ready --environment production

# Generate quality report
aqe quality report --format dashboard --period 30d

# Compare quality between releases
aqe quality compare --from v1.0 --to v2.0
```

## Agent Workflow

```typescript
// Comprehensive quality assessment
Task("Assess code quality", `
  Evaluate quality for src/:
  - Code complexity (cyclomatic, cognitive)
  - Test coverage and mutation score
  - Security vulnerabilities
  - Code smells and technical debt
  - Documentation coverage
  Generate quality score and recommendations.
`, "qe-quality-analyzer")

// Deployment readiness check
Task("Check deployment readiness", `
  Evaluate if release v2.1.0 is ready for production:
  - All tests passing
  - Coverage thresholds met
  - No critical vulnerabilities
  - Performance benchmarks passed
  - Documentation updated
  Provide go/no-go recommendation.
`, "qe-deployment-advisor")
```

## Quality Dimensions

### 1. Code Quality Metrics

```typescript
await qualityAnalyzer.assessCode({
  scope: 'src/**/*.ts',
  metrics: {
    complexity: {
      cyclomatic: { max: 15, warn: 10 },
      cognitive: { max: 20, warn: 15 }
    },
    maintainability: {
      index: { min: 65 },
      duplication: { max: 3 }  // percent
    },
    documentation: {
      publicAPIs: { min: 80 },
      complexity: { min: 70 }
    }
  }
});
```
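
The `{ warn, max }` pairs above suggest a three-level outcome per metric. As a rough sketch (these semantics are inferred from the config shape, not documented behavior), a measured value could be classified like this:

```typescript
// Sketch: classify a measured metric value against a { warn, max } threshold pair.
// The semantics here are an assumption based on the config above, not a documented API.
type Verdict = 'ok' | 'warn' | 'fail';

function classify(value: number, threshold: { max: number; warn?: number }): Verdict {
  if (value > threshold.max) return 'fail';
  if (threshold.warn !== undefined && value > threshold.warn) return 'warn';
  return 'ok';
}

// e.g. classify(12, { max: 15, warn: 10 }) === 'warn'
```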

### 2. Quality Gates

```typescript
await qualityGate.evaluate({
  gates: {
    coverage: { min: 80, blocking: true },
    complexity: { max: 15, blocking: false },
    vulnerabilities: { critical: 0, high: 0, blocking: true },
    duplications: { max: 3, blocking: false },
    techDebt: { maxRatio: 5, blocking: false }
  },
  action: {
    onPass: 'proceed',
    onFail: 'block-merge',
    onWarn: 'notify'
  }
});
```
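
The return shape of `qualityGate.evaluate` is not shown here; assuming it resolves to per-gate outcomes, a caller might map them onto the configured actions roughly like this (a sketch, not the library's documented API):

```typescript
// Sketch only: assumes evaluate() resolves to a list of per-gate outcomes.
interface GateOutcome {
  name: string;
  status: 'pass' | 'warn' | 'fail';
  blocking: boolean;
}

// Mirrors the onPass / onFail / onWarn actions configured above.
function decideMerge(outcomes: GateOutcome[]): 'proceed' | 'block-merge' | 'notify' {
  if (outcomes.some((gate) => gate.status === 'fail' && gate.blocking)) return 'block-merge';
  if (outcomes.some((gate) => gate.status !== 'pass')) return 'notify';
  return 'proceed';
}
```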

### 3. Deployment Readiness

```typescript
await deploymentAdvisor.assess({
  release: 'v2.1.0',
  criteria: {
    testing: {
      unitTests: 'all-pass',
      integrationTests: 'all-pass',
      e2eTests: 'critical-pass',
      performanceTests: 'baseline-met'
    },
    quality: {
      coverage: 80,
      noNewVulnerabilities: true,
      noRegressions: true
    },
    documentation: {
      changelog: true,
      apiDocs: true,
      releaseNotes: true
    }
  }
});
```
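
Likewise, assuming `deploymentAdvisor.assess` resolves to per-criterion results, a go/no-go recommendation could be derived along these lines (the result shape is illustrative, not the documented return type):

```typescript
// Sketch: turning an assumed readiness result into a go/no-go recommendation.
interface ReadinessResult {
  criteria: { name: string; met: boolean; required: boolean }[];
}

function recommend(result: ReadinessResult): { decision: 'go' | 'no-go'; blockers: string[] } {
  const blockers = result.criteria
    .filter((criterion) => criterion.required && !criterion.met)
    .map((criterion) => criterion.name);
  return { decision: blockers.length === 0 ? 'go' : 'no-go', blockers };
}
```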

## Quality Score Calculation

```yaml
quality_score:
  components:
    test_coverage:
      weight: 0.25
      metrics: [statement, branch, function]

    code_quality:
      weight: 0.20
      metrics: [complexity, maintainability, duplication]

    security:
      weight: 0.25
      metrics: [vulnerabilities, dependencies]

    reliability:
      weight: 0.20
      metrics: [bug_density, flaky_tests, error_rate]

    documentation:
      weight: 0.10
      metrics: [api_coverage, readme, changelog]

  scoring:
    A: 90-100
    B: 80-89
    C: 70-79
    D: 60-69
    F: 0-59
```
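
A minimal sketch of the weighted aggregation described above, assuming each component score is already normalized to a 0-100 scale:

```typescript
// Sketch: weighted quality score and grade, mirroring the weights and bands above.
type Component = 'test_coverage' | 'code_quality' | 'security' | 'reliability' | 'documentation';

const WEIGHTS: Record<Component, number> = {
  test_coverage: 0.25,
  code_quality: 0.20,
  security: 0.25,
  reliability: 0.20,
  documentation: 0.10,
};

function qualityScore(scores: Record<Component, number>): number {
  return Object.entries(WEIGHTS).reduce(
    (total, [component, weight]) => total + weight * scores[component as Component],
    0,
  );
}

function grade(score: number): 'A' | 'B' | 'C' | 'D' | 'F' {
  if (score >= 90) return 'A';
  if (score >= 80) return 'B';
  if (score >= 70) return 'C';
  if (score >= 60) return 'D';
  return 'F';
}

const example = qualityScore({
  test_coverage: 85, code_quality: 75, security: 90, reliability: 80, documentation: 70,
});
console.log(example.toFixed(2), grade(example)); // ≈ 81.75, grade B
```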

## Quality Dashboard

```typescript
interface QualityDashboard {
  overallScore: number;  // 0-100
  grade: 'A' | 'B' | 'C' | 'D' | 'F';
  dimensions: {
    name: string;
    score: number;
    trend: 'improving' | 'stable' | 'declining';
    issues: Issue[];
  }[];
  gates: {
    name: string;
    status: 'pass' | 'fail' | 'warn';
    value: number;
    threshold: number;
  }[];
  trends: {
    period: string;
    scores: number[];
    alerts: Alert[];
  };
  recommendations: Recommendation[];
}
```
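
As one possible use of this shape, a helper could surface failing gates and declining dimensions for a PR comment or report (illustrative only; `Issue`, `Alert`, and `Recommendation` are assumed to be defined alongside the interface):

```typescript
// Illustrative consumer of the QualityDashboard shape above:
// summarize failing gates and declining dimensions in plain text.
function summarize(dashboard: QualityDashboard): string {
  const failingGates = dashboard.gates
    .filter((gate) => gate.status === 'fail')
    .map((gate) => `- ${gate.name}: ${gate.value} (threshold ${gate.threshold})`);

  const declining = dashboard.dimensions
    .filter((dimension) => dimension.trend === 'declining')
    .map((dimension) => `- ${dimension.name}: score ${dimension.score}`);

  return [
    `Overall: ${dashboard.overallScore}/100 (grade ${dashboard.grade})`,
    failingGates.length ? `Failing gates:\n${failingGates.join('\n')}` : 'All gates passing.',
    declining.length ? `Declining dimensions:\n${declining.join('\n')}` : '',
  ].filter(Boolean).join('\n\n');
}
```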

## CI/CD Integration

```yaml
# Quality gate in pipeline
quality_check:
  stage: verify
  script:
    - aqe quality assess --gates all --output report.json
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  artifacts:
    reports:
      quality: report.json
  allow_failure:
    exit_codes:
      - 1  # Warnings only
```

## Coordination

**Primary Agents**: qe-quality-analyzer, qe-deployment-advisor, qe-metrics-collector
**Coordinator**: qe-quality-coordinator
**Related Skills**: qe-coverage-analysis, qe-security-compliance

Overview

This skill provides comprehensive quality gates, metrics aggregation, trend analysis, and deployment readiness evaluation for continuous quality assurance. It combines automated checks, weighted quality scoring, and actionable recommendations to inform go/no-go decisions across the SDLC. Use it to enforce standards, monitor trends, and produce CI/CD-friendly reports.

How this skill works

The skill inspects codebase metrics (complexity, duplication, maintainability), test metrics (coverage, mutation score, flaky tests), and security findings to compute a weighted quality score. It evaluates configurable quality gates and emits pass/warn/fail outcomes, then runs deployment readiness checks against testing, performance, and documentation criteria. Results are aggregated into a dashboard with trends, alerts, and prioritized recommendations.
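
The trend classification mentioned above can be as simple as comparing the first and last scores in the reporting window; a rough sketch (the tolerance is an arbitrary illustrative choice):

```typescript
// Sketch: classify a quality-score trend from recent periodic scores.
function classifyTrend(scores: number[], tolerance = 1): 'improving' | 'stable' | 'declining' {
  if (scores.length < 2) return 'stable';
  const delta = scores[scores.length - 1] - scores[0];
  if (delta > tolerance) return 'improving';
  if (delta < -tolerance) return 'declining';
  return 'stable';
}
```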

When to use it

  • Before merging a release to enforce blocking quality gates
  • During CI/CD pipelines to generate automated quality reports
  • When evaluating deployment readiness for production releases
  • To track quality trends across releases and spot regressions
  • When setting or tuning thresholds for code complexity and coverage

Best practices

  • Define blocking gates for coverage and critical vulnerabilities to prevent risky merges
  • Weight quality score components to match org priorities (security, coverage, reliability)
  • Run assessments on pull requests and full-scope scans at scheduled intervals
  • Treat warnings as signals for continuous improvement, not immediate failures
  • Integrate reports as CI artifacts and notify relevant stakeholders on regressions

Example use cases

  • Block a merge when coverage drops below 80% or a critical vulnerability appears
  • Generate a 30-day dashboard showing improving or declining maintainability trends
  • Compare quality scores between two releases to identify regression sources
  • Assess v2.1.0 deployment readiness: tests passing, no critical vulnerabilities, docs updated
  • Enforce complexity limits on critical modules and notify developers when thresholds are exceeded

FAQ

Can I customize which gates are blocking versus advisory?

Yes. Gates are configurable per project; you can mark coverage or vulnerability gates as blocking while keeping others advisory.

How is the overall quality score calculated?

The score is a weighted aggregation across components like coverage, code quality, security, reliability, and documentation, mapped to grade bands (A–F).
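
For example, with the default weights above and illustrative component scores of 85 (coverage), 75 (code quality), 90 (security), 80 (reliability), and 70 (documentation), the score is 0.25 × 85 + 0.20 × 75 + 0.25 × 90 + 0.20 × 80 + 0.10 × 70 = 81.75, which maps to a B.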