
multi-model-meta-analysis skill


This skill synthesizes outputs from multiple AI models, verifies claims against the codebase, and produces a reliable, evidence-supported assessment.

This is most likely a fork of the multi-model-meta-analysis skill from petekp.

npx playbooks add skill petekp/agent-skills --skill multi-model-meta-analysis

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (4.6 KB)
---
name: multi-model-meta-analysis
description: |
  Synthesize outputs from multiple AI models into a comprehensive, verified assessment. Use when: (1) User pastes feedback/analysis from multiple LLMs (Claude, GPT, Gemini, etc.) about code or a project, (2) User wants to consolidate model outputs into a single reliable document, (3) User needs conflicting model claims resolved against actual source code. This skill verifies model claims against the codebase, resolves contradictions with evidence, and produces a more reliable assessment than any single model.
---

# Multi-Model Synthesis

Combine outputs from multiple AI models into a verified, comprehensive assessment by cross-referencing claims against the actual codebase.

## Core Principle

Models hallucinate and contradict each other. The source code is the source of truth. Every significant claim must be verified before inclusion in the final assessment.

## Process

### 1. Extract Claims

Parse each model's output and extract discrete claims:
- Factual assertions about the code ("function X does Y", "there's no error handling in Z")
- Recommendations ("should add validation", "refactor this pattern")
- Identified issues ("bug in line N", "security vulnerability")

Tag each claim with its source model.
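
As a sketch only (the skill does not prescribe a schema), extracted claims can be kept as small tagged records; every field name below is illustrative:

```typescript
// Sketch of a per-claim record; field names are illustrative, not prescribed.
type ClaimKind = "fact" | "recommendation" | "issue";

interface Claim {
  id: string;          // stable id, e.g. "claim-001"
  kind: ClaimKind;     // factual assertion, recommendation, or reported issue
  text: string;        // the claim as the model stated it
  sourceModel: string; // e.g. "Claude", "GPT-4", "Gemini"
}

// Example: two claims extracted from different model outputs.
const claims: Claim[] = [
  { id: "claim-001", kind: "issue", text: "The auth middleware doesn't check token expiry", sourceModel: "GPT-4" },
  { id: "claim-002", kind: "recommendation", text: "Add input validation to the signup handler", sourceModel: "Claude" },
];
```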

### 2. Deduplicate

Group semantically equivalent claims:
- "Lacks input validation" = "No sanitization" = "User input not checked"
- "Should use async/await" = "Convert to promises" = "Make asynchronous"

Create canonical phrasing. Track which models mentioned each.
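
A minimal grouping sketch, reusing the hypothetical `Claim` record above. The equivalence judgment itself is made by the agent; the `areEquivalent` callback stands in for it:

```typescript
// Sketch: fold semantically equivalent claims under one canonical phrasing.
// The equivalence test is supplied by the caller (the agent's judgment or an
// embedding comparison); nothing here decides similarity on its own.
interface ClaimGroup {
  canonical: string; // agreed phrasing, e.g. "User input is not validated"
  members: Claim[];  // original claims folded into this group
  models: string[];  // every model that raised some version of the claim
}

function groupClaims(claims: Claim[], areEquivalent: (a: string, b: string) => boolean): ClaimGroup[] {
  const groups: ClaimGroup[] = [];
  for (const claim of claims) {
    const match = groups.find((g) => areEquivalent(g.canonical, claim.text));
    if (match) {
      match.members.push(claim);
      if (!match.models.includes(claim.sourceModel)) match.models.push(claim.sourceModel);
    } else {
      groups.push({ canonical: claim.text, members: [claim], models: [claim.sourceModel] });
    }
  }
  return groups;
}
```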

### 3. Verify Against Source

For each factual claim or identified issue:

```
CLAIM: "The auth middleware doesn't check token expiry"
VERIFY: Read the auth middleware file
FINDING: [Confirmed | Refuted | Partially true | Cannot verify]
EVIDENCE: [Quote relevant code or explain why claim is wrong]
```

Use Grep, Glob, and Read tools to locate and examine relevant code. Do not trust model claims without verification.
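
One possible way to record the outcome, assuming a Node environment and a hypothetical file path; in practice the agent's Read and Grep tools locate and quote the evidence:

```typescript
// Sketch: capture the result of checking one claim against the source, with
// exact lines quoted as evidence. Node's fs is used here only for illustration.
import { readFileSync } from "node:fs";

type Verdict = "confirmed" | "refuted" | "partially-true" | "cannot-verify";

interface Verification {
  claimId: string;
  verdict: Verdict;
  evidence: string; // quoted code with file and line numbers
}

// Quote a line range as "file:line" evidence for a verification record.
function quoteLines(path: string, start: number, end: number): string {
  const lines = readFileSync(path, "utf8").split("\n").slice(start - 1, end);
  return lines.map((line, i) => `${path}:${start + i} ${line}`).join("\n");
}

// Hypothetical usage: the file path and line range are placeholders.
const verification: Verification = {
  claimId: "claim-001",
  verdict: "confirmed",
  evidence: quoteLines("src/middleware/auth.js", 12, 20),
};
```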

### 4. Resolve Conflicts

When models contradict each other:

1. Identify the specific disagreement
2. Examine the actual code
3. Determine which model (if any) is correct
4. Document the resolution with evidence

```
CONFLICT: Model A says "uses SHA-256", Model B says "uses MD5"
INVESTIGATION: Read crypto.js lines 45-60
RESOLUTION: Model B is correct - line 52 shows MD5 usage
EVIDENCE: `const hash = crypto.createHash('md5')`
```
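
If resolutions are tracked as data, a record mirroring the template might look like the sketch below (field names are illustrative; the `crypto.js` example reuses the conflict above):

```typescript
// Sketch of a conflict-resolution record; fields mirror the template above.
interface ConflictResolution {
  topic: string;                     // what the models disagree about
  positions: Record<string, string>; // model name -> claimed position
  verdict: string;                   // which position (if any) the code supports
  evidence: string;                  // file:line reference or quoted code
}

const hashConflict: ConflictResolution = {
  topic: "Hash algorithm used in crypto.js",
  positions: { "Model A": "uses SHA-256", "Model B": "uses MD5" },
  verdict: "Model B is correct",
  evidence: "crypto.js:52 const hash = crypto.createHash('md5')",
};
```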

### 5. Synthesize Assessment

Produce a final document that:
- States verified facts (not model opinions)
- Cites evidence for significant claims
- Notes where verification wasn't possible
- Preserves valuable insights that don't require verification (e.g., design suggestions)

## Output Format

```markdown
# Synthesized Assessment: [Topic]

## Summary
[2-3 sentences describing the verified findings]

## Verified Findings

### Confirmed Issues
| Issue | Severity | Evidence | Models |
|-------|----------|----------|--------|
| [Issue] | High/Med/Low | [file:line or quote] | Claude, GPT |

### Refuted Claims
| Claim | Source | Reality |
|-------|--------|---------|
| [What model said] | GPT-4 | [What code actually shows] |

### Unverifiable Claims
| Claim | Source | Why Unverifiable |
|-------|--------|------------------|
| [Claim] | Claude | [Requires runtime testing / external system / etc.] |

## Consensus Recommendations
[Items where 2+ models agree AND verification supports the suggestion]

## Unique Insights Worth Considering
[Valuable suggestions from single models that weren't contradicted]

## Conflicts Resolved
| Topic | Model A | Model B | Verdict | Evidence |
|-------|---------|---------|---------|----------|
| [Topic] | [Position] | [Position] | [Which is correct] | [Code reference] |

## Action Items

### Critical (Verified, High Impact)
- [ ] [Item] — Evidence: [file:line]

### Important (Verified, Medium Impact)
- [ ] [Item] — Evidence: [file:line]

### Suggested (Unverified but Reasonable)
- [ ] [Item] — Source: [Models]
```
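
As an illustration of how part of this format could be assembled mechanically, the sketch below renders the Confirmed Issues table from structured records. It is not part of the skill, and severity is passed in because assigning it remains a judgment call.

```typescript
// Sketch: render confirmed issues as rows of the "Confirmed Issues" table.
interface ConfirmedIssue {
  issue: string;
  severity: "High" | "Med" | "Low";
  evidence: string; // file:line or short quote
  models: string[]; // models that raised the issue
}

function renderConfirmedIssues(issues: ConfirmedIssue[]): string {
  const header = "| Issue | Severity | Evidence | Models |\n|-------|----------|----------|--------|";
  const rows = issues.map(
    (i) => `| ${i.issue} | ${i.severity} | ${i.evidence} | ${i.models.join(", ")} |`
  );
  return [header, ...rows].join("\n");
}
```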

## Verification Guidelines

**Always verify:**
- Bug reports and security issues
- Claims about what code does or doesn't do
- Assertions about missing functionality
- Performance or complexity claims

**Trust but note source:**
- Style and readability suggestions
- Architectural recommendations
- Best practice suggestions

**Mark as unverifiable:**
- Runtime behavior claims (without tests)
- Performance benchmarks (without profiling)
- External API behavior
- User experience claims
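
The routing below sketches these guidelines as code. The keyword checks are crude, illustrative stand-ins for the agent's own judgment, not a real classifier:

```typescript
// Sketch: route each claim into a handling bucket per the guidelines above.
type Handling = "verify" | "trust-but-note" | "unverifiable";

function routeClaim(text: string): Handling {
  const t = text.toLowerCase();
  if (/(benchmark|latency|runtime behavior|external api|user experience)/.test(t)) {
    return "unverifiable"; // needs tests, profiling, or systems we cannot inspect
  }
  if (/(bug|vulnerab|missing|doesn't|does not|never|always)/.test(t)) {
    return "verify";       // factual claim about what the code does or lacks
  }
  return "trust-but-note"; // style/architecture suggestions, kept with provenance
}
```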

## Anti-Patterns

- Blindly merging model outputs without checking code
- Treating model consensus as proof (all models can be wrong)
- Omitting refuted claims (documenting what was wrong is valuable in itself)
- Skipping verification because claims "sound right"

Overview

This skill synthesizes outputs from multiple AI models into a single, verified assessment by cross-referencing their claims against the actual codebase. It resolves contradictions, confirms or refutes factual statements with evidence, and produces a reliable consolidated document that is more trustworthy than any individual model output. The result highlights verified findings, unresolved items, and actionable recommendations.

How this skill works

The skill parses each model's output to extract discrete claims, recommendations, and reported issues, tagging them by source. It deduplicates semantically equivalent statements, then verifies factual claims and bug/security assertions by reading relevant files and quoting code as evidence. Conflicts are resolved by inspecting source lines and documenting which model (if any) was correct. Finally, it synthesizes a structured assessment with confirmed findings, refuted claims, unverifiable items, and prioritized action items.

When to use it

  • You have feedback from multiple LLMs about code, architecture, or design and need a single authoritative assessment.
  • Models disagree about behavior, dependencies, or vulnerabilities and you must resolve contradictions against source code.
  • You want verified bug reports and security findings before opening issues or creating a remediation plan.
  • You need a consolidated list of recommendations with evidence and clear prioritization.
  • You want to preserve helpful model suggestions that cannot be fully verified but are worth considering.

Best practices

  • Always attach the model source to each claim so provenance is clear.
  • Prioritize verifying bug/security claims and statements about code behavior.
  • Use simple grep/glob/read operations to quote exact lines as evidence for each verification.
  • Deduplicate similar claims into canonical phrasing and list all supporting models.
  • Mark runtime, performance, and external-API claims as unverifiable without tests or logs.

Example use cases

  • Consolidate code review comments from GPT, Claude, and Gemini into one evidence-backed report.
  • Resolve conflicting vulnerability reports from multiple models by inspecting the implementation files.
  • Produce prioritized action items for a repo after synthesizing varied model suggestions.
  • Generate a short verified summary for stakeholders that cites exact files/lines for major issues.

FAQ

What kinds of claims must be verified?

Any factual assertion about code behavior, missing functionality, bugs, or security issues must be verified against source files.

Which claims can remain as suggestions?

Style, architecture, and readability suggestions can be preserved without direct verification but should be labeled as unverified recommendations.