This skill verifies that a reported vulnerability is fully fixed by reproducing it, applying the fix, and validating that no regressions were introduced.
To add this skill to your agents, run: `npx playbooks add skill multiversx/mx-ai-skills --skill fix_verification`
---
name: fix_verification
description: Verifying that a reported bug is truly fixed.
---
# Fix Verification
This skill helps you rigorously verify that a reported vulnerability has been eliminated without introducing regressions.
## 1. The Verification Loop
1. **Reproduce**: Create a `mandos` scenario that fails (demonstrates the bug); see the test wiring sketch after this list.
2. **Apply Fix**: Modify the Rust code to eliminate the vulnerability.
3. **Verify**: Rerun the failing mandos scenario. It MUST pass now.
4. **Regression Check**: Run ALL other mandos scenarios. They MUST still pass.
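In practice, the reproducing scenario is wired into the crate's Rust test suite so that steps 1 and 3 are a single `cargo test` invocation. The sketch below is a minimal example, assuming the `multiversx-sc-scenario` crate and a hypothetical scenario file `scenarios/exploit_repro.scen.json`; backend choice and contract registration vary by project.

```rust
// Minimal sketch (hypothetical names): wrap the reproducing mandos scenario
// in a Rust test so it runs under `cargo test`.
use multiversx_sc_scenario::*;

fn world() -> ScenarioWorld {
    // Go VM backend; projects using the Rust backend register their
    // contract builders on a ScenarioWorld::new() instance instead.
    ScenarioWorld::vm_go()
}

// Before the fix this test MUST fail (it demonstrates the bug);
// after the fix it MUST pass.
#[test]
fn exploit_repro_scenario() {
    world().run("scenarios/exploit_repro.scen.json");
}
```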
## 2. Common Fix Failures
- **Partial Fix**: Fixing one path but missing a variant (e.g., fixing `deposit` but not `transfer`); see the sketch after this list.
- **Moved Bug**: The fix prevents the exploit but creates a DoS vector (e.g., adding a lock that never unlocks).
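To make the partial-fix failure mode concrete, below is a hypothetical `multiversx-sc` contract sketch (names and the exact import style are assumptions, not taken from this skill) where the guard was added to `deposit` while `transfer` still exposes the same path.

```rust
#![no_std]

multiversx_sc::imports!();

/// Hypothetical vault contract, used only to illustrate a partial fix.
#[multiversx_sc::contract]
pub trait Vault {
    #[init]
    fn init(&self) {}

    #[storage_mapper("paused")]
    fn paused(&self) -> SingleValueMapper<bool>;

    #[payable("EGLD")]
    #[endpoint(deposit)]
    fn deposit(&self) {
        // The fix: reject deposits while the contract is paused.
        require!(!self.paused().get(), "contract is paused");
        // ... credit the caller's balance ...
    }

    #[endpoint(transfer)]
    fn transfer(&self, to: ManagedAddress, amount: BigUint) {
        // The same guard is missing on this variant, so the
        // vulnerability is only partially fixed.
        // require!(!self.paused().get(), "contract is paused");
        let _ = (to, amount);
    }
}
```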
## 3. Deliverable
A "Verification Report" stating:
- Commit ID of the fix.
- Test case used to verify.
- Confirmation of regression suite success.
This skill guides a rigorous process to verify that a reported bug or vulnerability is truly fixed and that no regressions were introduced. It focuses on reproducing the failure, applying the fix, confirming the failing test now passes, and validating the full regression suite. The outcome is a concise Verification Report that documents confidence in the fix.
Start by reproducing the bug with a deterministic failing test scenario (for example, a mandos file that demonstrates the exploit). Apply the code change and rerun that failing scenario; it must pass. Next run the entire regression suite to ensure no other tests are broken. Produce a Verification Report that records the fix commit, the exact test used, and the regression results.
**What belongs in the Verification Report?**
Include the fix commit ID, the exact test scenario file and invocation, environment details, and the regression suite results (pass/fail and logs).
**What if the failing test still fails after the change?**
Revert and re-evaluate the fix approach: confirm the test reproduces the original bug, inspect code paths you may have missed, and add targeted tests for uncovered variants.
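One way to cover the missed variants is a targeted scenario test per entry point that shares the vulnerable path, so a partial fix fails immediately. A minimal sketch, with hypothetical scenario file names:

```rust
// Sketch: one targeted test per variant of the vulnerable path.
use multiversx_sc_scenario::*;

fn world() -> ScenarioWorld {
    ScenarioWorld::vm_go()
}

#[test]
fn exploit_via_deposit() {
    world().run("scenarios/exploit_deposit.scen.json");
}

#[test]
fn exploit_via_transfer() {
    world().run("scenarios/exploit_transfer.scen.json");
}
```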