
# pex-lo-skill-creator skill

`/pex-lo-skill-creator`

This skill helps you architect and optimize a modular PEX LO workflow by integrating tools, validating insights, and iterating toward excellence.

```
npx playbooks add skill zpankz/mcp-skillset --skill pex-lo-skill-creator
```

Review the files below or copy the command above to add this skill to your agents.

Files (1): `SKILL.md` (6.2 KB)
---
name: pex-lo-skill-creator
description: |
  Leverages multiple CLIs, including PEX, pdf-search, hge, researcher, limitless,
  pieces, screenapp, and more, to extract all relevant LOs across both the CICM
  and ANZCA PEX exams; cross-reference them with past SAQ questions; extract
  examiner comments; cross-reference these against recommended textbooks via
  pdf-search; generate a hypergraph using hge; validate insights with researcher;
  evaluate against PKM using limitless and pieces; and gather additional
  polishing context with screenapp and others. It then leverages the PEX and SAQ
  skills, plus the metaskills in .claude, to create a scaffolding for a modular,
  programmatic, state-of-the-art skill that integrates quantitative physiological
  parameters via the qp skill, applies teleologically grounded framing via the
  physiology-style and telos skills, and follows the principles of the code,
  mega, and agency skills. Finally, the result is recursively optimised with the
  abduct, critique, and refactoring agent skills. Ergo, this skill makes SOTA
  PEX LO skills.
version: "1.0.0"
category: First-Principles Biophysical Critical Care Medical Reasoning To Emergent Systems Thinking
tags:
  - primary-exam
  - CICM
  - ANZCA
  - first-principles
  - systems-thinking
  - homoiconic-metagraph
author: mcp-skillset
license: MIT
created: 2026-01-15
last_updated: 2026-01-15
---

# pex-lo-skill-creator

## Overview

This skill provides guidance for building state-of-the-art PEX LO generator skills, applying first-principles biophysical critical care reasoning that scales up to emergent systems thinking, using modern best practices and proven patterns.

## When to Use This Skill

Use this skill when:
- Scaffolding or extending PEX LO skills for CICM or ANZCA primary exam preparation
- Implementing pipeline features such as LO extraction, cross-referencing, validation, or recursive optimisation
- Applying first-principles, systems-level reasoning conventions to new medical-education skills

## Core Principles

### 1. Follow Industry Standards

**Always adhere to established conventions and best practices**

```
# Example: follow PEP 8 naming conventions and structure
# (snake_case functions, UPPER_CASE constants); adapt to your domain and language
DEFAULT_EXAM = "CICM"

def extract_learning_objectives(source_path):
    ...
```

### 2. Prioritize Code Quality

**Write clean, maintainable, and well-documented code**

- Use consistent formatting and style
- Add meaningful comments for complex logic
- Follow SOLID principles where applicable
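
As a minimal sketch (the function name and behaviour are illustrative, not part of this skill's pipeline), a small, single-responsibility unit might look like:

```
def normalise_lo_stem(raw_text):
    """Trim whitespace and capitalise an LO stem.

    Single responsibility: formatting only, no parsing or validation.
    """
    text = raw_text.strip()
    return text[:1].upper() + text[1:]
```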

### 3. Test-Driven Approach

**Write tests to validate functionality**

- Unit tests for individual components
- Integration tests for system interactions
- End-to-end tests for critical workflows
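
A minimal pytest-style sketch of the unit level, assuming a hypothetical `extract_los` function with an injected search client (both names are illustrative):

```
from unittest.mock import Mock

def extract_los(topic, pdf_search):
    # hypothetical unit under test: filters blank results from the search client
    return [lo for lo in pdf_search.query(topic) if lo.strip()]

def test_extract_los_filters_blank_results():
    pdf_search = Mock()  # mock the external dependency
    pdf_search.query.return_value = ["Describe oxygen delivery", "  "]
    assert extract_los("oxygen", pdf_search) == ["Describe oxygen delivery"]
```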

## Best Practices

### Structure and Organization

- Organize code into logical modules and components
- Use clear and descriptive naming conventions
- Keep files focused on single responsibilities
- Limit file size to maintain readability (< 500 lines)

### Error Handling

- Implement comprehensive error handling
- Use specific exception types
- Provide actionable error messages
- Log errors with appropriate context
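
The four points combined in one short sketch (the exception class and record shape are illustrative):

```
import logging

logger = logging.getLogger(__name__)

class LOExtractionError(ValueError):
    """Specific exception type for unparseable LO records."""

def parse_lo(record):
    if "objective" not in record:
        # actionable message, logged with identifying context
        logger.error("LO record missing 'objective': id=%s", record.get("id"))
        raise LOExtractionError(f"record {record.get('id')!r} has no 'objective' field")
    return record["objective"]
```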

### Performance Considerations

- Optimize for readability first, performance second
- Profile before optimizing
- Use appropriate data structures and algorithms
- Consider memory usage for large datasets
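
For example, Python's built-in profiler can locate the real hot spots before any optimization (`build_hypergraph` and `sample_los` are stand-ins for your own workload):

```
import cProfile
import pstats

# Profile a realistic workload first, then inspect the most expensive call paths
cProfile.run("build_hypergraph(sample_los)", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)
```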

### Security

- Validate all inputs
- Sanitize outputs to prevent injection
- Use secure defaults
- Keep dependencies updated
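
A minimal input-validation sketch, assuming source PDFs must live under a known directory (the path is illustrative):

```
from pathlib import Path

ALLOWED_DIR = Path("/data/textbooks").resolve()

def safe_pdf_path(user_supplied):
    # Resolve symlinks and ".." segments, then refuse anything outside the allowed tree
    path = Path(user_supplied).resolve()
    if ALLOWED_DIR not in path.parents or path.suffix.lower() != ".pdf":
        raise ValueError(f"refusing {user_supplied!r}: not an allowed PDF path")
    return path
```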

## Common Patterns

### Pattern 1: Configuration Management

```
# Separate configuration from code; use environment variables for sensitive data
import os

API_TOKEN = os.getenv("API_TOKEN")
# Provide sensible defaults for non-sensitive settings
MAX_RETRIES = int(os.getenv("MAX_RETRIES", "3"))
```

### Pattern 2: Dependency Injection

```
# Inject dependencies rather than hardcoding them:
# reduces coupling and makes the component testable with mocks
class LOExtractor:
    def __init__(self, pdf_search, logger):
        self.pdf_search = pdf_search
        self.logger = logger
```

### Pattern 3: Error Recovery

```
import time

# Graceful degradation: retry with exponential backoff, then fall back
for attempt in range(3):
    try:
        result = risky_operation()
        break
    except TimeoutError:  # substitute your operation's retryable error type
        time.sleep(2 ** attempt)  # 1s, 2s, 4s
else:
    result = fallback_value  # fallback mechanism where appropriate
```

## Anti-Patterns

### ❌ Avoid: Hardcoded Values

**Don't hardcode configuration, credentials, or magic numbers**

```
# BAD: Hardcoded values
API_TOKEN = "hardcoded-value-bad"  # Never do this!
max_retries = 3
```

✅ **Instead: Use configuration management**

```
# GOOD: Configuration-driven
import os

API_TOKEN = os.getenv("API_TOKEN")  # read the secret from the environment
max_retries = config.get("max_retries", 3)  # `config` is your app's settings object
```

### ❌ Avoid: Silent Failures

**Don't catch exceptions without logging or handling**

```
# BAD: Silent failure
try:
    risky_operation()
except Exception:
    pass
```

✅ **Instead: Explicit error handling**

```
# GOOD: Explicit handling
import logging

logger = logging.getLogger(__name__)

try:
    risky_operation()
except SpecificError as e:  # catch the narrowest exception type that applies
    logger.error(f"Operation failed: {e}")
    raise
```

### ❌ Avoid: Premature Optimization

**Don't optimize without measurements**

✅ **Instead: Profile first, then optimize**

- Measure performance with realistic workloads
- Identify actual bottlenecks
- Optimize the critical paths only
- Validate improvements with benchmarks
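
A small `timeit` sketch for the validation step (both parser variants and the sample text are placeholders):

```
import timeit

baseline = timeit.timeit(lambda: parse_saq_v1(sample_text), number=100)
candidate = timeit.timeit(lambda: parse_saq_v2(sample_text), number=100)
print(f"baseline={baseline:.3f}s candidate={candidate:.3f}s")
```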

## Testing Strategy

### Unit Tests

- Test individual functions and classes
- Mock external dependencies
- Cover edge cases and error conditions
- Aim for >80% code coverage

### Integration Tests

- Test component interactions
- Use test databases or services
- Validate data flow across boundaries
- Test error propagation

### Best Practices for Tests

- Make tests independent and repeatable
- Use descriptive test names
- Follow AAA pattern: Arrange, Act, Assert
- Keep tests simple and focused
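
For instance, the AAA pattern applied to the hypothetical `parse_lo` helper sketched earlier:

```
def test_parse_lo_returns_objective_text():
    # Arrange
    record = {"id": "cicm-2021-q4", "objective": "Describe oxygen delivery"}
    # Act
    result = parse_lo(record)
    # Assert
    assert result == "Describe oxygen delivery"
```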

## Debugging Techniques

### Common Issues and Solutions

**Issue**: Unexpected behavior in production

**Solution**:
1. Enable detailed logging
2. Reproduce in staging environment
3. Use debugger to inspect state
4. Add assertions to catch assumptions
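
Steps 1 and 4 in code form (the asserted condition is a stand-in for whatever assumption you suspect is violated):

```
import logging

# Step 1: enable detailed logging around the failure
logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(name)s %(levelname)s %(message)s")

# Step 4: make the suspected assumption explicit so it fails loudly
assert all(lo.strip() for lo in extracted_los), "blank LO slipped through extraction"
```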

**Issue**: Performance degradation

**Solution**:
1. Profile the application
2. Identify bottlenecks with metrics
3. Optimize critical paths
4. Monitor improvements with benchmarks

## Related Skills
- **test-driven-development**: Write tests before implementation
- **systematic-debugging**: Debug issues methodically
- **code-review**: Review code for quality and correctness


## Version History

- **1.0.0** (2026-01-15): Initial version

## Overview

This skill designs state-of-the-art PEX learning-objective (LO) generator workflows for CICM and ANZCA exam preparation. It combines multiple CLIs and metaskills to extract, cross-reference, validate, and optimize exam LOs, examiner commentary, and recommended learning resources. The output is programmatic, modular PEX LO skills that integrate quantitative physiology and teleological framing.

## How this skill works

The skill orchestrates a pipeline that extracts LOs and examiner comments from past PEX and SAQ sources, cross-references them with recommended textbooks via PDF search, and builds a hypergraph representation of concepts using the hge tool. It validates insights with a researcher agent, evaluates knowledge management against PKM tools (limitless, pieces), and polishes contextual framing with screenapp and other utilities. Finally, it applies abductive critique and refactoring loops to recursively optimise the generated LO skills and to integrate quantitative physiology and telos-style framing.
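
As a minimal orchestration sketch (every helper below is a hypothetical Python wrapper around the named tool, not the tool's real interface):

```
MAX_REFINEMENT_PASSES = 5  # bound the recursive optimisation

def build_pex_lo_skill(pex_archive, saq_archive, textbooks):
    los = extract_all_los(pex_archive, saq_archive)       # PEX/SAQ extraction
    comments = extract_examiner_comments(saq_archive)
    mappings = cross_reference(los, comments, textbooks)  # via pdf-search
    graph = build_hypergraph(los, mappings)               # via hge
    insights = validate_insights(graph)                   # via researcher, limitless, pieces
    skill = scaffold_skill(los, graph, insights)          # qp parameters + telos framing
    for _ in range(MAX_REFINEMENT_PASSES):                # abduct/critique/refactor loop
        skill = refactor(critique(skill))
    return skill
```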

## When to use it

- Creating comprehensive PEX LO sets for CICM or ANZCA exam preparation
- Automating cross-referencing between past SAQs, examiner comments, and textbooks
- Building modular, testable LO skills that embed quantitative physiology and teleological framing
- Validating and documenting learning objectives against PKM and research evidence
- Iteratively refining LO skills with automated critique and refactoring agents

## Best practices

- Structure the pipeline as independent, testable modules with clear interfaces
- Use environment-driven configuration and dependency injection for CLI integrations
- Persist intermediate artifacts (LOs, hypergraphs, mappings) for reproducibility and auditability
- Write unit and integration tests for each extraction, cross-reference, and validation step
- Log actionable errors and include contextual metadata for traceability
- Run recursive critique and refactoring cycles with defined stop criteria to avoid endless loops, as in the sketch after this list
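
A sketch of bounded refinement with two explicit stop criteria (the scoring metric and helpers are hypothetical):

```
MAX_PASSES = 5
MIN_IMPROVEMENT = 0.01

score = evaluate(skill)                      # hypothetical quality metric in [0, 1]
for _ in range(MAX_PASSES):                  # stop criterion 1: hard pass cap
    candidate = refactor(critique(skill))
    new_score = evaluate(candidate)
    if new_score - score < MIN_IMPROVEMENT:  # stop criterion 2: convergence
        break
    skill, score = candidate, new_score
```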

## Example use cases

- Generate a validated LO map for hypotension management with linked textbook pages and past SAQs
- Produce a hypergraph showing concept dependencies across physiology, pharmacology, and clinical scenarios for a PEX topic
- Create modular LO skill packages that include quantitative parameter checks from the qp skill and telos framing for examiner-style answers
- Automate extraction of examiner comments from past SAQs and map them to targeted remediation resources
- Run an iterative optimisation pass that improves LO clarity and alignment with exam patterns based on researcher validation

## FAQ

**What inputs are required?**

Primary inputs are PEX and SAQ archives, recommended textbook PDFs, and access credentials/config for the integrated CLIs and metaskills.

**How is quality assured?**

Quality comes from multi-stage validation: cross-referencing with source materials, researcher validation, PKM alignment checks, and automated critique/refactoring cycles.