This skill explains ML model predictions using SHAP values, feature importance, and decision paths with clear visualizations.
Add this skill to your agents with:
```bash
npx playbooks add skill dkyazzentwatwa/chatgpt-skills --skill ml-model-explainer
```
---
name: ml-model-explainer
description: Explain ML model predictions using SHAP values, feature importance, and decision paths with visualizations.
---
# ML Model Explainer
Explain machine learning model predictions using SHAP and feature importance.
## Features
- **SHAP Values**: Explain individual predictions
- **Feature Importance**: Global feature rankings
- **Decision Paths**: Trace prediction logic
- **Visualizations**: Waterfall, force plots, summary plots
- **Multiple Models**: Support for tree-based, linear, and neural network models
- **Batch Explanations**: Explain multiple predictions
## Quick Start
```python
from ml_model_explainer import MLModelExplainer
explainer = MLModelExplainer()
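# `model` is your trained estimator; `X_train` is representative training data
# used as background when fitting the explainer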
explainer.load_model(model, X_train)
# Explain single prediction
explanation = explainer.explain(X_test[0])
explainer.plot_waterfall('explanation.png')
# Feature importance
importance = explainer.feature_importance()
```
## CLI Usage
```bash
python ml_model_explainer.py --model model.pkl --data test.csv --output explanations/
```
## Dependencies
- shap>=0.42.0
- scikit-learn>=1.3.0
- pandas>=2.0.0
- numpy>=1.24.0
- matplotlib>=3.7.0
This skill explains machine learning model predictions using SHAP values, feature importance, and decision-path tracing, with built-in visualization support. It supports tree-based, linear, and neural models and produces both single-prediction and batch-level explanations for practical model introspection. The tool is intended for data scientists and engineers who need transparent, reproducible explanations for model outputs.
Load your trained model and representative training data; the skill fits SHAP explainers appropriate to the model type and computes local and global attribution scores. It exposes methods to extract per-instance SHAP values, aggregate feature importance, and trace decision paths for tree models. Visual helpers generate waterfall, force, and summary plots and can save figures for reporting or embedding in dashboards.
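For reference, this workflow maps closely onto the shap library's own API. The sketch below shows a rough equivalent written directly against shap and scikit-learn; it illustrates the idea and is not this skill's exact internal implementation.
```python
# Rough equivalent of the skill's workflow, using the shap library directly
# (illustrative sketch, not the skill's internals).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# shap.Explainer picks an algorithm suited to the model (a tree explainer here);
# the training data provides the background distribution for the baseline.
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test.iloc[:100])

shap.plots.waterfall(shap_values[0])  # local: one prediction as a waterfall plot
shap.plots.bar(shap_values)           # global: mean |SHAP| per feature
```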
**Which model types are supported?**
Tree-based models, linear models, and neural networks are supported via appropriate SHAP explainers; behavior differs slightly by model type, as sketched below.
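A typical mapping from model family to SHAP explainer looks roughly like the following. This is an illustration of common shap usage, not this skill's exact dispatch logic, and the attribute checks are simple heuristics.
```python
# Illustrative mapping from model family to SHAP explainer
# (an assumption about typical usage, not this skill's exact logic).
import shap

def make_explainer(model, background):
    if hasattr(model, "estimators_") or hasattr(model, "tree_"):
        return shap.TreeExplainer(model, background)       # tree ensembles / decision trees
    if hasattr(model, "coef_"):
        return shap.LinearExplainer(model, background)     # linear / logistic models
    # Model-agnostic fallback (e.g. neural networks): slower but general.
    return shap.KernelExplainer(model.predict, background)
```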
**Do I need special preprocessing to get correct explanations?**
Explanations are most meaningful when you apply the same preprocessing used in training and provide matching background data to the explainer; see the sketch below.
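For example, with a fitted scikit-learn Pipeline you can reuse the training-time transforms for both the background data and the rows you explain. This is a minimal sketch using the shap library directly; the dataset and pipeline here are illustrative.
```python
# Minimal sketch: reuse training-time preprocessing for background and test data
# before explaining the final estimator (illustrative, not this skill's API).
import pandas as pd
import shap
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()), ("model", Ridge())]).fit(X_train, y_train)

# Apply the same fitted transforms that were used at training time.
prep = pipe[:-1]
X_train_prep = pd.DataFrame(prep.transform(X_train), columns=X_train.columns)
X_test_prep = pd.DataFrame(prep.transform(X_test), columns=X_test.columns)

# Explain the final estimator against background data that saw the same transforms.
explainer = shap.Explainer(pipe[-1], X_train_prep)
shap_values = explainer(X_test_prep)
```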