Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.
Rating: 8.7
Installs: 0
Category: Machine Learning
Exceptional SHAP skill with comprehensive coverage of model interpretability workflows. The description is crystal clear, with specific trigger phrases that enable easy CLI invocation. Task knowledge is outstanding: it provides concrete code examples, decision trees for explainer selection, multiple workflow patterns, and practical troubleshooting. Structure is excellent, with a well-organized main file and logical separation into reference documents (explainers, plots, workflows, theory). The skill demonstrates high novelty, since SHAP implementation requires significant domain knowledge about choosing the correct explainer, interpreting outputs, creating appropriate visualizations, and understanding the theoretical foundations; doing these tasks from scratch would consume many tokens for a CLI agent. One minor point: while the structure is very good, the main SKILL.md is quite detailed and could be slightly more concise, though this is balanced by the clear section headers and the reference file organization.