
SHAP

Maintainer K-Dense Inc. · Last updated April 1, 2026

SHAP is a unified approach to explaining machine learning model outputs using Shapley values from cooperative game theory. This skill provides comprehensive guidance for computing SHAP values for any model type, creating visualizations to understand feature importance, debugging and validating model behavior, analyzing fairness and bias, and implementing explainable AI in production.


Original source

K-Dense-AI/claude-scientific-skills

https://github.com/K-Dense-AI/claude-scientific-skills/tree/main/scientific-skills/shap

License
MIT license

Skill Snapshot

Key Details From SKILL.md


Key Notes

  • Computing SHAP values for any model type.
  • Creating visualizations to understand feature importance.
  • Debugging and validating model behavior.
  • Analyzing fairness and bias.
  • Implementing explainable AI in production.

Source Doc

Excerpt From SKILL.md

When to Use This Skill

Trigger this skill when users ask about:

  • "Explain which features are most important in my model"
  • "Generate SHAP plots" (waterfall, beeswarm, bar, scatter, force, heatmap, etc.)
  • "Why did my model make this prediction?"
  • "Calculate SHAP values for my model"
  • "Visualize feature importance using SHAP"
  • "Debug my model's behavior" or "validate my model"
  • "Check my model for bias" or "analyze fairness"
  • "Compare feature importance across models"
  • "Implement explainable AI" or "add explanations to my model"
  • "Understand feature interactions"
  • "Create model interpretation dashboard"

Step 1: Select the Right Explainer

Decision Tree:

  1. Tree-based model? (XGBoost, LightGBM, CatBoost, Random Forest, Gradient Boosting)

    • Use shap.TreeExplainer (fast, exact)
  2. Deep neural network? (TensorFlow, PyTorch, Keras, CNNs, RNNs, Transformers)

    • Use shap.DeepExplainer or shap.GradientExplainer
  3. Linear model? (Linear/Logistic Regression, GLMs)

    • Use shap.LinearExplainer (extremely fast)
  4. Any other model? (SVMs, custom functions, black-box models)

    • Use shap.KernelExplainer (model-agnostic but slower)
  5. Unsure?

    • Use shap.Explainer (automatically selects best algorithm)

See references/explainers.md for detailed information on all explainer types.

Create explainer

import shap  # assumes a fitted tree-based `model`
explainer = shap.TreeExplainer(model)

Use cases

  • Explain which features are most important in my model.
  • Generate SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap, etc.).
  • Why did my model make this prediction?
  • Calculate SHAP values for my model.

Not for

  • Do not rely on this catalog entry alone for installation or maintenance details.

Related skills


bio-chipseq-visualization

Visualize ChIP-seq data using deepTools, Gviz, and ChIPseeker. Create heatmaps, profile plots, and genome browser tracks. Visualize signal a…

Source: FreedomIntelligence/OpenClaw-Medical-Skills

bio-consensus-sequences

Generate consensus FASTA sequences by applying VCF variants to a reference using bcftools consensus. Use when creating sample-specific refer…

Source: FreedomIntelligence/OpenClaw-Medical-Skills

bio-copy-number-cnv-visualization

Visualize copy number profiles, segments, and compare across samples. Create publication-quality plots of CNV data from CNVkit, GATK, or oth…

Source: FreedomIntelligence/OpenClaw-Medical-Skills