
SHAP

Maintainer K-Dense Inc. · Last updated April 1, 2026

SHAP is a unified approach to explaining machine learning model outputs using Shapley values from cooperative game theory. This skill provides comprehensive guidance for computing SHAP values for any model type and creating visualizations to understand feature importance.

Claude Code · OpenClaw · NanoClaw · analysis · writing · shap · machine-learning · package · machine learning & deep learning

Original Source

K-Dense-AI/claude-scientific-skills

https://github.com/K-Dense-AI/claude-scientific-skills/tree/main/scientific-skills/shap

Maintainer
K-Dense Inc.
License
MIT license
Last updated
April 1, 2026

Skill Summary

Key information from SKILL.md


Core Description

  • Computing SHAP values for any model type.
  • Creating visualizations to understand feature importance.
  • Debugging and validating model behavior.
  • Analyzing fairness and bias.
  • Implementing explainable AI in production.
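To make the game-theory foundation concrete, here is a minimal, self-contained sketch (not from the skill itself) that brute-forces exact Shapley values for one prediction of a toy linear model; the `shapley_values` helper and the toy `model` are illustrative assumptions. SHAP's explainers compute the same quantities without the exponential enumeration.

```python
# Brute-force Shapley values over all feature coalitions (O(2^n) --
# illustration only). Features outside a coalition are set to a baseline.
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley value of each feature for the prediction f(x)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coalition in combinations(others, size):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in coalition or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in coalition else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear model, so the exact answer is known: phi_i = w_i * (x_i - baseline_i).
def model(v):
    return 3.0 * v[0] + 2.0 * v[1] - 1.0 * v[2]

phi = shapley_values(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
# phi == [3.0, 4.0, -3.0]; the values sum to f(x) - f(baseline) = 4.0
```

The final comment shows the defining "local accuracy" property: the per-feature attributions always sum to the difference between the prediction and the baseline prediction.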

Original Documentation

SKILL.md Excerpt

When to Use This Skill

Trigger this skill when users ask about:

  • "Explain which features are most important in my model"
  • "Generate SHAP plots" (waterfall, beeswarm, bar, scatter, force, heatmap, etc.)
  • "Why did my model make this prediction?"
  • "Calculate SHAP values for my model"
  • "Visualize feature importance using SHAP"
  • "Debug my model's behavior" or "validate my model"
  • "Check my model for bias" or "analyze fairness"
  • "Compare feature importance across models"
  • "Implement explainable AI" or "add explanations to my model"
  • "Understand feature interactions"
  • "Create model interpretation dashboard"

Step 1: Select the Right Explainer

Decision Tree:

  1. Tree-based model? (XGBoost, LightGBM, CatBoost, Random Forest, Gradient Boosting)

    • Use shap.TreeExplainer (fast, exact)
  2. Deep neural network? (TensorFlow, PyTorch, Keras, CNNs, RNNs, Transformers)

    • Use shap.DeepExplainer or shap.GradientExplainer
  3. Linear model? (Linear/Logistic Regression, GLMs)

    • Use shap.LinearExplainer (extremely fast)
  4. Any other model? (SVMs, custom functions, black-box models)

    • Use shap.KernelExplainer (model-agnostic but slower)
  5. Unsure?

    • Use shap.Explainer (automatically selects best algorithm)

See references/explainers.md for detailed information on all explainer types.

Create explainer:

```python
explainer = shap.TreeExplainer(model)
```

When to Use

  • "Explain which features are most important in my model."
  • "Generate SHAP plots" (waterfall, beeswarm, bar, scatter, force, heatmap, etc.).
  • "Why did my model make this prediction?"
  • "Calculate SHAP values for my model."

When Not to Use

  • Do not rely on this catalog entry alone for installation or maintenance details.

Related Skills

  • bio-chipseq-visualization — visualize ChIP-seq data using deepTools, Gviz, and ChIPseeker; creates heatmaps, profile plots, and genome browser… (FreedomIntelligence/OpenClaw-Medical-Skills)
  • bio-consensus-sequences — generate consensus FASTA sequences by applying VCF variants to a reference using bcftools consensus; suited for cr… (FreedomIntelligence/OpenClaw-Medical-Skills)
  • bio-copy-number-cnv-visualization — visualize copy number profiles and segments, and compare across samples; creates publication-quality plo… (FreedomIntelligence/OpenClaw-Medical-Skills)
  • bio-data-visualization-ggplot2-fundamentals — R ggplot2 for publication-quality genomics and omics figures. (FreedomIntelligence/OpenClaw-Medical-Skills)