SHAP
Maintainer: K-Dense Inc. · Last updated: April 1, 2026
SHAP is a unified approach to explaining machine learning model outputs using Shapley values from cooperative game theory. This skill provides comprehensive guidance for computing SHAP values for any model type and creating visualizations to understand feature importance.
Original source
K-Dense-AI/claude-scientific-skills
https://github.com/K-Dense-AI/claude-scientific-skills/tree/main/scientific-skills/shap
- Maintainer: K-Dense Inc.
- License: MIT
- Last updated: April 1, 2026
Skill summary
Key information from SKILL.md
Core capabilities
- Computing SHAP values for any model type.
- Creating visualizations to understand feature importance.
- Debugging and validating model behavior.
- Analyzing fairness and bias.
- Implementing explainable AI in production.
Original documentation
SKILL.md excerpt
When to Use This Skill
Trigger this skill when users ask about:
- "Explain which features are most important in my model"
- "Generate SHAP plots" (waterfall, beeswarm, bar, scatter, force, heatmap, etc.)
- "Why did my model make this prediction?"
- "Calculate SHAP values for my model"
- "Visualize feature importance using SHAP"
- "Debug my model's behavior" or "validate my model"
- "Check my model for bias" or "analyze fairness"
- "Compare feature importance across models"
- "Implement explainable AI" or "add explanations to my model"
- "Understand feature interactions"
- "Create model interpretation dashboard"
Step 1: Select the Right Explainer
Decision Tree:
- Tree-based model? (XGBoost, LightGBM, CatBoost, Random Forest, Gradient Boosting)
  - Use `shap.TreeExplainer` (fast, exact)
- Deep neural network? (TensorFlow, PyTorch, Keras, CNNs, RNNs, Transformers)
  - Use `shap.DeepExplainer` or `shap.GradientExplainer`
- Linear model? (Linear/Logistic Regression, GLMs)
  - Use `shap.LinearExplainer` (extremely fast)
- Any other model? (SVMs, custom functions, black-box models)
  - Use `shap.KernelExplainer` (model-agnostic but slower)
- Unsure?
  - Use `shap.Explainer` (automatically selects the best algorithm)
See references/explainers.md for detailed information on all explainer types.
```python
# Create explainer
explainer = shap.TreeExplainer(model)
```
When to use
- "Explain which features are most important in my model."
- "Generate SHAP plots" (waterfall, beeswarm, bar, scatter, force, heatmap, etc.).
- "Why did my model make this prediction?"
- "Calculate SHAP values for my model."
When not to use
- Do not rely on this catalog entry alone for installation or maintenance details.
Related skills
bio-consensus-sequences: Generate consensus FASTA sequences by applying VCF variants to a reference using bcftools consensus. Suitable for cr…
bio-copy-number-cnv-visualization: Visualize copy number profiles and segments, and compare across samples. Creates publication-quality plo…
bio-data-visualization-ggplot2-fundamentals: R ggplot2 for publication-quality genomics and omics figures.