Use a Custom Explainer¶
First, we'll initialize a client with our server credentials and store it in the variable dai.
import driverlessai
dai = driverlessai.Client(address='http://localhost:12345', username='py', password='py')
Add Explainer to the Server¶
Here we load a custom recipe from our recipe repo (https://github.com/h2oai/driverlessai-recipes) and upload it to the Driverless AI server.
dai.recipes.create("https://github.com/h2oai/driverlessai-recipes/raw/master/explainers/explainers/ale_explainer.py")
Complete 100.00%
It's also possible to use the same dai.recipes.create() function to upload recipes that we have written locally.
dai.recipes.create("ale_explainer.py")
Complete 100.00%
We can see the Accumulated Local Effects explainer is now available on the server.
dai.recipes.explainers.list()
 | Type | Key | Name |
---|---|---|---|
0 | ExplainerRecipe | | Absolute Permutation Feature Importance |
1 | ExplainerRecipe | | Accumulated Local Effects |
2 | ExplainerRecipe | | AutoDoc |
3 | ExplainerRecipe | | Disparate Impact Analysis |
4 | ExplainerRecipe | | Interpretability Data Zip (Surrogate and Shapley Techniques) |
5 | ExplainerRecipe | | NLP Leave-one-covariate-out (LOCO) |
6 | ExplainerRecipe | | NLP Partial Dependence Plot |
7 | ExplainerRecipe | | NLP Tokenizer |
8 | ExplainerRecipe | | NLP Vectorizer + Linear Model (VLM) Text Feature Importance |
9 | ExplainerRecipe | | Original Feature Importance |
10 | ExplainerRecipe | | Partial Dependence Plot |
11 | ExplainerRecipe | | Relative Permutation Feature Importance |
12 | ExplainerRecipe | | Sensitivity Analysis |
13 | ExplainerRecipe | | Shapley Summary Plot for Original Features (Naive Shapley Method) |
14 | ExplainerRecipe | | Shapley Values for Original Features (Kernel SHAP Method) |
15 | ExplainerRecipe | | Shapley Values for Original Features (Naive Method) |
16 | ExplainerRecipe | | Shapley Values for Transformed Features |
17 | ExplainerRecipe | | Surrogate Decision Tree |
18 | ExplainerRecipe | | Surrogate Random Forest Feature Importance |
19 | ExplainerRecipe | | Surrogate Random Forest Leave-one-covariate-out (LOCO) |
20 | ExplainerRecipe | | Surrogate Random Forest Partial Dependence Plot |
21 | ExplainerRecipe | | Transformed Feature Importance |
22 | ExplainerRecipe | | k-LIME/LIME-SUP |
23 | ExplainerRecipe | | Time series explainer |
We'll save the explainer object to a variable for further use.
ale_explainer = dai.recipes.explainers.list()[1]
Explainer Settings¶
Some explainers have additional settings that can be changed. To see the possible settings and their descriptions, we run the following:
ale_explainer.search_settings(show_valid_values=True, show_description=True)
Name | Default Value | Valid Values | Description |
---|---|---|---|
bins | 10 | An integer value greater than or equal to 1. | Maximum number of bins to use if not specified for a feature. |
feature_bins | {} | A dictionary | Mapping of feature name to maximum number of bins. |
We can then change settings by passing kwargs to each explainer's with_settings method.
ale_explainer.with_settings(bins=8)
<class 'ExplainerRecipe'> Accumulated Local Effects
Then, view the non-default settings for the explainer.
ale_explainer.show_settings()
Name | Value |
---|---|
bins | 8 |
Note that each call to with_settings first resets all settings to their default values and then applies only the kwargs passed in that call.
ale_explainer.with_settings()
<class 'ExplainerRecipe'> Accumulated Local Effects
ale_explainer.show_settings()
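The reset behavior demonstrated above can be sketched in plain Python. This is a hypothetical mock of the semantics described, not the real ExplainerRecipe implementation:

```python
# Hypothetical sketch of the with_settings() reset semantics: each call
# starts from the defaults, then overlays only the kwargs it receives.
class SettingsRecipe:
    DEFAULTS = {"bins": 10, "feature_bins": {}}

    def __init__(self):
        self.settings = dict(self.DEFAULTS)

    def with_settings(self, **kwargs):
        # Reset everything to defaults first, then apply the new kwargs.
        self.settings = dict(self.DEFAULTS)
        self.settings.update(kwargs)
        return self

    def show_settings(self):
        # Report only the values that differ from the defaults.
        return {k: v for k, v in self.settings.items()
                if v != self.DEFAULTS[k]}


recipe = SettingsRecipe()
print(recipe.with_settings(bins=8).show_settings())  # {'bins': 8}
print(recipe.with_settings().show_settings())        # {} -- back to defaults
```

The practical consequence: if you want to keep a previously set value while changing another, pass both kwargs in the same with_settings call.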
Create Interpretation¶
For example purposes, we'll just grab the most recent experiment on the server.
experiment = dai.experiments.list()[0]
Then, run an interpretation with just the Accumulated Local Effects explainer, setting the explainer's bins setting to 8.
interpretation = dai.mli.create(
experiment=experiment,
explainers=[ale_explainer.with_settings(bins=8)]
)
Complete 100.00% - Interpretation successfully finished.
To view the explainer graphs, we have to go to the GUI.
interpretation.gui()
Display a list of all the explainers that were executed.
interpretation.explainers
UserWarning: 'Interpretation.explainers' is a beta API that is subject to future changes.
Key | Name | |
---|---|---|
0 | c5fb696c-0b90-11ee-a33d-ac1f6b643c68 | Accumulated Local Effects |