Use a Custom Explainer¶
First, we’ll initialize a client with our server credentials and store it in the variable dai.
[1]:
import driverlessai
dai = driverlessai.Client(address='http://localhost:12345', username='py', password='py')
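If you prefer not to hardcode credentials, a minimal sketch like the following reads them from environment variables instead (the variable names DAI_ADDRESS, DAI_USERNAME, and DAI_PASSWORD are our own convention, not part of the client):
[ ]:
import os

# Sketch: read the connection details from environment variables rather
# than hardcoding them. The variable names below are hypothetical.
dai = driverlessai.Client(
    address=os.environ["DAI_ADDRESS"],
    username=os.environ["DAI_USERNAME"],
    password=os.environ["DAI_PASSWORD"],
)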
Add Explainer to the Server¶
Here we load a custom explainer recipe from the Driverless AI recipes repository (https://github.com/h2oai/driverlessai-recipes) and upload it to the Driverless AI server.
[3]:
dai.recipes.create("https://github.com/h2oai/driverlessai-recipes/raw/master/explainers/explainers/ale_explainer.py")
Complete 100.00%
It’s also possible to use the same dai.recipes.create() function to upload recipes that we have written locally.
[4]:
dai.recipes.create("ale_explainer.py")
Complete 100.00%
We can see that the Accumulated Local Effects explainer is now available on the server.
[6]:
dai.recipes.explainers.list()
[6]:
    | Type            | Key   | Name
----+-----------------+-------+-------------------------------------------------------------------
  0 | ExplainerRecipe |       | Absolute Permutation Feature Importance
  1 | ExplainerRecipe |       | Accumulated Local Effects
  2 | ExplainerRecipe |       | AutoDoc
  3 | ExplainerRecipe |       | Disparate Impact Analysis
  4 | ExplainerRecipe |       | Interpretability Data Zip (Surrogate and Shapley Techniques)
  5 | ExplainerRecipe |       | NLP Leave-one-covariate-out (LOCO)
  6 | ExplainerRecipe |       | NLP Tokenizer
  7 | ExplainerRecipe |       | Original Feature Importance
  8 | ExplainerRecipe |       | Partial Dependence Plot
  9 | ExplainerRecipe |       | Relative Permutation Feature Importance
 10 | ExplainerRecipe |       | Sensitivity Analysis
 11 | ExplainerRecipe |       | Shapley Summary Plot for Original Features (Naive Shapley Method)
 12 | ExplainerRecipe |       | Shapley Transformed Feature Importance
 13 | ExplainerRecipe |       | Shapley Values for Original Features (Kernel SHAP Method)
 14 | ExplainerRecipe |       | Shapley Values for Original Features (Naive Method)
 15 | ExplainerRecipe |       | Surrogate Decision Tree
 16 | ExplainerRecipe |       | Surrogate Random Forest Importance
 17 | ExplainerRecipe |       | Surrogate Random Forest Leave-one-covariate-out (LOCO)
 18 | ExplainerRecipe |       | Surrogate Random Forest Partial Dependence Plot
 19 | ExplainerRecipe |       | Transformed Feature Importance
 20 | ExplainerRecipe |       | k-LIME/LIME-SUP
We’ll save the explainer object to a variable for further use.
[7]:
ale_explainer = dai.recipes.explainers.list()[1]
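Indexing by position depends on the order of the list. As a sketch, assuming each ExplainerRecipe exposes a name attribute matching the Name column above, we could select the recipe by name instead:
[ ]:
# Sketch: pick the explainer by name rather than by list position, so the
# code keeps working if the list order changes. Assumes a `name` attribute.
ale_explainer = next(
    r for r in dai.recipes.explainers.list()
    if r.name == "Accumulated Local Effects"
)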
Explainer Settings¶
Some explainers have additional settings that can be changed. To see the possible settings and their descriptions, we run the following:
[25]:
ale_explainer.search_settings(search_term="", show_description=True)
bins | default_value: 10 | Maximum number of bins to use if not specified for feature.
feature_bins | default_value: {} | Mapping of feature name to maximum number of bins.
We can then change settings by passing keyword arguments to the with_settings() method that each explainer has.
[12]:
ale_explainer.with_settings(bins=8)
[12]:
<class 'ExplainerRecipe'> Accumulated Local Effects
Then, view the non-default settings for the explainer.
[13]:
ale_explainer.settings
[13]:
{'bins': 8}
Note that every call to with_settings() resets all settings to their defaults, apart from those passed as keyword arguments.
[14]:
ale_explainer.with_settings()
[14]:
<class 'ExplainerRecipe'> Accumulated Local Effects
[15]:
ale_explainer.settings
[15]:
{}
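In practice, this means all desired settings must be passed in a single call. For example, to combine both of the settings listed above (using a hypothetical feature name for feature_bins):
[ ]:
# Sketch: with_settings() resets any setting it isn't given, so pass all
# desired settings at once. "PAY_0" is a hypothetical column name.
ale_explainer.with_settings(bins=8, feature_bins={"PAY_0": 5})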
Create Interpretation¶
For demonstration purposes, we’ll just grab the most recent experiment on the server.
[17]:
experiment = dai.experiments.list()[0]
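If we know the key of a particular experiment, a sketch like the following fetches it directly instead (assuming the client exposes an experiments.get() accessor):
[ ]:
# Sketch: retrieve a specific experiment by key rather than by list
# position. The key below is a placeholder, not a real experiment.
experiment = dai.experiments.get("abcdefgh-1234-5678-9abc-def012345678")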
Then, run an interpretation with just the Accumulated Local Effects explainer, setting the explainer’s bins to 8.
[22]:
interpretation = dai.mli.create(
    experiment=experiment,
    explainers=[ale_explainer.with_settings(bins=8)]
)
Complete 100.00% - Interpretation successfully finished.
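Since the create() call blocks until the interpretation finishes, the returned object is ready to use. As a quick sketch (assuming dai.mli.list() mirrors the other list() accessors shown above), we could confirm the interpretation now exists on the server:
[ ]:
# Sketch: list interpretations on the server; assumes dai.mli.list()
# behaves like dai.experiments.list() and dai.recipes.explainers.list().
dai.mli.list()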
To view the explainer graphs, we open the interpretation in the Driverless AI GUI.
[ ]:
interpretation.gui()