Metrics: Adversarial similarity
H2O Model Validation offers an array of metrics to understand an adversarial similarity test. Below, each metric is described in turn.
Histogram: Similar-to-secondary probabilities histogram
The histogram displays row-level similarity probabilities. In particular, it displays the probability of each row of the primary dataset being similar to the secondary dataset.
- X-axis: Probabilities of being similar to the secondary dataset
- Y-axis: The relative frequency of the probability intervals between 0 and 1
- Furthermore, the relative frequency (Y-axis) refers to the percent-wise frequency of values in a dataset, that is, the number of occurrences divided by the number of rows. H2O Model Validation performs this relative calculation to compare numbers between the primary and secondary datasets: because the two datasets can have different sizes, comparing raw frequencies directly is not a fair comparison, so H2O Model Validation divides the number of occurrences by the length of the dataset (see the sketch after this list)
- In the adversarial similarity test, predictions are generated to determine how much a dataset’s rows differ from those belonging to the training dataset
- The target column is called Adv-IsSecondary, and the predictions generated by the adversarial model are called Similar-to-secondary probability
- Higher values to the histogram's right indicate a greater difference between the primary dataset and the secondary dataset
- A balance of values on both sides of the histogram indicates a 1.0 AUC score (a perfect separation between the two datasets)
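The exact implementation inside H2O Model Validation is not shown here, but a minimal sketch of the idea, assuming two pandas DataFrames named `primary` and `secondary` with matching feature columns (all names, toy data, and the choice of classifier are illustrative), looks like this:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier

# Toy primary and secondary datasets (illustrative only).
rng = np.random.default_rng(0)
primary = pd.DataFrame({"f1": rng.normal(0.0, 1.0, 500), "f2": rng.normal(0.0, 1.0, 500)})
secondary = pd.DataFrame({"f1": rng.normal(0.5, 1.0, 300), "f2": rng.normal(0.0, 1.0, 300)})

# Label each row by its origin: 0 = primary, 1 = secondary (the Adv-IsSecondary target).
X = pd.concat([primary, secondary], ignore_index=True)
y = np.r_[np.zeros(len(primary)), np.ones(len(secondary))]

# Train an adversarial model to tell the two datasets apart.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Similar-to-secondary probability for every row of the primary dataset.
probs = model.predict_proba(primary)[:, 1]

# Relative-frequency histogram: divide counts by the number of rows so that
# datasets of different sizes remain comparable.
plt.hist(probs, bins=20, range=(0, 1), weights=np.full(len(probs), 1 / len(probs)))
plt.xlabel("Similar-to-secondary probability")
plt.ylabel("Relative frequency")
plt.show()
```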
Card: Dissimilarity score
The Dissimilarity score card displays the overall area under the receiver operating characteristic curve (AUC) value for the secondary dataset compared to the primary dataset.
- A higher AUC value indicates a higher dissimilarity
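Continuing the illustrative sketch above (reusing `model`, `X`, and `y` from it), the dissimilarity score can be approximated as the adversarial model's AUC:

```python
from sklearn.metrics import roc_auc_score

# AUC of the adversarial model over the combined data; in practice, out-of-fold
# or holdout predictions would give a less optimistic estimate.
# ~0.5: the model cannot tell the datasets apart (similar);
# approaching 1.0: strong dissimilarity.
dissimilarity_score = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"Dissimilarity score (AUC): {dissimilarity_score:.3f}")
```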
Chart: Feature importance
The chart displays the top features that contributed to a higher area under the receiver operating characteristic curve (AUC) value during the adversarial similarity test. Features are listed from most to least important, top to bottom.
- X-axis: Gain value (the feature's importance to the test model)
- Y-axis: Dataset variables (features)
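As an illustration only, a gain-style importance view of the sketched adversarial model can be read from the fitted classifier (sklearn's impurity-based `feature_importances_` stands in for the product's gain values):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Importances of the adversarial model, sorted so the most important feature
# appears at the top of the horizontal bar chart.
importances = pd.Series(model.feature_importances_, index=X.columns).sort_values()
plt.barh(importances.index, importances.values)
plt.xlabel("Gain (importance to the adversarial model)")
plt.ylabel("Feature")
plt.show()
```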
Clicking the bar of a feature displays the following plot: Plot: Feature partial dependence plot (PDP).
Plot: Feature partial dependence plot (PDP)
The plot displays the impact that different values of the selected feature have on the predicted values. The selected feature refers to the feature clicked on the Feature importance chart.
- X-axis: A variable's (feature's) values
- Y-axis: Probabilities of being similar to the secondary dataset
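A minimal, manual partial dependence sketch for one feature of the adversarial model above (`f1` is an illustrative column name; this is not the product's internal computation):

```python
import numpy as np
import matplotlib.pyplot as plt

feature = "f1"  # illustrative column name
grid = np.linspace(X[feature].min(), X[feature].max(), 25)

# For each grid value, fix the feature at that value for every row and average
# the model's Similar-to-secondary probabilities.
pdp = []
for value in grid:
    X_mod = X.copy()
    X_mod[feature] = value
    pdp.append(model.predict_proba(X_mod)[:, 1].mean())

plt.plot(grid, pdp)
plt.xlabel(f"{feature} values")
plt.ylabel("Similar-to-secondary probability")
plt.show()
```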
The following histogram and graph are available for the observed feature in the Feature partial dependence plot (PDP). To access either one, consider the following instructions when viewing the Feature partial dependence plot (PDP):
- Click the Kebab menu.
- To view the Feature histogram, click Histogram.
- To view the Feature missing ratios graph, click Missing ratios.
Histogram: Feature histogram
The histogram displays the distribution of the values of the feature selected on the Feature importance chart.
- X-axis: A variable's (feature's) values
- Y-axis: The relative frequency of the variable's values
Graph: Feature missing ratios
The graph displays the missing ratio values of the selected feature in the primary and secondary datasets. The selected feature refers to the feature selected on the Feature importance chart.
- X-axis: A variable's (feature's) values
- Y-axis: Missing ratio values
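A missing ratio is simply the share of missing entries in a column. As an illustration, reusing the toy `primary` and `secondary` frames from the first sketch (the product's per-value breakdown may differ):

```python
feature = "f1"  # illustrative column name
missing_ratios = {
    "primary": primary[feature].isna().mean(),     # missing entries / number of rows
    "secondary": secondary[feature].isna().mean(),
}
print(missing_ratios)
```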
Table: Shapley
The Shapley table contains adversarial model results from a Shapley perspective. Each row in the table represents a row of the secondary dataset (being compared to the primary dataset). Rows are ordered from the highest to the lowest prediction value. In other words, rows higher in the table represent the rows of the secondary dataset most dissimilar to the primary dataset.
In the Shapley table, H2O Model Validation displays the top N highest (most dissimilar) or lowest (most similar) prediction values, where N = 40.
- To display the top N highest (dissimilar) results, see Show dissimilar.
- To display the top N lowest (similar) results, see Show similar.
- To download the Shapley results of the test dataset, see Download Shapley values.
The first two columns of the Shapley table refer to the row's ID and the target column, while the rest of the columns refer to the train columns of the model.
- Row-Nr.: Row ID
- Adv-IsSecondary: Target column
Clicking a row ID displays a Global/local Shapley chart that highlights the top global and local features increasing the average model predictions that drive the most dissimilarity between the datasets. To learn more, see Chart: Global/local Shapley.
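As an illustration of where such a table comes from, the `shap` package can compute per-row contributions for the sketched adversarial model (a stand-in for the product's internal computation; output shapes and value scales can vary with the model type and shap version):

```python
import pandas as pd
import shap

# Per-row, per-feature Shapley contributions of the adversarial model on the
# secondary dataset (values are in the model's margin/log-odds space).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(secondary)

# Order secondary rows by their Similar-to-secondary probability and keep the
# top N = 40 most dissimilar rows, mirroring the table's default ordering.
preds = model.predict_proba(secondary)[:, 1]
order = preds.argsort()[::-1][:40]

shapley_table = pd.DataFrame(shap_values[order], columns=secondary.columns)
shapley_table.insert(0, "Adv-IsSecondary", preds[order])
shapley_table.insert(0, "Row-Nr.", order)
print(shapley_table.head())
```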
Show similar
By default, rows in the Shapley table are ordered from highest to lowest predictive values. To switch the default order and view rows from lowest to highest predictive values, consider the following instructions:
- In the Shapley table, select Similar.
Show dissimilar
To revert to the default order of the rows in the Shapley table, where test rows are ordered from highest to lowest predictive values, consider the following instructions:
- In the Shapley table, select Dissimilar.
Download Shapley values
To download the Shapley values of the secondary dataset, consider the following instructions when viewing the Shapley table:
- Click Download Shapley values for the whole secondary dataset.
The .csv file contains adversarial model results from a Shapley perspective for the secondary dataset (reference dataset).
Chart: Global/local Shapley
The Global/local Shapley chart displays:
- X-axis: Global/local Shapley values
- Y-axis: Dataset variables (features)
- Global features: The top global features increasing the average model predictions the most while driving the most dissimilarity between the datasets
- Local features: The top local features for a particular row that increase the average model prediction the most while driving the most dissimilarity between the datasets
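Reusing `shap_values` from the Shapley table sketch above, an illustrative global versus local comparison might look like this (row 0 stands in for the clicked row):

```python
import pandas as pd

# Global: mean signed contribution of each feature across all secondary rows,
# i.e. how much it raises the average prediction toward "secondary".
global_shapley = pd.Series(shap_values.mean(axis=0), index=secondary.columns)

# Local: contributions for one particular row (row 0 is illustrative; in the
# product this is the row whose ID was clicked in the Shapley table).
local_shapley = pd.Series(shap_values[0], index=secondary.columns)

comparison = pd.DataFrame({"global": global_shapley, "local": local_shapley})
print(comparison.sort_values("global", ascending=False))
```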