Settings: Adversarial similarity
Overview
H2O Model Validation offers an array of settings for an adversarial similarity test. Below, each setting is described in turn.
Settings
Test name
This setting defines the name of the validation test. By default, H2O Model Validation assigns a default name to the test, which you can change.
Driverless AI instance to run the adversarial similarity models on
This setting defines the location (Connection) where H2O Model Validation creates the appropriate models to run the adversarial similarity test.
Primary dataset
This setting defines one of the two datasets H2O Model Validation compares during the validation test. The test looks for similar or dissimilar rows between the secondary dataset and this dataset (referred to as the primary dataset).
Note: When validating a model, the primary dataset must follow the structure of the model's training dataset.
Secondary dataset
This setting defines the other of the two datasets H2O Model Validation compares during the validation test. The test looks for similar or dissimilar rows between the primary dataset and this dataset (referred to as the secondary dataset).
- The primary dataset dictates the required format of the secondary dataset (matching columns)
- H2O Model Validation drops any column in the secondary dataset that is not present in the primary dataset (see the sketch after this list)
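The column alignment described above can be pictured with a short pandas sketch. The dataset and column names below are illustrative assumptions, not identifiers used by H2O Model Validation, which performs this alignment internally.

```python
import pandas as pd

# Hypothetical stand-ins for the primary and secondary datasets.
primary_df = pd.DataFrame({"age": [34, 51], "income": [52000, 61000]})
secondary_df = pd.DataFrame(
    {"age": [29, 47], "income": [48000, 75000], "region": ["EU", "US"]}
)

# Columns of the secondary dataset that are absent from the primary dataset
# are dropped so that both datasets share the same schema.
extra_columns = [c for c in secondary_df.columns if c not in primary_df.columns]
aligned_secondary = secondary_df.drop(columns=extra_columns)

print(extra_columns)                    # ['region'] -> dropped
print(list(aligned_secondary.columns))  # ['age', 'income']
```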
Columns to drop
This setting defines the columns H2O Model Validation drops during model training.
This setting is useful when you want to drop columns that cause high dissimilarity (for example, a time column), as shown in the sketch below.
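As a rough illustration of why dropping such columns matters, the following sketch shows the general adversarial-validation idea: stack the two datasets with a binary label and train a classifier to tell them apart. This is a simplified stand-in using scikit-learn on numeric features, not the Driverless AI models that H2O Model Validation actually builds; the function and variable names are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def adversarial_similarity(primary_df, secondary_df, columns_to_drop=()):
    """Train a classifier to distinguish rows of the two datasets.

    An AUC near 0.5 suggests the datasets are similar; an AUC near 1.0
    suggests their rows are easy to tell apart (high dissimilarity).
    """
    shared = [c for c in primary_df.columns
              if c in secondary_df.columns and c not in columns_to_drop]
    combined = pd.concat([primary_df[shared], secondary_df[shared]],
                         ignore_index=True)
    # Label: 0 = row from the primary dataset, 1 = row from the secondary dataset.
    labels = np.concatenate([np.zeros(len(primary_df)), np.ones(len(secondary_df))])

    X_train, X_test, y_train, y_test = train_test_split(
        combined, labels, test_size=0.3, stratify=labels, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return model, X_test, auc
```

Dropping a time column maps onto `columns_to_drop` here: if the two datasets cover different time periods, that single column lets the classifier separate them almost perfectly and inflates the reported dissimilarity.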
Compute Shapley values
This setting determines whether H2O Model Validation computes Shapley values for the model used to analyze the similarity between the primary and secondary datasets. H2O Model Validation uses the generated Shapley values to create an array of visual metrics that provide insight into the contribution of individual features to the overall model performance.
- Generating Shapley values for the model can significantly increase the test's runtime
- The generated Shapley-based visual metrics can help you understand what might cause a higher degree of dissimilarity between the primary and secondary datasets (a sketch follows this list). To learn more, see Shapley table
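To sketch how Shapley values point at the sources of dissimilarity, the snippet below applies the third-party shap package to the adversarial classifier from the previous example. This is only an assumption-laden illustration; H2O Model Validation obtains its Shapley values from the Driverless AI model itself.

```python
import numpy as np
import shap  # third-party package, assumed to be installed

# `model` and `X_test` come from the adversarial_similarity sketch above.
# For a single-output tree model such as GradientBoostingClassifier,
# TreeExplainer returns one Shapley value per feature per row.
explainer = shap.TreeExplainer(model)
shap_values = np.asarray(explainer.shap_values(X_test))

# Features with a large mean |SHAP| drive the classifier's ability to
# separate the datasets, i.e. they are likely sources of dissimilarity.
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(X_test.columns, mean_abs), key=lambda item: -item[1]):
    print(f"{name}: {value:.3f}")
```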
Delete test models and datasets from the DAI instance after finish
This setting determines whether H2O Model Validation deletes the artifacts created on the Connection. In this case, artifacts refer to the experiments and datasets generated during the adversarial similarity validation test. By default, this setting is enabled, and H2O Model Validation deletes all artifacts once the validation test is complete because they are no longer needed.