Driverless AI with H2O-3 Algorithms
Driverless AI already supports a variety of algorithms. This example shows how you can use our h2o-3-models.py recipe to include H2O-3 supervised learning algorithms in your experiment. The H2O-3 algorithms available in the recipe are:
Naive Bayes
GBM
Random Forest
Deep Learning
GLM
AutoML
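These map onto the standalone estimators in the open-source h2o Python package. The following is a minimal sketch (not part of the recipe itself) showing how one of them, GBM, is trained directly in H2O-3, assuming a local H2O-3 cluster and using one of H2O's public example datasets:

    import h2o
    from h2o.estimators import (
        H2ONaiveBayesEstimator,         # Naive Bayes
        H2OGradientBoostingEstimator,   # GBM
        H2ORandomForestEstimator,       # Random Forest
        H2ODeepLearningEstimator,       # Deep Learning
        H2OGeneralizedLinearEstimator,  # GLM
    )

    # Start (or connect to) a local H2O-3 cluster.
    h2o.init()

    # Public example dataset; CAPSULE is a binary target.
    prostate = h2o.import_file(
        "https://s3.amazonaws.com/h2o-public-test-data/smalldata/prostate/prostate.csv"
    )
    prostate["CAPSULE"] = prostate["CAPSULE"].asfactor()

    # Train one of the wrapped algorithms (GBM) directly in H2O-3.
    gbm = H2OGradientBoostingEstimator(ntrees=50, max_depth=5, seed=1234)
    gbm.train(x=["AGE", "RACE", "PSA", "GLEASON"], y="CAPSULE", training_frame=prostate)
    print(gbm.auc())

Inside Driverless AI you do not write this code yourself; the recipe invokes the corresponding H2O-3 estimators for you.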
Caution: Because AutoML is treated here as a regular ML algorithm, its runtime requirements can be large. We recommend that you adjust the max_runtime_secs parameter as suggested here: https://github.com/h2oai/driverlessai-recipes/blob/rel-1.8.10/models/algorithms/h2o-3-models.py#L41
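To illustrate what that parameter controls, here is a minimal standalone sketch of H2OAutoML with a tight time budget (the dataset and values are only illustrative; inside Driverless AI the change is made in the recipe file linked above, not in your own script):

    import h2o
    from h2o.automl import H2OAutoML

    h2o.init()

    train = h2o.import_file(
        "https://s3.amazonaws.com/h2o-public-test-data/smalldata/prostate/prostate.csv"
    )
    train["CAPSULE"] = train["CAPSULE"].asfactor()

    # max_runtime_secs caps the total time AutoML spends training models.
    # Keeping it small prevents AutoML from dominating the experiment runtime.
    aml = H2OAutoML(max_runtime_secs=600, seed=1234)
    aml.train(y="CAPSULE", training_frame=train)
    print(aml.leaderboard)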
Start an experiment in Driverless AI by selecting your training dataset (along with, optionally, validation and testing datasets) and then specifying a Target Column. Notice the list of algorithms that will be used in the Feature evolution section of the experiment summary. In the example below, the experiment will use LightGBM and XGBoostGBM.
Click on Expert Settings.
Specify the custom recipe using one of the following methods:
On your local machine, clone the https://github.com/h2oai/driverlessai-recipes repository for this release branch (rel-1.8.10). Then use the Upload Custom Recipe button to upload the driverlessai-recipes/models/algorithms/h2o-3-models.py file. (If you only want that single file, see the download sketch after the note below.)
Click the Load Custom Recipe from URL button, then enter the URL for the raw h2o-3-models.py file (for example, https://raw.githubusercontent.com/h2oai/driverlessai-recipes/rel-1.8.10/models/algorithms/h2o-3-models.py).
Note: Click the Official Recipes (External) button to browse the driverlessai-recipes repository.
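Optionally, if you want a local copy of the recipe (for example, to review or adjust max_runtime_secs before uploading it with the Upload Custom Recipe button), a minimal download sketch:

    import urllib.request

    # Raw rel-1.8.10 recipe file referenced above.
    RECIPE_URL = (
        "https://raw.githubusercontent.com/h2oai/driverlessai-recipes/"
        "rel-1.8.10/models/algorithms/h2o-3-models.py"
    )
    urllib.request.urlretrieve(RECIPE_URL, "h2o-3-models.py")
    print("Saved h2o-3-models.py")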
Driverless AI will begin uploading and verifying the new custom recipe.
On the Expert Settings page, specify any additional settings and then click Save. This returns you to the experiment summary.
To include the new models in your experiment, return to Expert Settings and click Recipes > Include Specific Models. Select the algorithm(s) that you want to include, then click Done to return to the experiment summary.
Notice the updated list of available algorithms in the experiment.
Edit any additional experiment settings, and then click Launch Experiment.
Upon completion, you can download the Experiment Summary and review the Model Tuning section of the report.docx file to see how the algorithms compare.