Driverless AI: Using the Python API

This notebook provides an H2OAI Client workflow of model building and scoring that parallels the Driverless AI workflow.

Notes:

  • This is an early release of the Driverless AI Python client.
  • Python 3.6 is the only supported version.
  • You must install the h2oai_client wheel into your local Python environment. The wheel is available from the PY_CLIENT link in the top menu of the UI.

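A quick way to confirm the wheel installed correctly is to check that the module can be found. The helper below is our own sketch, not part of the client; `json` is used only as a stdlib stand-in that exists everywhere:

```python
import importlib.util

def is_installed(module_name):
    """Return True if the named module can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# After installing the wheel, is_installed("h2oai_client") should return True.
print(is_installed("json"))  # stdlib stand-in, prints True
```
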
Workflow Steps

Build an Experiment with Python API:

  1. Sign in
  2. Import train & test set/new data
  3. Specify experiment parameters
  4. Launch Experiment
  5. Examine Experiment
  6. Download Predictions

Build an Experiment in Web UI and Access Through Python:

  1. Get pointer to experiment

Score on New Data:

  1. Score on new data with H2OAI model

Run Model Interpretation

  1. Run model interpretation on the raw features
  2. Run Model Interpretation on External Model Predictions

Build Scoring Pipelines

  1. Build Python Scoring Pipeline
  2. Build MOJO Scoring Pipeline

Build an Experiment with Python API

1. Sign In

Import the required modules and log in.

Pass your credentials to the Client class, which creates an authentication token to send to the Driverless AI server. In plain English: just as you sign in to the Driverless AI webpage (which then sends requests to the Driverless AI server), you instantiate the Client class with your Driverless AI address and login credentials.

In [2]:
import h2oai_client
import numpy as np
import pandas as pd
# import h2o
import requests
import math
from h2oai_client import Client, ModelParameters, InterpretParameters
In [3]:
address = 'http://ip_where_driverless_is_running:12345'
username = 'username'
password = 'password'
h2oai = Client(address = address, username = username, password = password)
# make sure to use the same user name and password when signing in through the GUI

Equivalent Steps in Driverless: Signing In


2. Upload Datasets

Upload training and testing datasets from the Driverless AI /data folder.

You can provide a training, validation, and testing dataset for an experiment. The validation and testing dataset are optional. In this example, we will provide only training and testing.

In [4]:
train_path = '/data/Kaggle/CreditCard/CreditCard-train.csv'
test_path = '/data/Kaggle/CreditCard/CreditCard-test.csv'

train = h2oai.create_dataset_sync(train_path)
test = h2oai.create_dataset_sync(test_path)

Equivalent Steps in Driverless: Uploading Train & Test CSV Files


3. Set Experiment Parameters

We will now set the parameters of our experiment. Some of the parameters include:

  • Target Column: The column we are trying to predict.
  • Dropped Columns: The columns we do not want to use as predictors such as ID columns, columns with data leakage, etc.
  • Weight Column: The column that indicates the per row observation weights. If None, each row will have an observation weight of 1.
  • Fold Column: The column that indicates the fold. If None, the folds will be determined by Driverless AI.
  • Time Column: The column that provides a time order, if applicable.
    • if [AUTO], Driverless AI will auto-detect a potential time order
    • if [OFF], auto-detection is disabled

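As a rough sketch, these settings correspond to fields on the parameters object (field names taken from the `params.dump()` output shown later in this notebook; the values here are illustrative only):

```python
# Illustrative mapping of the experiment settings above to parameter fields
# (names as reported by params.dump(); values are examples, not this notebook's)
experiment_settings = {
    "target_col": "default payment next month",  # Target Column
    "cols_to_drop": [],        # Dropped Columns, e.g. ["ID"] to exclude an identifier
    "weight_col": "",          # '' -> every row gets an observation weight of 1
    "fold_col": "",            # '' -> folds are determined by Driverless AI
    "time_col": "",            # '' -> time-order handling is left to Driverless AI
}
print(sorted(experiment_settings))
```
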
For information on the experiment settings, refer to the Experiment Settings section of the Driverless AI documentation.

For this example, we will be predicting ``default payment next month``. The parameters that control the experiment process are: accuracy, time, and interpretability. We can use the get_experiment_preview_sync function to get a sense of what will happen during the experiment.

We will start out by seeing what the experiment will look like with accuracy, time, and interpretability all set to 5.

In [8]:
target="default payment next month"
exp_preview = h2oai.get_experiment_preview_sync(dataset_key= train.key, validset_key='', classification=True,
                                                dropped_cols = [], target_col=target, time_col = '', enable_gpus = True,
                                                accuracy = 5, time = 5, interpretability = 5)
exp_preview
Out[8]:
['ACCURACY [5/10]:',
 '- Training data size: *23,999 rows, 25 cols*',
 '- Feature evolution: *XGBoost*, *time-based validation*',
 '- Final pipeline: *XGBoost*',
 '',
 'TIME [5/10]:',
 '- Feature evolution: *4 individuals*, up to *54 iterations*',
 '- Early stopping: After *10* iterations of no improvement',
 '',
 'INTERPRETABILITY [5/10]:',
 '- Feature pre-pruning strategy: None',
 '- Monotonicity constraints: disabled',
 "- Feature engineering search space (where applicable): ['Clustering', 'Date', 'FrequencyEncoding', 'Identity', 'Interactions', 'Lags', 'TargetEncoding', 'Text', 'TruncatedSVD', 'WeightOfEvidence']",
 '',
 'XGBoost models to train:',
 '- Model and feature tuning: *8*',
 '- Feature evolution: *102*',
 '- Final pipeline: *1*',
 '',
 'Estimated max. total memory usage:',
 '- Feature engineering: *56.0MB*',
 '- GPU XGBoost: *360.0MB*',
 '',
 'Estimated runtime: *6 minutes*']

With these settings, the Driverless AI experiment should take around 6 minutes to run and will train about 111 models:

  • 8 for parameter tuning
  • 102 for feature engineering
  • 1 for the final model
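
As a quick sanity check, the counts reported in the preview add up to the total:

```python
# Totals reported in the experiment preview above
tuning_models = 8       # model and feature tuning
evolution_models = 102  # feature evolution
final_models = 1        # final pipeline
print(tuning_models + evolution_models + final_models)  # 111
```
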

Driverless AI can suggest the parameters based on the dataset and target column. Below we will use the get_experiment_tuning_suggestion to see what settings Driverless AI suggests.

In [9]:
# let Driverless suggest parameters for experiment
params = h2oai.get_experiment_tuning_suggestion(dataset_key = train.key, target_col = target,
                                                is_classification = True, is_time_series = False)
In [10]:
params.dump()
Out[10]:
{'dataset_key': 'bucifidu',
 'target_col': 'default payment next month',
 'weight_col': '',
 'fold_col': '',
 'orig_time_col': '',
 'time_col': '',
 'is_classification': True,
 'cols_to_drop': [],
 'validset_key': '',
 'testset_key': '',
 'enable_gpus': True,
 'seed': False,
 'accuracy': 6,
 'time': 3,
 'interpretability': 6,
 'scorer': 'AUC',
 'time_groups_columns': [],
 'time_period_in_seconds': None,
 'num_prediction_periods': None,
 'num_gap_periods': None,
 'is_timeseries': False}

Driverless AI has found that the best parameters are to set accuracy = 6, time = 3, interpretability = 6. It has selected AUC as the scorer (this is the default scorer for binomial problems).

We will add our test data to the parameters and add a seed to make the experiment reproducible.

In [11]:
params.testset_key = test.key
params.seed = 1234

We can see our experiment preview with the suggested settings below.

In [13]:
exp_preview = h2oai.get_experiment_preview_sync(dataset_key= train.key, validset_key='', classification=True,
                                                dropped_cols = [], target_col=target, time_col = '', enable_gpus = True,
                                                accuracy = params.accuracy, time = params.time,
                                                interpretability = params.interpretability)
exp_preview
Out[13]:
['ACCURACY [6/10]:',
 '- Training data size: *23,999 rows, 25 cols*',
 '- Feature evolution: *XGBoost*, *time-based validation*',
 '- Final pipeline: *XGBoost*',
 '',
 'TIME [3/10]:',
 '- Feature evolution: *4 individuals*, up to *38 iterations*',
 '- Early stopping: After *5* iterations of no improvement',
 '',
 'INTERPRETABILITY [6/10]:',
 '- Feature pre-pruning strategy: FS',
 '- Monotonicity constraints: disabled',
 "- Feature engineering search space (where applicable): ['Date', 'FrequencyEncoding', 'Identity', 'Interactions', 'Lags', 'TargetEncoding', 'Text', 'WeightOfEvidence']",
 '',
 'XGBoost models to train:',
 '- Model and feature tuning: *16*',
 '- Feature evolution: *62*',
 '- Final pipeline: *1*',
 '',
 'Estimated max. total memory usage:',
 '- Feature engineering: *56.0MB*',
 '- GPU XGBoost: *360.0MB*',
 '',
 'Estimated runtime: *3 minutes*']

Equivalent Steps in Driverless: Set the Knobs, Configuration & Launch


4. Launch Experiment: Feature Engineering + Final Model Training

Launch the experiment using the parameters that Driverless AI suggested along with the testset, scorer, and seed that were added.

In [14]:
experiment = h2oai.start_experiment_sync(params)

Equivalent Steps in Driverless: Launch Experiment


5. Examine Experiment

View the final model score for the validation and test datasets. When feature engineering is complete, an ensemble model can be built depending on the accuracy setting. The experiment object also contains the score on the validation and test data for this ensemble model. In this case, the validation score is the score on the training cross-validation predictions.

In [15]:
print("Final Model Score on Validation Data: " + str(round(experiment.valid_score, 3)))
print("Final Model Score on Test Data: " + str(round(experiment.test_score, 3)))
Final Model Score on Validation Data: 0.78
Final Model Score on Test Data: 0.802
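
These scores are AUC values, the scorer selected for this experiment. AUC has a useful reading: it is the probability that a randomly chosen positive record is ranked above a randomly chosen negative one. A minimal pure-Python sketch of that rank-based definition (our own helper, not part of the client API):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) definition."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example where every positive outranks every negative
print(auc([0, 1, 0, 1, 1], [0.1, 0.8, 0.3, 0.7, 0.6]))  # 1.0
```
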

The experiment object also contains the scores calculated for each iteration on bootstrapped samples on the validation data. In the iteration graph in the UI, we can see the mean performance for the best model (yellow dot) and +/- 1 standard deviation of the best model performance (yellow bar).

This information is saved in the experiment object.

In [16]:
import matplotlib.pyplot as plt

iterations = list(map(lambda iteration: iteration.iteration, experiment.iteration_data))
scores_mean = list(map(lambda iteration: iteration.score_mean, experiment.iteration_data))
scores_sd = list(map(lambda iteration: iteration.score_sd, experiment.iteration_data))

plt.figure()
plt.errorbar(iterations, scores_mean, yerr=scores_sd, color = "y",
             ecolor='yellow', fmt = '--o', elinewidth = 4, alpha = 0.5)
plt.xlabel("Iteration")
plt.ylabel("AUC")
plt.show();

Equivalent Steps in Driverless: View Results


6. Download Results

Once an experiment is complete, the UI presents options for downloading the:

  • predictions on the (holdout) train data
  • predictions on the test data
  • experiment summary: a summary of the experiment, including feature importance

We will show an example of downloading the test predictions below. Note that equivalent commands can also be run for downloading the train (holdout) predictions.

In [17]:
h2oai.download(src_path=experiment.test_predictions_path, dest_dir=".")
Out[17]:
'./test_preds.csv'
In [18]:
test_preds = pd.read_csv("./test_preds.csv")
test_preds.head()
Out[18]:
ID default payment next month.1
0 24001.0 0.580622
1 24002.0 0.138495
2 24003.0 0.071177
3 24004.0 0.513181
4 24005.0 0.151257

We can also download and examine the summary of the experiment and feature importance for the final model.

In [19]:
# Download Summary
import subprocess
summary_path = h2oai.download(src_path=experiment.summary_path, dest_dir=".")
dir_path = "./h2oai_experiment_summary_" + experiment.key
subprocess.call(['unzip', '-o', summary_path, '-d', dir_path], shell=False)
Out[19]:
0

The table below shows the feature name, its relative importance, and a description. Some features are engineered by Driverless AI, while others are original features.

In [20]:
# View Features
features = pd.read_table(dir_path + "/features.txt", sep=',', skipinitialspace=True)
features.head(n = 10)
Out[20]:
Relative Importance Feature Description
0 1.000000 21_PAY_0 PAY_0 (original)
1 0.359350 4_CVTE:PAY_0.0 Out-of-fold mean of the response grouped by: [...
2 0.267380 5_CVTE:PAY_2.0 Out-of-fold mean of the response grouped by: [...
3 0.144710 12_BILL_AMT1 BILL_AMT1 (original)
4 0.113330 8_CVTE:PAY_5.0 Out-of-fold mean of the response grouped by: [...
5 0.110690 2_CVTE:LIMIT_BAL.0 Out-of-fold mean of the response grouped by: [...
6 0.104330 30_PAY_AMT4 PAY_AMT4 (original)
7 0.103460 0_CVTE:AGE.0 Out-of-fold mean of the response grouped by: [...
8 0.095507 19_LIMIT_BAL LIMIT_BAL (original)
9 0.093192 28_PAY_AMT2 PAY_AMT2 (original)

Build an Experiment in Web UI and Access Through Python

It is also possible to use the Python API to examine an experiment that was started through the Web UI using the experiment key.

Experiments List


1. Get pointer to experiment

You can get a pointer to the experiment by referencing the experiment key in the Web UI.

In [21]:
# Get list of experiments
experiment_list = list(map(lambda x: x.key, h2oai.list_models(offset=0, limit=100)))
experiment_list
Out[21]:
['kesufuda']
In [22]:
# Get pointer to experiment
experiment = h2oai.get_model_job(experiment_list[0]).entity

Score on New Data

You can use the Python API to score on new data. This is equivalent to the SCORE ON ANOTHER DATASET button in the Web UI. The example below scores on the test data and then downloads the predictions.

Pass in any dataset that has the same columns as the original training set. If you passed a test set during the H2OAI model building step, the predictions already exist. Its path can be found with experiment.test_predictions_path.

1. Score Using the H2OAI Model

The following shows the predicted probability of default for each record in the test data.

In [23]:
prediction = h2oai.make_prediction_sync(experiment.key, test_path, output_margin = False, pred_contribs = False)
pred_path = h2oai.download(prediction.predictions_csv_path, '.')
pred_table = pd.read_csv(pred_path)
pred_table.head()
Out[23]:
ID default payment next month.1
0 24001 0.590326
1 24002 0.131464
2 24003 0.069201
3 24004 0.523229
4 24005 0.156824

We can also get the contribution each feature had to the final prediction by setting pred_contribs = True. This will give us an idea of how each feature affects the predictions.

In [24]:
prediction_contributions = h2oai.make_prediction_sync(experiment.key, test_path,
                                                      output_margin = False, pred_contribs = True)
pred_contributions_path = h2oai.download(prediction_contributions.predictions_csv_path, '.')
pred_contributions_table = pd.read_csv(pred_contributions_path)
pred_contributions_table.head()
Out[24]:
ID contrib_0_CVTE:AGE.0 contrib_1_CVTE:EDUCATION.0 contrib_2_CVTE:LIMIT_BAL.0 contrib_3_CVTE:MARRIAGE.0 contrib_4_CVTE:PAY_0.0 contrib_5_CVTE:PAY_2.0 contrib_6_CVTE:PAY_3.0 contrib_7_CVTE:PAY_4.0 contrib_8_CVTE:PAY_5.0 ... contrib_24_PAY_AMT2 contrib_25_PAY_AMT3 contrib_26_PAY_AMT4 contrib_27_PAY_AMT5 contrib_28_PAY_AMT6 contrib_29_NumToCatWoE:BILL_AMT5.0 contrib_30_Freq:LIMIT_BAL contrib_31_NumToCatWoE:PAY_5.0 contrib_32_NumToCatTE:BILL_AMT5:PAY_AMT3.0 contrib_bias
0 24001 0.011125 0.023285 0.027892 -0.032955 0.680225 0.209166 0.009781 -0.002243 -0.022902 ... -0.016981 -0.003467 -0.007381 0.023113 0.030002 -0.011735 -0.014812 -0.016101 0.012515 -1.314773
1 24002 -0.048051 0.021069 0.103886 -0.047053 -0.169030 -0.032187 -0.016759 -0.018702 -0.020989 ... -0.031151 -0.025753 -0.024552 -0.013769 -0.008316 -0.015570 -0.052556 -0.015396 -0.030725 -1.314773
2 24003 -0.013649 0.013668 -0.083301 -0.035346 -0.193081 -0.047645 -0.025682 -0.019033 -0.021517 ... -0.126038 -0.046658 -0.026235 0.039928 -0.011408 0.002724 -0.065585 -0.014735 0.012307 -1.314773
3 24004 -0.022034 -0.271575 0.066143 -0.044987 0.272315 0.152282 0.060244 0.130927 0.168818 ... 0.038604 0.044121 0.009744 0.037309 0.025741 0.021676 -0.011991 0.125375 -0.031880 -1.314773
4 24005 -0.024565 0.011814 0.007143 -0.035887 -0.177835 -0.031054 -0.043696 -0.037261 0.003007 ... 0.063820 0.090742 0.036829 0.047521 0.028429 0.004863 -0.005218 -0.016371 0.019274 -1.314773

5 rows × 35 columns

We will examine the contributions for our first record more closely.

In [25]:
contrib = pd.DataFrame(pred_contributions_table.iloc[0][1:], columns = ["contribution"])
contrib["abs_contribution"] = contrib.contribution.abs()
contrib.sort_values(by="abs_contribution", ascending=False)[["contribution"]].head()
Out[25]:
contribution
contrib_bias -1.314773
contrib_4_CVTE:PAY_0.0 0.680225
contrib_20_PAY_0 0.666940
contrib_5_CVTE:PAY_2.0 0.209166
contrib_21_PAY_2 0.172259

This customer’s PAY_0 value had the greatest impact on their prediction. Since the contribution is positive, we know that it increases the probability that they will default.
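
For XGBoost-based models, the per-record contributions (including contrib_bias) are reported in margin, i.e. log-odds, space, so their sum passed through the logistic function should approximately recover the predicted probability. A small sketch of that conversion (the helper is ours, not part of the client API):

```python
import math

def margin_to_probability(margin):
    """Map a summed margin (log-odds) to a probability via the logistic function."""
    return 1.0 / (1.0 + math.exp(-margin))

# e.g. summing a row of contrib_* columns (including contrib_bias) gives the margin:
# margin = pred_contributions_table.iloc[0, 1:].sum()
print(margin_to_probability(0.0))  # 0.5: a zero margin is a coin flip
```
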

Run Model Interpretation

Once we have completed an experiment, we can interpret our H2OAI model. Model Interpretability is used to provide model transparency and explanations. More information on Model Interpretability can be found here: http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/interpreting.html.

1. Run Model Interpretation on the Raw Data

We can run the model interpretation in the Python client as shown below. By setting the parameter use_raw_features to True, we interpret the model using only the raw features in the data; the engineered features from our final model are not used to explain the data.

By setting use_raw_features to False, we can interpret the model using the features used in the final model (raw and engineered).

In [ ]:
mli_experiment = h2oai.run_interpretation_sync(
    InterpretParameters(dai_model_key=experiment.key,
                        dataset_key=train.key,
                        target_col=target,
                        prediction_col="", # not needed since we are interpreting a Driverless experiment
                        use_raw_features=True, # show interpretation based on the original columns
                        nfolds=0, # number of folds used for k-lime
                        klime_cluster_col='',
                        weight_col=params.weight_col, # weight column used by Driverless AI
                        drop_cols=params.cols_to_drop, # columns to not use for Interpretability
                        sample=True, # whether the training dataset should be sampled down for the interpretation
                        sample_num_rows=-1, # use default from config.toml
                        qbin_cols=[], # numeric columns to convert to quantile bins (can help fit surrogate models)
                        qbin_count=0 # number of quantile bins for the quantile bin columns
                       ))

This is equivalent to clicking “Interpret this Model on Original Features” in the UI once the experiment has completed.

Equivalent Steps in Driverless: View Results


Once our interpretation is finished, we can navigate to the MLI tab in the UI to see our interpreted model.

Equivalent Steps in Driverless: MLI List


We can also see the list of interpretations using the Python Client:

In [ ]:
# Get list of interpretations
mli_list = list(map(lambda x: x.key, h2oai.list_interpretations(offset=0, limit=100)))
mli_list

2. Run Model Interpretation on External Model Predictions

Model Interpretation does not need to be run on a Driverless AI experiment. We can also train an external model and run Model Interpretability on the predictions. In this next section, we will walk through the steps to interpret an external model.

Train External Model

We will begin by training a model with scikit-learn. Our end goal is to use Driverless AI to interpret the predictions made by our scikit-learn model.

In [ ]:
# Dataset must be located where python client is running
train_pd = pd.read_csv(train_path)
In [32]:
from sklearn.ensemble import GradientBoostingClassifier

predictors = list(set(train_pd.columns) - set([target]))

gbm_model = GradientBoostingClassifier(random_state=10)
gbm_model.fit(train_pd[predictors], train_pd[target])
Out[32]:
GradientBoostingClassifier(criterion='friedman_mse', init=None,
              learning_rate=0.1, loss='deviance', max_depth=3,
              max_features=None, max_leaf_nodes=None,
              min_impurity_decrease=0.0, min_impurity_split=None,
              min_samples_leaf=1, min_samples_split=2,
              min_weight_fraction_leaf=0.0, n_estimators=100,
              presort='auto', random_state=10, subsample=1.0, verbose=0,
              warm_start=False)
In [33]:
predictions = gbm_model.predict_proba(train_pd[predictors])
predictions[0:5]
Out[33]:
array([[0.38111179, 0.61888821],
       [0.44396186, 0.55603814],
       [0.91738328, 0.08261672],
       [0.88780536, 0.11219464],
       [0.80028008, 0.19971992]])

Interpret on External Predictions

Now that we have the predictions from our scikit-learn GBM model, we can call Driverless AI’s h2oai.run_interpretation_sync to create the interpretation screen.

In [34]:
train_gbm_path = "./CreditCard-train-gbm_pred.csv"
predictions = pd.concat([train_pd, pd.DataFrame(predictions[:, 1], columns = ["p1"])], axis = 1)
predictions.to_csv(path_or_buf=train_gbm_path, index = False)
In [35]:
train_gbm_pred = h2oai.upload_dataset_sync(train_gbm_path)
In [36]:
mli_external = h2oai.run_interpretation_sync(
    InterpretParameters(dai_model_key="", # no experiment key since we are interpreting an external model
                        dataset_key=train_gbm_pred.key,
                        target_col=target, # target column used by external model
                        prediction_col="p1", # column with external model's predictions
                        use_raw_features=True, # not relevant since we are interpreting our external model
                        nfolds=0,
                        klime_cluster_col='',
                        weight_col='', # weight column used by the external model
                        drop_cols=[], # columns not used by the external model
                        sample=True,
                        sample_num_rows=-1,
                        qbin_cols=[],
                        qbin_count=0
                       ))

We can also run Model Interpretability on an external model in the UI as shown below:

Equivalent Steps in Driverless: MLI External Model


In [37]:
# Get list of interpretations
mli_list = list(map(lambda x: x.key, h2oai.list_interpretations(offset=0, limit=100)))
mli_list
Out[37]:
['pefudoba', 'fupimepo']

Build Scoring Pipelines

In our last section, we will build the scoring pipelines from our experiment. There are two scoring pipeline options:

  • Python Scoring Pipeline: requires Python runtime
  • MOJO Scoring Pipeline: requires Java runtime

Documentation on the scoring pipelines is provided here: http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/python-mojo-pipelines.html.

Equivalent Steps in Driverless: View Results


The experiment screen shows two scoring pipeline buttons: Download Python Scoring Pipeline or Build MOJO Scoring Pipeline. Driverless AI determines if any scoring pipeline should be automatically built based on the config.toml file. In this example, we have run Driverless AI with the settings:

# Whether to create the Python scoring pipeline at the end of each experiment
make_python_scoring_pipeline = true

# Whether to create the MOJO scoring pipeline at the end of each experiment
# Note: Not all transformers or main models are available for MOJO (e.g. no gblinear main model)
make_mojo_scoring_pipeline = false

Therefore, only the Python Scoring Pipeline will be built by default.

1. Build Python Scoring Pipeline

The Python Scoring Pipeline has been built by default based on our config.toml settings. We can get the path to the Python Scoring Pipeline in our experiment object.

In [30]:
experiment.scoring_pipeline_path
Out[30]:
'h2oai_experiment_kesufuda/scoring_pipeline/scorer.zip'

We can also build the Python Scoring Pipeline - this is useful if the make_python_scoring_pipeline option was set to false.

In [31]:
python_scoring_pipeline = h2oai.build_scoring_pipeline_sync(experiment.key)
In [ ]:
python_scoring_pipeline.file_path
'h2oai_experiment_kesufuda/scoring_pipeline/scorer.zip'

Now we will download the scoring pipeline zip file.

In [ ]:
h2oai.download(python_scoring_pipeline.file_path, dest_dir=".")

2. Build MOJO Scoring Pipeline

The MOJO Scoring Pipeline has not been built by default because of our config.toml settings. We can build the MOJO Scoring Pipeline using the Python client. This is equivalent to selecting the Build MOJO Scoring Pipeline on the experiment screen.

In [27]:
mojo_scoring_pipeline = h2oai.build_mojo_pipeline_sync(experiment.key)
In [28]:
mojo_scoring_pipeline.file_path
Out[28]:
'h2oai_experiment_kesufuda/mojo_pipeline/mojo.zip'

Now we can download the scoring pipeline zip file.

In [29]:
h2oai.download(mojo_scoring_pipeline.file_path, dest_dir=".")
Out[29]:
'./mojo.zip'

Once the MOJO Scoring Pipeline is built, the Build MOJO Scoring Pipeline button changes to Download MOJO Scoring Pipeline.

Equivalent Steps in Driverless: Download MOJO
