Version: v0.67.0

Quickstart

Install the H2O MLOps Python client

pip install h2o-mlops

Import modules

import h2o_mlops
import httpx
import time

Connect to H2O MLOps

In this example, the client detects credentials and configuration options from the environment.

mlops = h2o_mlops.Client()
Note

To connect to an environment that uses a private certificate, you need to configure the environment variable MLOPS_AUTH_CA_FILE_OVERRIDE. This variable must point to the path of the certificate file that the client should use for secure communication. For example:

export MLOPS_AUTH_CA_FILE_OVERRIDE=/path/to/your/ca_certificate.pem
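
If you prefer to set this variable from Python rather than the shell, you can do so before constructing the client. A minimal sketch (the certificate path is a placeholder):

import os

# Placeholder path: point this at your own CA certificate file.
os.environ["MLOPS_AUTH_CA_FILE_OVERRIDE"] = "/path/to/your/ca_certificate.pem"

mlops = h2o_mlops.Client()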

Projects: Everything starts with a project

In H2O MLOps, projects serve as the base of operations for most MLOps activities.

project = mlops.projects.create(name="demo")
mlops.projects.list(name="demo")

Output:

    | name   | uid
----+--------+--------------------------------------
  0 | demo   | 45e5a888-ec1f-4f9c-85ca-817465344b1f

Note that you can also run the following command:

project = mlops.projects.get(uid=...)
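
For example, you can round-trip the uid of the project you just created as a quick sanity check (a sketch; it assumes the project object exposes a uid attribute, matching the uid column in the listing above):

# Retrieve the same project by its unique ID.
project = mlops.projects.get(uid=project.uid)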

Upload an experiment

experiment = project.experiments.create(
    data="/Users/username/Downloads/GBM_model_python_1649367037255_1.zip",
    name="experiment-from-client",
)

The following experiment attributes are of interest:

Artifact type

Input:

experiment.scoring_artifact_types

Output:

['h2o3_mojo']

Experiment ID

Input:

experiment.uid

Output:

e307aa9f-895f-4b07-9404-b0728d1b7f03

You can view and retrieve existing experiments:

project.experiments.list()

Output:

    | name                   | uid                                  | tags
----+------------------------+--------------------------------------+--------
  0 | experiment-from-client | e307aa9f-895f-4b07-9404-b0728d1b7f03 |

Note that you can also run the following command:

experiment = project.experiments.get(uid=...)

Create a model

model = project.models.create(name="model-from-client")

You can view and retrieve existing models:

project.models.list()

Output:

    | name              | uid
----+-------------------+--------------------------------------
  0 | model-from-client | d18a677f-b800-4a4b-8642-0f59e202d225

Note that you can also run the following command:

model = project.models.get(uid=...)

Register an experiment to a model

A model must have experiments registered to it before it can be deployed.

model.register(experiment=experiment)
model.versions()

Output:

    |   version | experiment_uid
----+-----------+--------------------------------------
  0 |         1 | e307aa9f-895f-4b07-9404-b0728d1b7f03

Input:

model.get_experiment(model_version="latest").name

Output:

'experiment-from-client'
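
Registering additional experiments to the same model creates new versions. A sketch (experiment_v2 is a hypothetical second uploaded experiment, not created above):

# Hypothetical: registering a second experiment bumps the model version.
model.register(experiment=experiment_v2)
model.versions()  # would now list versions 1 and 2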

Deployment

The following are required for a single-model deployment:

  • project
  • model
  • environment
  • scoring runtime
  • name for the deployment

We already have a project and model. Let's see how to get the environment.

project.environments.list()

Output:

    | name   | uid
----+--------+--------------------------------------
  0 | DEV    | a6af758e-4a98-4ae2-94bf-1c84e5e5a3ed
  1 | PROD   | f98afa18-91f9-4a97-a031-4924018a8b8f

Input:

environment = project.environments.list(name="DEV")[0]

Note that you can also run the following command:

project.environments.get(uid=...)
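
If you are not sure the named environment exists, it can help to guard the list indexing before deploying. A sketch using only the calls shown above:

# Fail with a clear message instead of an IndexError on an empty list.
envs = project.environments.list(name="DEV")
if not envs:
    raise RuntimeError("no environment named 'DEV' in this project")
environment = envs[0]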

Next, we'll get the scoring_runtime for our model type. Notice that we're using the artifact type from the experiment to filter runtimes.

mlops.runtimes.scoring.list(artifact_type=model.get_experiment().scoring_artifact_types[0])

Output:

    | name              | artifact_type   | uid
----+-------------------+-----------------+-------------------
  0 | H2O-3 MOJO scorer | h2o3_mojo       | h2o3_mojo_runtime

Input:

scoring_runtime = mlops.runtimes.scoring.list(
    artifact_type=model.get_experiment().scoring_artifact_types[0]
)[0]

You can now create a deployment.

deployment = environment.deployments.create_single(
    name="deployment-from-client",
    model=model,
    scoring_runtime=scoring_runtime,
)

while not deployment.is_healthy():
    deployment.raise_for_failure()
    time.sleep(5)

deployment.status()

Output:

'HEALTHY'
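
The polling loop above waits indefinitely. If you want an upper bound on the wait, a simple deadline works (a sketch; the 300-second cap is arbitrary):

# Poll with a deadline instead of waiting forever.
deadline = time.monotonic() + 300  # arbitrary cap in seconds
while not deployment.is_healthy():
    deployment.raise_for_failure()
    if time.monotonic() > deadline:
        raise TimeoutError("deployment did not become healthy in time")
    time.sleep(5)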

Score

Once you have a deployment, you can score with it over HTTP.

response = httpx.post(
    url=deployment.url_for_scoring,
    json=deployment.get_sample_request(),
)

response.json()

Output:

{'fields': ['C11.0', 'C11.1'],
'id': 'e307aa9f-895f-4b07-9404-b0728d1b7f03',
'score': [['0.49786656666743145', '0.5021334333325685']]}
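
The sample request also documents the payload shape the scorer expects, so you can adapt it with your own rows. A sketch (it assumes the sample follows the fields/rows JSON layout that H2O scorers typically use):

# Start from the sample request, inspect it, then substitute your own rows.
payload = deployment.get_sample_request()
print(payload)  # shows the expected 'fields' and row layout

response = httpx.post(url=deployment.url_for_scoring, json=payload)
response.raise_for_status()  # surface HTTP errors early
print(response.json())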

Cleanup

Delete deployments

deployment.delete()
environment.deployments.list()

Output:

    | name   | mode   | uid
----+--------+--------+-------

Delete experiments

experiment.delete()
project.experiments.list()

Output:

    | name   | uid   | tags
----+--------+-------+--------

Delete projects

for p in mlops.projects.list(name="demo"):
    p.delete()

mlops.projects.list(name="demo")

Output:

    | name   | uid
----+--------+-------
