
Python client migration guide

From v1.3.x to v1.4.x

This guide compares the H2O MLOps Python client in version 1.3.x and version 1.4.x. Each table shows the version 1.3.x way of performing an operation in the left column and the version 1.4.x way in the right column, so you can easily compare what has changed and update your code accordingly.

Imports

v1.3.x:

    import time

    import httpx
    import h2o_authn
    import h2o_mlops
    import h2o_mlops.options as options

v1.4.x:

    import h2o_mlops
    import h2o_mlops.options as options
    import h2o_mlops.types as types

Client creation

From v1.4.x onwards, support for creating the client using gateway_url and token_provider has been removed. Instead, you must use refresh_token and h2o_cloud_url.

v1.3.x:

    token_provider = h2o_authn.TokenProvider(
        refresh_token=...,
        client_id=...,
        token_endpoint_url=...,
    )
    mlops = h2o_mlops.Client(
        gateway_url=...,
        token_provider=token_provider,
    )

v1.4.x:

    mlops = h2o_mlops.Client(
        h2o_cloud_url=<H2O_CLOUD_URL>,
        refresh_token=<REFRESH_TOKEN>,
    )
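In practice, credentials are usually read from the environment rather than hard-coded. A minimal sketch of that pattern; the variable names `H2O_CLOUD_URL` and `H2O_REFRESH_TOKEN` are arbitrary choices, not names required by the client:

```python
import os

# Hypothetical variable names -- use whatever your deployment pipeline defines.
h2o_cloud_url = os.environ.get("H2O_CLOUD_URL", "https://cloud.example.com")
refresh_token = os.environ.get("H2O_REFRESH_TOKEN", "")

# These values would then be passed to the v1.4.x client:
#   mlops = h2o_mlops.Client(h2o_cloud_url=h2o_cloud_url, refresh_token=refresh_token)
print(h2o_cloud_url)
```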

Get allowed affinities and tolerations

v1.3.x:

    mlops.allowed_affinities
    mlops.allowed_tolerations

v1.4.x:

    mlops.configs.allowed_k8s_affinities
    mlops.configs.allowed_k8s_tolerations

Get the current user

v1.3.x:

    mlops.get_user_info()

    Returns the user's information as a Python dictionary.

v1.4.x:

    mlops.users.get_me()

    Returns the user's information as an MLOpsUser instance.

Projects are now workspaces

In version 1.4.x, the concept of projects has been replaced by workspaces. Update your code by replacing references to projects with workspaces.

v1.3.x:

    mlops.projects.<action>()

v1.4.x:

    mlops.workspaces.<action>()
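If a script must run against both client versions during the transition, one option is a small compatibility shim that prefers the new attribute and falls back to the old one. This is a sketch using stand-in objects, not part of the h2o_mlops API:

```python
def workspaces_api(client):
    """Return the v1.4.x workspaces collection if present,
    otherwise fall back to the v1.3.x projects collection."""
    ws = getattr(client, "workspaces", None)
    return ws if ws is not None else client.projects

# Stand-in clients that mimic the two versions:
class OldClient:
    projects = "projects-collection"

class NewClient:
    workspaces = "workspaces-collection"

print(workspaces_api(OldClient()))   # falls back to .projects
print(workspaces_api(NewClient()))   # uses .workspaces
```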

Create and register an experiment into a model

The previous method of creating experiments and registering them with models is still supported.

v1.3.x:

    experiment = project.experiments.create(
        data=..., name=...
    )
    model = project.models.create(name=...)

    or

    model = project.models.get(uid=...)
    model.register(experiment=experiment)

v1.4.x:

    model.register(
        experiment="/path/experiment.zip",
        name=...,
    )

    or

    workspace.models.register(
        experiment="/path/experiment.zip",
        name=...,
    )

You can also pass an MLOpsExperiment instance instead of a file path.

note
  • When you link an experiment to a workspace from H2O Driverless AI, a new model version is automatically registered under the model that matches the experiment’s name.
  • If no matching model exists, a new model is created with the experiment name, and the experiment is registered as its first version.
  • Therefore, you don’t need to manually register experiments in MLOps. You can use the model directly.

Update an artifact’s parent

v1.3.x:

    artifact.update(
        parent_experiment=experiment,
    )

v1.4.x:

    artifact.update(
        parent_entity=experiment,
    )

Get artifact's model-specific metadata (if applicable)

v1.3.x:

    artifact.get_model_info()

v1.4.x:

    artifact.model_info

Convert JSON artifact to a dictionary

v1.3.x:

    artifact.to_dictionary()

v1.4.x:

    artifact.to_dict()

Get the experiment associated with a model version

v1.3.x:

    model.get_experiment(model_version=n)

v1.4.x:

    model.experiment(model_version=n)

List scoring runtimes

The experiment.scoring_artifact_types property was removed in 1.4.x.

v1.3.x:

    scoring_runtimes = mlops.runtimes.scoring.list(
        artifact_type=experiment.scoring_artifact_types[correct_index]
    )

v1.4.x:

    scoring_runtimes = experiment.scoring_runtimes

    or

    scoring_runtimes = mlops.runtimes.scoring.list(
        artifact_type=..., runtime_uid=...
    )
note

When creating a deployment, instead of passing scoring_runtimes[correct_index], you can use mlops.runtimes.scoring.get(artifact_type=..., runtime_uid=...) to get the scoring_runtime, if you already know the corresponding artifact_type and runtime_uid.

Create a deployment

v1.3.x:

    project.deployments.create_single(
        name=...,
        model=...,
        scoring_runtime=...,
        security_options=options.SecurityOptions(
            passphrase=...,
            hashed_passphrase=...,
            disabled_security=...,
            oidc_token_auth=...,
        ),
    )

v1.4.x:

    workspace.deployments.create(
        name=...,
        composition_options=options.CompositionOptions(
            model=...,
            scoring_runtime=...,
        ),
        security_options=options.SecurityOptions(
            security_type=types.SecurityType.<TYPE>,
            passphrase=...,
        ),
    )
note

Starting in v1.4.x, when you create a deployment with hash-based security options, provide the passphrase directly. In earlier versions, you had to provide the hashed value instead.

Create a deployment with new model monitoring options

v1.3.x:

    project.deployments.create_single(
        ...,
        monitoring_record_options=options.MonitoringRecordOptions(
            ...,
        ),
    )

v1.4.x:

    workspace.deployments.create(
        ...,
        monitoring_options=options.MonitoringOptions(
            ...,
        ),
    )
note

This matches how deployments were created with the old monitoring system in the previous client; the change was introduced after the old monitoring was removed. Note that the parameters accepted by options.MonitoringOptions differ from those used by the old monitoring.

Wait for deployment to become healthy

The previous method is still supported.

v1.3.x:

    while not deployment.is_healthy():
        deployment.raise_for_failure()
        time.sleep(5)

v1.4.x:

    deployment.wait_for_healthy()
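If you relied on the hand-rolled loop for custom timeout or polling behavior, a generic helper keeps that control. This is a sketch of the polling pattern itself, not part of the h2o_mlops API:

```python
import time

def wait_until(predicate, timeout=300.0, interval=5.0):
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")

# Stand-in health check that becomes healthy on the third poll;
# in real code the predicate would be deployment.is_healthy.
checks = iter([False, False, True])
print(wait_until(lambda: next(checks), timeout=10.0, interval=0.01))  # True
```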

Get deployment state

v1.3.x:

    deployment.status()
    deployment.is_healthy()

v1.4.x:

    deployment.state
    deployment.is_healthy

Update a deployment

v1.3.x:

    deployment.update_security_options(
        ...,
    )
    deployment.update_kubernetes_options(
        ...,
    )
    deployment.set_environment_variables(
        environment_variables={
            "KEY1": "VALUE1",
            "KEY2": "VALUE2",
        },
    )
    deployment.update_monitoring_options(
        ...,
    )

v1.4.x:

    deployment.update(
        security_options=options.SecurityOptions(
            ...,
        ),
        kubernetes_options=options.KubernetesOptions(
            ...,
        ),
        environment_variables={
            "KEY1": "VALUE1",
            "KEY2": "VALUE2",
        },
        monitoring_options=options.MonitoringOptions(
            ...,
        ),
    )
note

In v1.4.x, you can update multiple settings at once.
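Because a single update() call accepts all option groups, conditional changes can be collected first and applied at once. A sketch of that pattern; the helper and its keyword names mirror the update() parameters shown above but are otherwise hypothetical:

```python
def collect_updates(security_options=None, kubernetes_options=None,
                    environment_variables=None, monitoring_options=None):
    """Gather only the settings that actually changed, so they can be
    applied in a single deployment.update(**kwargs) call."""
    candidates = {
        "security_options": security_options,
        "kubernetes_options": kubernetes_options,
        "environment_variables": environment_variables,
        "monitoring_options": monitoring_options,
    }
    return {key: value for key, value in candidates.items() if value is not None}

kwargs = collect_updates(environment_variables={"KEY1": "VALUE1"})
print(kwargs)  # only the one changed setting survives
```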

Access deployment scorer

v1.3.x:

    You do not need to fetch the scorer; the URLs and methods below
    are available directly on the deployment.

v1.4.x:

    scorer = deployment.scorer

    or

    scorer = workspace.deployments.scorers(
        key=value,
    )[index]

Mapping of the deployment attributes to their scorer equivalents:

    deployment.scorer_api_base_url      →  scorer.api_base_url
    deployment.url_for_capabilities     →  scorer.capabilities_endpoint
    deployment.url_for_schema           →  scorer.schema_endpoint
    deployment.url_for_sample_request   →  scorer.sample_request_endpoint
    deployment.url_for_scoring          →  scorer.scoring_endpoint
    deployment.get_capabilities(...)    →  scorer.capabilities(...)
    deployment.get_schema(...)          →  scorer.schema(...)
    deployment.get_sample_request(...)  →  scorer.sample_request(...)

Score against a deployment

The previous method is still supported if the correct scoring endpoint URL is provided.

v1.3.x:

    response = httpx.post(
        url=deployment.url_for_scoring,
        json=...,
    )

    response.json()

v1.4.x:

    scorer.score(payload=...)
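Either way, the request body must be a JSON-serializable payload. A minimal sketch of building one; the fields/rows shape and the column names here are assumptions for illustration, so check your deployment's sample request for the real schema:

```python
import json

# Hypothetical two-column model; fetch the real schema with
# scorer.sample_request() (v1.4.x) or deployment.get_sample_request() (v1.3.x).
payload = {
    "fields": ["age", "income"],
    "rows": [["34", "52000"], ["41", "61000"]],
}

# The same dict works as the json= argument to httpx.post or as
# the payload= argument to scorer.score.
encoded = json.dumps(payload)
print(encoded)
```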

Get entity creator (if applicable)

v1.3.x:

    entity.owner

v1.4.x:

    entity.creator

View the complete Table

v1.3.x:

    table

v1.4.x:

    table.show(n=...)
note

In version 1.4.x, a Table instance renders a nicely formatted view but displays only up to 50 rows by default.

