Getting started
This page helps you get started with the H2O MLOps Python client by walking through a complete quickstart workflow: connecting to H2O MLOps, deploying a model, and scoring data against the deployment.
H2O MLOps enables teams to manage the lifecycle of machine learning models, including registration, deployment, monitoring, and scoring. The Python client allows you to perform these tasks directly from your Python code.
Follow these steps to connect to your H2O MLOps environment, create a workspace, register a model, deploy it, and score data against the deployment.
Prerequisites
Before you begin, install the H2O MLOps Python client. For more information, see Installation.
Step 1: Import the required packages
import h2o_mlops
import h2o_mlops.options as options
import h2o_mlops.types as types
Step 2: Initialize the H2O MLOps client
Connect to H2O MLOps using your H2O Cloud URL, refresh token, and SSL certificate (if required):
mlops = h2o_mlops.Client(
h2o_cloud_url=<H2O_CLOUD_URL>,
refresh_token=<REFRESH_TOKEN>,
ssl_cacert="/path/to/your/ca_certificate.pem",
)
Replace the placeholders with your actual credentials and file paths. If SSL verification is not needed, you can omit ssl_cacert.
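In practice, you may prefer to read credentials from environment variables instead of hard-coding them. The following is a minimal sketch of that pattern; the variable names H2O_CLOUD_URL, H2O_REFRESH_TOKEN, and H2O_SSL_CACERT are assumptions for illustration, not part of the client:

```python
import os

def client_kwargs_from_env():
    """Collect h2o_mlops.Client keyword arguments from environment variables.

    H2O_CLOUD_URL and H2O_REFRESH_TOKEN are hypothetical variable names;
    H2O_SSL_CACERT is optional and omitted from the kwargs when unset.
    """
    kwargs = {
        "h2o_cloud_url": os.environ["H2O_CLOUD_URL"],
        "refresh_token": os.environ["H2O_REFRESH_TOKEN"],
    }
    cacert = os.environ.get("H2O_SSL_CACERT")
    if cacert:
        kwargs["ssl_cacert"] = cacert
    return kwargs

# mlops = h2o_mlops.Client(**client_kwargs_from_env())
```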
Step 3: Create a workspace
Workspaces are containers that group related models, deployments, and artifacts. Create a new one:
workspace = mlops.workspaces.create(name="my-workspace")
Step 4: Register an experiment as a model version
Register an existing experiment artifact as a model version to make it available for deployment:
model = workspace.models.register(
experiment="/path/to/my_experiment_artifact.zip",
name="my-experiment",
)
Make sure the path points to a valid model artifact on your local machine.
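A quick local check before calling register can catch a bad path early. This is a plain-Python sketch; validate_artifact is a hypothetical helper for illustration, not part of the H2O MLOps client:

```python
from pathlib import Path

def validate_artifact(path):
    """Return the path if it points to an existing .zip artifact,
    otherwise raise a descriptive error. Hypothetical helper."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(f"Model artifact not found: {p}")
    if p.suffix != ".zip":
        raise ValueError(f"Expected a .zip artifact, got: {p.suffix or 'no extension'}")
    return p

# workspace.models.register(
#     experiment=str(validate_artifact("/path/to/my_experiment_artifact.zip")),
#     name="my-experiment",
# )
```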
Step 5: Deploy the model
Deploy the registered model as an API endpoint using a supported scoring runtime:
deployment = workspace.deployments.create(
name="my-deployment",
composition_options=options.CompositionOptions(
model=model,
scoring_runtime=model.experiment().scoring_runtimes[0]
),
security_options=options.SecurityOptions(
security_type=types.SecurityType.DISABLED,
),
)
Make sure to use the index that matches the scoring runtime you want from model.experiment().scoring_runtimes.
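Hard-coded indices can silently pick the wrong runtime if the list order changes. A small sketch of selecting by label instead, under the assumption that str(runtime) yields a recognizable name (pick_runtime_index and the key function are illustrative, not part of the client):

```python
def pick_runtime_index(runtimes, preferred, key=str):
    """Return the index of the first runtime whose label matches `preferred`,
    falling back to index 0. `key` extracts a comparable label from each
    runtime object (assumption: str(runtime) is a usable label)."""
    for i, rt in enumerate(runtimes):
        if key(rt) == preferred:
            return i
    return 0

# runtimes = model.experiment().scoring_runtimes
# idx = pick_runtime_index(runtimes, "my-preferred-runtime")
# scoring_runtime = runtimes[idx]
```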
Step 6: Wait for the deployment to become healthy
Deployment may take a few seconds. Use the following to wait until it's ready:
deployment.wait_for_healthy()
Step 7: Score data against the deployment
Once the deployment is healthy, you can send data to it for scoring:
deployment.scorer.score(
payload={
"fields": [
"Origin", "Dest", "fDayofMonth", "fYear", "UniqueCarrier", "fDayOfWeek", "fMonth", "IsDepDelayed",
],
"rows": [
["text", "text", "text", "text", "text", "text", "text", "text"],
["text", "text", "text", "text", "text", "text", "text", "text"],
["text", "text", "text", "text", "text", "text", "text", "text"],
]
},
)
The output contains predictions for the provided input rows:
{'id': '2afe0ab6-db1c-4ecc-abb2-747340b3b8dc',
'fields': ['Distance'],
'score': [['713.7770420135266'],
['713.7770420135266'],
['713.7770420135266']]}
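The response pairs each output field with one value per input row, with numeric scores encoded as strings. A minimal sketch of reshaping it into per-row dictionaries of floats, assuming a response shaped like the example above (scores_to_records is an illustrative helper, not part of the client):

```python
def scores_to_records(response):
    """Convert a score response into a list of {field: float} dicts,
    one per scored input row. Assumes string-encoded numeric scores."""
    fields = response["fields"]
    return [
        {field: float(value) for field, value in zip(fields, row)}
        for row in response["score"]
    ]
```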
Explore more examples
This Getting started page covered the basics of connecting to H2O MLOps, deploying a model, and scoring data using the H2O MLOps Python client.
To learn more, see the Examples section, which includes code examples for the following operations:
- Connect to H2O MLOps
- Manage Workspaces
- Manage Experiments
- Handle artifacts
- Manage Models
- Configure deployments
- Manage deployments
- Deployment scorer
- Batch scoring
- Monitoring setup