The H2O MLOps Scoring Client is a Python client library that simplifies mini-batch scoring against an H2O MLOps scoring endpoint. This library lets you run batch scoring jobs on your local machine, a standalone server, Databricks, or a Spark 3 cluster.
The following is a basic example of how to use the H2O MLOps Scoring Client to read data from a source, score it, and write the results to a sink:
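A minimal source-to-sink sketch is shown below. The endpoint URL, ID column, and paths are placeholders, and the exact `Format`/`WriteMode` values available depend on your installed version, so verify the call against the client's API reference:

```python
import h2o_mlops_scoring_client

# Placeholder values; replace with your own deployment and data locations.
MLOPS_ENDPOINT_URL = "https://model.cloud.h2o.ai/<deployment-id>/model"
ID_COLUMN = "id"  # unique row identifier in the source data

# Read a CSV source, score it against the MLOps deployment, and
# write the scores to a Parquet sink.
h2o_mlops_scoring_client.score_source_sink(
    mlops_endpoint_url=MLOPS_ENDPOINT_URL,
    id_column=ID_COLUMN,
    source_data="/data/input.csv",
    source_format=h2o_mlops_scoring_client.Format.CSV,
    sink_location="/data/output/",
    sink_format=h2o_mlops_scoring_client.Format.PARQUET,
    sink_write_mode=h2o_mlops_scoring_client.WriteMode.OVERWRITE,
)
```

Because scoring happens against a live deployment, this snippet only runs with a reachable H2O MLOps endpoint.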
The following is a basic example of how to use the H2O MLOps Scoring Client if you want to work with pandas or Spark DataFrames directly:
scores_df = h2o_mlops_scoring_client.score_data_frame(
    mlops_endpoint_url=MLOPS_ENDPOINT_URL,
    id_column=ID_COLUMN,
    data_frame=input_df,  # a pandas or Spark DataFrame to score
)
This section describes how to install the H2O MLOps Scoring Client.
- Linux or macOS (Windows is not supported)
- Python 3.8 or later
Install from PyPI
pip install h2o-mlops-scoring-client
Frequently asked questions
When should I use the H2O MLOps Scoring Client?
Use the H2O MLOps Scoring Client when you need to perform batch scoring outside of the H2O AI Cloud platform but still want the scoring process integrated with the H2O MLOps workflow. The client keeps scoring connected to H2O MLOps projects, the model registry, and monitoring, while handling tasks such as authenticating against and connecting to a source or sink, and converting between file and data formats.
Where does scoring take place?
During batch scoring, the client splits the source data into mini-batches and sends each one to an H2O MLOps deployment for scoring. The returned scores are then collected and written out to complete the batch job.
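The mini-batch pattern the client automates can be sketched in plain pandas. Here `score_batch` is a hypothetical stand-in for the call to the deployment, and the chunk size is arbitrary; the real client manages batching and the endpoint round-trip for you:

```python
import pandas as pd

def score_batch(batch: pd.DataFrame) -> pd.DataFrame:
    # Stand-in for the H2O MLOps deployment: a real endpoint would
    # return model predictions. Here we attach a dummy score column.
    return batch.assign(score=0.5)

def score_in_mini_batches(df: pd.DataFrame, batch_size: int = 1000) -> pd.DataFrame:
    results = []
    for start in range(0, len(df), batch_size):
        batch = df.iloc[start:start + batch_size]  # one mini-batch
        results.append(score_batch(batch))          # "sent" for scoring
    # Scores come back per batch and are reassembled into one result.
    return pd.concat(results, ignore_index=True)

input_df = pd.DataFrame({"id": range(2500), "x": range(2500)})
scored = score_in_mini_batches(input_df, batch_size=1000)
```

Splitting the data this way keeps memory use bounded and lets each request to the deployment stay small.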
What sources and sinks are supported?
The H2O MLOps Scoring Client supports many sources and sinks, including:
- ADLS Gen 2
- Databases with a JDBC driver
- Local file system
What file types are supported?
The H2O MLOps Scoring Client can read and write the following file types:
- BigQuery tables
- JDBC queries
- JDBC tables
- Snowflake queries
- Snowflake tables
If there's a file type you'd like to see supported, contact email@example.com.
I want model monitoring for batch scoring. Can I do that?
Yes. The MLOps Scoring Client uses H2O MLOps scoring endpoints, which are automatically monitored.
Is a Spark installation required?
No, a Spark installation isn't required. If you're running locally and scoring local files or data frames, then no extra Spark install or configuration is needed. If you want to connect to an external source or sink, you'll need to do a small amount of configuration.
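For example, scoring against a JDBC source requires the database driver jar on the embedded Spark classpath. The sketch below is hypothetical: the `spark_config_overrides` parameter name, the `Format.JDBC_QUERY` value, and the jar path are all assumptions, so check your installed version's API reference before using them:

```python
import h2o_mlops_scoring_client

# Hypothetical sketch: parameter and enum names are assumptions.
h2o_mlops_scoring_client.score_source_sink(
    mlops_endpoint_url="https://model.cloud.h2o.ai/<deployment-id>/model",
    id_column="id",
    source_data="SELECT * FROM transactions",
    source_format=h2o_mlops_scoring_client.Format.JDBC_QUERY,
    sink_location="/data/output/",
    sink_format=h2o_mlops_scoring_client.Format.CSV,
    sink_write_mode=h2o_mlops_scoring_client.WriteMode.OVERWRITE,
    spark_config_overrides={
        # Put the JDBC driver jar on the embedded Spark classpath.
        "spark.jars": "/opt/jars/postgresql-driver.jar",
    },
)
```

This is the "small amount of configuration" mentioned above: the client still manages Spark itself; you only supply the settings the external source or sink needs.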