
Batch scoring

This page describes how to use the H2O MLOps Python client for batch scoring.

For more information about batch scoring and the supported source and sink types, see Batch scoring.
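The examples on this page assume that a connected client, a project, and a model are already available. The following is a minimal setup sketch; the gateway URL and token provider are placeholders, and the way you look up the project and model may differ in your environment.

import h2o_mlops

# Placeholder values: supply your own MLOps gateway URL and token provider.
client = h2o_mlops.Client(
    gateway_url="https://<mlops-gateway-url>",
    token_provider=token_provider,
)

# Illustrative lookups only; select the project and the model you want to score with.
project = client.projects.list()[0]
model = project.models.list()[0]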

Configure the input source

To list available source connectors, run:

client.batch_connectors.source_specs.list()

Use the following code to configure the input source:

source = h2o_mlops.options.BatchSourceOptions(
    spec_uid="s3",
    config={
        "region": "us-west-2",
        "accessKeyID": credentials['AccessKeyId'],
        "secretAccessKey": credentials['SecretAccessKey'],
        "sessionToken": credentials['SessionToken'],
    },
    mime_type=h2o_mlops.options.MimeTypeOptions.CSV,
    location="s3://<bucket-name>/<path-to-input-file>.csv",
)
note

Public S3 buckets are also supported as an input source. To read from a public S3 bucket, leave the access key and secret key fields empty. Public buckets are supported only for the input source, not for the output sink.
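For example, a source that reads from a public bucket could look like the following sketch. The bucket path is a placeholder, and this assumes the session token can simply be omitted when no credentials are needed:

source = h2o_mlops.options.BatchSourceOptions(
    spec_uid="s3",
    config={
        "region": "us-west-2",
        "accessKeyID": "",      # left empty for a public bucket
        "secretAccessKey": "",  # left empty for a public bucket
    },
    mime_type=h2o_mlops.options.MimeTypeOptions.CSV,
    location="s3://<public-bucket-name>/<path-to-input-file>.csv",
)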

Configure the output location

To list available sink connectors, run:

client.batch_connectors.sink_specs.list()

This command returns schema details, supported paths, and MIME types.

Set up the output location where the batch scoring results will be stored:

from datetime import datetime

output_location = "s3://<bucket-name>/<path-to-output-directory>/" + datetime.now().strftime("%Y%m%d-%H%M%S")
sink = h2o_mlops.options.BatchSinkOptions(
    spec_uid="s3",
    config={
        "region": "us-west-2",
        "accessKeyID": credentials['AccessKeyId'],
        "secretAccessKey": credentials['SecretAccessKey'],
        "sessionToken": credentials['SessionToken'],
    },
    mime_type=h2o_mlops.options.MimeTypeOptions.JSONL,
    location=output_location,
)
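If you prefer CSV output over JSONL, the same options object accepts the CSV MIME type used in the source example, provided the sink connector lists CSV among its supported MIME types:

sink = h2o_mlops.options.BatchSinkOptions(
    spec_uid="s3",
    config={
        "region": "us-west-2",
        "accessKeyID": credentials['AccessKeyId'],
        "secretAccessKey": credentials['SecretAccessKey'],
        "sessionToken": credentials['SessionToken'],
    },
    mime_type=h2o_mlops.options.MimeTypeOptions.CSV,  # write results as CSV instead of JSONL
    location=output_location,
)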

Create a batch scoring job

First, retrieve the scoring runtime for the model:

scoring_runtime = client.runtimes.scoring.list(
    artifact_type=model.get_experiment().scoring_artifact_types[0]
)[0]
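If more than one runtime is available for the artifact type, you can list them all first and then pick the one you want. This is only a rearrangement of the call above:

runtimes = client.runtimes.scoring.list(
    artifact_type=model.get_experiment().scoring_artifact_types[0]
)
runtimes  # inspect the available runtimes

scoring_runtime = runtimes[0]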

The job's resource requirements are specified with h2o_mlops.options.BatchKubernetesOptions and passed as resource_spec when you create the job, as shown below.

Create the batch scoring job:

job = project.batch_scoring_jobs.create(
    source=source,
    sink=sink,
    model=model,
    scoring_runtime=scoring_runtime,
    name="DEMO JOB",
    mini_batch_size=100,  # number of rows sent per request during batch processing
    resource_spec=h2o_mlops.options.BatchKubernetesOptions(
        replicas=2,
        min_replicas=1,
    ),
)
job

Retrieve the job ID:

job.uid

Wait for job completion

While the following command runs, you can view log output from both the scorer and the batch scoring job.

job.wait()

By default, this command will print logs while waiting. If you want to wait for job completion without printing any logs, use:

job.wait(logs=False)

List all jobs

project.batch_scoring_jobs.list()
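For example, to print the ID of every batch scoring job in the project (this assumes the returned collection can be iterated directly):

for job in project.batch_scoring_jobs.list():
    print(job.uid)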

Retrieve a job by ID

project.batch_scoring_jobs.get('<job ID>')
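The job ID is the uid value shown earlier, so you can re-fetch a job object like this:

job_id = job.uid
job = project.batch_scoring_jobs.get(job_id)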

Cancel a job

job.cancel()

By default, this command blocks until the job is fully canceled. To request cancellation without waiting for it to finish, use:

job.cancel(wait=False)

Delete a job

job.delete()
