
Tutorial 2B: Model deployment with a model's H2O MLOps pipeline

Overview

This tutorial explores one of the options available for deploying a built model. In particular, it builds an image regression model and shows how to deploy it to H2O MLOps using the model's H2O MLOps pipeline.

Objectives

  1. Learn how to deploy a built model to H2O MLOps using the model's H2O MLOps pipeline.
  2. Understand how to use the model's endpoint URL to score new data and receive a JSON response.

Prerequisites

Step 1: Import dataset

For this tutorial, let's utilize the demo out-of-the-box preprocessed coins_image_regression.zip dataset. The dataset contains a collection of 6,028 images with one or more coins. Each image has been labeled to indicate the sum of its coins. The currency of the coins is the Brazilian Real (R$). Let's import the dataset:

  1. In the H2O Hydrogen Torch navigation menu, click Import dataset.
  2. In the File name list, select coins_image_regression.zip.
  3. Click Continue.
  4. Click Continue.
  5. Click Continue.

Four Brazilian Real coins image called 150_1479430290.jpg

Step 2: Build model

Let's quickly build an image regression model capable of predicting the sum of Brazilian Real (R$) coins in images. After creating the model, we will use the model's H2O MLOps pipeline to generate predictions (deploy the model).

  1. In the H2O Hydrogen Torch navigation menu, click Create experiment.
  2. In the Dataset list, select coins_image_regression.
  3. In the Experiment name box, enter tutorial-2b.
  4. Click Run experiment.

Step 3: Download model's H2O MLOps pipeline

When H2O Hydrogen Torch completes the experiment (model), you can download the model's H2O MLOps pipeline to deploy to H2O MLOps. Let's download the pipeline.

  1. In the Experiments table, click tutorial-2b.
  2. Click Download MLOps.
    note

    H2O Hydrogen Torch downloads a file with the following naming convention: mlops_tutorial-2b_*.

Step 4: Deploy H2O MLOps pipeline

The downloaded H2O MLOps pipeline contains, among other items, the following files, which we will use in a moment (see the sketch after this list):

  • api_pipeline.py: an example Python script demonstrating how to score new data using an MLOps API endpoint.
  • model.mlflow.zip: a zipped model artifact (the built model) ready to be uploaded to H2O MLOps for deployment.
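
Before uploading anything, you can optionally inspect the downloaded archive to confirm it contains these files. The following is a minimal Python sketch; the archive name mlops_tutorial-2b.zip is an assumption, so adjust it to match the mlops_tutorial-2b_* file you actually downloaded.

import zipfile

# Assumption: the downloaded pipeline archive is named "mlops_tutorial-2b.zip";
# replace this with the actual mlops_tutorial-2b_* file name.
with zipfile.ZipFile("mlops_tutorial-2b.zip") as archive:
    print(archive.namelist())             # expect api_pipeline.py and model.mlflow.zip, among others
    archive.extractall("mlops_pipeline")  # extract so you can edit api_pipeline.py later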

To deploy the model to H2O MLOps, we need to upload the model.mlflow.zip file to H2O MLOps. Afterward, we will use the api_pipeline.py file to score new data.

Depending on whether you use the legacy H2O MLOps UI or the Wave H2O MLOps application, proceed with the following steps:

  1. Upload the MLflow model (model.mlflow.zip) to H2O MLOps
  2. Deploy the MLflow model in H2O MLOps
  3. Copy the model's endpoint URL
note
  • To learn about deploying models to H2O MLOps, see Deploy a model.
  • To learn about MLflow models, see MLflow.

Step 5: Score new data

After deploying the model to H2O MLOps, we can use the endpoint URL of the deployed model to score new data. For instance, using the api_pipeline.py file, let's score the following image: 150_1479430290.jpg.

Four Brazilian Real coins image called 150_1479430290.jpg

Before we can score the image, let's modify the api_pipeline.py file for our purposes.

  1. On line 8, paste the copied endpoint URL of the deployed model.
  2. On line 11, specify the path to the 150_1479430290.jpg image.
    • For example, img = cv2.imread("150_1479430290.jpg").
caution

Do not change the names of the "fields" provided in the api_pipeline.py file. Changing them will cause the API to reject the JSON data.

api_pipeline.py
import base64
import json

import cv2
import requests

# fill in the endpoint URL from MLOps
URL = "endpoint_url"

# if you want to score an image, please base64 encode it and send it as string
img = cv2.imread("image.jpg")
image = base64.b64encode(cv2.imencode(".png", img)[1].tobytes()).decode()

# in case of a multi-channel numpy array, please json encode it and send it as string
# img = np.load("image.npy")
# input = json.dumps(img.tolist())

# json data to be sent to API
data = {
    "fields": ["image_path"],  # do NOT change this line
    "rows": [[image]],
}

# post request
r = requests.post(url=URL, json=data)
if r.status_code != 200:
    raise ValueError(
        f"Error in MLOps deployment with a status code: {r.status_code}. "
        "Please check MLOps deployment logs."
    )
else:
    # extract data in json format
    ret = r.json()

    # read output, output is a dictionary
    ret = json.loads(ret["score"][0][0])
    print(f"Scoring is successful. Output keys: {ret.keys()}")

After modifying and running the api_pipeline.py file, the response is returned in JSON format, following the format of the Pickle files discussed on the following page: Download a prediction.

413 Request Entity Too Large

If you obtain a 413 Request Entity Too Large error after running the api_pipeline.py file, you need to reduce the image size before encoding it. For example:

img = cv2.imread("150_1479430290.jpg")
img = cv2.resize(img, (200, 150))  # from 640x480 to 200x150
image = base64.b64encode(cv2.imencode(".png", img)[1].tobytes()).decode()
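
If you do not know in advance how much to downscale, you can also shrink the image iteratively until the encoded payload is small enough. The sketch below is an assumption-based helper, not part of the pipeline: the 1 MB limit (MAX_PAYLOAD_BYTES) is a placeholder value, so adjust it to whatever limit your deployment actually enforces.

import base64

import cv2

# Assumption: a 1 MB payload limit; replace with the limit enforced by your deployment.
MAX_PAYLOAD_BYTES = 1_000_000

img = cv2.imread("150_1479430290.jpg")
image = base64.b64encode(cv2.imencode(".png", img)[1].tobytes()).decode()
while len(image) > MAX_PAYLOAD_BYTES:
    # halve the width and height until the base64 string fits under the limit
    img = cv2.resize(img, (img.shape[1] // 2, img.shape[0] // 2))
    image = base64.b64encode(cv2.imencode(".png", img)[1].tobytes()).decode()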

For example, scoring the 150_1479430290.jpg image, we receive the following response:

{'predictions': [[143.6985321044922]], 'labels': [['label']]}
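
If you only need the predicted value itself, you can read it directly from the parsed dictionary. This is a minimal sketch that assumes ret is the dictionary produced at the end of api_pipeline.py (as shown in the response above):

# ret is the dictionary parsed at the end of api_pipeline.py, for example:
# {'predictions': [[143.6985321044922]], 'labels': [['label']]}
predicted_sum = ret["predictions"][0][0]  # first row, first target column
print(f"Predicted sum of coins: {predicted_sum:.2f}")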

Summary

In this tutorial, we learned how to deploy a built model to H2O MLOps. In particular, we learned how to use a model's H2O MLOps pipeline to obtain an endpoint URL for scoring new data. We also learned that a call to the endpoint REST API returns a JSON response.

Next

Now that you know how to deploy a built model with its H2O MLOps scoring pipeline, consider the following tutorials to learn about the other options to deploy a built model:

