Version: v1.3.0

Deployment options

Overview

H2O Hydrogen Torch offers the following options to deploy a built model:

  1. Through the H2O Hydrogen Torch UI, you can deploy a built model and score new data directly. To learn more, see Deploy and score from within the H2O Hydrogen Torch UI.
  2. Through the H2O Hydrogen Torch UI, you can deploy a built model to H2O MLOps to later score new data (in H2O MLOps). To learn more, see Deploy from within the H2O Hydrogen Torch UI to H2O MLOps (to later score new data in H2O MLOps).
  3. You can use a model's H2O MLOps pipeline to score new data later. To learn more, see H2O MLOps pipeline.
  4. You can use a model's Python scoring pipeline to score new data later. To learn more, see Python scoring pipeline.

H2O Hydrogen Torch UI

Deploy and score from within the H2O Hydrogen Torch UI

Through the H2O Hydrogen Torch user interface (UI), you can score new data with built models (experiments) that generate downloadable predictions. To score new data through the H2O Hydrogen Torch UI, follow these instructions:

  1. In the H2O Hydrogen Torch navigation menu, click Predict data.
  2. In the Experiment box, select the built experiment you want to use to score new data.
  3. In the Prediction Name box, enter a name for your prediction.
  4. In the General, Dataset, Prediction, and Environment settings section, define the display settings.
    Note

    Display settings depend on the problem type of the selected built model (experiment). See Prediction settings to learn about the settings.

  5. Click Run predictions.
    Note
    • After running your predictions, H2O Hydrogen Torch takes you to the View predictions card, where you can view running and completed predictions. To learn more, see View a prediction.
    • To download the generated predictions through the H2O Hydrogen Torch UI, see Download a prediction.
Tutorial

Explore the following tutorial to learn how to deploy a model through the H2O Hydrogen Torch UI: Tutorial 1B: Model deployment through the H2O Hydrogen Torch UI.

Deploy from within the H2O Hydrogen Torch UI to H2O MLOps (to later score new data in H2O MLOps)

After building an H2O Hydrogen Torch model, you can deploy it to H2O MLOps through the H2O Hydrogen Torch UI. To learn more, see Deploy a model to H2O MLOps (through the H2O Hydrogen Torch UI).

H2O MLOps pipeline

H2O Hydrogen Torch allows you to download the H2O MLOps pipeline of a built model, which you can use to score new data through the H2O MLOps REST API.

Download a model's H2O MLOps pipeline

To download the H2O MLOps pipeline of a built model, follow these instructions:

  1. In the H2O Hydrogen Torch navigation menu, click View experiments.
  2. In the View experiments table, select the name of the experiment (model) whose H2O MLOps pipeline you want to download.
    Note
    • An H2O MLOps pipeline is only available for experiments with a finished status. To learn more, see List experiments.
  3. Click Download MLOps.
    Note

    The downloaded H2O MLOps pipeline contains the following files (a quick verification sketch follows this list):

    • api_pipeline.py: an example Python script demonstrating how to score new data using an MLOps API endpoint
    • model.mlflow.zip: a .zip file container (model) ready to be uploaded to H2O MLOps for deployment
    • README.txt: a README file that contains information about the other files in the folder
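
A quick way to verify the download is to list the archive's contents before uploading anything. The following is a minimal sketch; the archive name h2o_mlops_pipeline.zip is an assumption, so substitute the name of the file you actually downloaded.

    import zipfile

    # list the files shipped with the downloaded pipeline
    # (the archive name below is hypothetical; use your downloaded file's name)
    with zipfile.ZipFile("h2o_mlops_pipeline.zip") as archive:
        print(archive.namelist())
        # expected entries: api_pipeline.py, model.mlflow.zip, README.txt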

Deploy a model using its H2O MLOps pipeline

Consider the following high-level steps to deploy a model using the model's H2O MLOps pipeline:

  1. Select a built model

  2. Download the model's H2O MLOps pipeline

  3. Deploy the MLflow model to H2O MLOps

    Note

    The MLflow model comes inside the downloaded H2O MLOps pipeline (model.mlflow.zip).

  4. You can score new data by using the endpoint URL of your deployed model in H2O MLOps. The downloaded H2O MLOps pipeline includes sample code in the api_pipeline.py file; a batch-scoring sketch follows these steps.

    api_pipeline.py (sample)

    import base64
    import json

    import cv2
    import requests

    # fill in the endpoint URL from MLOps
    URL = "endpoint_url"

    # if you want to score an image, please base64 encode it and send it as string
    img = cv2.imread("image.jpg")
    input = base64.b64encode(cv2.imencode(".png", img)[1]).decode()

    # in case of a multi-channel numpy array, please json encode it and send it as string
    # import numpy as np
    # img = np.load("image.npy")
    # input = json.dumps(img.tolist())

    # # if you want to score an audio, please base64 encode it and send it as string
    # input = base64.b64encode(open("audio.ogg", "rb").read()).decode()

    # # in case of text, you can simply send the string
    # input = "This is a test message!"

    # json data to be sent to API
    data = {"fields": ["input"], "rows": [[input]]}

    # for text span prediction problem type, pass question and context texts
    # input = ["Input question", "Input context"]
    # data = {"fields": ["question", "context"], "rows": [input]}

    # post request
    r = requests.post(url=URL, json=data)

    # extracting data in json format
    ret = r.json()

    # read output, output is a dictionary
    ret = json.loads(ret["score"][0][0])

    Note

    The JSON response received from an H2O MLOps REST API call follows the same format as the .pkl files discussed on the following page: Download a prediction.

  5. Monitor requests and predictions in H2O MLOps.
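
If you need to score several inputs at once, the request can carry multiple rows. The following is a minimal sketch for an image model; it assumes the deployed endpoint accepts multiple rows per request in the same fields/rows format shown above, and the image file names are hypothetical.

    import base64
    import json

    import cv2
    import requests

    # fill in the endpoint URL from MLOps
    URL = "endpoint_url"

    # encode each image as a base64 PNG and add it as its own row
    # (file names are hypothetical)
    rows = []
    for name in ["image_1.jpg", "image_2.jpg"]:
        img = cv2.imread(name)
        rows.append([base64.b64encode(cv2.imencode(".png", img)[1]).decode()])

    # one request scores all rows
    r = requests.post(url=URL, json={"fields": ["input"], "rows": rows})

    # one encoded score comes back per input row
    scores = [json.loads(row[0]) for row in r.json()["score"]]
    print(scores)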

Tutorial

Explore the following tutorial to learn how to deploy a model using the model's H2O MLOps pipeline: Tutorial 2B: Model deployment with a model's H2O MLOps pipeline.

Python scoring pipeline

H2O Hydrogen Torch allows you to download a Python scoring pipeline of a built model that you can use to score new data in any external Python environment.

Download a model's Python scoring pipeline

To download a model's Python scoring pipeline, follow these instructions:

  1. In the H2O Hydrogen Torch navigation menu, click View experiments.
  2. In the View experiments table, select the name of the experiment (model) whose Python scoring pipeline you want to download.
    Note
    • A Python scoring pipeline is only available for experiments with a finished status. To learn more, see List experiments.
  3. Click Download scoring.
    Note

    The downloaded Python scoring pipeline contains the following files (a sketch that inspects the shipped config follows this list):

    • hydrogen_torch-*.whl: a wheel package containing the necessary H2O Hydrogen Torch framework functionality to generate predictions
    • scoring_pipeline.py: an example Python script demonstrating how to load the model and score new data
    • README.txt: a README file that contains information about the other files in the folder
    • checkpoint.pth: the checkpoint of the trained model
    • cfg.p: the internal hydrogen_torch config file
    • images: a folder containing sample images from the validation dataset
    • audios: a folder containing sample audios from the validation dataset
    • texts: a folder containing sample texts from the validation dataset
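
Before adapting the sample script, it can help to peek at the shipped config to see which problem type and data columns the pipeline expects. The following is a minimal sketch; it uses only the cfg.p file that comes with the download and the same cfg.dataset attributes the sample script below relies on.

    import dill

    # load the experiment config shipped with the pipeline
    with open("cfg.p", "rb") as pickle_file:
        cfg = dill.load(pickle_file)

    # the presence of these attributes indicates the data the model expects
    print(hasattr(cfg.dataset, "image_column"))  # True for image models
    print(hasattr(cfg.dataset, "audio_column"))  # True for audio models
    print(hasattr(cfg.dataset, "text_column"))   # True for text models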

Deploy a model using its Python scoring pipeline

Consider the following high-level steps to deploy a model using the model's Python scoring pipeline:

  1. Select a built model

  2. Download the model's Python scoring pipeline

  3. Install the H2O Hydrogen Torch wheel package in a Python 3.8 environment of your choice

    Note
    • A fresh environment is highly recommended and can be set up using pyenv or conda. For more information, see Pyenv or Managing Conda environments.
    • The H2O Hydrogen Torch Python scoring pipeline only supports Ubuntu 16.04+ with Python 3.8
      • Ensure that Python 3.8-dev is installed for Ubuntu versions that support it. To install it, run sudo apt-get install python3.8-dev
      • Update pip and setuptools: pip install --upgrade pip setuptools
      • For audio models, you need to install the following dependencies:
        • sudo apt-get install libsndfile1 ffmpeg
    • The H2O Hydrogen Torch .whl package is shipped with the downloaded Python scoring pipeline
      • To install the .whl package, run pip install hydrogen_torch-*.whl
  4. Run the scoring_pipeline.py file, which contains sample code to score new data using your trained model weights (a sketch for adapting it to custom data follows these steps)

    scoring_pipeline.py (sample)

    # Copyright (c) 2023 H2O.ai. Proprietary License - All Rights Reserved

    """scoring pipeline for models trained in H2O Hydrogen Torch."""

    import glob
    import json
    import os

    import dill
    import pandas as pd
    from torch.utils.data import DataLoader, SequentialSampler

    from hydrogen_torch.src.utils.modeling_utils import (
        load_checkpoint,
        run_python_scoring_inference,
    )

    # reading the config from the trained experiment
    with open("cfg.p", "rb") as pickle_file:
        cfg = dill.load(pickle_file)

    # changing internal cfg settings for inference, not subject to change
    cfg.prediction._calculate_test_metric = False

    # preparing an exemplary dataframe for inference by loading the bundled samples
    # this has to be altered for custom data

    # Image data -------------------------------------------------------
    if hasattr(cfg.dataset, "image_column"):
        images = []
        for image in sorted(glob.glob("images/*")):
            images.append(os.path.basename(image))

        test_df = pd.DataFrame({f"{cfg.dataset.image_column}": images})

        # set image folder
        cfg.dataset.data_folder_test = "images"
    # ------------------------------------------------------------------

    # Audio data -------------------------------------------------------
    if hasattr(cfg.dataset, "audio_column"):
        audios = []
        for audio in sorted(glob.glob("audios/*")):
            audios.append(os.path.basename(audio))

        test_df = pd.DataFrame({f"{cfg.dataset.audio_column}": audios})

        # set audio folder
        cfg.dataset.data_folder_test = "audios"
    # ------------------------------------------------------------------

    # Text data --------------------------------------------------------
    if hasattr(cfg.dataset, "text_column"):
        texts = []
        for text in sorted(glob.glob("texts/*")):
            texts.append(open(text).read())

        test_df = pd.DataFrame({f"{cfg.dataset.text_column}": texts})

    # special handling for the text span prediction problem type
    if all(
        hasattr(cfg.dataset, column) for column in ("question_column", "context_column")
    ):
        questions_and_contexts = []

        for text in sorted(glob.glob("texts/*")):
            data = json.load(open(text))

            questions_and_contexts.append(
                {
                    cfg.dataset.question_column: data["question"],
                    cfg.dataset.context_column: data["context"],
                }
            )

        test_df = pd.DataFrame.from_dict(questions_and_contexts)
    # ------------------------------------------------------------------

    # set device for inference
    cfg.environment._device = "cuda:0"

    # disable original pretrained weights for model initialization
    if hasattr(cfg.architecture, "pretrained"):
        cfg.architecture.pretrained = False

    # it is possible to specify a custom cache directory for Huggingface transformers models
    if hasattr(cfg, "transformers_cache_directory"):
        cfg.transformers_cache_directory = None

    # loading model and checkpoint
    model = cfg.architecture.model_class(cfg).eval().to(cfg.environment._device)
    cfg.architecture.pretrained_weights = "checkpoint.pth"
    load_checkpoint(cfg, model)

    # preparing torch dataset and dataloader
    # batch_size and num_workers are subject to change
    batch_size = 1 if cfg.training._single_sample_inference_batch else 16

    test_dataset = cfg.dataset.dataset_class(df=test_df, cfg=cfg, mode="test")
    test_dataloader = DataLoader(
        test_dataset,
        sampler=SequentialSampler(test_df),
        batch_size=batch_size,
        num_workers=4,
        pin_memory=True,
        collate_fn=test_dataset.get_validation_collate_fn(),
    )

    # running actual inference
    # raw_predictions is a dictionary with predictions in the raw format
    # df_predictions is a pandas DataFrame with predictions
    raw_predictions, df_predictions = run_python_scoring_inference(
        cfg=cfg, model=model, dataloader=test_dataloader
    )

    # final output
    print(df_predictions.head())
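
The sample script scores the bundled validation samples; to score your own data, point the config at your data folder and rebuild test_df before creating the dataloader. The following is a minimal sketch for an image model, assuming your images live in a folder named my_images (a hypothetical name).

    import glob
    import os

    import pandas as pd

    # point the pipeline at your own image folder (folder name is hypothetical)
    cfg.dataset.data_folder_test = "my_images"

    # rebuild the inference dataframe from the file names in that folder
    test_df = pd.DataFrame(
        {cfg.dataset.image_column: sorted(os.path.basename(p) for p in glob.glob("my_images/*"))}
    )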

Tutorial

Explore the following tutorial to learn how to deploy a model using the model's Python scoring pipeline: Tutorial 3B: Model deployment with a model's Python scoring pipeline.

