MLflow model support
H2O MLOps lets you upload and deploy MLflow models. The following sections describe this feature.
Supported third-party models
The following third-party Python packages are tested and supported.
| Package | Version |
|---|---|
| fastai | ~=2.7.10 |
| gluon | ~=1.1.0 |
| keras | ~=2.10.0 |
| lightgbm | ~=3.3.3 |
| mlflow | ~=1.29.0 |
| onnx | ~=1.12.0 |
| scikit-learn | ~=1.1.3 |
| statsmodels | ~=0.13.5 |
| tensorflow | ~=2.10.1 |
| torch | ~=1.12.1 |
| torchvision | ~=0.13.1 |
| xgboost | ~=1.7.1 |
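The version constraints above use pip's compatible-release specifier (`~=`). As an illustrative sketch (assuming a pip-based environment, and installing only the packages your model actually needs), you could pin your training environment to the tested versions like this:

```bash
# Install the tested versions of the packages used in the example below
pip install "mlflow~=1.29.0" "scikit-learn~=1.1.3"
```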
Create MLflow artifacts for third-party frameworks
The following is an example of how to create MLflow artifacts for third-party frameworks.
```python
import shutil

import mlflow.sklearn
import sklearn.datasets
import sklearn.ensemble
from mlflow.models import signature
from mlflow.types import Schema, ColSpec, DataType

# Train sklearn model
X_train, y_train = sklearn.datasets.load_wine(return_X_y=True, as_frame=True)
y_train = (y_train >= 7).astype(int)
sklearn_model = sklearn.ensemble.RandomForestClassifier(n_estimators=10)
sklearn_model.fit(X_train, y_train)

# Infer and set model signature
model_signature = signature.infer_signature(X_train, sklearn_model.predict(X_train))
model_signature.outputs = Schema(
    [ColSpec(name="quality", type=DataType.float)]
)

# Define the path to store the model in the current directory
model_dir_path = "wine_model"

# Save the trained sklearn model with MLflow
mlflow.sklearn.save_model(
    sklearn_model, model_dir_path, signature=model_signature
)

# Create a zip archive of the saved model
shutil.make_archive("artifact", "zip", model_dir_path)
```
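As an optional local sanity check before uploading (assuming the model was saved to `wine_model` as in the example above), you can reload the saved model with MLflow and score a few rows:

```python
import mlflow.sklearn
import sklearn.datasets

# Reload the model that mlflow.sklearn.save_model wrote to ./wine_model
loaded_model = mlflow.sklearn.load_model("wine_model")

# Score a few rows to confirm the saved artifact round-trips correctly
X, _ = sklearn.datasets.load_wine(return_X_y=True, as_frame=True)
print(loaded_model.predict(X.head()))
```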
Understanding BYOM (bring your own model)
Experiments in H2O MLOps
In MLOps, an experiment is defined as the output of a training job. Many different experiments can be rapidly created by modifying specific parameters and hyperparameters. Experiments can be imported from Driverless AI, H2O-3 open source, MLflow, or as a serialized Python file. Before being deployed, imported experiments must first be registered as a model version.
For more information, see Key terms.
Experiment metadata
Each experiment in the H2O.ai Storage can have multiple key-value pairs attached to it. These values are not interpreted by the storage itself but can be interpreted by the clients or client services that access data in the Storage.
Experiments that are added to H2O MLOps from the MLflow Model Registry include both the MLflow model name (`source_model_name`) and the MLflow version number (`source_model_version`) as part of the experiment metadata.
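For illustration only (the name and version values below are hypothetical), an experiment imported from the MLflow Model Registry would carry metadata pairs along these lines:

```json
{
  "source_model_name": "wine-quality-classifier",
  "source_model_version": "3"
}
```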
Model schema
Each model can be described by its input and output column names and their types. Knowing the model schema is essential for monitoring purposes. Currently, only models with a known schema can be deployed by MLOps.
The model schema is represented by experiment metadata attached to the experiment. The Deployer expects the model schema to be stored in the `json_value` of the `input_schema` and `output_schema` keys.
Natively supported Driverless AI MOJO2 and H2O-3 MOJO2 models are not required to contain the schema, as the schema is an integral part of the MOJO2 artifacts.
Schema format
The following example shows how model schema is formatted:
[{"name": "<column name>", "type": "<column type>"}, ... ]
Column types
The following is a list of supported column types:
- Boolean
- Time64
- Float32
- Float64
- Int32
- Int64
- String
Column type names are not case-sensitive.
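As a concrete illustration (hand-written here, not produced by MLOps), the output schema for the wine model saved earlier, whose single output column `quality` was declared with the MLflow type `float`, could be written as:

```json
[{"name": "quality", "type": "Float32"}]
```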
Artifacts in MLOps
Defining artifacts and experiment artifacts
Artifact: An arbitrary binary large object (BLOB) attached to a particular entity in the H2O.ai Storage.
Experiment Artifact: Any artifact that is attached to the experiment entity.
Artifact type
Because any entity can have multiple artifacts attached to it, a specific artifact must be identified by its type. The type is an arbitrary string; artifact types are recognized by and relevant to MLOps deployments.
The following is a list of artifact types:
- `dai/mojo_pipeline` (Natively supported, ingestion supported)
- `h2o3/mojo` (Natively supported, ingestion supported)
- `Python/MLflow` (Ingestion supported)
Deployable artifact type
A deployable artifact type is an artifact type that the Deployer knows how to process and deploy. Each deployable artifact type consists of a name, a human-readable name, and a reference to the artifact type.
Artifact processor
An artifact processor is the routine that takes the raw artifact data and transforms it into a format that is digestible by the runtime. A processor is defined by its name, the model type that it produces, and a container image reference.
An artifact processor can be any container image that can be pulled from the target deployment environment. Each processor needs to recognize and use two environment variables.
The following is a list of artifact processor environment variables:
- `SOURCE_PATH`: Path to the file containing the raw data of the artifact.
- `TARGET_PATH`: Path where the processor saves its output. This path is passed as `MODEL_PATH` to the runtime.
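As an illustrative sketch only (not an official H2O MLOps processor; it assumes the raw artifact is a zip archive like the one created earlier), a minimal processor body could look like this:

```python
import os
import shutil

# Hypothetical minimal artifact processor. It reads the two environment
# variables described above, unpacks the raw artifact (assumed here to be
# a zip archive) from SOURCE_PATH, and writes the result to TARGET_PATH,
# which the runtime later receives as MODEL_PATH.
source_path = os.environ["SOURCE_PATH"]
target_path = os.environ["TARGET_PATH"]

shutil.unpack_archive(source_path, target_path, format="zip")
```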
Model type
Model types enumerate the output types produced by artifact processors. This indirection exists because a single artifact type can contain multiple internal artifacts and can be processed in different ways, producing different outputs, each of which may be consumed by a different runtime.
A model type defines what runtime can be used for artifact deployment.