Client Class

class h2oai_client.protocol.Client(address: str, username: Optional[str] = None, password: Optional[str] = None, verify=True, cert=None, use_tls_authentication: bool = False)

Bases: object

__init__(address: str, username: Optional[str] = None, password: Optional[str] = None, verify=True, cert=None, use_tls_authentication: bool = False) → None

Initialize self. See help(type(self)) for accurate signature.
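
A minimal connection sketch (the server address and credentials below are placeholders; the import path follows the fully qualified class name above)::

    from h2oai_client.protocol import Client

    # Connect to a running Driverless AI server; verify/cert control
    # TLS certificate checking, as in the signature above.
    h2oai = Client(
        address='http://localhost:12345',
        username='h2oai',
        password='h2oai',
    )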

abort_autoreport(key: str) → None
abort_custom_recipe_job(key: str) → None
abort_experiment(key: str) → None

Abort the experiment.

Parameters

key – The experiment’s key.

abort_interpretation(key: str) → None

Abort an MLI experiment.

Parameters

key – The interpretation key.

abort_sa_action(sa_key: str) → bool
property autoviz
build_mojo_pipeline(model_key: str) → str
build_mojo_pipeline_sync(model_key: str) → h2oai_client.messages.MojoPipeline

Build MOJO pipeline.

Parameters

model_key – Model key.

Returns

a new MojoPipeline instance.

build_scoring_pipeline(model_key: str) → str
build_scoring_pipeline_sync(model_key: str) → h2oai_client.messages.ScoringPipeline

Build scoring pipeline.

Parameters

model_key – Model key.

Returns

a new ScoringPipeline instance.
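
A minimal sketch, reusing the connected Client instance h2oai from the __init__ sketch (the model key is a placeholder)::

    # Block until the scoring pipeline build finishes.
    scoring = h2oai.build_scoring_pipeline_sync('MODEL_KEY')
    # The MOJO pipeline builder follows the same pattern.
    mojo = h2oai.build_mojo_pipeline_sync('MODEL_KEY')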

change_sa_ws(sa_key: str, action: str, target_col: str, target_row: int, value: str) → h2oai_client.messages.SaShape
check_rest_scorer_deployment_health() → bool
clear_sa_history(sa_key: str) → bool
copy_azr_blob_store_to_local(src: str, dst: str) → bool
copy_dtap_to_local(src: str, dst: str) → bool
copy_gcs_to_local(src: str, dst: str) → bool
copy_hdfs_to_local(src: str, dst: str) → bool
copy_minio_to_local(src: str, dst: str) → bool
copy_s3_to_local(src: str, dst: str) → bool
create_and_download_autoreport(model_key: str, template_path: str = '', config_overrides: str = '', dest_dir: str = '.', **kwargs)

Make and download an autoreport from a Driverless AI experiment.

Parameters
  • model_key – Model key.

  • template_path – Path to custom autoreport template, which will be uploaded and used during rendering

  • config_overrides – TOML string format with configurations overrides for AutoDoc

  • dest_dir – The directory where the AutoReport should be saved.

  • **kwargs – See below

Keyword Arguments
  • mli_key (str) –

    MLI instance key

  • autoviz_key (str) –

    Visualization key

  • individual_rows (list) –

    List of row indices for rows of interest in training dataset, for which additional information can be shown (ICE, LOCO, KLIME)

  • placeholders (dict) –

    Additional text to be added to documentation in dict format, key is the name of the placeholder in template, value is the text content to be added in place of placeholder

  • external_dataset_keys (list) –

    List of additional dataset keys, to be used for computing different statistics and generating plots.

Returns

str: the path to the saved AutoReport
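
A minimal sketch (the model key and destination directory are placeholders; optional keyword arguments such as mli_key or autoviz_key can be added as described above)::

    report_path = h2oai.create_and_download_autoreport(
        model_key='MODEL_KEY',
        dest_dir='./reports',
    )
    print(report_path)  # path to the saved AutoReport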

create_aws_lambda(model_key: str, aws_credentials: h2oai_client.messages.AwsCredentials, aws_lambda_parameters: h2oai_client.messages.AwsLambdaParameters) → str

Creates a new AWS lambda deployment for the specified model using the given AWS credentials.

create_csv_from_dataset(key: str) → str

Create a CSV version of the dataset in its folder. Returns the URL of the created file.

create_custom_recipe_from_url(url: str) → str
create_dataset(filepath: str) → str
create_dataset_from_azr_blob(filepath: str) → str
create_dataset_from_azure_blob_store_sync(filepath: str) → h2oai_client.messages.Dataset

Import a dataset from Azure Blob Storage

Parameters

filepath – A path specifying the location of the data to upload.

Returns

a new Dataset instance.

create_dataset_from_bigquery_sync(datasetid: str, dst: str, query: str) → h2oai_client.messages.Dataset

Import a dataset using a BigQuery query

Parameters
  • datasetid – Name of BQ Dataset to use for tmp tables

  • dst – destination filepath within GCS (gs://<bucket>/<file.csv>)

  • query – SQL query to pass to BQ

Returns

a new Dataset instance.
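
A minimal sketch (dataset, bucket, and query below are placeholders; h2oai is a connected Client)::

    ds = h2oai.create_dataset_from_bigquery_sync(
        datasetid='my_bq_dataset',           # BQ dataset used for tmp tables
        dst='gs://my-bucket/my_result.csv',  # destination within GCS
        query='SELECT * FROM my_table',
    )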

create_dataset_from_dtap(filepath: str) → str
create_dataset_from_dtap_sync(filepath: str) → h2oai_client.messages.Dataset

Import a dataset.

Parameters

filepath – A path specifying the location of the data to upload.

Returns

a new Dataset instance.

create_dataset_from_file(filepath: str) → str
create_dataset_from_gbq(args: h2oai_client.messages.GbqCreateDatasetArgs) → str
create_dataset_from_gcs(filepath: str) → str
create_dataset_from_gcs_sync(filepath: str) → h2oai_client.messages.Dataset

Import a dataset from Google Cloud Storage.

Parameters

filepath – A path specifying the location of the data to upload.

Returns

a new Dataset instance.

create_dataset_from_hadoop(filepath: str) → str
create_dataset_from_hadoop_sync(filepath: str) → h2oai_client.messages.Dataset

Import a dataset.

Parameters

filepath – A path specifying the location of the data to upload.

Returns

a new Dataset instance.

create_dataset_from_hive_sync(destination: str, query: str, hadoop_conf_path: str = '', auth_type: str = '', keytab_path: str = '', principal_user: str = '', db_name: str = '')

Import a dataset from Hive using a Hive Query

Parameters
  • hadoop_conf_path – (String) local path to hadoop configuration directory. Ex. /home/ubuntu/hadoop/conf

  • auth_type – (String) type of authentication to use, can be [noauth, keytab, keytabimpersonation]

  • destination – (String) name for resultant dataset, Ex. ‘my_hive_query_result’

  • query – (String) SQL hive query

  • keytab_path – Optional (String) path to keytab if using keytab authentication. Ex. /home/ubuntu/hive.keytab

  • principal_user – Optional (String) user id authorized by keytab to make queries. Ex. hive/localhost@H2O.AI

  • db_name – Optional (String) name of a database configuration in config.toml to use, e.g. {“hive_1”: {configurations for hive #1}, “hive_config_2”: {configurations for alternative hive db #2}}. db_name could be “hive_1” or “hive_config_2”; if provided, all other optional arguments are ignored and taken directly from config.toml

Returns

(Dataset) dataset object containing information regarding resultant dataset
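
A minimal sketch using a connection preconfigured in config.toml (names and query are placeholders; h2oai is a connected Client)::

    ds = h2oai.create_dataset_from_hive_sync(
        destination='my_hive_query_result',
        query='SELECT * FROM transactions',
        db_name='hive_1',  # auth arguments are then taken from config.toml
    )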

create_dataset_from_jdbc_sync(jdbc_user: str, password: str, query: str, id_column: str, destination: str, db_name: str = '', jdbc_jar: str = '', jdbc_url: str = '', jdbc_driver: str = '') → h2oai_client.messages.Dataset

Import a dataset using JDBC drivers and SQL Query

Parameters
  • jdbc_user – (String) username to authenticate query with

  • password – (String) password of user to authenticate query with

  • query – (String) SQL query

  • id_column – (String) name of id column in dataset

  • destination – (String) name for resulting dataset. ex. my_dataset or credit_fraud_data_train

  • db_name – Optional (String) name of a database configuration in config.toml to use, e.g. {“postgres”: {configurations for postgres jdbc connection}, “sqlite”: {configurations for sqlite jdbc connection}}. db_name could be “postgres” or “sqlite”; if provided, the jdbc_jar, jdbc_url, and jdbc_driver arguments are ignored and taken directly from the config.toml configuration

  • jdbc_jar – Optional (String) path to the JDBC driver jar. Used when db_name is not provided; requires jdbc_url and jdbc_driver to be provided as well

  • jdbc_url – Optional (String) JDBC connection URL. Used when db_name is not provided; requires jdbc_jar and jdbc_driver to be provided as well

  • jdbc_driver – Optional (String) classpath of the JDBC driver. Used when db_name is not provided; requires jdbc_jar and jdbc_url to be provided as well

Returns

(Dataset) dataset object containing information regarding resultant dataset
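
A minimal sketch using a connection preconfigured in config.toml (credentials and query are placeholders; h2oai is a connected Client)::

    ds = h2oai.create_dataset_from_jdbc_sync(
        jdbc_user='dbuser',
        password='dbpass',
        query='SELECT * FROM credit_fraud',
        id_column='id',
        destination='credit_fraud_data_train',
        db_name='postgres',  # jar/url/driver are then taken from config.toml
    )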

create_dataset_from_kdb(args: h2oai_client.messages.KdbCreateDatasetArgs) → str
create_dataset_from_kdb_sync(destination: str, query: str)

Import a dataset using KDB+ Query.

Parameters
  • destination – Destination on the local filesystem where the KDB+ query result will be stored

  • query – KDB query. Use standard q queries.

create_dataset_from_minio(filepath: str) → str
create_dataset_from_minio_sync(filepath: str) → h2oai_client.messages.Dataset

Import a dataset from Minio.

Parameters

filepath – A path specifying the location of the data to upload.

Returns

a new Dataset instance.

create_dataset_from_recipe(recipe_path: str) → str
create_dataset_from_s3(filepath: str) → str
create_dataset_from_s3_sync(filepath: str) → h2oai_client.messages.Dataset

Import a dataset.

Parameters

filepath – A path specifying the location of the data to upload.

Returns

a new Dataset instance.

create_dataset_from_snowflake(args: h2oai_client.messages.SnowCreateDatasetArgs) → str
create_dataset_from_snowflake_sync(region: str, database: str, warehouse: str, schema: str, role: str, optional_file_formatting: str, dst: str, query: str) → h2oai_client.messages.Dataset

Import a dataset using Snowflake Query.

Parameters
  • region – (Optional) Region where Snowflake warehouse exists

  • database – Name of Snowflake database to query

  • warehouse – Name of Snowflake warehouse to query

  • schema – Schema to use during query

  • role – (Optional) Snowflake role to be used for query

  • optional_file_formatting – (Optional) Additional arguments for formatting the output of the SQL query to a CSV file. See the Snowflake documentation for “Create File Format”

  • dst – Destination within local file system for resulting dataset

  • query – SQL query to pass to Snowflake
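
A minimal sketch (all values are placeholders; passing optional arguments as empty strings is an assumption; h2oai is a connected Client)::

    ds = h2oai.create_dataset_from_snowflake_sync(
        region='',                    # optional
        database='MY_DB',
        warehouse='MY_WH',
        schema='PUBLIC',
        role='',                      # optional
        optional_file_formatting='',  # optional
        dst='snowflake_result.csv',
        query='SELECT * FROM my_table',
    )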

create_dataset_from_spark_hive(args: h2oai_client.messages.HiveCreateDatasetArgs) → str
create_dataset_from_spark_jdbc(args: h2oai_client.messages.JdbcCreateDatasetArgs) → str
create_dataset_from_upload(filepath: str) → str
create_dataset_sync(filepath: str) → h2oai_client.messages.Dataset

Import a dataset.

Parameters

filepath – A path specifying the location of the data to upload.

Returns

a new Dataset instance.
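
A minimal sketch (the path must be visible to the Driverless AI server; h2oai is a connected Client)::

    ds = h2oai.create_dataset_sync('/data/train.csv')
    print(ds.key)  # key of the newly imported Dataset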

create_entity_permission(permission: h2oai_client.messages.Permission) → h2oai_client.messages.Permission

Grant access to an entity.

Parameters

permission – The new access permission to grant to an entity.

create_local_rest_scorer(model_key: str, local_rest_scorer_parameters: h2oai_client.messages.LocalRestScorerParameters) → str

Creates a new local REST scorer deployment for the specified model

create_local_rest_scorer_sync(model_key: str, deployment_name: str, port_number: int, max_heap_size: int = None)

Deploy a REST server locally on the Driverless AI server. NOTE: This function is primarily for testing and CI purposes.

Parameters
  • model_key – Name of model generated by experiment

  • deployment_name – Name to apply to deployment

  • port_number – Port number on which the deployment REST service will be exposed

  • max_heap_size – Maximum heap size (GB) for the REST server deployment. Used to set Xmx_g

Returns

a Deployment instance whose attributes describe the local REST scorer successfully deployed on the Driverless AI server
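
A minimal sketch (model key, name, and port are placeholders; h2oai is a connected Client)::

    deployment = h2oai.create_local_rest_scorer_sync(
        model_key='MODEL_KEY',
        deployment_name='my_scorer',
        port_number=9090,
    )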

create_project(name: str, description: str) → str
create_sa(mli_key: str) → str
delete_autoviz_job(key: str) → None
delete_dataset(key: str) → None
delete_entity_permission(permission_id: str) → None

Revoke access to an entity. An async job returning the key of a JobStatus.

Parameters

permission_id – The h2oai-storage ID of a permission to revoke.

delete_interpretation(key: str) → None
delete_model(key: str) → None
delete_model_diagnostic_job(key: str) → None
delete_project(key: str) → None
delete_storage_dataset(dataset_id: str) → None
Parameters

dataset_id – The h2oai-storage ID of a dataset to delete remotely.

delete_storage_model(model_id: str) → None
Parameters

model_id – The h2oai-storage ID of a model to delete remotely.

destroy_aws_lambda(deployment_key: str) → str

Shuts down an AWS Lambda deployment, removing it entirely from the associated AWS account. Any new deployment will result in a different endpoint URL using a different api_key.

destroy_local_rest_scorer(deployment_key: str) → str
destroy_local_rest_scorer_sync(deployment_key)

Takes down a REST server that was deployed locally on the Driverless AI server

Parameters

deployment_key – Key of the deployment, as generated by create_local_rest_scorer_sync

Returns

Job status; 0 indicates success

do_tornado_upload(filename, skip_parse=False)
download(src_path: str, dest_dir: str) → str
download_prediction(model_key: str, dataset_type: str, include_columns: List[str]) → str
Parameters
  • model_key – Model Key

  • dataset_type – Type of dataset [train/valid/test]

  • include_columns – List of columns, which should be included in predictions csv

download_prediction_sync(dest_dir: str, model_key: str, dataset_type: str, include_columns: list)

Downloads train/valid/test set predictions into a CSV file

Parameters
  • dest_dir – Destination directory, where csv will be downloaded

  • model_key – Model key for which predictions will be downloaded

  • dataset_type – Type of dataset for which predictions will be downloaded. Available options are “train”, “valid” or “test”

  • include_columns – List of columns from dataset, which will be included in predictions csv

Returns

Local path to csv
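
A minimal sketch (the model key is a placeholder; h2oai is a connected Client)::

    csv_path = h2oai.download_prediction_sync(
        dest_dir='.',
        model_key='MODEL_KEY',
        dataset_type='test',
        include_columns=[],  # no extra dataset columns requested
    )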

drop_local_rest_scorer_from_database(key: str) → None
export_dataset_to_storage(key: str, location: h2oai_client.messages.Location) → str

Export a local dataset to the h2oai-storage location. An async job returning the key of an ExportEntityJob.

Parameters

key – Key of the dataset to export.

export_model_to_storage(key: str, location: h2oai_client.messages.Location) → str

Export a local model to the h2oai-storage location. An async job returning the key of an ExportEntityJob.

Parameters

key – Key of the model to export.

filter_sa_ws(sa_key: str, row_from: int, row_to: int, expr_feature: str, expr_op: str, expr_value: str, f_expr: str) → h2oai_client.messages.SaShape

Filter the last history entry.

fit_transform_batch(model_key: str, training_dataset_key: str, validation_dataset_key: str, test_dataset_key: str, validation_split_fraction: float, seed: int, fold_column: str) → str
fit_transform_batch_sync(model_key, training_dataset_key, validation_dataset_key, test_dataset_key, validation_split_fraction, seed, fold_column) → h2oai_client.messages.Transformation

Use the model's feature engineering to transform the provided dataset and obtain the engineered features in an output CSV

Parameters
  • model_key – Key of the model to use for transformation

  • training_dataset_key – Dataset key which will be used for training

  • validation_dataset_key – Dataset key which will be used for validation

  • test_dataset_key – Dataset key which will be used for testing

  • validation_split_fraction – Split fraction used to split the training dataset when no validation dataset is provided

  • seed – Random seed for splitting

  • fold_column – Fold column used for splitting
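
A minimal sketch with no separate validation/test sets (keys are placeholders; passing empty strings for the unused dataset keys is an assumption; h2oai is a connected Client)::

    transformation = h2oai.fit_transform_batch_sync(
        model_key='MODEL_KEY',
        training_dataset_key='TRAIN_KEY',
        validation_dataset_key='',
        test_dataset_key='',
        validation_split_fraction=0.2,  # carve validation out of training data
        seed=1234,
        fold_column='',
    )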

generate_local_rest_scorer_sample_data(model_key: str) → str
get_1d_vega_plot(dataset_key: str, plot_type: str, x_variable_name: str, kwargs: Any) → str
get_2d_vega_plot(dataset_key: str, plot_type: str, x_variable_name: str, y_variable_name: str, kwargs: Any) → str
get_all_config_options() → List[h2oai_client.messages.ConfigItem]

Get metadata and current value for all exposed options

get_all_dia_parity_ui(dia_key: str, dia_variable: str, low_threshold: float, high_threshold: float, offset: int, count: int, sort_column: str, sort_order: str) → List[h2oai_client.messages.DiaNamedMatrix]
get_app_version() → h2oai_client.messages.AppVersion

Returns the application version.

Returns

The application version.

get_artifact_upload_job(key: str, artifact_path: str) → h2oai_client.messages.ArtifactsExportJob
get_autoreport_job(key: str) → h2oai_client.messages.AutoReportJob
get_autoviz(dataset_key: str, maximum_number_of_plots: int) → str
get_autoviz_job(key: str) → h2oai_client.messages.AutoVizJob
get_autoviz_summary(key: str) → h2oai_client.messages.AutoVizSummary
get_barchart(dataset_key: str, variable_name: str) → str
get_barchart_job(key: str) → h2oai_client.messages.BarchartJob
get_boxplot(dataset_key: str, variable_name: str) → str
get_boxplot_job(key: str) → h2oai_client.messages.BoxplotJob
get_column_stat(dataset_key: str, column_name: str, stat_type: str, meta: Any) → str

Gets column statistics such as mean, median, or a specific percentile

Parameters
  • stat_type – Statistics type ref. h2oaicore/imputation_utils.py:ImputationType

  • meta – Can be e.g. percentile rank

get_config_options(keys: List[str]) → List[h2oai_client.messages.ConfigItem]

Get metadata and current value for specified options

get_configurable_options() → List[h2oai_client.messages.ConfigItem]

Get all config options configurable through expert settings

get_connector_config_options(connector_type: str) → List[str]
get_connector_ui_config(connector_type: str) → h2oai_client.messages.ConnectorProperties
get_create_csv_job(key: str) → h2oai_client.messages.CreateCsvJob
get_create_deployment_job(key: str) → h2oai_client.messages.CreateDeploymentJob
get_current_user_info() → h2oai_client.messages.UserInfo
get_custom_recipe_job(key: str) → h2oai_client.messages.CustomRecipeJob
get_custom_recipes_acceptance_jobs() → List[h2oai_client.messages.CustomRecipeJob]
get_dai_feat_imp_status(importance_type: str, mli_key: str) → h2oai_client.messages.JobStatus
get_data_preview_job(key: str) → h2oai_client.messages.DataPreviewJob
get_data_recipe_preview(dataset_key: str, code: str) → str

Gets a preview of the recipe applied to a subset of the data. Returns the key of a DataPreviewJob.

Parameters
  • dataset_key – Dataset key on which recipe is run

  • code – Raw code of the recipe

get_dataset_job(key: str) → h2oai_client.messages.DatasetJob
get_dataset_split_job(key: str) → h2oai_client.messages.DatasetSplitJob
get_dataset_summary(key: str) → h2oai_client.messages.DatasetSummary
get_datasets_for_project(project_key: str, dataset_type: str) → List[h2oai_client.messages.DatasetSummary]
get_deployment(key: str) → h2oai_client.messages.Deployment
get_destroy_deployment_job(key: str) → h2oai_client.messages.DestroyDeploymentJob
get_dia(dia_key: str, dia_variable: str, dia_ref_levels: List[str], offset: int, count: int, sort_column: str, sort_order: str) → h2oai_client.messages.Dia
get_dia_avp(key: str, dia_variable: str) → h2oai_client.messages.DiaAvp
get_dia_parity_ui(dia_key: str, dia_variable: str, ref_level: str, low_threshold: float, high_threshold: float, offset: int, count: int, sort_column: str, sort_order: str) → h2oai_client.messages.DiaMatrix
get_dia_status(key: str) → h2oai_client.messages.JobStatus
get_dia_summary(key: str) → h2oai_client.messages.DiaSummary
get_diagnostic_cm_for_threshold(diagnostic_key: str, threshold: float) → str

Returns a model diagnostic job in which only argmax_cm is populated

get_disk_stats() → h2oai_client.messages.DiskStats

Returns the server’s disk usage as if called by diskinfo (systemutils)

get_dotplot(key: str, variable_name: str, digits: int) → str
get_dotplot_job(key: str) → h2oai_client.messages.DotplotJob
get_exemplar_rows(key: str, exemplar_id: int, offset: int, limit: int, variable_id: int) → h2oai_client.messages.ExemplarRowsResponse
get_experiment_preview(dataset_key: str, validset_key: str, classification: bool, dropped_cols: List[str], target_col: str, is_time_series: bool, time_col: str, enable_gpus: bool, accuracy: int, time: int, interpretability: int, config_overrides: str, reproducible: bool, resumed_experiment_id: str) → str
get_experiment_preview_job(key: str) → h2oai_client.messages.ExperimentPreviewJob
get_experiment_preview_sync(dataset_key: str, validset_key: str, classification: bool, dropped_cols: List[str], target_col: str, is_time_series: bool, time_col: str, enable_gpus: bool, accuracy: int, time: int, interpretability: int, reproducible: bool, resumed_experiment_id: str, config_overrides: str)

Get explanation text for experiment settings

Parameters
  • dataset_key (str) – Training dataset key

  • validset_key (str) – Validation dataset key if any

  • classification (bool) – Indicating whether problem is classification or regression. Pass True for classification

  • dropped_cols (list of strings) – List of column names, which won’t be used in training

  • target_col (str) – Name of the target column for training

  • is_time_series (bool) – Whether it’s a time-series problem

  • enable_gpus (bool) – Specifies whether experiment will use GPUs for training

  • accuracy (int) – Accuracy parameter value

  • time (int) – Time parameter value

  • interpretability (int) – Interpretability parameter value

  • reproducible (bool) – Set experiment to be reproducible

  • resumed_experiment_id (str) – Name of resumed experiment

  • config_overrides (str) – Raw config.toml file content (UTF8-encoded string)

Returns

List of strings describing the experiment properties

Return type

list of strings
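
A minimal sketch (keys and the column name are placeholders; h2oai is a connected Client)::

    preview = h2oai.get_experiment_preview_sync(
        dataset_key='TRAIN_KEY',
        validset_key='',
        classification=True,
        dropped_cols=[],
        target_col='target',
        is_time_series=False,
        time_col='',
        enable_gpus=True,
        accuracy=5,
        time=5,
        interpretability=5,
        reproducible=True,
        resumed_experiment_id='',
        config_overrides='',
    )
    for line in preview:  # each line describes one experiment property
        print(line)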

get_experiment_summary_for_mli_key(mli_job_key: str) → str
get_experiment_tuning_suggestion(dataset_key: str, target_col: str, is_classification: bool, is_time_series: bool, config_overrides: str, cols_to_drop: List[str]) → h2oai_client.messages.ModelParameters
get_experiments_for_project(project_key: str) → List[h2oai_client.messages.ModelSummaryWithDiagnostics]
get_experiments_stats() → h2oai_client.messages.ExperimentsStats

Returns stats about experiments

get_export_entity_job(key: str) → h2oai_client.messages.ExportEntityJob
get_frame_row_by_value(frame_name: str, feature_name: str, feature_value: str, num_rows: int, mli_job_key: str) → str
get_frame_row_offset_by_value(feature_name: str, feature_value: str, mli_job_key: str) → int
get_frame_rows(frame_name: str, row_offset: int, num_rows: int, mli_job_key: str, orig_feat_shapley: bool) → str
get_gpu_stats() → h2oai_client.messages.GPUStats

Returns GPU stats as if called by get_gpu_info_safe (systemutils)

get_grouped_boxplot(datset_key: str, variable_name: str, group_variable_name: str) → str
get_grouped_boxplot_job(key: str) → h2oai_client.messages.BoxplotJob
get_heatmap(key: str, variable_names: List[str], matrix_type: str, normalize: bool, permute: bool, missing: bool) → str
get_heatmap_job(key: str) → h2oai_client.messages.HeatMapJob
get_histogram(dataset_key: str, variable_name: str, number_of_bars: Any, transform: str) → str
get_histogram_job(key: str) → h2oai_client.messages.HistogramJob
get_hive_config(db_name: str) → h2oai_client.messages.HiveConfig
get_import_entity_job(key: str) → h2oai_client.messages.ImportEntityJob
get_importmodel_job(key: str) → h2oai_client.messages.ImportModelJob
get_individual_conditional_expectation(row_offset: int, mli_job_key: str) → str
get_interpret_timeseries_job(key: str) → h2oai_client.messages.InterpretTimeSeriesJob
get_interpret_timeseries_summary(key: str) → h2oai_client.messages.InterpretTimeSeriesSummary
get_interpretation_job(key: str) → h2oai_client.messages.InterpretationJob
get_interpretation_summary(key: str) → h2oai_client.messages.InterpretSummary
get_iteration_data(key: str) → h2oai_client.messages.AutoDLProgress
get_jdbc_config(db_name: str) → h2oai_client.messages.SparkJDBCConfig
get_json(json_name: str, job_key: str) → str
get_mli_importance(model_type: str, importance_type: str, mli_key: str, row_idx: int, code_offset: int, number_of_codes: int) → List[h2oai_client.messages.MliVarImpTable]
get_mli_nlp_status(key: str) → h2oai_client.messages.JobStatus
get_mli_nlp_tokens_status(key: str) → h2oai_client.messages.JobStatus
get_mli_variable_importance(key: str, mli_job_key: str, original: bool) → h2oai_client.messages.VarImpTable
get_model_diagnostic(model_key: str, dataset_key: str) → str

Makes a model diagnostic from a DAI model, containing the logic for creating the predictions

get_model_diagnostic_job(key: str) → h2oai_client.messages.ModelDiagnosticJob
get_model_job(key: str) → h2oai_client.messages.ModelJob
get_model_job_partial(key: str, from_iteration: int) → h2oai_client.messages.ModelJob
get_model_summary(key: str) → h2oai_client.messages.ModelSummary
get_model_summary_with_diagnostics(key: str) → h2oai_client.messages.ModelSummaryWithDiagnostics
get_model_trace(key: str, offset: int, limit: int) → h2oai_client.messages.ModelTraceEvents
get_mojo_pipeline_job(key: str) → h2oai_client.messages.MojoPipelineJob
get_multinode_stats() → h2oai_client.messages.MultinodeStats

Returns stats about multinode

get_network(dataset_key: str, matrix_type: str, normalize: bool) → str
get_network_job(key: str) → h2oai_client.messages.NetworkJob
get_original_mli_frame_rows(row_offset: int, num_rows: int, mli_job_key: str) → str
get_original_model_ice(row_offset: int, mli_job_key: str) → str
get_outliers(dataset_key: str, variable_names: List[str], alpha: float) → str
get_outliers_job(key: str) → h2oai_client.messages.OutliersJob
get_parallel_coordinates_plot(key: str, variable_names: List[str]) → str
get_parallel_coordinates_plot_job(key: str) → h2oai_client.messages.ParallelCoordinatesPlotJob
get_prediction_job(key: str) → h2oai_client.messages.PredictionJob
get_project(key: str) → h2oai_client.messages.Project
get_raw_data(key: str, offset: int, limit: int) → h2oai_client.messages.ExemplarRowsResponse
get_sa(sa_key: str, hist_entry: int, ws_features: List[str], main_chart_feature: str) → h2oai_client.messages.Sa
get_sa_create_progress(sa_key: str) → int
get_sa_dataset_summary(sa_key: str) → h2oai_client.messages.SaDatasetSummary
get_sa_history(sa_key: str) → h2oai_client.messages.SaHistory
get_sa_history_entry(sa_key: str, hist_entry: int) → h2oai_client.messages.SaHistoryItem
get_sa_main_chart_data(sa_key: str, hist_entry: int, feature: str, page_offset: int, page_size: int, aggregate: bool) → h2oai_client.messages.SaMainChartData
get_sa_predictions(sa_key: str, hist_entry: int) → h2oai_client.messages.SaWorkingSetPreds
get_sa_preds_history_chart_data(sa_key: str) → h2oai_client.messages.SaPredsHistoryChartData
get_sa_score_progress(sa_key: str, hist_entry: int) → int
get_sa_statistics(sa_key: str, hist_entry: int) → h2oai_client.messages.SaStatistics
get_sa_ws(sa_key: str, hist_entry: int, features: List[str], page_offset: int, page_size: int) → h2oai_client.messages.SaWorkingSet
get_sa_ws_summary(sa_key: str, hist_entry: int) → h2oai_client.messages.SaWorkingSetSummary
get_sa_ws_summary_for_column(sa_key: str, hist_entry: int, column: str) → h2oai_client.messages.SaFeatureMeta
get_sa_ws_summary_for_row(sa_key: str, hist_entry: int, row: int) → h2oai_client.messages.SaWorkingSetRow
get_sa_ws_summary_row(sa_key: str, hist_entry: int, features: List[str]) → h2oai_client.messages.SaWorkingSetRow
get_sas_for_mli(mli_key: str) → List[str]
get_scale(dataset_key: str, data_min: float, data_max: float) → h2oai_client.messages.H2OScale
get_scatterplot(dataset_key: str, x_variable_name: str, y_variable_name: str) → str
get_scatterplot_job(key: str) → h2oai_client.messages.ScatterPlotJob
get_scoring_pipeline_job(key: str) → h2oai_client.messages.ScoringPipelineJob
get_timeseries_split_suggestion(train_key: str, time_col: str, time_groups_columns: List[str], test_key: str, config_overrides: str) → str
get_timeseries_split_suggestion_job(key: str) → h2oai_client.messages.TimeSeriesSplitSuggestionJob
get_transformation_job(key: str) → h2oai_client.messages.TransformationJob
get_users() → List[str]
get_variable_importance(key: str) → h2oai_client.messages.VarImpTable
get_vega_plot(dataset_key: str, plot_type: str, variable_names: List[str], kwargs: Any) → str
get_vega_plot_job(key: str) → h2oai_client.messages.VegaPlotJob
get_vis_stats(dataset_key: str) → str
get_vis_stats_job(key: str) → h2oai_client.messages.VisStatsJob
have_valid_license() → h2oai_client.messages.License
import_model(filepath: str) → str
import_storage_dataset(dataset_id: str) → str

Import a dataset from h2oai-storage locally. An async job returning the key of an ImportEntityJob.

Parameters

dataset_id – The h2oai-storage ID of the dataset to import.

import_storage_model(model_id: str) → str

Import a model from h2oai-storage locally. An async job returning the key of an ImportEntityJob.

Parameters

model_id – The h2oai-storage ID of the model to import.

is_autoreport_active(key: str) → bool

Indicates whether there is an active autoreport job with the given key

is_original_model_pd_available(mli_job_key: str) → bool
is_original_shapley_available(mli_job_key: str) → bool
is_sa_enabled() → bool

Indicates whether sensitivity analysis (SA, via REST RPC) is enabled

is_valid_license_key(license_key: str) → h2oai_client.messages.License
link_dataset_to_project(project_key: str, dataset_key: str, dataset_type: str) → bool
link_experiment_to_project(project_key: str, experiment_key: str) → bool
list_allowed_file_systems(offset: int, limit: int) → List[str]
list_aws_regions(aws_credentials: h2oai_client.messages.AwsCredentials) → List[str]

List supported AWS regions.

list_azr_blob_store_buckets(offset: int, limit: int) → List[str]
list_datasets(offset: int, limit: int, include_inactive: bool) → h2oai_client.messages.ListDatasetQueryResponse
Parameters

include_inactive – Whether to include datasets in failed, cancelled or in-progress state.

list_datasets_with_similar_name(name: str) → List[str]
list_deployments(offset: int, limit: int) → List[h2oai_client.messages.Deployment]
list_entity_permissions(entity_id: str) → List[h2oai_client.messages.Permission]

List permissions of a h2oai-storage entity.

Parameters

entity_id – The h2oai-storage ID of the entity to list the permissions of.

list_experiment_artifacts(model_key: str) → h2oai_client.messages.ExperimentArtifactSummary
list_gcs_buckets(offset: int, limit: int) → List[str]
list_interpret_timeseries(offset: int, limit: int) → List[h2oai_client.messages.InterpretTimeSeriesSummary]
list_interpretations(offset: int, limit: int) → List[h2oai_client.messages.InterpretSummary]
list_keys_by_name(kind: str, display_name: str) → List[str]

List all keys of caller’s entities with the given display_name and kind. Note that display_names are not unique so this call returns a list of keys.

Parameters
  • kind – Kind of entities to be listed.

  • display_name – Display name of the entities to be listed.

list_minio_buckets(offset: int, limit: int) → List[str]
list_model_diagnostic(offset: int, limit: int) → List[h2oai_client.messages.ModelDiagnosticJob]
list_model_estimators() → List[h2oai_client.messages.ModelEstimatorWrapper]
list_model_iteration_data(key: str, offset: int, limit: int) → List[h2oai_client.messages.AutoDLProgress]
list_models(offset: int, limit: int) → h2oai_client.messages.ListModelQueryResponse
list_models_with_similar_name(name: str) → List[str]

List all model names with a display_name similar to the given name, e.g. to prevent display_name collisions.

Returns

List of similar model names

list_projects(offset: int, limit: int) → List[h2oai_client.messages.Project]
list_s3_buckets(offset: int, limit: int) → List[str]
list_scorers() → List[h2oai_client.messages.Scorer]
list_storage_datasets(offset: int, limit: int, location: h2oai_client.messages.Location) → h2oai_client.messages.ListDatasetQueryResponse

List datasets based on the h2oai-storage location.

list_storage_models(offset: int, limit: int, location: h2oai_client.messages.Location) → h2oai_client.messages.ListModelQueryResponse

List models based on the h2oai-storage location.

list_storage_projects(offset: int, limit: int) → h2oai_client.messages.ListProjectQueryResponse

List h2oai-storage projects from USER_PROJECTS root location.

list_storage_users(offset: int, limit: int) → List[h2oai_client.messages.StorageUser]

List users known to h2oai-storage.

list_transformers() → List[h2oai_client.messages.TransformerWrapper]
list_visualizations(offset: int, limit: int) → List[h2oai_client.messages.AutoVizSummary]
make_autoreport(model_key: str, mli_key: str, individual_rows: List[int], autoviz_key: str, template_path: str, placeholders: Any, external_dataset_keys: List[str], config_overrides: str) → str
make_autoreport_sync(model_key: str, template_path: str = '', config_overrides: str = '', **kwargs)

Make an autoreport from a Driverless AI experiment.

Parameters
  • model_key – Model key.

  • template_path – Path to custom autoreport template, which will be uploaded and used during rendering

  • config_overrides – TOML string format with configurations overrides for AutoDoc

  • **kwargs – See below

Keyword Arguments
  • mli_key (str) –

    MLI instance key

  • autoviz_key (str) –

    Visualization key

  • individual_rows (list) –

    List of row indices for rows of interest in training dataset, for which additional information can be shown (ICE, LOCO, KLIME)

  • placeholders (dict) –

    Additional text to be added to documentation in dict format, key is the name of the placeholder in template, value is the text content to be added in place of placeholder

  • external_dataset_keys (list) –

    List of additional dataset keys, to be used for computing different statistics and generating plots.

Returns

a new AutoReport instance.
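
A minimal sketch, rendering the report server-side without downloading it (keys are placeholders; h2oai is a connected Client)::

    autoreport = h2oai.make_autoreport_sync('MODEL_KEY', mli_key='MLI_KEY')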

make_dataset_split(dataset_key: str, output_name1: str, output_name2: str, target: str, fold_col: str, time_col: str, ratio: float, seed: int) → str
make_dataset_split_sync(dataset_key: str, output_name1: str, output_name2: str, target: str, fold_col: str, time_col: str, ratio: float, seed: int) → str
make_model_diagnostic_sync(model_key: str, dataset_key: str) → h2oai_client.messages.Dataset

Make model diagnostics from a model and dataset

Parameters
  • model_key – Model key.

  • dataset_key – Dataset key

Returns

a new ModelDiagnostic instance.
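
A minimal sketch (keys are placeholders; h2oai is a connected Client)::

    diagnostic = h2oai.make_model_diagnostic_sync('MODEL_KEY', 'TEST_DATASET_KEY')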

make_prediction(model_key: str, dataset_key: str, output_margin: bool, pred_contribs: bool, keep_non_missing_actuals: bool, include_columns: List[str]) → str
make_prediction_sync(model_key: str, dataset_key: str, output_margin: bool, pred_contribs: bool, keep_non_missing_actuals: bool = False, include_columns: list = [])

Make a prediction from a model.

Parameters
  • model_key – Model key.

  • dataset_key – Dataset key on which prediction will be made

  • output_margin – Whether to return predictions as margins (in link space)

  • pred_contribs – Whether to return prediction contributions

  • keep_non_missing_actuals

  • include_columns – List of column names, which should be included in output csv

Returns

a new Predictions instance.
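
A minimal sketch scoring a test dataset (keys are placeholders; h2oai is a connected Client)::

    preds = h2oai.make_prediction_sync(
        model_key='MODEL_KEY',
        dataset_key='TEST_DATASET_KEY',
        output_margin=False,   # plain predictions, not link-space margins
        pred_contribs=False,   # no prediction contributions
    )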

modify_dataset_by_recipe_file(key: str, recipe_path: str) → str

Returns custom recipe job key

Parameters
  • key – Dataset key

  • recipe_path – Recipe file path

modify_dataset_by_recipe_url(key: str, recipe_url: str) → str

Returns custom recipe job key

Parameters
  • key – Dataset key

  • recipe_url – Url of the recipe

perform_chunked_upload(file_path, skip_parse=False)
perform_stream_upload(file_path, skip_parse=False)
perform_upload(file_path, skip_parse=False)
pop_sa_history(sa_key: str) → bool
query_datatable(frame_name: str, query_str: str, job_key: str) → str
remove_sa_history_entry(sa_key: str, hist_entry: int) → bool
reset_sa_ws(sa_key: str) → h2oai_client.messages.SaShape
restart_deployment(deployment_key: str) → str
run_custom_recipes_acceptance_checks() → None
run_interpret_timeseries(interpret_timeseries_params: h2oai_client.messages.InterpretTimeSeriesParameters) → str
run_interpret_timeseries_sync(dai_model_key: str, **kwargs)

Run Interpretation for Time Series

Parameters
  • dai_model_key – Driverless AI Time Series Model key, which will be interpreted

  • **kwargs – See below

Keyword Arguments
  • sample_num_rows (int) –

    Number of rows to sample to generate metrics. Default -1 (All rows)

Returns

a new InterpretTimeSeries instance.

run_interpretation(interpret_params: h2oai_client.messages.InterpretParameters) → str
run_interpretation_sync(dai_model_key: str, dataset_key: str, target_col: str, **kwargs)

Run MLI.

Parameters
  • dai_model_key – Driverless AI Model key, which will be interpreted

  • dataset_key – Dataset key

  • target_col – Target column name

  • **kwargs – See below

Keyword Arguments
  • use_raw_features (bool) –

    Show interpretation based on the original columns. Default True

  • weight_col (str) –

    Weight column used by Driverless AI experiment

  • drop_cols (list) –

    List of columns not used for interpretation

  • klime_cluster_col (str) –

    Column used to split data into k-LIME clusters

  • nfolds (int) –

    Number of folds used by the surrogate models. Default 0

  • sample (bool) –

    Whether the training dataset should be sampled down for the interpretation

  • sample_num_rows (int) –

    Number of sampled rows. Default -1 (as specified by config.toml)

  • qbin_cols (list) –

    List of numeric columns to convert to quantile bins (can help fit surrogate models)

  • qbin_count (int) –

    Number of quantile bins for the quantile bin columns. Default 0

  • lime_method (str) –

    LIME method type from [‘k-LIME’, ‘LIME_SUP’]. Default ‘k-LIME’

  • dt_tree_depth (int) –

    Max depth of decision tree surrogate model. Default 3

  • config_overrides (str) –

    Driverless AI config overrides for separate experiment in TOML string format

Returns

a new Interpretation instance.
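
A minimal sketch (keys and the target column are placeholders; h2oai is a connected Client)::

    interpretation = h2oai.run_interpretation_sync(
        dai_model_key='MODEL_KEY',
        dataset_key='TRAIN_KEY',
        target_col='target',
        use_raw_features=True,  # interpret on the original columns
    )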

save_license_key(license_key: str) → h2oai_client.messages.License
score_sa(sa_key: str, hist_entry: int) → int
search_azr_blob_store_files(pattern: str) → h2oai_client.messages.FileSearchResults
search_dtap_files(pattern: str) → h2oai_client.messages.FileSearchResults
search_files(pattern: str) → h2oai_client.messages.FileSearchResults
search_gcs_files(pattern: str) → h2oai_client.messages.FileSearchResults
search_hdfs_files(pattern: str) → h2oai_client.messages.FileSearchResults
search_minio_files(pattern: str) → h2oai_client.messages.FileSearchResults
search_s3_files(pattern: str) → h2oai_client.messages.FileSearchResults
set_config_option(key: str, value: Any) → List[h2oai_client.messages.ConfigItem]

Set the value for a given option. Returns a list of settings modified by config rules application

set_config_option_dummy(key: str, value: Any, config_overrides: str) → List[h2oai_client.messages.ConfigItem]

Set the value for a given option on a local copy of the config, without touching the global config. Returns a list of settings modified by config rules application

Parameters

config_overrides – Used to initialize local config

start_echo(message: str, repeat: int) → str
start_experiment(req: h2oai_client.messages.ModelParameters, experiment_name: str) → str

Start a new experiment.

Parameters
  • req – The experiment’s parameters.

  • experiment_name – Display name of newly started experiment

Returns

The experiment’s key.

start_experiment_sync(dataset_key: str, target_col: str, is_classification: bool, accuracy: int, time: int, interpretability: int, scorer=None, score_f_name: str = None, **kwargs) → h2oai_client.messages.Model

Start an experiment.

Parameters
  • dataset_key (str) – Training dataset key

  • target_col (str) – Name of the target column

  • is_classification (bool) – True for classification problem, False for regression

  • accuracy – Accuracy setting [1-10]

  • time – Time setting [1-10]

  • interpretability – Interpretability setting [1-10]

  • scorer (str) – Same as score_f_name, kept for backwards compatibility

  • score_f_name (str) – Name of one of the available scorers. Default None (automatically decided)

  • **kwargs – See below

Keyword Arguments
  • validset_key (str) –

    Validation dataset key

  • testset_key (str) –

    Test dataset key

  • weight_col (str) –

    Weights column name

  • fold_col (str) –

    Fold column name

  • cols_to_drop (list) –

    List of columns to be dropped

  • enable_gpus (bool) –

    Allow GPU usage in experiment. Default True

  • seed (int) –

    Seed for PRNG. Default False

  • time_col (str) –

    Time column name, containing time ordering for timeseries problems

  • is_timeseries (bool) –

    Specifies whether problem is timeseries. Default False

  • time_groups_columns (list) –

    List of column names, contributing to time ordering

  • unavailable_columns_at_prediction_time (list) –

    List of column names, which won’t be present at prediction time in the testing dataset

  • time_period_in_seconds (int) –

    The length of the time period in seconds, used in timeseries problems

  • num_prediction_periods (int) –

    Timeseries forecast horizon in time period units

  • num_gap_periods (int) –

    Number of time periods after which forecast starts

  • config_overrides (str) –

    Driverless AI config overrides for separate experiment in TOML string format

  • resumed_model_key (str) –

    Experiment key, used for retraining/re-ensembling/starting from checkpoint

  • force_skip_acceptance_tests (bool) –

    Force the experiment to skip waiting for custom recipe acceptance tests to finish, which may lead to not all expected custom recipes being available

  • experiment_name (str) –

    Display name of newly started experiment

  • cols_imputation (List[ColumnImputation]) –

    List of column imputations for dataset. Ref messages::ColumnImputation

Returns

a new Model instance.
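
A minimal sketch for a classification experiment (keys, column name, and scorer are placeholders; h2oai is a connected Client)::

    model = h2oai.start_experiment_sync(
        dataset_key='TRAIN_KEY',
        target_col='target',
        is_classification=True,
        accuracy=5,
        time=5,
        interpretability=5,
        scorer='AUC',
        testset_key='TEST_KEY',            # optional keyword argument
        experiment_name='my_experiment',   # optional keyword argument
    )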

stop_echo(key: str) → None
stop_experiment(key: str) → None

Stop the experiment.

Parameters

key – The experiment’s key.

tornado_raw_producer(filename, write)
track_subsystem_event(subsystem_name: str, event_name: str) → None
type_of_mli(mli_job_key: str) → str
unlink_dataset_from_project(project_key: str, dataset_key: str, dataset_type: str) → bool
unlink_experiment_from_project(project_key: str, experiment_key: str) → bool
update_dataset_col_format(key: str, colname: str, datetime_format: str) → None
update_dataset_col_logical_types(key: str, colname: str, logical_types: List[str]) → None
update_dataset_name(key: str, new_name: str) → None
update_mli_description(key: str, new_description: str) → None
update_model_description(key: str, new_description: str) → None
update_project_name(key: str, name: str) → bool
upload_custom_recipe_sync(file_path: str) → h2oai_client.messages.CustomRecipe

Upload a custom recipe

Parameters

file_path – A path specifying the location of the python file containing custom transformer classes

Returns

CustomRecipe: contains the models, transformers, and scorers lists reflecting the newly loaded recipes

upload_dataset(file_path: str) → str

Upload a dataset

Parameters

file_path – A path specifying the location of the data to upload.

Returns

str: REST response

upload_dataset_sync(file_path)

Upload a dataset and wait for the upload to complete.

Parameters

file_path – A path specifying the location of the file to upload.

Returns

a Dataset instance.
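
A minimal sketch uploading a local file from the client machine (h2oai is a connected Client)::

    ds = h2oai.upload_dataset_sync('train.csv')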

upload_experiment_artifacts(model_key: str, user_note: str, artifact_path: str, name_override: str) → str
upload_file_sync(file_path: str)

Upload a file.

Parameters

file_path – A path specifying the location of the file to upload.

Returns

str: Absolute server-side path to the uploaded file.