Using the config.toml File

The config.toml file uses the TOML v0.5.0 file format. Administrators can customize various aspects of a Driverless AI (DAI) environment by editing the config.toml file before starting DAI.

Note

For information on configuration security, see Configuration Security.

Configuration Override Chain

The configuration engine reads and overrides variables in the following order:

  1. Driverless AI defaults: These are stored in a Python config module.

  2. config.toml - Place this file in a folder or mount it in a Docker container and specify the path in the “DRIVERLESS_AI_CONFIG_FILE” environment variable.

  3. Keystore file - Set the keystore_file parameter in the config.toml file or the environment variable “DRIVERLESS_AI_KEYSTORE_FILE” to point to a valid DAI keystore file generated using the h2oai.keystore tool. If an environment variable is set, the value in the config.toml for keystore_file is overridden.

  4. Environment variable - Configuration variables can also be provided as environment variables. They must have the prefix DRIVERLESS_AI_ followed by the variable name in all caps. For example, “authentication_method” can be provided as “DRIVERLESS_AI_AUTHENTICATION_METHOD”. Setting environment variables overrides values from the keystore file.
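The precedence above can be sketched with a small illustrative Python helper (this is not DAI's actual config engine, just a model of the merge order, with environment variables winning last):

```python
import os

def resolve_config(defaults, toml_values, keystore_values, environ=None):
    """Merge config sources in DAI's override order:
    defaults < config.toml < keystore < environment variables."""
    environ = os.environ if environ is None else environ
    merged = dict(defaults)
    merged.update(toml_values)      # config.toml overrides defaults
    merged.update(keystore_values)  # keystore overrides config.toml
    for key in merged:
        # Environment variables use the DRIVERLESS_AI_ prefix, upper-cased.
        env_name = "DRIVERLESS_AI_" + key.upper()
        if env_name in environ:
            merged[key] = environ[env_name]
    return merged
```

For example, if config.toml sets `authentication_method = "ldap"` but `DRIVERLESS_AI_AUTHENTICATION_METHOD=local` is exported, the resolved value is `"local"`.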

The following steps describe how to copy and edit the config.toml file when running DAI in a Docker container:

  1. Copy the config.toml file from inside the Docker image to your local filesystem.

                # Make a config directory
                mkdir config

                # Copy the config.toml file to the new config directory.
                docker run --runtime=nvidia \
                  --pid=host \
                  --rm \
                  --init \
                  -u `id -u`:`id -g` \
                  -v `pwd`/config:/config \
                  --entrypoint bash \
                  h2oai/dai-ubi8-x86_64:2.0.0-cuda11.8.0.xx \
                  -c "cp /etc/dai/config.toml /config"
  2. Edit the desired variables in the config.toml file. Save your changes when you are done.

  3. Start DAI with the DRIVERLESS_AI_CONFIG_FILE environment variable. Ensure that this environment variable points to the location of the edited config.toml file so that DAI can locate the configuration file.

                docker run --runtime=nvidia \
                  --pid=host \
                  --init \
                  --rm \
                  --shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384 \
                  -u `id -u`:`id -g` \
                  -p 12345:12345 \
                  -e DRIVERLESS_AI_CONFIG_FILE="/config/config.toml" \
                  -v `pwd`/config:/config \
                  -v `pwd`/data:/data \
                  -v `pwd`/log:/log \
                  -v `pwd`/license:/license \
                  -v `pwd`/tmp:/tmp \
                  h2oai/dai-ubi8-x86_64:2.0.0-cuda11.8.0.xx
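An edited config.toml typically overrides only a few variables; anything left commented out keeps its default. The fragment below is purely illustrative (the keys appear in the sample file later on this page; the values are examples, not recommendations):

```toml
# Illustrative overrides only
max_runtime_minutes = 720   # stop feature engineering/tuning after 12 hours
make_autoreport = false     # skip AutoDoc creation at end of experiment
max_cores = 16              # cap CPU cores used by the whole system
```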

Sample config.toml File

The following is a copy of the standard config.toml file included with this version of DAI. The sections that follow provide examples showing how to set environment variables, data connectors, authentication methods, and notifications.

##############################################################################
#                        DRIVERLESS AI CONFIGURATION FILE
#
# Comments:
# This file is authored in TOML (see https://github.com/toml-lang/toml)
#
# Config Override Chain
# Configuration variables for Driverless AI can be provided in several ways,
# the config engine reads and overrides variables in the following order
#
# 1. h2oai/config/config.toml
# [internal not visible to users]
#
# 2. config.toml
# [place file in a folder/mount file in docker container and provide path
# in "DRIVERLESS_AI_CONFIG_FILE" environment variable]
#
# 3. Keystore file
# [set keystore_file parameter in config.toml, or environment variable
# "DRIVERLESS_AI_KEYSTORE_FILE" to point to a valid DAI keystore file
# generated using h2oai.keystore tool]
#
# 4. Environment variable
# [configuration variables can also be provided as environment variables
# they must have the prefix "DRIVERLESS_AI_" followed by
# variable name in caps e.g. "authentication_method" can be provided as
# "DRIVERLESS_AI_AUTHENTICATION_METHOD"]
##############################################################################

# If the experiment is not done after this many minutes, stop feature engineering and model tuning as soon as possible and proceed with building the final modeling pipeline and deployment artifacts, independent of model score convergence or pre-determined number of iterations. Only active if not in reproducible mode. Depending on the data and experiment settings, overall experiment runtime can differ significantly from this setting.
#max_runtime_minutes = 1440

# If non-zero, then set max_runtime_minutes automatically to min(max_runtime_minutes, max(min_auto_runtime_minutes, runtime estimate)) when enable_preview_time_estimate is true, so that the preview performs a best estimate of the runtime.  Set to zero to disable the runtime estimate being used to constrain the runtime of the experiment.
#min_auto_runtime_minutes = 60

# Whether to tune max_runtime_minutes based upon the final number of base models, so as to trigger the start of the final model in order to better ensure the entire experiment stops before max_runtime_minutes. Note: If the time given is short enough that tuning models are reduced below final model expectations, the final model may be shorter than expected, leading to an overall shorter experiment time.
#max_runtime_minutes_smart = true

# If the experiment is not done after this many minutes, push the abort button. Preserves experiment artifacts made so far for summary and log zip files, but no further artifacts are made.
#max_runtime_minutes_until_abort = 10080

# If reproducible is set, then the experiment and all artifacts are reproducible, however experiments may then take arbitrarily long for a given choice of dials, features, and models.
# Setting this to False allows the experiment to complete after a fixed time, with all aspects of the model and feature building reproducible and seeded, but the overall experiment behavior will not necessarily be reproducible if later iterations would have been used in final model building.
# This should be set to True if every seeded experiment of the exact same setup needs to generate the exact same final model, regardless of duration.
#strict_reproducible_for_max_runtime = true

# Uses a model built on a large number of experiments to estimate runtime.  It can be inaccurate in cases that were not trained on.
#enable_preview_time_estimate = true

# Uses a model built on a large number of experiments to estimate mojo size.  It can be inaccurate in cases that were not trained on.
#enable_preview_mojo_size_estimate = true

# Uses a model built on a large number of experiments to estimate max cpu memory.  It can be inaccurate in cases that were not trained on.
#enable_preview_cpu_memory_estimate = true

#enable_preview_time_estimate_rough = false

# If the experiment is not done by this time, push the abort button. Accepts time in the format given by time_abort_format (defaults to %Y-%m-%d %H:%M:%S), assuming a time zone set by time_abort_timezone (defaults to UTC). One can also give integer seconds since 1970-01-01 00:00:00 UTC. Applies to time on a DAI worker that runs experiments. Preserves experiment artifacts made so far for summary and log zip files, but no further artifacts are made. NOTE: If you start a new experiment with the same parameters, restart, or refit, this absolute time will apply to such experiments or set of leaderboard experiments.
#time_abort = ""

# Any format is allowed as accepted by datetime.strptime.
#time_abort_format = "%Y-%m-%d %H:%M:%S"

# Any time zone in format accepted by datetime.strptime.
#time_abort_timezone = "UTC"

# Whether to delete all directories and files matching the experiment pattern when calling do_delete_model (True),
# or whether to just delete directories (False).  False can be used to preserve experiment logs that do
# not take up much space.
#
#delete_model_dirs_and_files = true

# Whether to delete all directories and files matching the dataset pattern when calling do_delete_dataset (True),
# or whether to just delete directories (False).  False can be used to preserve dataset logs that do
# not take up much space.
#
#delete_data_dirs_and_files = true

# # Recipe type
# ## Recipes override any GUI settings
# - **'auto'**: all models and features automatically determined by experiment settings, toml settings, and feature_engineering_effort
# - **'compliant'** : like 'auto' except:
# - *interpretability=10* (to avoid complexity, overrides GUI or Python client choice for interpretability)
# - *enable_glm='on'* (rest 'off', to avoid complexity and be compatible with algorithms supported by MLI)
# - *fixed_ensemble_level=0*: Don't use any ensemble
# - *feature_brain_level=0*: No feature brain used (to ensure every restart is identical)
# - *max_feature_interaction_depth=1*: interaction depth is set to 1 (no multi-feature interactions to avoid complexity)
# - *target_transformer='identity'*: for regression (to avoid complexity)
# - *check_distribution_shift_drop='off'*: Don't use distribution shift between train, valid, and test to drop features (bit risky without fine-tuning)
# - **'monotonic_gbm'** : like 'auto' except:
# - *monotonicity_constraints_interpretability_switch=1*: enable monotonicity constraints
# - *monotonicity_constraints_correlation_threshold=0.01*: see below
# - *monotonicity_constraints_drop_low_correlation_features=true*: drop features that aren't correlated with target by at least 0.01 (specified by parameter above)
# - *fixed_ensemble_level=0*: Don't use any ensemble (to avoid complexity)
# - *included_models=['LightGBMModel']*
# - *included_transformers=['OriginalTransformer']*: only original (numeric) features will be used
# - *feature_brain_level=0*: No feature brain used (to ensure every restart is identical)
# - *monotonicity_constraints_log_level='high'*
# - *autodoc_pd_max_runtime=-1*: no timeout for PDP creation in AutoDoc
# - **'kaggle'** : like 'auto' except:
# - external validation set is concatenated with train set, with target marked as missing
# - test set is concatenated with train set, with target marked as missing
# - transformers that do not use the target are allowed to fit_transform across entire train + validation + test
# - several config toml expert options open up limits (e.g. more numerics are treated as categoricals)
# - Note: If plentiful memory, can:
# - choose kaggle mode and then change fixed_feature_interaction_depth to a large negative number,
# otherwise the number of features given to a transformer is limited to 50 by default
# - choose mutation_mode = "full", so even more types of transformations are done at once per transformer
# - **'nlp_model'**: Only enables NLP models that process pure text
# - **'nlp_transformer'**: Only enables NLP transformers that process pure text, while any model type is allowed
# - **'image_model'**: Only enables Image models that process pure images
# - **'image_transformer'**: Only enables Image transformers that process pure images, while any model type is allowed
# - **'unsupervised'**: Only enables unsupervised transformers, models and scorers
# - **'gpus_max'**: Maximize use of GPUs (e.g. use XGBoost, rapids, Optuna hyperparameter search, etc.)
# - **'more_overfit_protection'**: Potentially improve overfit, esp. for small data, by disabling target encoding and making GA behave like final model for tree counts and learning rate
# - **'feature_store_mojo'**: Creates a MOJO to be used as transformer in the H2O Feature Store, to augment data on a row-by-row level based on Driverless AI's feature engineering. Only includes transformers that don't depend on the target, since features like target encoding need to be created at model fitting time to avoid data leakage. And features like lags need to be created from the raw data; they can't be computed with a row-by-row MOJO transformer.
# Each pipeline building recipe mode can be chosen, and then fine-tuned using each expert setting.  Changing the
# pipeline building recipe will reset all pipeline building recipe options back to default and then re-apply the
# specific rules for the new mode, which will undo any fine-tuning of expert options that are part of pipeline building
# recipe rules.
# If you choose to do a new/continued/refitted/retrained experiment from a parent experiment, the recipe rules are not re-applied
# and any fine-tuning is preserved.  To reset recipe behavior, one can switch between 'auto' and the desired mode.  This
# way the new child experiment will use the default settings for the chosen recipe.
#recipe = "auto"

# Whether to treat the model like an UnsupervisedModel, so that one specifies each scorer, pretransformer, and transformer in the expert panel like one would do for supervised experiments.
# Otherwise (False), custom unsupervised models will assume the model itself specified these.
# If the unsupervised model chosen has _included_transformers, _included_pretransformers, and _included_scorers selected, this should be set to False (default), else it should be set to True.
# Then if one wants the unsupervised model to only produce 1 gene-transformer, the custom unsupervised model can have:
# _ngenes_max = 1
# _ngenes_max_by_layer = [1000, 1]
# The 1000 for the pretransformer layer just means that layer can have any number of genes.  Choose 1 if you expect a single instance of the pretransformer to be all one needs, e.g. consumes input features fully and produces complete useful output features.
#
#custom_unsupervised_expert_mode = false

# Whether to enable the genetic algorithm for selection and hyper-parameter tuning of features and models.
# - If disabled ('off'), will go directly to final pipeline training (using default feature engineering and feature selection).
# - 'auto' is same as 'on' unless pure NLP or Image experiment.
# - "Optuna": Uses DAI genetic algorithm for feature engineering, but model hyperparameters are tuned with Optuna.
# - In the Optuna case, the scores shown in the iteration panel are the best score and trial scores.
# - Optuna mode currently only uses Optuna for XGBoost, LightGBM, and CatBoost (custom recipe).
# - If Pruner is enabled, as is default, Optuna mode disables mutations of eval_metric so pruning uses the same metric across trials to compare properly.
# Currently not supported when pre_transformers or a multi-layer pipeline is used, which must go through at least one round of tuning or evolution.
#
#enable_genetic_algorithm = "auto"

# How much effort to spend on feature engineering (-1...10)
# Heuristic combination of various developer-level toml parameters
# -1  : auto (5, except 1 for wide data in order to limit engineering)
# 0   : keep only numeric features, only model tuning during evolution
# 1   : keep only numeric features and frequency-encoded categoricals, only model tuning during evolution
# 2   : Like #1 but instead just no Text features.  Some feature tuning before evolution.
# 3   : Like #5 but only tuning during evolution.  Mixed tuning of features and model parameters.
# 4   : Like #5, but slightly more focused on model tuning
# 5   : Default.  Balanced feature-model tuning
# 6-7 : Like #5, but slightly more focused on feature engineering
# 8   : Like #6-7, but even more focused on feature engineering with high feature generation rate, no feature dropping even if high interpretability
# 9-10: Like #8, but no model tuning during feature evolution
#
#feature_engineering_effort = -1

# Whether to enable train/valid and train/test distribution shift detection ('auto'/'on'/'off').
# By default, LightGBMModel is used for shift detection if possible, unless it is turned off in the model
# expert panel, and then only the models selected in the recipe list will be used.
#
#check_distribution_shift = "auto"

# Whether to enable train/test distribution shift detection ('auto'/'on'/'off') for final model transformed features.
# By default, LightGBMModel is used for shift detection if possible, unless it is turned off in the model
# expert panel, and then only the models selected in the recipe list will be used.
#
#check_distribution_shift_transformed = "auto"

# Whether to drop high-shift features ('auto'/'on'/'off').  Auto disables for time series.
#check_distribution_shift_drop = "auto"

# If distribution shift detection is enabled, drop features (except ID, text, date/datetime, time, weight) for
# which shift AUC, GINI, or Spearman correlation is above this value
# (e.g. AUC of a binary classifier that predicts whether a given feature value
# belongs to train or test data)
#
#drop_features_distribution_shift_threshold_auc = 0.999

# Specify whether to check leakage for each feature (``on`` or ``off``).
# If a fold column is used, this option checks leakage without using the fold column.
# By default, LightGBM Model is used for leakage detection when possible, unless it is
# turned off in the Model Expert Settings tab, in which case only the models selected with
# the ``included_models`` option are used. Note that this option is always disabled for time
# series experiments.
#
#check_leakage = "auto"

# If leakage detection is enabled,
# drop features for which AUC (R2 for regression), GINI,
# or Spearman correlation is above this value.
# If a fold column is present, features are not dropped,
# because the leakage test applies without the fold column used.
#
#drop_features_leakage_threshold_auc = 0.999

# Max number of rows x number of columns to trigger (stratified) sampling for leakage checks
#
#leakage_max_data_size = 10000000

# Specify the maximum number of features to use and show in importance tables.
# When Interpretability is set higher than 1,
# transformed or original features with lower importance than the top max_features_importance features are always removed.
# Feature importances of transformed or original features correspondingly will be pruned.
# Higher values can lead to lower performance and larger disk space used for datasets with more than 100k columns.
#
#max_features_importance = 100000

# Whether to create the Python scoring pipeline at the end of each experiment.
#make_python_scoring_pipeline = "auto"

# Whether to create the MOJO scoring pipeline at the end of each experiment. If set to "auto", will attempt to
# create it if possible (without dropping capabilities). If set to "on", might need to drop some models,
# transformers or custom recipes.
#
#make_mojo_scoring_pipeline = "auto"

# Whether to create a C++ MOJO based Triton scoring pipeline at the end of each experiment. If set to "auto", will attempt to
# create it if possible (without dropping capabilities). If set to "on", might need to drop some models,
# transformers or custom recipes. Requires make_mojo_scoring_pipeline != "off".
#
#make_triton_scoring_pipeline = "off"

# Whether to automatically deploy the model to the Triton inference server at the end of each experiment.
# "remote" will deploy to the remote Triton inference server at the location provided by triton_host_remote (and optionally, triton_model_repository_dir_remote).
# "off" requires manual action (Deploy wizard or Python client or manual transfer of exported Triton directory from Deploy wizard) to deploy the model to Triton.
#
#auto_deploy_triton_scoring_pipeline = "off"

# Test remote Triton deployments during creation of MOJO pipeline. Requires triton_host_remote to be configured and make_triton_scoring_pipeline to be enabled.
#triton_mini_acceptance_test_remote = true

#triton_client_timeout_testing = 300

#test_triton_when_making_mojo_pipeline_only = false

# Perform timing and accuracy benchmarks for injected MOJO scoring vs Python scoring. This is for full scoring data, and can be slow. This also requires hard asserts. Doesn't force MOJO scoring by itself, so depends on mojo_for_predictions='on' if full coverage is wanted.
#mojo_for_predictions_benchmark = true

# Fail hard if MOJO scoring is this many times slower than Python scoring.
#mojo_for_predictions_benchmark_slower_than_python_threshold = 10

# Fail hard if MOJO scoring is slower than Python scoring by a factor specified by mojo_for_predictions_benchmark_slower_than_python_threshold, but only if there are at least this many rows. To reduce false positives.
#mojo_for_predictions_benchmark_slower_than_python_min_rows = 100

# Fail hard if MOJO scoring is slower than Python scoring by a factor specified by mojo_for_predictions_benchmark_slower_than_python_threshold, but only if it takes at least this many seconds. To reduce false positives.
#mojo_for_predictions_benchmark_slower_than_python_min_seconds = 2.0

# Inject MOJO into fitted Python state if the mini acceptance test passes, so the C++ MOJO runtime can be used when calling predict(enable_mojo=True, IS_SCORER=True, ...). Prerequisite for mojo_for_predictions='on' or 'auto'.
#inject_mojo_for_predictions = true

# Use MOJO for making fast low-latency predictions after the experiment has finished (when applicable, for AutoDoc/Diagnostics/Predictions/MLI and standalone Python scoring via scorer.zip). For 'auto', only use MOJO if the number of rows is equal or below mojo_for_predictions_max_rows. For larger frames, it can be faster to use the Python backend since the libraries used are more likely already vectorized.
#mojo_for_predictions = "auto"

# For smaller datasets, the single-threaded but low latency C++ MOJO runtime can lead to significantly faster scoring times than the regular in-Driverless AI Python scoring environment. If enable_mojo=True is passed to the predict API, and the MOJO exists and is applicable, then use the MOJO runtime for datasets that have fewer or equal number of rows than this threshold. MLI/AutoDoc set enable_mojo=True by default, so this setting applies. This setting is only used if mojo_for_predictions is 'auto'.
#mojo_for_predictions_max_rows = 10000

# Batch size (in rows) for C++ MOJO predictions. Only when enable_mojo=True is passed to the predict API, and when the MOJO is applicable (e.g., fewer rows than mojo_for_predictions_max_rows). Larger values can lead to faster scoring, but use more memory.
#mojo_for_predictions_batch_size = 100

# Relative tolerance for the mini MOJO acceptance test. If the C++ MOJO differs more than this from Python, the MOJO won't be used inside Python for later scoring. Only applicable if mojo_for_predictions=True. Disabled if <= 0.
#mojo_acceptance_test_rtol = 0.0

# Absolute tolerance for the mini MOJO acceptance test (for regression/Shapley, will be scaled by max(abs(preds))). If the C++ MOJO differs more than this from Python, the MOJO won't be used inside Python for later scoring. Only applicable if mojo_for_predictions=True. Disabled if <= 0.
#mojo_acceptance_test_atol = 0.0

# Whether to attempt to reduce the size of the MOJO scoring pipeline. A smaller MOJO will also lead to
# less memory footprint during scoring. It is achieved by reducing some other settings like interaction depth, and
# hence can affect the predictive accuracy of the model.
#
#reduce_mojo_size = false

# Whether to create the pipeline visualization at the end of each experiment.
# Uses MOJO to show pipeline, input features, transformers, model, and outputs of model.  MOJO-capable tree models show the first tree.
#make_pipeline_visualization = "auto"

# Whether to create the Python pipeline visualization at the end of each experiment.
# Each feature and transformer includes a variable importance at the end in brackets.
# Only done when forced on, and artifacts as png files will appear in the summary zip.
# Each experiment has files per individual in the final population:
# 1) preprune_False_0.0 : Before final pruning, without any additional variable importance threshold pruning
# 2) preprune_True_0.0 : Before final pruning, with additional variable importance <=0.0 pruning
# 3) postprune_False_0.0 : After final pruning, without any additional variable importance threshold pruning
# 4) postprune_True_0.0 : After final pruning, with additional variable importance <=0.0 pruning
# 5) posttournament_False_0.0 : After final pruning and tournament, without any additional variable importance threshold pruning
# 6) posttournament_True_0.0 : After final pruning and tournament, with additional variable importance <=0.0 pruning
# 1-5 are done with 'on' while 'auto' only does 6, corresponding to the final post-pruned individuals.
# Even post pruning, some features have zero importance, because only those genes that have value+variance in
# variable importance of value=0.0 get pruned.  GA can have many folds with positive variance
# for a gene, and those are not removed in case they are useful features for the final model.
# If the small mojo option is chosen (reduce_mojo_size True), then the variance of feature gain is ignored
# for which genes and features are pruned as well as for what appears in the graph.
#
#make_python_pipeline_visualization = "auto"

# Whether to create the experiment AutoDoc after the end of the experiment.
#
#make_autoreport = true

#max_cols_make_autoreport_automatically = 1000

#max_cols_make_pipeline_visualization_automatically = 5000

# Pass environment variables from the running Driverless AI instance to the Python scoring pipeline for
# deprecated models, when they are used to make predictions. Use with caution.
# If config.toml overrides are set by env vars, and they differ from what the experiment's env
# looked like when it was trained, then unexpected consequences can occur. Enable this only to
# override certain well-controlled settings like the port for the H2O-3 custom recipe server.
#
#pass_env_to_deprecated_python_scoring = false

#transformer_description_line_length = -1

# Whether to measure the MOJO scoring latency at the time of MOJO creation.
#benchmark_mojo_latency = "auto"

# Max size of pipeline.mojo file (in MB) for automatic mode of MOJO scoring latency measurement
#benchmark_mojo_latency_auto_size_limit = 2048

# If MOJO creation times out at the end of the experiment, one can still make the MOJO from the GUI or from the R/Py clients (the timeout doesn't apply there).
#mojo_building_timeout = 1800.0

# If MOJO visualization creation times out at the end of the experiment, the MOJO is still created if possible within the time limit specified by mojo_building_timeout.
#mojo_vis_building_timeout = 600.0

# If MOJO creation is too slow, increase this value. Higher values can finish faster, but use more memory.
# If MOJO creation fails due to an out-of-memory error, reduce this value to 1.
# Set to -1 for all physical cores.
#
#mojo_building_parallelism = -1

# Size in bytes that all pickled and compressed base models have to satisfy to use parallel MOJO building.
# For large base models, parallel MOJO building can use too much memory.
# Only used if final_fitted_model_per_model_fold_files is true.
#
#mojo_building_parallelism_base_model_size_limit = 100000000

# Whether to show model and pipeline sizes in logs.
# If 'auto', then not done if more than 10 base models+folds, because size is likely not a concern.
#show_pipeline_sizes = "auto"

# safe: assume might be running another experiment on same node
# moderate: assume not running any other experiments or tasks on same node, but still only use physical core count
# max: assume not running anything else on node at all except the experiment
# If multinode is enabled, this option has no effect, unless worker_remote_processors=1 when it will still be applied.
# Each exclusive mode can be chosen, and then fine-tuned using each expert setting.  Changing the
# exclusive mode will reset all exclusive mode related options back to default and then re-apply the
# specific rules for the new mode, which will undo any fine-tuning of expert options that are part of exclusive mode rules.
# If you choose to do a new/continued/refitted/retrained experiment from a parent experiment, the mode rules are not re-applied
# and any fine-tuning is preserved.  To reset mode behavior, one can switch between 'safe' and the desired mode.  This
# way the new child experiment will use the default system resources for the chosen mode.
#
#exclusive_mode = "safe"

# Maximum number of workers for the Driverless AI server pool (only 1 needed currently)
#max_workers = 1

# Max number of CPU cores to use for the whole system. Set to <= 0 to use all (physical) cores.
# If the number of ``worker_remote_processors`` is set to a value >= 3, the number of cores will be reduced
# by the ratio (``worker_remote_processors_max_threads_reduction_factor`` * ``worker_remote_processors``)
# to avoid overloading the system when too many remote tasks are processed at once.
# One can also set environment variable 'OMP_NUM_THREADS' to the number of cores to use for OpenMP
# (e.g., in bash: 'export OMP_NUM_THREADS=32' and 'export OPENBLAS_NUM_THREADS=32').
#
#max_cores = 0

# Max number of CPU cores to use across all DAI experiments and tasks.
# -1 is all available; with stall_subprocess_submission_dai_fork_threshold_count=0 this means restricted to core count.
#
#max_cores_dai = -1

 377# Number of virtual cores per physical core (0: auto mode, >=1 use that integer value).  If >=1, the reported physical cores in logs will match the virtual cores divided by this value.
 378#virtual_cores_per_physical_core = 0
 379
 380# Mininum number of virtual cores per physical core. Only applies if virtual cores != physical cores. Can help situations like Intel i9 13900 with 24 physical cores and only 32 virtual cores. So better to limit physical cores to 16.
 381#min_virtual_cores_per_physical_core_if_unequal = 2
 382
 383# Number of physical cores to assume are present (0: auto, >=1 use that integer value).
 384# If for some reason DAI does not automatically figure out physical cores correctly,
 385# one can override with this value.  Some systems, especially virtualized, do not always provide
 386# correct information about the virtual cores, physical cores, sockets, etc.
 387#override_physical_cores = 0
 388
 389# Number of virtual cores to assume are present (0: auto, >=1 use that integer value).
 390# If for some reason DAI does not automatically figure out virtual cores correctly,
 391# or only a portion of the system is to be used, one can override with this value.
 392# Some systems, especially virtualized, do not always provide
 393# correct information about the virtual cores, physical cores, sockets, etc.
 394#override_virtual_cores = 0
 395
 396# Whether to treat data as small recipe in terms of work, by spreading many small tasks across many cores instead of forcing GPUs, for models that support it via static var _use_single_core_if_many.  'auto' looks at _use_single_core_if_many for models and data size, 'on' forces, 'off' disables.
 397#small_data_recipe_work = "auto"
 398
 399# Stall submission of tasks if total DAI fork count exceeds count (-1 to disable, 0 for automatic of max_cores_dai)
 400#stall_subprocess_submission_dai_fork_threshold_count = 0
 401
 402# Stall submission of tasks if system memory available is less than this threshold in percent (set to 0 to disable).
 403# As available memory approaches this threshold, the number of workers in any pool of workers is linearly reduced down to 1.
 404# 
 405#stall_subprocess_submission_mem_threshold_pct = 2
 406
 407# Whether to set automatic number of cores by physical (True) or logical (False) count.
 408# Using all logical cores can lead to poor performance due to cache thrashing.
 409# 
 410#max_cores_by_physical = true
 411
 412# Absolute limit to core count
 413#max_cores_limit = 200
 414
 415# Control maximum number of cores to use for a model's fit call (0 = all physical cores, >= 1: use that count).  See also tensorflow_model_max_cores to further limit TensorFlow main models.
 416#max_fit_cores = 10
 417
 418# Control maximum number of cores to use for a scoring across all chosen scorers (0 = auto)
 419#parallel_score_max_workers = 0
 420
 421# Whether to use full multinode distributed cluster (True) or single-node dask (False).
 422# In some cases, using entire cluster can be inefficient.  E.g. several DGX nodes can be more efficient
 423# if used one DGX at a time for medium-sized data.
 424# 
 425#use_dask_cluster = true
 426
 427# Control maximum number of cores to use for a model's predict call (0 = all physical cores, >= 1: use that count)
 428#max_predict_cores = 0
 429
 430# Factor by which to reduce physical cores, to use for post-model experiment tasks like autoreport, MLI, etc.
 431#max_predict_cores_in_dai_reduce_factor = 4
 432
 433# Maximum number of cores to use for post-model experiment tasks like autoreport, MLI, etc.
 434#max_max_predict_cores_in_dai = 10
 435
 436# Control maximum number of cores to use for a model's transform and predict call when doing operations inside DAI-MLI GUI and R/Py client.
 437# The main experiment and other tasks like MLI and autoreport have separate queues.  The main experiments have run at most worker_remote_processors tasks (limited by cores if auto mode),
 438# while other tasks run at most worker_local_processors (limited by cores if auto mode) tasks at the same time,
 439# so many small tasks can add up.  To prevent overloading the system, the defaults are conservative.  However, if most of the activity involves autoreport or MLI, and no model experiments
 440# are running, it may be safe to increase this value to something larger than 4.
 441# -1  : Auto mode.  Up to physical cores divided by 4, up to a maximum of 10.
 442# 0   : all physical cores.
 443# >= 1: use that count.
 444# 
 445#max_predict_cores_in_dai = -1
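    #
    # Example (hypothetical): on a server used mostly for MLI/autoreport with no
    # model experiments running, allow more cores per transform/predict call:
    # max_predict_cores_in_dai = 8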
 446
 447# Control number of workers used in CPU mode for tuning (0 = socket count, -1 = all physical cores, >= 1: use that count).  More workers are more parallel, but models learn less from each other.
 448#batch_cpu_tuning_max_workers = 0
 449
 450# Control number of workers used in CPU mode for training (0 = socket count, -1 = all physical cores, >= 1: use that count)
 451#cpu_max_workers = 0
 452
 453# Expected maximum number of forks, used to ensure datatable doesn't overload the system. If actual use exceeds this value, the system will start to slow down.
 454#assumed_simultaneous_dt_forks_munging = 3
 455
 456# Expected maximum number of forks by computing statistics during ingestion, used to ensure datatable doesn't overload system
 457#assumed_simultaneous_dt_forks_stats_openblas = 1
 458
 459# Maximum number of threads for datatable for munging
 460#max_max_dt_threads_munging = 4
 461
 462# Maximum number of threads for datatable stats and OpenBLAS, no matter how many more cores are present
 463#max_max_dt_threads_stats_openblas = 8
 464
 465# Maximum number of threads for datatable for reading/writing files
 466#max_max_dt_threads_readwrite = 4
 467
 468# Maximum parallel workers for final model building.
 469# 0 means automatic, >=1 means limit to no more than that number of parallel jobs.
 470# Can be required if some transformer or model uses more than the expected amount of memory.
 471# Ways to reduce final model building memory usage, e.g. set one or more of these and retrain final model:
 472# 1) Increase munging_memory_overhead_factor to 10
 473# 2) Increase final_munging_memory_reduction_factor to 10
 474# 3) Lower max_workers_final_munging to 1
 475# 4) Lower max_workers_final_base_models to 1
 476# 5) Lower max_cores to, e.g., 1/2 or 1/4 of physical cores.
 477#max_workers_final_base_models = 0
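    #
    # Example (hypothetical): settings to retry a final model that ran out of
    # memory, following the list above:
    # munging_memory_overhead_factor = 10
    # final_munging_memory_reduction_factor = 10
    # max_workers_final_munging = 1
    # max_workers_final_base_models = 1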
 478
 479# Maximum parallel workers for final per-model munging.
 480# 0 means automatic, >=1 means limit to no more than that number of parallel jobs.
 481# Can be required if some transformer uses more than the expected amount of memory.
 482#max_workers_final_munging = 0
 483
 484# Minimum number of threads for datatable (and OpenMP) during data munging (per process).
 485# datatable is the main data munging tool used within Driverless AI (source:
 486# https://github.com/h2oai/datatable)
 487# 
 488#min_dt_threads_munging = 1
 489
 490# Like min_dt_threads_munging, but for final pipeline munging.
 491#min_dt_threads_final_munging = 1
 492
 493# Maximum number of threads for datatable during data munging (per process) (0 = all, -1 = auto).
 494# If multiple forks, threads are distributed across forks.
 495#max_dt_threads_munging = -1
 496
 497# Maximum number of threads for datatable during data reading and writing (per process) (0 = all, -1 = auto).
 498# If multiple forks, threads are distributed across forks.
 499#max_dt_threads_readwrite = -1
 500
 501# Maximum number of threads for datatable stats and openblas (per process) (0 = all, -1 = auto).
 502# If multiple forks, threads are distributed across forks.
 503#max_dt_threads_stats_openblas = -1
 504
 505# Maximum number of threads for datatable during time-series properties preview panel computations.
 506#max_dt_threads_do_timeseries_split_suggestion = 1
 507
 508# Number of GPUs to use per experiment for training task.  Set to -1 for all GPUs.
 509# An experiment will generate many different models.
 510# Currently, num_gpus_per_experiment != -1 disables GPU locking, so it is only recommended for
 511# single experiments and single users.
 512# Ignored if GPUs disabled or no GPUs on system.
 513# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
 514# In multinode context when using dask, this refers to the per-node value.
 515# For ImageAutoModel, this refers to the total number of GPUs used for that entire model type,
 516# since there is only one model type for the entire experiment.
 517# E.g., if there are 4 GPUs and 2 ImageAuto experiments should run on 2 GPUs each,
 518# set num_gpus_per_experiment to 2 for each experiment; the 2 experiments will then
 519# share the 4 GPUs, each using 2 GPUs only.
 520# 
 521#num_gpus_per_experiment = -1
 522
 523# Number of CPU cores per GPU. Limits number of GPUs in order to have sufficient cores per GPU.
 524# Set to -1 to disable, -2 for auto mode.
 525# In auto mode, if lightgbm_use_gpu is 'auto' or 'off', then min_num_cores_per_gpu=1, else min_num_cores_per_gpu=2, due to lightgbm requiring more cores even when using GPUs.
 526#min_num_cores_per_gpu = -2
 527
 528# Number of GPUs to use per model training task.  Set to -1 for all GPUs.
 529# For example, when this is set to -1 and there are 4 GPUs available, all of them can be used for the training of a single model.
 530# Only applicable currently to image auto pipeline building recipe or Dask models with more than one GPU or more than one node.
 531# Ignored if GPUs disabled or no GPUs on system.
 532# For ImageAutoModel, the maximum of num_gpus_per_model and num_gpus_per_experiment (all GPUs if -1) is taken.
 533# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
 534# In multinode context when using Dask, this refers to the per-node value.
 535# 
 536#num_gpus_per_model = 1
 537
 538# Number of GPUs to use for predict for models and transform for transformers when running outside of fit/fit_transform.
 539# -1 means all, 0 means no GPUs, >1 means that many GPUs up to visible limit.
 540# If predict/transform are called in same process as fit/fit_transform, number of GPUs will match,
 541# while new processes will use this count for number of GPUs for applicable models/transformers.
 542# Exception: TensorFlow and PyTorch models/transformers, and RAPIDS, always predict on GPU if GPUs exist.
 543# RAPIDS requires python scoring package be used also on GPUs.
 544# In multinode context when using Dask, this refers to the per-node value.
 545# 
 546#num_gpus_for_prediction = 0
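    #
    # Example (hypothetical): allow prediction/transform outside of fit to use
    # one GPU per node:
    # num_gpus_for_prediction = 1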
 547
 548# Which gpu_id to start with
 549# -1 : auto-mode.  E.g. 2 experiments can each set num_gpus_per_experiment to 2 and use 4 GPUs
 550# If using CUDA_VISIBLE_DEVICES=... to control GPUs (preferred method), gpu_id=0 is the
 551# first in that restricted list of devices.
 552# E.g. if CUDA_VISIBLE_DEVICES='4,5' then gpu_id_start=0 will refer to the
 553# device #4.
 554# E.g. from expert mode, to run 2 experiments, each on a distinct GPU out of 2 GPUs:
 555# Experiment#1: num_gpus_per_model=1, num_gpus_per_experiment=1, gpu_id_start=0
 556# Experiment#2: num_gpus_per_model=1, num_gpus_per_experiment=1, gpu_id_start=1
 557# E.g. from expert mode, to run 2 experiments, each on a distinct GPU out of 8 GPUs:
 558# Experiment#1: num_gpus_per_model=1, num_gpus_per_experiment=4, gpu_id_start=0
 559# Experiment#2: num_gpus_per_model=1, num_gpus_per_experiment=4, gpu_id_start=4
 560# E.g. Like just above, but now run on all 4 GPUs/model
 561# Experiment#1: num_gpus_per_model=4, num_gpus_per_experiment=4, gpu_id_start=0
 562# Experiment#2: num_gpus_per_model=4, num_gpus_per_experiment=4, gpu_id_start=4
 563# If num_gpus_per_model!=1, global GPU locking is disabled
 564# (because underlying algorithms don't support arbitrary gpu ids, only sequential ids),
 565# so the settings above must be configured correctly to avoid overlap across all experiments by all users
 566# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
 567# Note that GPU selection does not wrap, so gpu_id_start + num_gpus_per_model must be less than the number of visible GPUs
 568# 
 569#gpu_id_start = -1
 570
 571# Whether to reduce features until model does not fail.
 572# Currently for non-dask XGBoost models (i.e. GLMModel, XGBoostGBMModel, XGBoostDartModel, XGBoostRFModel),
 573# during normal fit or when using Optuna.
 574# Primarily useful for GPU OOM.
 575# If XGBoost runs out of GPU memory, this is detected, and
 576# (regardless of setting of skip_model_failures),
 577# we perform feature selection using XGBoost on subsets of features.
 578# The dataset is progressively reduced by factor of 2 with more models to cover all features.
 579# This splitting continues until no failure occurs.
 580# Then all sub-models are used to estimate variable importance by absolute information gain,
 581# in order to decide which features to include.
 582# Finally, a single model with the most important features
 583# is built using the feature count that did not lead to OOM.
 584# For 'auto', this option is set to 'off' when reproducible experiment is enabled,
 585# because the condition of running OOM can change for same experiment seed.
 586# Reduction is only done on features and not on rows for the feature selection step.
 587# 
 588#allow_reduce_features_when_failure = "auto"
 589
 590# With allow_reduce_features_when_failure, this controls how many repeats of sub-models
 591# are used for feature selection.  A single repeat has each sub-model
 592# consider only a single subset of features, while more repeats shuffle which
 593# features are considered, allowing more chance to find important interactions.
 594# More repeats can lead to higher accuracy.
 595# The cost of this option is proportional to the repeat count.
 596# 
 597#reduce_repeats_when_failure = 1
 598
 599# With allow_reduce_features_when_failure, this controls the fraction of features
 600# treated as an anchor that are fixed for all sub-models.
 601# Each repeat gets new anchors.
 602# For tuning and evolution, the probability depends
 603# upon any prior importance (if present) from other individuals,
 604# while final model uses uniform probability for anchor features.
 605# 
 606#fraction_anchor_reduce_features_when_failure = 0.1
 607
 608# Error strings from XGBoost that are used to trigger re-fit on reduced sub-models.
 609# See allow_reduce_features_when_failure.
 610# 
 611#xgboost_reduce_on_errors_list = "['Memory allocation error on worker', 'out of memory', 'XGBDefaultDeviceAllocatorImpl', 'invalid configuration argument', 'Requested memory']"
 612
 613# Error strings from LightGBM that are used to trigger re-fit on reduced sub-models.
 614# See allow_reduce_features_when_failure.
 615# 
 616#lightgbm_reduce_on_errors_list = "['Out of Host Memory']"
 617
 618# LightGBM does not significantly benefit from GPUs, unlike other tools like XGBoost or Bert/Image Models.
 619# Each experiment will try to use all GPUs, and on systems with many cores and GPUs,
 620# this leads to many experiments running at once, all trying to lock the GPU for use,
 621# leaving the cores heavily under-utilized.  So by default, DAI always uses CPU for LightGBM, unless 'on' is specified.
 622#lightgbm_use_gpu = "auto"
 623
 624# Kaggle username for automatic submission and scoring of test set predictions.
 625# See https://github.com/Kaggle/kaggle-api#api-credentials for details on how to obtain Kaggle API credentials.
 626# 
 627#kaggle_username = ""
 628
 629# Kaggle key for automatic submission and scoring of test set predictions.
 630# See https://github.com/Kaggle/kaggle-api#api-credentials for details on how to obtain Kaggle API credentials.
 631# 
 632#kaggle_key = ""
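    #
    # Example (hypothetical credentials): enable automatic Kaggle submissions
    # with a longer scoring timeout:
    # kaggle_username = "example_user"
    # kaggle_key = "0123456789abcdef"
    # kaggle_timeout = 300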
 633
 634# Max. number of seconds to wait for Kaggle API call to return scores for given predictions
 635#kaggle_timeout = 120
 636
 637#kaggle_keep_submission = false
 638
 639# If provided, can extend the list to arbitrary and potentially future Kaggle competitions to make
 640# submissions for. Only used if kaggle_key and kaggle_username are provided.
 641# Provide a quoted comma-separated list of tuples (target column name, number of test rows, competition, metric) like this:
 642# kaggle_competitions='("target", 200000, "santander-customer-transaction-prediction", "AUC"), ("TARGET", 75818, "santander-customer-satisfaction", "AUC")'
 643# 
 644#kaggle_competitions = ""
 645
 646# Period (in seconds) of ping by Driverless AI server to each experiment
 647# (in order to get logger info like disk space and memory usage).
 648# 0 means don't print anything.
 649#ping_period = 60
 650
 651# Whether to enable ping of system status during DAI experiments.
 652#ping_autodl = true
 653
 654# Minimum amount of disk space in GB needed to run experiments.
 655# Experiments will fail if this limit is crossed.
 656# This limit exists because Driverless AI needs to generate data for model training,
 657# feature engineering, documentation, and other such processes.
 658#disk_limit_gb = 5
 659
 660# Minimum amount of disk space in GB needed before stalling the forking of new processes during an experiment.
 661#stall_disk_limit_gb = 1
 662
 663# Minimum amount of system memory in GB needed to start experiments.
 664# Similarly with disk space, a certain amount of system memory is needed to run some basic
 665# operations.
 666#memory_limit_gb = 5
 667
 668# Minimum number of rows needed to run experiments (values lower than 100 might not work).
 669# A minimum threshold is set to ensure there is enough data to create a statistically
 670# reliable model and avoid other small-data related failures.
 671# 
 672#min_num_rows = 100
 673
 674# Minimum required number of rows (in the training data) for each class label for classification problems.
 675#min_rows_per_class = 5
 676
 677# Minimum required number of rows for each split when generating validation samples.
 678#min_rows_per_split = 5
 679
 680# Level of reproducibility desired (for same data and same inputs).
 681# Only active if 'reproducible' mode is enabled (GUI button enabled or a seed is set from the client API).
 682# Supported levels are:
 683# reproducibility_level = 1 for same experiment results as long as same O/S, same CPU(s) and same GPU(s)
 684# reproducibility_level = 2 for same experiment results as long as same O/S, same CPU architecture and same GPU architecture
 685# reproducibility_level = 3 for same experiment results as long as same O/S, same CPU architecture, not using GPUs
 686# reproducibility_level = 4 for same experiment results as long as same O/S (best effort)
 687# 
 688#reproducibility_level = 1
 689
 690# Seed for random number generator to make experiments reproducible, to a certain reproducibility level (see above).
 691# Only active if 'reproducible' mode is enabled (GUI button enabled or a seed is set from the client API).
 692# 
 693#seed = 1234
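    #
    # Example (hypothetical): best-effort reproducible runs with a fixed seed
    # (only takes effect when 'reproducible' mode is enabled):
    # reproducibility_level = 4
    # seed = 42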
 694
 695# The list of values that should be interpreted as missing values during data import.
 696# This applies to both numeric and string columns. Note that the dataset must be reloaded after applying changes to this config via the expert settings.
 697# Also note that 'nan' is always interpreted as a missing value for numeric columns.
 698#missing_values = "['', '?', 'None', 'nan', 'NA', 'N/A', 'unknown', 'inf', '-inf', '1.7976931348623157e+308', '-1.7976931348623157e+308']"
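    #
    # Example (hypothetical): also treat a sentinel code like '-999' as missing
    # (datasets must be reloaded for this to take effect):
    # missing_values = "['', '?', 'None', 'nan', 'NA', 'N/A', '-999']"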
 699
 700# Whether to impute (to mean) for GLM on training data.
 701#glm_nan_impute_training_data = false
 702
 703# Whether to impute (to mean) for GLM on validation data.
 704#glm_nan_impute_validation_data = false
 705
 706# Whether to impute (to mean) for GLM on prediction data (required for consistency with MOJO).
 707#glm_nan_impute_prediction_data = true
 708
 709# For TensorFlow, what numerical value to give to missing values, where numeric values are standardized.
 710# So 0 is the center of the distribution, and for a Normal distribution, +-5 is 5 standard deviations from the center.
 711# In many cases, an out of bounds value is a good way to represent missings, but in some cases the mean (0) may be better.
 712#tf_nan_impute_value = -5
 713
 714# Internal threshold for number of rows x number of columns to trigger certain statistical
 715# techniques (small data recipe like including one hot encoding for all model types, and smaller learning rate)
 716# to increase model accuracy
 717#statistical_threshold_data_size_small = 100000
 718
 719# Internal threshold for number of rows x number of columns to trigger certain statistical
 720# techniques (fewer genes created, removal of high max_depth for tree models, etc.) that can speed up modeling.
 721# Also controls maximum rows used in training final model,
 722# by sampling statistical_threshold_data_size_large / columns number of rows
 723#statistical_threshold_data_size_large = 500000000
 724
 725# Internal threshold for number of rows x number of columns to trigger sampling for auxiliary data uses,
 726# like imbalanced data set detection and bootstrap scoring sample size and iterations
 727#aux_threshold_data_size_large = 10000000
 728
 729# Internal threshold for set-based method for sampling without replacement.
 730# Can be 10x faster than np_random_choice internal optimized method, and
 731# up to 30x faster than np.random.choice to sample 250k rows from 1B rows etc.
 732#set_method_sampling_row_limit = 5000000
 733
 734# Internal threshold for number of rows x number of columns to trigger certain changes in performance
 735# (fewer threads if beyond large value) to help avoid OOM or unnecessary slowdowns
 736# (fewer threads if lower than small value) to avoid excess forking of tasks
 737#performance_threshold_data_size_small = 100000
 738
 739# Internal threshold for number of rows x number of columns to trigger certain changes in performance
 740# (fewer threads if beyond large value) to help avoid OOM or unnecessary slowdowns
 741# (fewer threads if lower than small value) to avoid excess forking of tasks
 742#performance_threshold_data_size_large = 100000000
 743
 744# Threshold for number of rows x number of columns to trigger GPU to be default for models like XGBoost GBM.
 745#gpu_default_threshold_data_size_large = 1000000
 746
 747# Maximum fraction of mismatched columns to allow between train and either valid or test.  Beyond this value, the experiment will fail with an invalid data error.
 748#max_relative_cols_mismatch_allowed = 0.5
 749
 750# Enable various rules to handle wide (Num. columns > Num. rows) datasets ('auto'/'on'/'off').  Setting 'on' forces these rules to be enabled regardless of the number of columns.
 751#enable_wide_rules = "auto"
 752
 753# If columns > wide_factor * rows, then enable wide rules if auto.  For columns > rows, random forest is always enabled.
 754#wide_factor = 5.0
 755
 756# Maximum number of columns to start an experiment. This threshold exists to constrain the complexity and the length of Driverless AI's processes.
 757#max_cols = 10000000
 758
 759# Largest number of rows to use for column stats, otherwise sample randomly
 760#max_rows_col_stats = 1000000
 761
 762# Largest number of rows to use for cv in cv for target encoding when doing gini scoring test
 763#max_rows_cv_in_cv_gini = 100000
 764
 765# Largest number of rows to use for constant model fit, otherwise sample randomly
 766#max_rows_constant_model = 1000000
 767
 768# Largest number of rows to use for final ensemble base model fold scores, otherwise sample randomly
 769#max_rows_final_ensemble_base_model_fold_scores = 1000000
 770
 771# Largest number of rows to use for final ensemble blender for regression and binary (scaled down linearly by number of classes for multiclass for >= 10 classes), otherwise sample randomly.
 772#max_rows_final_blender = 1000000
 773
 774# Smallest number of rows (or number of rows if less than this) to use for final ensemble blender.
 775#min_rows_final_blender = 10000
 776
 777# Largest number of rows to use for final training score (no holdout), otherwise sample randomly
 778#max_rows_final_train_score = 5000000
 779
 780# Largest number of rows to use for final ROC, lift-gains, confusion matrix, residual, and actual vs. predicted.  Otherwise sample randomly
 781#max_rows_final_roccmconf = 1000000
 782
 783# Largest number of rows to use for final holdout scores, otherwise sample randomly
 784#max_rows_final_holdout_score = 5000000
 785
 786# Largest number of rows to use for final holdout bootstrap scores, otherwise sample randomly
 787#max_rows_final_holdout_bootstrap_score = 1000000
 788
 789# Whether to obtain permutation feature importance on original features for reporting in logs and summary zip file
 790# (as files with pattern fs_*.json or fs_*.tab.txt).
 791# This computes feature importance on a single un-tuned model
 792# (typically LightGBM with pre-defined un-tuned hyperparameters)
 793# and simple set of features (encoding typically is frequency encoding or target encoding).
 794# Features with low importance are automatically dropped if there are many original features,
 795# or a model with feature selection by permutation importance is created if interpretability is high enough in order to see if it gives a better score.
 796# One can manually drop low importance features, but this can be risky as transformers or hyperparameters might recover
 797# their usefulness.
 798# Permutation importance is obtained by:
 799# 1) Transforming categoricals to frequency or target encoding features.
 800# 2) Fitting that model on many folds, different data sizes, and slightly varying hyperparameters.
 801# 3) Predicting on that model for each feature where each feature has its data shuffled.
 802# 4) Computing the score on each shuffled prediction.
 803# 5) Computing the difference between the unshuffled score and the shuffled score to arrive at a delta score.
 804# 6) The delta score becomes the variable importance once normalized by the maximum.
 805# Positive delta scores indicate the feature helped the model score,
 806# while negative delta scores indicate the feature hurt the model score.
 807# The normalized scores are stored in the fs_normalized_* files in the summary zip.
 808# The unnormalized scores (actual delta scores) are stored in the fs_unnormalized_* files in the summary zip.
 809# AutoDoc has a similar functionality of providing permutation importance on original features,
 810# where that takes the specific final model of an experiment and runs training data set through permutation importance to get original importance,
 811# so shuffling of original features is performed and the full pipeline is computed in each shuffled set of original features.
 812# 
 813#orig_features_fs_report = false
 814
 815# Maximum number of rows when doing permutation feature importance, reduced by (stratified) random sampling.
 816# 
 817#max_rows_fs = 500000
 818
 819#max_rows_leak = 100000
 820
 821# How many workers to use for feature selection by permutation for predict phase.
 822# (0 = auto, > 0: min of DAI value and this value, < 0: exactly negative of this value)
 823# 
 824#max_workers_fs = 0
 825
 826# How many workers to use for shift and leakage checks if using LightGBM on CPU.
 827# (0 = auto, > 0: min of DAI value and this value, < 0: exactly negative of this value)
 828# 
 829#max_workers_shift_leak = 0
 830
 831# Maximum number of columns selected out of the original set of columns, using feature selection.
 832# The selection is based upon how well target encoding (or frequency encoding if not available) performs on categoricals and on numerics treated as categoricals.
 833# This is useful to reduce the final model complexity. First the best
 834# [max_orig_cols_selected] columns are found through feature selection methods, and then
 835# these features are used in feature evolution (to derive other features) and in modelling.
 836# 
 837#max_orig_cols_selected = 10000000
 838
 839# Maximum number of numeric columns selected, above which feature selection will be performed.
 840# Same as max_orig_cols_selected, but for numeric columns.
 841#max_orig_numeric_cols_selected = 10000000
 842
 843#max_orig_nonnumeric_cols_selected_default = 300
 844
 845# Maximum number of non-numeric columns selected, above which will do feature selection on all features. Same as max_orig_numeric_cols_selected but for categorical columns.
 846# If set to -1, then auto mode which uses max_orig_nonnumeric_cols_selected_default, but then for small data can be increased up to 10x larger.
 847# 
 848#max_orig_nonnumeric_cols_selected = -1
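    #
    # Example (hypothetical): for a very wide dataset, keep only the best 500
    # original columns, at most 100 of them non-numeric:
    # max_orig_cols_selected = 500
    # max_orig_nonnumeric_cols_selected = 100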
 849
 850# The factor times max_orig_cols_selected above which column selection is based upon no target encoding and no treating of numericals as categoricals,
 851# in order to limit the performance cost of feature engineering.
 852#max_orig_cols_selected_simple_factor = 2
 853
 854# Like max_orig_cols_selected, but columns above which add special individual with original columns reduced.
 855# 
 856#fs_orig_cols_selected = 10000000
 857
 858# Like max_orig_numeric_cols_selected, but applicable to special individual with original columns reduced.
 859# A separate individual in the genetic algorithm is created by doing feature selection by permutation importance on original features.
 860# 
 861#fs_orig_numeric_cols_selected = 10000000
 862
 863# Like max_orig_nonnumeric_cols_selected, but applicable to special individual with original columns reduced.
 864# A separate individual in the genetic algorithm is created by doing feature selection by permutation importance on original features.
 865# 
 866#fs_orig_nonnumeric_cols_selected = 200
 867
 868# Like max_orig_cols_selected_simple_factor, but applicable to special individual with original columns reduced.
 869#fs_orig_cols_selected_simple_factor = 2
 870
 871#predict_shuffle_inside_model = true
 872
 873#use_native_cats_for_lgbm_fs = true
 874
 875#orig_stddev_max_cols = 1000
 876
 877# Maximum allowed fraction of unique values for integer and categorical columns (otherwise will treat column as ID and drop)
 878#max_relative_cardinality = 0.95
 879
 880# Maximum allowed number of unique values for integer and categorical columns (otherwise will treat column as ID and drop)
 881#max_absolute_cardinality = 1000000
 882
 883# Whether to treat some numerical features as categorical.
 884# For instance, sometimes an integer column may not represent a numerical feature but
 885# represent different numerical codes instead.
 886# Disabling this is very restrictive, since then even columns with few categorical levels that happen to be numerical
 887# in value will not be encoded like a categorical.
 888# 
 889#num_as_cat = true
 890
 891# Max number of unique values for integer/real columns to be treated as categoricals (test applies to first statistical_threshold_data_size_small rows only)
 892#max_int_as_cat_uniques = 50
 893
 894# Max number of unique values for integer/real columns to be treated as categoricals (test applies to first statistical_threshold_data_size_small rows only). Applies to integer or real numerical feature that violates Benford's law, and so is ID-like but not entirely an ID.
 895#max_int_as_cat_uniques_if_not_benford = 10000
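    #
    # Example (hypothetical): treat integer columns with up to 100 unique values
    # as categoricals:
    # num_as_cat = true
    # max_int_as_cat_uniques = 100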
 896
 897# When the fraction of non-numeric (and non-missing) values is less than or equal to this value, consider the
 898# column numeric. Can help with minor data quality issues for experimentation, > 0 is not recommended for production,
 899# since type inconsistencies can occur. Note: Replaces non-numeric values with missing values
 900# at start of experiment, so some information is lost, but column is now treated as numeric, which can help.
 901# If < 0, then disabled.
 902# If == 0, then if number of rows <= max_rows_col_stats, then convert any column of strings of numbers to numeric type.
 903# 
 904#max_fraction_invalid_numeric = 0.0
 905
 906# Number of folds for models used during the feature engineering process.
 907# Increasing this will put a lower fraction of data into validation and more into training
 908# (e.g., num_folds=3 means 67%/33% training/validation splits).
 909# Actual value will vary for small or big data cases.
 910# 
 911#num_folds = 3
 912
 913#fold_balancing_repeats_times_rows = 100000000.0
 914
 915#max_fold_balancing_repeats = 10
 916
 917#fixed_split_seed = 0
 918
 919#show_fold_stats = true
 920
 921# For multiclass problems only. Whether to allow different sets of target classes across (cross-)validation
 922# fold splits. Especially important when passing a fold column that isn't balanced w.r.t class distribution.
 923# 
 924#allow_different_classes_across_fold_splits = true
 925
 926# Accuracy setting equal and above which enables full cross-validation (multiple folds) during feature evolution
 927# as opposed to only a single holdout split (e.g. 2/3 train and 1/3 validation holdout)
 928# 
 929#full_cv_accuracy_switch = 9
 930
 931# Accuracy setting equal and above which enables stacked ensemble as final model.
 932# Stacking commences at the end of the feature evolution process.
 933# It quite often leads to better model performance, but it does increase the complexity
 934# and execution time of the final model.
 935# 
 936#ensemble_accuracy_switch = 5
 937
 938# Number of fold splits to use for ensemble_level >= 2.
 939# The ensemble modelling may require predictions to be made on out-of-fold samples
 940# hence the data needs to be split on different folds to generate these predictions.
 941# Fewer folds (like 2 or 3) normally create more stable models, but may be less accurate.
 942# More folds can reach higher accuracy at the expense of more time, but the performance
 943# may be less stable when there is not enough training data (i.e., higher chance of overfitting).
 944# Actual value will vary for small or big data cases.
 945# 
 946#num_ensemble_folds = 4
 947
 948# Includes pickles of (train_idx, valid_idx) tuples (numpy row indices for original training data)
 949# for all internal validation folds in the experiment summary zip. For debugging.
 950# 
 951#save_validation_splits = false
 952
 953# Number of repeats for each fold for all validation
 954# (modified slightly for small or big data cases)
 955# 
 956#fold_reps = 1
 957
 958#max_num_classes_hard_limit = 10000
 959
# Maximum number of classes to allow for a classification problem.
# A high number of classes may make certain processes of Driverless AI time-consuming.
# Memory requirements also increase with a higher number of classes.
#
#max_num_classes = 1000

# Maximum number of classes to compute ROC and CM for,
# beyond which the roc_reduce_type choice for reduction is applied.
# Too many classes can take much longer than model building time.
#
#max_num_classes_compute_roc = 200

# Maximum number of classes to show in the GUI for the confusion matrix, showing the first max_num_classes_client_and_gui labels.
# Beyond 6 classes the diagnostics launched from the GUI are visually truncated.
# This will only modify client/GUI-launched diagnostics if changed in config.toml and the server is restarted,
# while this value can be changed in expert settings to control experiment plots.
#
#max_num_classes_client_and_gui = 10

# If there are too many classes when computing the ROC,
# reduce by "rows" by randomly sampling rows,
# or reduce by truncating classes to no more than max_num_classes_compute_roc.
# If there are sufficient rows for the class count, one can reduce by rows.
#
#roc_reduce_type = "rows"

#min_roc_sample_size = 1
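
# Example sketch (commented out; values are illustrative, not recommendations):
# to randomly sample rows rather than truncate classes when computing the ROC
# for a problem with many classes, one might set:
# roc_reduce_type = "rows"
# max_num_classes_compute_roc = 100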

# Maximum number of rows to use for confusion matrix related plots during feature evolution.
# Does not limit final model calculation.
#
#max_rows_cm_ga = 500000

# Number of actuals vs. predicted data points to use in order to generate the relevant
# plot/graph which is shown at the right part of the screen within an experiment.
#num_actuals_vs_predicted = 100

# Whether to use feature_brain results even if running new experiments.
# Feature brain can be risky with some types of changes to experiment setup.
# Even rescoring may be insufficient, so by default this is False.
# For example, one experiment may have training=external validation by accident and get a high score,
# and while feature_brain_reset_score='on' means we will rescore, it will have already seen
# the external validation during training and leaked that data as part of what it learned from.
# If this is False, feature_brain_level just sets possible models to use and logs/notifies,
# but does not use these feature brain cached models.
#
#use_feature_brain_new_experiments = false

# Whether to reuse the dataset schema, such as data types set in the UI for each column, from the parent experiment ('on') or to ignore the original dataset schema and only use the new schema ('off').
# resume_data_schema=True is a basic form of data lineage, but it may not be desirable if data column types changed incompatibly, like int to string.
# 'auto': for restart, retrain final pipeline, or refit best models, the default is to resume the data schema, but new experiments would not reuse the old schema by default.
# 'on': force reuse of data schema from parent experiment if possible
# 'off': don't reuse data schema in any case.
# The reuse of the column schema can also be disabled by:
# in UI: selecting Parent Experiment as None
# in client: setting resume_experiment_id to None
#resume_data_schema = "auto"

#resume_data_schema_old_logic = false

# Whether to show (or use) results from H2O.ai brain: the local caching and smart re-use of prior experiments,
# in order to generate more useful features and models for new experiments.
# See use_feature_brain_new_experiments for how new experiments by default do not use the brain cache.
# It can also be used to control checkpointing for experiments that have been paused or interrupted.
# DAI will use the H2O.ai brain cache if the cache file has
# a) any matching column names and types for a similar experiment type
# b) exactly matching classes
# c) exactly matching class labels
# d) matching basic time series choices
# e) interpretability of cache equal or lower
# f) main model (booster) allowed by the new experiment.
# Level of brain to use (for chosen level, where higher levels will also do all lower level operations automatically)
# -1 = Don't use any brain cache and don't write any cache
# 0 = Don't use any brain cache but still write cache
# Use case: Want to save model for later use, but want current model to be built without any brain models
# 1 = smart checkpoint from latest best individual model
# Use case: Want to use latest matching model, but match can be loose, so needs caution
# 2 = smart checkpoint from H2O.ai brain cache of individual best models
# Use case: DAI scans through H2O.ai brain cache for best models to restart from
# 3 = smart checkpoint like level #1, but for entire population.  Tune only if brain population is of insufficient size
# (will re-score entire population in single iteration, so appears to take longer to complete first iteration)
# 4 = smart checkpoint like level #2, but for entire population.  Tune only if brain population is of insufficient size
# (will re-score entire population in single iteration, so appears to take longer to complete first iteration)
# 5 = like #4, but will scan over entire brain cache of populations to get best scored individuals
# (can be slower due to brain cache scanning if big cache)
# 1000 + feature_brain_level (above positive values) = use resumed_experiment_id and actual feature_brain_level,
# to use other specific experiment as base for individuals or population,
# instead of sampling from any old experiments
# GUI has 4 options and corresponding settings:
# 1) New Experiment: Uses feature brain level default of 2
# 2) New Experiment With Same Settings: Re-uses the same feature brain level as parent experiment
# 3) Restart From Last Checkpoint: Resets feature brain level to 1003 and sets experiment ID to resume from
# (continued genetic algorithm iterations)
# 4) Retrain Final Pipeline: Like Restart, but also time=0 so it skips any tuning and heads straight to the final model
# (assumes there was at least one tuning iteration in the parent experiment)
# Other use cases:
# a) Restart on different data: Use same column names and fewer or more rows (applicable to 1 - 5)
# b) Re-fit only final pipeline: Like (a), but choose time=1 and feature_brain_level=3 - 5
# c) Restart with more columns: Add columns, so model builds upon old model built from old column names (1 - 5)
# d) Restart with focus on model tuning: Restart, then select feature_engineering_effort = 3 in expert settings
# e) Retrain final model but ignore any original features except those in final pipeline (normal retrain but set brain_add_features_for_new_columns=false)
# Notes:
# 1) In all cases, we first check the resumed experiment id if given, and then the brain cache
# 2) For Restart cases, one may want to set min_dai_iterations to non-zero to force delayed early stopping, else there may not be enough iterations to find a better model.
# 3) A "New Experiment With Same Settings" of a Restart will use feature_brain_level=1003 for default Restart mode (revert to 2, or even 0, to start a fresh experiment otherwise)
#feature_brain_level = 2
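
# Example sketch (commented out; illustrative only): to write the brain cache for
# later re-use while building the current model without any brain models (level 0
# above), one might set:
# feature_brain_level = 0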

# Whether to smartly keep score to avoid re-munging/re-training/re-scoring steps for brain models ('auto'), always
# force all steps for all brain imports ('on'), or never rescore ('off').
# 'auto' only re-scores if a difference between the current and prior experiment warrants re-scoring, like column changes, metric changes, etc.
# 'on' is useful when smart similarity checking is not reliable enough.
# 'off' is useful when one wants to keep the exact same features and model for the final model refit, despite changes in seed or other behaviors
# in features that might change the outcome if re-scored before reaching the final model.
# If set off, then no limits are applied to features during brain ingestion,
# while brain_add_features_for_new_columns can be set to false to ignore any new columns in the data.
# In addition, any unscored individuals loaded from the parent experiment are not rescored when doing refit or retrain.
# One can also set refit_same_best_individual to True if the exact same best individual (highest scored model+features) should be used
# regardless of any scoring changes.
#
#feature_brain_reset_score = "auto"

#enable_strict_confict_key_check_for_brain = true

#allow_change_layer_count_brain = false

# Relative number of columns that must match between current reference individual and brain individual.
# 0.0: perfect match
# 1.0: all columns are different, worst match
# e.g. 0.1 implies no more than 10% of columns mismatch between the reference set of columns and the brain individual.
#
#brain_maximum_diff_score = 0.1

# Maximum number of brain individuals pulled from the H2O.ai brain cache for feature_brain_level=1, 2
#max_num_brain_indivs = 3

# Save feature brain iterations every iter_num % feature_brain_iterations_save_every_iteration == 0, to be able to restart/refit with which_iteration_brain >= 0
# 0 means disable
#
#feature_brain_save_every_iteration = 0

# When doing restart or re-fit type feature_brain_level with resumed_experiment_id, choose which iteration to start from, instead of only the last best
# -1 means just use the last best
# Usage:
# 1) Run one experiment with feature_brain_iterations_save_every_iteration=1 or some other number
# 2) Identify which iteration brain dump one wants to restart/refit from
# 3) Restart/Refit from the original experiment, setting which_iteration_brain to that number in expert settings
# Note: If restarting from a tuning iteration, this will pull in the entire scored tuning population and use that for feature evolution
#
#which_iteration_brain = -1

# When doing re-fit from feature brain, if columns or features change, the population of individuals used to refit from may change the order of which was best,
# leading to a better result being chosen (False case).  But sometimes one wants to see the exact same model/features with only one feature added,
# in which case this needs to be set to True.
# E.g. if refitting with just 1 extra column and interpretability=1, then the final model will have the same features,
# with one more engineered feature applied to that new original feature.
#
#refit_same_best_individual = false

# When doing restart or re-fit of an experiment from feature brain,
# sometimes the user might change data significantly and then warrant
# redoing reduction of original features by feature selection, shift detection, and leakage detection.
# However, in other cases, if data and all options are nearly (or exactly) identical, then these
# steps might change the features slightly (e.g. due to random seed if not setting reproducible mode),
# leading to changes in the features and the model that is refitted.  By default, restart and refit avoid
# these steps, assuming data and experiment setup have not changed significantly.
# If check_distribution_shift is forced to on (instead of auto), then this option is ignored.
# In order to ensure the exact same final pipeline is fitted, one should also set:
# 1) brain_add_features_for_new_columns false
# 2) refit_same_best_individual true
# 3) feature_brain_reset_score 'off'
# 4) force_model_restart_to_defaults false
# The score will still be reset if the experiment metric chosen changes,
# but changes to the scored model and features will be more frozen in place.
#
#restart_refit_redo_origfs_shift_leak = "[]"
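
# Example sketch (commented out): combining the four settings listed above to keep
# the refitted pipeline as frozen as possible:
# brain_add_features_for_new_columns = false
# refit_same_best_individual = true
# feature_brain_reset_score = "off"
# force_model_restart_to_defaults = false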

# Directory, relative to data_directory, to store H2O.ai brain meta model files
#brain_rel_dir = "H2O.ai_brain"

# Maximum size the brain will store.
# We reserve this space to save data in order to ensure we can retrieve an experiment if
# for any reason it gets interrupted.
# -1: unlimited
# >=0: number of GB to limit brain to
#brain_max_size_GB = 20

# Whether to take any new columns and add additional features to the pipeline, even if doing a retrain of the final model.
# In some cases, one might have a new dataset but only want to keep the same pipeline regardless of new columns,
# in which case one sets this to False.  For example, new data might lead to new dropped features,
# due to shift or leak detection.  To avoid a change of feature set, one can disable all dropping of columns,
# but set this to False to avoid adding any columns as new features,
# so the pipeline is perfectly preserved when changing data.
#
#brain_add_features_for_new_columns = true

# If doing restart/refit and the original model class is no longer available, be conservative
# and go back to defaults for that model class.  If False, then try to keep the original hyperparameters,
# which can fail to work in general.
#
#force_model_restart_to_defaults = true

# Whether to enable early stopping.
# Early stopping refers to stopping the feature evolution/engineering process
# when there is no performance uplift after a certain number of iterations.
# After early stopping has been triggered, Driverless AI will initiate the ensemble
# process if selected.
#early_stopping = true

# Whether to enable early stopping per individual.
# Each individual in the genetic algorithm will stop early if there is no improvement,
# and it will no longer be mutated.
# Instead, the best individual will be additionally mutated.
#early_stopping_per_individual = true

# Minimum number of Driverless AI iterations to stop the feature evolution/engineering
# process even if the score is not improving. Driverless AI needs to run for at least that many
# iterations before deciding to stop. It can be seen as a safeguard against suboptimal (early)
# convergence.
#
#min_dai_iterations = 0

# Maximum features per model (and each model within the final model if ensemble) kept.
# Keeps top variable importance features, prunes the rest away, after each scoring.
# The final ensemble will exclude any pruned-away features and only train on kept features,
# but may contain a few new features due to fitting on a different data view (e.g. new clusters)
# The final scoring pipeline will exclude any pruned-away features,
# but may contain a few new features due to fitting on a different data view (e.g. new clusters)
# -1 means no restrictions except internally-determined memory and interpretability restrictions.
# Notes:
# * If interpretability > remove_scored_0gain_genes_in_postprocessing_above_interpretability, then
# every GA iteration post-processes features down to this value just after scoring them.  Otherwise,
# only mutations of scored individuals will be pruned (until the final model, where limits are strictly applied).
# * If ngenes_max is not also limited, then some individuals will have more genes and features until
# pruned by mutation or by preparation for the final model.
# * E.g. to generally limit every iteration to exactly 1 feature, one must set nfeatures_max=ngenes_max=1
# and remove_scored_0gain_genes_in_postprocessing_above_interpretability=0, but the genetic algorithm
# will have a harder time finding good features.
#
#nfeatures_max = -1
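
# Example sketch (commented out): per the note above, to limit every iteration to
# exactly one feature (at the cost of a harder search for the genetic algorithm):
# nfeatures_max = 1
# ngenes_max = 1
# remove_scored_0gain_genes_in_postprocessing_above_interpretability = 0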

# Maximum genes (transformer instances) per model (and each model within the final model if ensemble) kept.
# Controls the number of genes before features are scored, so genes are just randomly sampled if pruning occurs.
# If restriction occurs after scoring features, then aggregated gene importances are used for pruning genes.
# Instances include all possible transformers, including the original transformer for numeric features.
# -1 means no restrictions except internally-determined memory and interpretability restrictions
#
#ngenes_max = -1

# Like ngenes_max but controls the minimum number of genes.
# Useful when DAI by default is making too few genes but many more are wanted.
# This can be useful when one has few input features, so DAI may remain conservative and not make many transformed features.  But the user may know that some transformed features could be useful.
# E.g. only the target encoding transformer might have been chosen, and one wants DAI to explore many more possible input features at once.
#ngenes_min = -1

# Minimum genes (transformer instances) per model (and each model within the final model if ensemble) kept.
# Instances include all possible transformers, including the original transformer for numeric features.
# -1 means no restrictions except internally-determined memory and interpretability restrictions
#
#nfeatures_min = -1

# Whether to limit feature counts by interpretability setting via features_allowed_by_interpretability
#limit_features_by_interpretability = true

# Whether to use out-of-fold predictions of Word-based CNN TensorFlow models as transformers for NLP if TensorFlow is enabled
#enable_tensorflow_textcnn = "auto"

# Whether to use out-of-fold predictions of Word-based Bi-GRU TensorFlow models as transformers for NLP if TensorFlow is enabled
#enable_tensorflow_textbigru = "auto"

# Whether to use out-of-fold predictions of Character-level CNN TensorFlow models as transformers for NLP if TensorFlow is enabled
#enable_tensorflow_charcnn = "auto"

# Whether to use pretrained PyTorch models as transformers for NLP tasks. Fits a linear model on top of pretrained embeddings. Requires internet connection. Default of 'auto' means disabled. To enable, set to 'on'. GPU(s) are highly recommended. Reduce string_col_as_text_min_relative_cardinality closer to 0.0 and string_col_as_text_threshold closer to 0.0 to force a string column to be treated as text despite a low number of uniques.
#enable_pytorch_nlp_transformer = "auto"

# More rows can slow down the fitting process. Recommended values are less than 100000.
#pytorch_nlp_transformer_max_rows_linear_model = 50000

# Whether to use pretrained PyTorch models and fine-tune them for NLP tasks. Requires internet connection. Default of 'auto' means disabled. To enable, set to 'on'. These models only use the first text column, and can be slow to train. GPU(s) are highly recommended. Set string_col_as_text_min_relative_cardinality=0.0 to force a string column to be treated as text despite a low number of uniques.
#enable_pytorch_nlp_model = "auto"

# Select which pretrained PyTorch NLP model(s) to use. Non-default ones might have no MOJO support. Requires internet connection. Only if PyTorch models or transformers for NLP are set to 'on'.
#pytorch_nlp_pretrained_models = "['bert-base-uncased', 'distilbert-base-uncased', 'bert-base-multilingual-cased']"

# Max. number of epochs for TensorFlow models for making NLP features
#tensorflow_max_epochs_nlp = 2

# Accuracy setting equal and above which will add all enabled TensorFlow NLP models below at start of experiment for text dominated problems
# when TensorFlow NLP transformers are set to auto.  If set to on, this parameter is ignored.
# Otherwise, at lower accuracy, TensorFlow NLP transformations will only be created as a mutation.
#
#enable_tensorflow_nlp_accuracy_switch = 5

# Path to pretrained embeddings for TensorFlow NLP models, can be a path in the local file system or an S3 location (s3://).
# For example, download and unzip https://nlp.stanford.edu/data/glove.6B.zip
# tensorflow_nlp_pretrained_embeddings_file_path = /path/on/server/to/glove.6B.300d.txt
#
#tensorflow_nlp_pretrained_embeddings_file_path = ""
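
# Example sketch (commented out; the path is a placeholder for wherever the
# unzipped GloVe file mentioned above lives on the server):
# tensorflow_nlp_pretrained_embeddings_file_path = "/path/on/server/to/glove.6B.300d.txt"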

#tensorflow_nlp_pretrained_s3_access_key_id = ""

#tensorflow_nlp_pretrained_s3_secret_access_key = ""

# Allow training of all weights of the neural network graph, including the pretrained embedding layer weights. If disabled, then the embedding layer is frozen, but all other weights are still fine-tuned.
#tensorflow_nlp_pretrained_embeddings_trainable = false

#tensorflow_nlp_have_gpus_in_production = false

#bert_migration_timeout_secs = 600

#enable_bert_transformer_acceptance_test = false

#enable_bert_model_acceptance_test = false

# Whether to parallelize tokenization for BERT Models/Transformers.
#pytorch_tokenizer_parallel = true

# Number of epochs for fine-tuning of PyTorch NLP models. Larger values can increase accuracy but take longer to train.
#pytorch_nlp_fine_tuning_num_epochs = -1

# Batch size for PyTorch NLP models. Larger models and larger batch sizes will use more memory.
#pytorch_nlp_fine_tuning_batch_size = -1

# Maximum sequence length (padding length) for PyTorch NLP models. Larger models and larger padding lengths will use more memory.
#pytorch_nlp_fine_tuning_padding_length = -1

# Path to pretrained PyTorch NLP models. Note that this can be either a path in the local file system
# (/path/on/server/to/bert_models_folder), a URL or an S3 location (s3://).
# To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/bert_models.zip,
# then unzip and store it in a directory on the instance where DAI is installed.
# ``pytorch_nlp_pretrained_models_dir=/path/on/server/to/bert_models_folder``
#
#pytorch_nlp_pretrained_models_dir = ""
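
# Example sketch (commented out; the bucket name and folder are placeholders): using
# an S3 location for the pretrained models, with credentials given by the two keys below:
# pytorch_nlp_pretrained_models_dir = "s3://your-bucket/bert_models_folder"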

#pytorch_nlp_pretrained_s3_access_key_id = ""

#pytorch_nlp_pretrained_s3_secret_access_key = ""

# Fraction of text columns out of all features to be considered a text-dominated problem
#text_fraction_for_text_dominated_problem = 0.3

# Fraction of text transformers to all transformers above which to trigger that text dominated problem
#text_transformer_fraction_for_text_dominated_problem = 0.3

# Whether to reduce options for text-dominated models to reduce expense, e.g. disable ensemble, disable genetic algorithm, single identity target encoder for classification, etc.
#text_dominated_limit_tuning = true

# Whether to reduce options for image-dominated models to reduce expense, e.g. disable ensemble, disable genetic algorithm, single identity target encoder for classification, etc.
#image_dominated_limit_tuning = true

# Threshold for average string-is-text score as determined by internal heuristics.
# It decides when a string column will be treated as text (for an NLP problem) or just as
# a standard categorical variable.
# Higher values will favor string columns as categoricals, lower values will favor string columns as text.
# Set string_col_as_text_min_relative_cardinality=0.0 to force a string column to be treated as text despite a low number of uniques.
#string_col_as_text_threshold = 0.3

# Threshold for string columns to be treated as text during preview - should be less than string_col_as_text_threshold to allow data whose first 20 rows don't look like text to still work for Text-only transformers (0.0 - text, 1.0 - string)
#string_col_as_text_threshold_preview = 0.1

# Minimum fraction of unique values for string columns to be considered as possible text (otherwise categorical)
#string_col_as_text_min_relative_cardinality = 0.1

# Minimum number of uniques for string columns to be considered as possible text (if not already)
#string_col_as_text_min_absolute_cardinality = 10000

# If disabled, require 2 or more alphanumeric characters for a token in Text (Count and TF/IDF) transformers; otherwise create tokens out of single alphanumeric characters. True means that 'Street 3' is tokenized into 'Street' and '3', while False means that it's tokenized into 'Street'.
#tokenize_single_chars = true

# Supported image types. URIs with these endings will be considered as image paths (local or remote).
#supported_image_types = "['jpg', 'jpeg', 'png', 'bmp', 'ppm', 'tif', 'tiff', 'JPG', 'JPEG', 'PNG', 'BMP', 'PPM', 'TIF', 'TIFF']"

# Whether to create absolute paths for images when importing datasets containing images. Can facilitate testing or re-use of frames for scoring.
#image_paths_absolute = false

# Whether to use pretrained deep learning models for processing of image data as part of the feature engineering pipeline. A column of URIs to images (jpg, png, etc.) will be converted to a numeric representation using ImageNet-pretrained deep learning models. If no GPUs are found, then this must be set to 'on' to enable.
#enable_tensorflow_image = "auto"

# Supported ImageNet pretrained architectures for Image Transformer. Non-default ones will require internet access to download pretrained models from H2O S3 buckets (To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_1_11.zip and unzip inside tensorflow_image_pretrained_models_dir).
#tensorflow_image_pretrained_models = "['xception']"

# Dimensionality of feature (embedding) space created by Image Transformer. If more than one is selected, multiple transformers can be active at the same time.
#tensorflow_image_vectorization_output_dimension = "[100]"

# Enable fine-tuning of the ImageNet pretrained models used for the Image Transformer. Enabling this will slow down training, but should increase accuracy.
#tensorflow_image_fine_tune = false

# Number of epochs for fine-tuning of ImageNet pretrained models used for the Image Transformer.
#tensorflow_image_fine_tuning_num_epochs = 2

# The list of possible image augmentations to apply while fine-tuning the ImageNet pretrained models used for the Image Transformer. Details about individual augmentations can be found here: https://albumentations.ai/docs/.
#tensorflow_image_augmentations = "['HorizontalFlip']"

# Batch size for Image Transformer. Larger architectures and larger batch sizes will use more memory.
#tensorflow_image_batch_size = -1

# Path to pretrained Image models.
# To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_1_11.zip,
# then extract it in a directory on the instance where Driverless AI is installed.
#
#tensorflow_image_pretrained_models_dir = "./pretrained/image/"

# Max. number of seconds to wait for image download if images are provided by URL
#image_download_timeout = 60

# Maximum fraction of missing elements in a string column for it to be considered as possible image paths (URIs)
#string_col_as_image_max_missing_fraction = 0.1

# Fraction of (unique) image URIs that need to have valid endings (as defined by string_col_as_image_valid_types) for a string column to be considered as image data
#string_col_as_image_min_valid_types_fraction = 0.8

# Whether to use GPU(s), if available, to transform images into embeddings with Image Transformer. Can lead to significant speedups.
#tensorflow_image_use_gpu = true

# Nominally, the time dial controls the search space, with higher time trying more options, but any keys present in this dictionary will override the automatic choices.
# e.g. ``params_image_auto_search_space="{'augmentation': ['safe'], 'crop_strategy': ['Resize'], 'optimizer': ['AdamW'], 'dropout': [0.1], 'epochs_per_stage': [5], 'warmup_epochs': [0], 'mixup': [0.0], 'cutmix': [0.0], 'global_pool': ['avg'], 'learning_rate': [3e-4]}"``
# Options, e.g. used for time>=8
# # Overfit Protection Options:
# 'augmentation': ``["safe", "semi_safe", "hard"]``
# 'crop_strategy': ``["Resize", "RandomResizedCropSoft", "RandomResizedCropHard"]``
# 'dropout': ``[0.1, 0.3, 0.5]``
# # Global Pool Options:
# avgmax -- sum of AVG and MAX poolings
# catavgmax -- concatenation of AVG and MAX poolings
# https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/adaptive_avgmax_pool.py
# ``'global_pool': ['avg', 'avgmax', 'catavgmax']``
# # Regression: No MixUp and CutMix:
# ``'mixup': [0.0]``
# ``'cutmix': [0.0]``
# # Classification: Beta distribution coeff to generate weights for MixUp:
# ``'mixup': [0.0, 0.4, 1.0, 3.0]``
# ``'cutmix': [0.0, 0.4, 1.0, 3.0]``
# # Optimization Options:
# ``'epochs_per_stage': [5, 10, 15]``  # from 40 to 135 epochs
# ``'warmup_epochs': [0, 0.5, 1]``
# ``'optimizer': ["AdamW", "SGD"]``
# ``'learning_rate': [1e-3, 3e-4, 1e-4]``
#params_image_auto_search_space = "{}"

# Nominally, the accuracy dial controls the architectures considered if this is left empty,
# but one can choose specific ones.  The options in the list are ordered by complexity.
#image_auto_arch = "[]"

# Any images smaller are upscaled to the minimum.  Default is 64, but can be as small as 32 given the pooling layers used.
#image_auto_min_shape = 64

# 0 means automatic based upon time dial of min(1, time//2).
#image_auto_num_final_models = 0

# 0 means automatic based upon time dial of max(4 * (time - 1), 2).
#image_auto_num_models = 0

# 0 means automatic based upon time dial of time + 1 if time < 6 else time - 1.
#image_auto_num_stages = 0

# 0 means automatic based upon time dial or number of models and stages
# set by image_auto_num_models and image_auto_num_stages.
#image_auto_iterations = 0

# 0.0 means automatic based upon the current stage, where stage 0 uses half, stage 1 uses 3/4, and stage 2 uses the full image.
# One can pass 1.0 to override and always use the full image.  0.5 would mean use half.
#image_auto_shape_factor = 0.0

# Control maximum number of cores to use for image auto model parallel data management. 0 will disable mp: https://pytorch-lightning.readthedocs.io/en/latest/guides/speed.html
#max_image_auto_ddp_cores = 10

# Percentile value cutoff of input text token lengths for NLP deep learning models
#text_dl_token_pad_percentile = 99

# Maximum token length of input text to be used in NLP deep learning models
#text_dl_token_pad_max = 512

# Interpretability setting equal and above which will use automatic monotonicity constraints in
# XGBoostGBM/LightGBM/DecisionTree models.
#
#monotonicity_constraints_interpretability_switch = 7

# For models that support monotonicity constraints, and if enabled, show automatically determined monotonicity constraints for each feature going into the model based on its correlation with the target. 'low' shows only the monotonicity constraint direction. 'medium' shows the correlation of positively and negatively constrained features. 'high' shows all correlation values.
#monotonicity_constraints_log_level = "medium"

# Threshold, of Pearson product-moment correlation coefficient between numerical or encoded transformed
# feature and target, above which (below the negative of which) positive (negative) monotonicity will be enforced
# for XGBoostGBM, LightGBM and DecisionTree models.
# Enabled when interpretability >= monotonicity_constraints_interpretability_switch config.toml value.
# Only if monotonicity_constraints_dict is not provided.
#
#monotonicity_constraints_correlation_threshold = 0.1

# If enabled, only monotonic features with +1/-1 constraints will be passed to the model(s), and features
# without monotonicity constraints (0, as set by monotonicity_constraints_dict or determined automatically)
# will be dropped. Otherwise all features will be in the model.
# Only active when interpretability >= monotonicity_constraints_interpretability_switch or
# monotonicity_constraints_dict is provided.
#
#monotonicity_constraints_drop_low_correlation_features = false

# Manual override for monotonicity constraints. Mapping of original numeric features to desired constraint
# (1 for pos, -1 for neg, or 0 to disable.  True can be set for automatic handling, False is same as 0).
# Features that are not listed here will be treated automatically,
# and so get no constraint (i.e., 0) if interpretability < monotonicity_constraints_interpretability_switch;
# otherwise the constraint is automatically determined from the correlation between each feature and the target.
# Example: {'PAY_0': -1, 'PAY_2': -1, 'AGE': -1, 'BILL_AMT1': 1, 'PAY_AMT1': -1}
#
#monotonicity_constraints_dict = "{}"
1464
# Exploring feature interactions can be important in gaining better predictive performance.
# The interaction can take multiple forms (i.e. feature1 + feature2 or feature1 * feature2 + ... featureN)
# Although certain machine learning algorithms (like tree-based methods) can do well at
# capturing these interactions as part of their training process, explicitly generating them may
# still help those (or other) algorithms yield better performance.
# The depth of the interaction level (as in "up to" how many features may be combined at
# once to create one single feature) can be specified to control the complexity of the
# feature engineering process.  For transformers that use both numeric and categorical features, this constrains
# the number of each type, not the total number. Higher values might be able to make more predictive models
# at the expense of time (-1 means automatic).
#
#max_feature_interaction_depth = -1

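A depth-limited interaction can be sketched as follows. This is a hypothetical illustration of the "up to N features combined into one feature" idea, using products as the example interaction form (DAI's transformers implement many other forms):

```python
# Hypothetical sketch of depth-limited feature interactions: combine up to
# `depth` columns at once into one engineered feature (here, their product),
# as bounded by max_feature_interaction_depth.
from itertools import combinations

def interaction_features(row, depth):
    feats = {}
    for k in range(2, depth + 1):
        for combo in combinations(sorted(row), k):
            value = 1.0
            for col in combo:
                value *= row[col]
            feats["*".join(combo)] = value
    return feats

row = {"x1": 2.0, "x2": 3.0, "x3": 5.0}
print(interaction_features(row, depth=2))  # pairwise products only
print(interaction_features(row, depth=3))  # adds the x1*x2*x3 triple
```

Raising the depth grows the candidate feature space combinatorially, which is why higher values cost time.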
# Instead of sampling from min to max (up to max_feature_interaction_depth unless all specified)
# columns allowed for each transformer (0), choose a fixed non-zero number of columns to use.
# Can be set to the number of columns to use all columns for each transformer, if allowed by each transformer.
# -n can be chosen to do a 50/50 mix of sampled and fixed n features.
#
#fixed_feature_interaction_depth = 0

# Accuracy setting equal and above which enables tuning of model parameters.
# Only applicable if parameter_tuning_num_models=-1 (auto)
#tune_parameters_accuracy_switch = 3

# Accuracy setting equal and above which enables tuning of the target transform for regression.
# This is useful for time series when instead of predicting the actual target value, it
# might be better to predict a transformed target variable like sqrt(target) or log(target)
# as a means to control for outliers.
#tune_target_transform_accuracy_switch = 5

# Select a target transformation for regression problems. Must be one of: ['auto',
# 'identity', 'identity_noclip', 'center', 'standardize', 'unit_box', 'log', 'log_noclip', 'square',
# 'sqrt', 'double_sqrt', 'inverse', 'anscombe', 'logit', 'sigmoid'].
# If set to 'auto', will automatically pick the best target transformer (if accuracy is set to
# tune_target_transform_accuracy_switch or larger, considering interpretability level of each target transformer),
# otherwise will fall back to 'identity_noclip' (easiest to interpret, Shapley values are in original space, etc.).
# All transformers except for 'center', 'standardize', 'identity_noclip' and 'log_noclip' perform clipping
# to constrain the predictions to the domain of the target in the training data. Use 'center', 'standardize',
# 'identity_noclip' or 'log_noclip' to disable clipping and to allow predictions outside of the target domain observed in
# the training data (for parametric models or custom models that support extrapolation).
#
#target_transformer = "auto"

# Select list of target transformers to use for tuning. Only for target_transformer='auto' and accuracy >= tune_target_transform_accuracy_switch.
#
#target_transformer_tuning_choices = "['identity', 'identity_noclip', 'center', 'standardize', 'unit_box', 'log', 'square', 'sqrt', 'double_sqrt', 'anscombe', 'logit', 'sigmoid']"

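The fit/invert/clip cycle described above can be sketched for one transform. This is an illustrative toy (class and method names are hypothetical, not DAI's API) showing why the clipping variants constrain predictions to the observed target domain:

```python
# Illustrative sketch of a clipped 'sqrt'-style target transformer: the model
# is fit on sqrt(target); predictions are squared back and clipped to the
# target domain seen in training (the step that the *_noclip variants skip).
import math

class SqrtTargetTransformer:
    def fit(self, y):
        self.y_min, self.y_max = min(y), max(y)   # remember observed domain
        return [math.sqrt(v) for v in y]          # model trains in this space

    def inverse(self, preds):
        # back-transform, then clip into [y_min, y_max]
        return [min(max(p * p, self.y_min), self.y_max) for p in preds]

tt = SqrtTargetTransformer()
transformed = tt.fit([0.0, 1.0, 4.0, 9.0])
print(transformed)              # [0.0, 1.0, 2.0, 3.0]
print(tt.inverse([3.5, -1.0]))  # [9.0, 1.0]: 12.25 clipped to max; 1.0 in-domain
```

With a no-clip variant, the 3.5 prediction would extrapolate to 12.25 instead of being capped at the training maximum of 9.0.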
# Tournament style (method to decide which models are best at each iteration)
# 'auto' : Choose based upon accuracy and interpretability
# 'uniform' : all individuals in population compete to win as best (can lead to all models in the final ensemble being of one type, e.g. LightGBM, which may not improve ensemble performance due to lack of diversity)
# 'model' : individuals with the same model type compete (good if multiple models do well but some models that do not do as well still contribute to improving the ensemble)
# 'feature' : individuals with similar feature types compete (good if target encoding, frequency encoding, and other feature sets lead to good results)
# 'fullstack' : Choose among optimal model and feature types
# 'model' and 'feature' styles preserve at least one winner for each type (and so 2 total individuals of each type after mutation)
# For each case, a round robin approach is used to choose the best scores among the types of models to choose from.
# If enable_genetic_algorithm=='Optuna', then every individual is self-mutated without any tournament
# during the genetic algorithm.  The tournament is only used to prune down individuals for, e.g.,
# tuning -> evolution and evolution -> final model.
#
#tournament_style = "auto"

# Interpretability above which will use 'uniform' tournament style
#tournament_uniform_style_interpretability_switch = 8

# Accuracy below which will use uniform style if tournament_style = 'auto' (regardless of other accuracy tournament style switch values)
#tournament_uniform_style_accuracy_switch = 6

# Accuracy equal and above which uses model style if tournament_style = 'auto'
#tournament_model_style_accuracy_switch = 6

# Accuracy equal and above which uses feature style if tournament_style = 'auto'
#tournament_feature_style_accuracy_switch = 13

# Accuracy equal and above which uses fullstack style if tournament_style = 'auto'
#tournament_fullstack_style_accuracy_switch = 13

# Whether to use the penalized score for the GA tournament or the actual score
#tournament_use_feature_penalized_score = true

# Whether to keep poor scores for small data (<10k rows), in case exploration will find a good model.
# Sets tournament_remove_poor_scores_before_evolution_model_factor=1.1,
# tournament_remove_worse_than_constant_before_evolution=false,
# tournament_keep_absolute_ok_scores_before_evolution_model_factor=1.1,
# tournament_remove_poor_scores_before_final_model_factor=1.1,
# tournament_remove_worse_than_constant_before_final_model=true
#tournament_keep_poor_scores_for_small_data = true

# Factor (compared to best score plus each score) beyond which to drop poorly scoring models before evolution.
# This is useful in cases when poorly scoring models take a long time to train.
#tournament_remove_poor_scores_before_evolution_model_factor = 0.7

# For before evolution after tuning, whether to remove models that are worse than the (optimized to scorer) constant prediction model
#tournament_remove_worse_than_constant_before_evolution = true

# For before evolution after tuning, threshold on a scale of 0 (perfect) to 1 (constant model) below which to keep ok scores by absolute value.
#tournament_keep_absolute_ok_scores_before_evolution_model_factor = 0.2

# Factor (compared to best score) beyond which to drop poorly scoring models before building the final ensemble.  This is useful in cases when poorly scoring models take a long time to train.
#tournament_remove_poor_scores_before_final_model_factor = 0.3

# For before final model after evolution, whether to remove models that are worse than the (optimized to scorer) constant prediction model
#tournament_remove_worse_than_constant_before_final_model = true

# Driverless AI uses a genetic algorithm (GA) to find the best features, best models and
# best hyperparameters for these models. The GA facilitates getting good results while not
# requiring one to run/try every possible model/feature/parameter. This version of GA has
# reinforcement learning elements - it uses a form of exploration-exploitation to reach
# optimum solutions. This means it will capitalise on models/features/parameters that seem
# to be working well and continue to exploit them even more, while allowing some room for
# trying new (and semi-random) models/features/parameters to avoid settling on a local
# minimum.
# These models/features/parameters tried are what we call individuals of a population. More
# individuals connote more models/features/parameters to be tried and compete to find the best
# ones.
#num_individuals = 2

# Set a fixed number of individuals (if > 0) - useful to compare different hardware configurations.  If you want 3 individuals in the GA race to be preserved, choose 6, since 1 mutatable loser is needed per surviving individual.
#fixed_num_individuals = 0

#max_fold_reps_hard_limit = 20

# Number of unique targets or fold counts after which to switch to faster/simpler non-natural sorting and printouts
#sanitize_natural_sort_limit = 1000

# Number of fold ids to report cardinality for, both most common (head) and least common (tail)
#head_tail_fold_id_report_length = 30

# Whether target encoding (CV target encoding, weight of evidence, etc.) could be enabled.
# Target encoding refers to several different feature transformations (primarily focused on
# categorical data) that aim to represent the feature using information from the actual
# target variable. A simple example can be to use the mean of the target to replace each
# unique category of a categorical feature. These types of features can be very predictive,
# but they are prone to overfitting and require more memory, as they need to store mappings of
# the unique categories and the target values.
#
#enable_target_encoding = "auto"

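The "mean of the target per category" example above can be made concrete. This is a deliberately naive sketch (function name is hypothetical); DAI's CV target encoding adds cross-fold machinery (see cvte_cv_in_cv below) precisely because this naive form overfits:

```python
# Naive mean target encoding, as described above: replace each category with
# the mean of the target over rows carrying that category.
from collections import defaultdict

def mean_target_encode(categories, targets):
    sums, counts = defaultdict(float), defaultdict(int)
    for c, t in zip(categories, targets):
        sums[c] += t
        counts[c] += 1
    mapping = {c: sums[c] / counts[c] for c in sums}
    return [mapping[c] for c in categories], mapping

cats = ["a", "b", "a", "b", "a"]
ys = [1.0, 0.0, 0.0, 1.0, 1.0]
encoded, mapping = mean_target_encode(cats, ys)
print(mapping)   # 'a' -> mean of (1, 0, 1); 'b' -> mean of (0, 1)
```

Because the encoding is computed from the same rows it is applied to, the target leaks into the feature; out-of-fold computation (CV target encoding) is the standard remedy.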
# For target encoding, whether a model is used to compute Ginis for checking sanity of the transformer. Requires cvte_cv_in_cv to be enabled. If enabled, CV-in-CV isn't done in case the check fails.
#cvte_cv_in_cv_use_model = false

# For target encoding,
# whether an outer level of cross-fold validation is performed,
# in cases when GINI is detected to flip sign (or have inconsistent sign for weight of evidence)
# between fit_transform on training, transform on training, and transform on validation data.
# The degree to which GINI is poor is also used to perform fold-averaging of look-up tables instead
# of using global look-up tables.
#
#cvte_cv_in_cv = true

# For target encoding,
# when an outer level of cross-fold validation is performed,
# increase the number of outer folds or abort target encoding when GINI between feature and target
# is not close between fit_transform on training, transform on training, and transform on validation data.
#
#cv_in_cv_overconfidence_protection = "auto"

#cv_in_cv_overconfidence_protection_factor = 3.0

#enable_lexilabel_encoding = "off"

#enable_isolation_forest = "off"

# Whether one hot encoding could be enabled.  If auto, then only applied for small data and GLM.
#enable_one_hot_encoding = "auto"

# Limit number of output features (total number of bins) created by all BinnerTransformers based on this
# value, scaled by accuracy, interpretability and dataset size. 0 means unlimited.
#binner_cardinality_limiter = 50

# Whether simple binning of numeric features should be enabled by default. If auto, then only for
# GLM/FTRL/TensorFlow/GrowNet for time-series or for interpretability >= 6. Binning can help linear (or simple)
# models by exposing more signal for features that are not linearly correlated with the target. Note that
# NumCatTransformer and NumToCatTransformer already do binning, but also perform target encoding, which makes them
# less interpretable. The BinnerTransformer is more interpretable, and also works for time series.
#enable_binning = "auto"

# Tree uses XGBoost to find optimal split points for binning of numeric features.
# Quantile uses quantile-based binning. Might fall back to quantile-based binning if there are too many classes or
# not enough unique values.
#binner_bin_method = "['tree']"

# If enabled, will attempt to reduce the number of bins during binning of numeric features.
# Applies to both tree-based and quantile-based bins.
#binner_minimize_bins = true

# Given a set of bins (cut points along min...max), the encoding scheme converts the original
# numeric feature values into the values of the output columns (one column per bin, and one extra bin for
# missing values, if any).
# Piecewise linear is 0 left of the bin, 1 right of the bin, and grows linearly from 0 to 1 inside the bin.
# Binary is 1 inside the bin and 0 outside the bin. Missing value bin encoding is always binary, either 0 or 1.
# If there are no missing values in the data, then there is no missing value bin.
# Piecewise linear helps to encode growing values and keeps smooth transitions across the bin
# boundaries, while binary is best suited for detecting specific values in the data.
# Both are great at providing features to models that otherwise lack non-linear pattern detection.
#binner_encoding = "['piecewise_linear', 'binary']"

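The two encoding schemes just described can be written out directly for a single bin spanning [lo, hi). This sketch follows the definitions in the comment above (0 left of the bin, 1 right of it, linear ramp inside, versus plain membership):

```python
# The two bin encodings described for binner_encoding, for one bin [lo, hi):
# piecewise linear ramps from 0 to 1 across the bin; binary is membership.

def piecewise_linear(x, lo, hi):
    if x <= lo:
        return 0.0            # left of the bin
    if x >= hi:
        return 1.0            # right of the bin
    return (x - lo) / (hi - lo)  # linear ramp inside the bin

def binary(x, lo, hi):
    return 1.0 if lo <= x < hi else 0.0

# Bin covering [10, 20): value 15 sits halfway up the ramp
print(piecewise_linear(15, 10, 20))  # 0.5
print(binary(15, 10, 20))            # 1.0
print(piecewise_linear(25, 10, 20))  # 1.0 (right of the bin)
print(binary(25, 10, 20))            # 0.0 (outside the bin)
```

The ramp's smooth transitions are what make piecewise linear suited to growing values, while binary fires only for values inside the bin.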
# If enabled (default), include the original feature value as an output feature for the BinnerTransformer.
# This ensures that the BinnerTransformer never has less signal than the OriginalTransformer, since they can
# be chosen exclusively.
#
#binner_include_original = true

#isolation_forest_nestimators = 200

# Transformer display names to indicate which transformers to use in the experiment.
# More information on these transformers can be viewed here:
# http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/transformations.html
# This section allows including/excluding these transformations and may be useful when
# simpler (more interpretable) models are sought at the expense of accuracy.
# for multi-class: '['NumCatTETransformer', 'TextLinModelTransformer',
# 'FrequentTransformer', 'CVTargetEncodeTransformer', 'ClusterDistTransformer',
# 'WeightOfEvidenceTransformer', 'TruncSVDNumTransformer', 'CVCatNumEncodeTransformer',
# 'DatesTransformer', 'TextTransformer', 'OriginalTransformer',
# 'NumToCatWoETransformer', 'NumToCatTETransformer', 'ClusterTETransformer',
# 'InteractionsTransformer']'
# for regression/binary: '['TextTransformer', 'ClusterDistTransformer',
# 'OriginalTransformer', 'TextLinModelTransformer', 'NumToCatTETransformer',
# 'DatesTransformer', 'WeightOfEvidenceTransformer', 'InteractionsTransformer',
# 'FrequentTransformer', 'CVTargetEncodeTransformer', 'NumCatTETransformer',
# 'NumToCatWoETransformer', 'TruncSVDNumTransformer', 'ClusterTETransformer',
# 'CVCatNumEncodeTransformer']'
# This list appears in the experiment logs (search for 'Transformers used')
#
#included_transformers = "[]"

# Auxiliary to included_transformers,
# e.g. to disable all Target Encoding: excluded_transformers =
# '['NumCatTETransformer', 'CVTargetEncodeF', 'NumToCatTETransformer',
# 'ClusterTETransformer']'.
# Does not affect transformers used for preprocessing with included_pretransformers.
#
#excluded_transformers = "[]"

# Exclude list of genes (i.e. genes (built on top of transformers) to not use,
# independent of the interpretability setting).
# Some transformers are used by multiple genes, so this allows different control over feature engineering
# for multi-class: '['InteractionsGene', 'WeightOfEvidenceGene',
# 'NumToCatTargetEncodeSingleGene', 'OriginalGene', 'TextGene', 'FrequentGene',
# 'NumToCatWeightOfEvidenceGene', 'NumToCatWeightOfEvidenceMonotonicGene',
# 'CvTargetEncodeSingleGene', 'DateGene', 'NumToCatTargetEncodeMultiGene',
# 'DateTimeGene', 'TextLinRegressorGene', 'ClusterIDTargetEncodeSingleGene',
# 'CvCatNumEncodeGene', 'TruncSvdNumGene', 'ClusterIDTargetEncodeMultiGene',
# 'NumCatTargetEncodeMultiGene', 'CvTargetEncodeMultiGene', 'TextLinClassifierGene',
# 'NumCatTargetEncodeSingleGene', 'ClusterDistGene']'
# for regression/binary: '['CvTargetEncodeSingleGene', 'NumToCatTargetEncodeSingleGene',
# 'CvCatNumEncodeGene', 'ClusterIDTargetEncodeSingleGene', 'TextLinRegressorGene',
# 'CvTargetEncodeMultiGene', 'ClusterDistGene', 'OriginalGene', 'DateGene',
# 'ClusterIDTargetEncodeMultiGene', 'NumToCatTargetEncodeMultiGene',
# 'NumCatTargetEncodeMultiGene', 'TextLinClassifierGene', 'WeightOfEvidenceGene',
# 'FrequentGene', 'TruncSvdNumGene', 'InteractionsGene', 'TextGene',
# 'DateTimeGene', 'NumToCatWeightOfEvidenceGene',
# 'NumToCatWeightOfEvidenceMonotonicGene', 'NumCatTargetEncodeSingleGene']'
# This list appears in the experiment logs (search for 'Genes used')
# e.g. to disable the interactions gene, use:  excluded_genes =
# '['InteractionsGene']'.
# Does not affect transformers used for preprocessing with included_pretransformers.
#
#excluded_genes = "[]"

# "Include specific models" lets you choose a set of models that will be considered during experiment training. The
# individual model settings and their AUTO / ON / OFF values mean the following: AUTO lets the internal decision mechanisms determine
# whether the model should be used during training; ON will try to force the use of the model; OFF turns the model
# off during training (equivalent to deselecting the model in the "Include specific models" picker).
#
#included_models = "[]"

# Auxiliary to included_models
#excluded_models = "[]"

#included_scorers = "[]"

# Select transformers to be used for preprocessing before other transformers operate.
# Pre-processing transformers can potentially take any original features and output
# arbitrary features, which will then be used by the normal layer of transformers,
# whose selection is controlled by toml included_transformers or via the GUI
# "Include specific transformers".
# Notes:
# 1) preprocessing transformers (and all other layers of transformers) are part of the python and (if applicable) mojo scoring packages.
# 2) any BYOR transformer recipe or native DAI transformer can be used as a preprocessing transformer.
# So, e.g., a preprocessing transformer can do interactions, string concatenations, date extractions as a preprocessing step,
# and the next layer of Date and DateTime transformers will use that as input data.
# Caveats:
# 1) one cannot currently do a time-series experiment on a time_column that hasn't yet been made (setup of the experiment only knows about original data, not transformed data).
# However, one can use a run-time data recipe to (e.g.) convert a float date-time into a string date-time, and this will
# be used by DAI's Date and DateTime transformers as well as auto-detection of time series.
# 2) in order to do a time series experiment with the GUI/client auto-selecting groups, periods, etc., the dataset
# must have the time column and groups prepared ahead of the experiment by the user or via a one-time data recipe.
#
#included_pretransformers = "[]"

# Auxiliary to included_pretransformers
#excluded_pretransformers = "[]"

#include_all_as_pretransformers_if_none_selected = false

#force_include_all_as_pretransformers_if_none_selected = false

# Number of full pipeline layers
# (not including the preprocessing layer when included_pretransformers is not empty).
#
#num_pipeline_layers = 1

# There are 2 data recipes:
# 1) one that adds a new dataset or modifies a dataset outside the experiment by file/url (pre-experiment data recipe)
# 2) one that modifies a dataset during the experiment and python scoring (run-time data recipe)
# This list applies to the 2nd case.  One can use the same data recipe code for either case, but note:
# A) the 1st case can make any new data, but is not part of the scoring package.
# B) the 2nd case modifies data during the experiment, so needs some original dataset.
# The recipe can still create all new features, as long as it has the same *name* for:
# target, weight_column, fold_column, time_column, time group columns.
#
#included_datas = "[]"

# Auxiliary to included_datas
#excluded_datas = "[]"

# Custom individuals to use in the experiment.
# DAI stores most information about model type, model hyperparameters, data science types for input features, transformers used, and transformer parameters in an Individual Recipe (an object that is evolved by mutation within the context of DAI's genetic algorithm).
# Every completed experiment auto-generates python code for the experiment that corresponds to the individual(s) used to build the final model.  This auto-generated python code can be edited offline and uploaded as a recipe, or it can be edited within the custom recipe management editor and saved.  This allows code-first access to a significant portion of DAI's internal transformer and model generation.
# Choices are:
# * Empty means all individuals are freshly generated and treated by DAI's AutoML as a container of model and transformer choices.
# * Recipe display names of custom individuals, usually chosen via the UI.  If the number of included custom individuals is less than DAI would need, then the remaining individuals are freshly generated.
# The expert experiment-level option fixed_num_individuals can be used to enforce how many individuals to use in the evolution stage.
# The expert experiment-level option fixed_ensemble_level can be used to enforce how many individuals (each with one base model) will be used in the final model.
# These individuals act in a similar way as the feature brain acts for restart and retrain/refit, and one can retrain/refit custom individuals (i.e. skip the tuning and evolution stages) to use them in building a final model.
# See toml make_python_code for more details.
#included_individuals = "[]"

# Auxiliary to included_individuals
#excluded_individuals = "[]"

# Whether to generate python code for the best individuals for the experiment.
# This python code contains a CustomIndividual class that is a recipe that can be edited and customized.  The CustomIndividual class itself can also be customized for expert use.
# By default, 'auto' means on.
# At the end of an experiment, the summary zip contains auto-generated python code for the individuals used in the experiment, including the last best population (best_population_indivXX.py where XX iterates the population), last best individual (best_individual.py), and final base models (final_indivYY.py where YY iterates the final base models).
# The summary zip also contains an example_indiv.py file that generates other transformers that may be useful that did not happen to be used in the experiment.
# In addition, the GUI and python client allow one to generate custom individuals from an aborted or finished experiment.
# For finished experiments, this will provide a zip file containing the final_indivYY.py files, and for aborted experiments this will contain the best population and best individual files.
# See included_individuals for more details.
#make_python_code = "auto"

# Whether to generate json code for the best individuals for the experiment.
# This json code contains the essential attributes from the internal DAI
# individual class.  Reading the json code as a recipe is not supported.
# By default, 'auto' means off.
#
#make_json_code = "auto"

# Maximum number of genes to make for the example auto-generated custom individual,
# called example_indiv.py in the summary zip file.
#
#python_code_ngenes_max = 100

# Minimum number of genes to make for the example auto-generated custom individual,
# called example_indiv.py in the summary zip file.
#
#python_code_ngenes_min = 100

# Select the scorer to optimize the binary probability threshold that is being used in related Confusion Matrix based scorers that are trivial to optimize otherwise: Precision, Recall, FalsePositiveRate, FalseDiscoveryRate, FalseOmissionRate, TrueNegativeRate, FalseNegativeRate, NegativePredictiveValue. Use F1 if the target class matters more, and MCC if all classes are equally important. AUTO will try to sync the threshold scorer with the scorer used for the experiment, otherwise it falls back to F1. The optimized threshold is also used for creating labels in addition to probabilities in MOJO/Python scorers.
#threshold_scorer = "AUTO"

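Threshold optimization against a scorer like F1 amounts to scanning candidate thresholds on validation probabilities. This is a hedged, generic sketch of that idea (function names are hypothetical, not DAI's implementation):

```python
# Illustrative sketch of optimizing a binary probability threshold against F1
# (the fallback scorer mentioned for threshold_scorer): evaluate F1 at each
# candidate threshold and keep the maximizer.

def f1_at(threshold, probs, labels):
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(probs, labels):
    # each distinct predicted probability is a candidate cut point
    return max(sorted(set(probs)), key=lambda t: f1_at(t, probs, labels))

probs = [0.1, 0.4, 0.6, 0.9]
labels = [0, 0, 1, 1]
print(best_threshold(probs, labels))  # 0.6 separates the classes perfectly here
```

The chosen threshold is then what turns scored probabilities into labels, which is why it also flows into the MOJO/Python scorers as noted above.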
# Auxiliary to included_scorers
#excluded_scorers = "[]"

# Whether to enable constant models ('auto'/'on'/'off')
#enable_constant_model = "auto"

# Whether to enable Decision Tree models ('auto'/'on'/'off').  'auto' disables decision tree unless it is the only non-constant model chosen.
#enable_decision_tree = "auto"

# Whether to enable GLM models ('auto'/'on'/'off')
#enable_glm = "auto"

# Whether to enable XGBoost GBM models ('auto'/'on'/'off')
#enable_xgboost_gbm = "auto"

# Whether to enable LightGBM models ('auto'/'on'/'off')
#enable_lightgbm = "auto"

# Whether to enable TensorFlow models ('auto'/'on'/'off')
#enable_tensorflow = "auto"

# Whether to enable PyTorch-based GrowNet models ('auto'/'on'/'off')
#enable_grownet = "auto"

# Whether to enable FTRL support (follow the regularized leader) model ('auto'/'on'/'off')
#enable_ftrl = "auto"

# Whether to enable RuleFit support (beta version, no mojo) ('auto'/'on'/'off')
#enable_rulefit = "auto"

# Whether to enable automatic addition of zero-inflated models for regression problems with zero-inflated target values that meet certain conditions: y >= 0, y.std() > y.mean()
#enable_zero_inflated_models = "auto"

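The zero-inflation condition quoted above (y >= 0 and y.std() > y.mean()) is easy to check directly. A small sketch, noting that whether DAI uses sample or population standard deviation is an assumption here (population std is shown):

```python
# Quick check of the zero-inflated candidate condition described for
# enable_zero_inflated_models: y >= 0 and y.std() > y.mean().
# Population std is an assumption for illustration.
from statistics import mean, pstdev

def is_zero_inflated_candidate(y):
    return min(y) >= 0 and pstdev(y) > mean(y)

# A target dominated by zeros with occasional large positives satisfies it
mostly_zero = [0, 0, 0, 0, 0, 0, 0, 0, 10, 12]
print(is_zero_inflated_candidate(mostly_zero))  # True
```

A spike of mass at zero pushes the mean down while the occasional large values keep the spread wide, which is exactly the regime zero-inflated models target.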
# Whether to use dask_cudf even for 1 GPU.  If False, will use plain cudf.
#use_dask_for_1_gpu = false

# Number of retrials for dask fit to protect against known xgboost issues https://github.com/dmlc/xgboost/issues/6272 https://github.com/dmlc/xgboost/issues/6551
#dask_retrials_allreduce_empty_issue = 5

# Whether to enable XGBoost RF mode without early stopping.
# Disabled unless switched on.
#
#enable_xgboost_rf = "auto"

# Whether to enable the dask_cudf (multi-GPU) version of XGBoost GBM/RF.
# Disabled unless switched on.
# Only applicable for a single final model without early stopping.  No Shapley possible.
#
#enable_xgboost_gbm_dask = "auto"

# Whether to enable multi-node LightGBM.
# Disabled unless switched on.
#
#enable_lightgbm_dask = "auto"

# If num_inner_hyperopt_trials_prefinal > 0,
# then whether to do hyperparameter tuning during leakage/shift detection.
# Might be useful to find non-trivial leakage/shift, but usually not necessary.
#
#hyperopt_shift_leak = false

# If num_inner_hyperopt_trials_prefinal > 0,
# then whether to do hyperparameter tuning during leakage/shift detection,
# when checking each column.
#
#hyperopt_shift_leak_per_column = false

# Number of trials for Optuna hyperparameter optimization for tuning and evolution models.
# 0 means no trials.
# For small data, 100 is an ok choice,
# while for larger data smaller values are reasonable if results are needed quickly.
# If using RAPIDS or DASK, hyperparameter optimization keeps data on GPU the entire time.
# Currently applies to XGBoost GBM/Dart and LightGBM.
# Useful when there is high overhead of DAI outside the inner model fit/predict,
# so this tunes without that overhead.
# However, it can overfit on a single fold when doing tuning or evolution,
# and if using CV then averaging the fold hyperparameters can lead to unexpected results.
#
#num_inner_hyperopt_trials_prefinal = 0

# Number of trials for Optuna hyperparameter optimization for final models.
# 0 means no trials.
# For small data, 100 is an ok choice,
# while for larger data smaller values are reasonable if results are needed quickly.
# Applies to the final model only, even if num_inner_hyperopt_trials=0.
# If using RAPIDS or DASK, hyperparameter optimization keeps data on GPU the entire time.
# Currently applies to XGBoost GBM/Dart and LightGBM.
# Useful when there is high overhead of DAI outside the inner model fit/predict,
# so this tunes without that overhead.
# However, for the final model each fold is independently optimized and can overfit on each fold,
# after which predictions are averaged
# (so no issue with averaging hyperparameters when doing CV with tuning or evolution).
#
#num_inner_hyperopt_trials_final = 0

# Number of individuals in the final model (all folds/repeats for a given base model) to
# optimize with Optuna hyperparameter tuning.
# -1 means all.
# 0 is the same as choosing no Optuna trials.
# Might be only beneficial to optimize hyperparameters of the best individual (i.e. value of 1) in the ensemble.
#
#num_hyperopt_individuals_final = -1

# Optuna Pruner to use (applicable to XGBoost and LightGBM that support Optuna callbacks).  To disable, choose None.
#optuna_pruner = "MedianPruner"

# Set Optuna constructor arguments for particular applicable pruners.
# https://optuna.readthedocs.io/en/stable/reference/pruners.html
#
#optuna_pruner_kwargs = "{'n_startup_trials': 5, 'n_warmup_steps': 20, 'interval_steps': 20, 'percentile': 25.0, 'min_resource': 'auto', 'max_resource': 'auto', 'reduction_factor': 4, 'min_early_stopping_rate': 0, 'n_brackets': 4, 'min_early_stopping_rate_low': 0, 'upper': 1.0, 'lower': 0.0}"

# Optuna Sampler to use (applicable to XGBoost and LightGBM that support Optuna callbacks).
#optuna_sampler = "TPESampler"

# Set Optuna constructor arguments for particular applicable samplers.
# https://optuna.readthedocs.io/en/stable/reference/samplers.html
#
#optuna_sampler_kwargs = "{}"

1943# Whether to enable Optuna's XGBoost Pruning callback to abort unpromising runs.  Not done if tuning learning rate.
1944#enable_xgboost_hyperopt_callback = true
1945
1946# Whether to enable Optuna's LightGBM Pruning callback to abort unpromising runs.  Not done if tuning learning rate.
1947#enable_lightgbm_hyperopt_callback = true
1948
1949# Whether to enable XGBoost Dart models ('auto'/'on'/'off')
1950#enable_xgboost_dart = "auto"
1951
1952# Whether to enable dask_cudf (multi-GPU) version of XGBoost Dart.
1953# Disabled unless switched on.
1954# If have only 1 GPU, then only uses dask_cudf if use_dask_for_1_gpu is True
1955# Only applicable for single final model without early stopping.  No Shapley possible.
1956# 
1957#enable_xgboost_dart_dask = "auto"
1958
1959# Whether to enable dask_cudf (multi-GPU) version of XGBoost RF.
1960# Disabled unless switched on.
1961# If have only 1 GPU, then only uses dask_cudf if use_dask_for_1_gpu is True
1962# Only applicable for single final model without early stopping.  No Shapley possible.
1963# 
1964#enable_xgboost_rf_dask = "auto"
1965
1966# Number of GPUs to use per model hyperopt training task.  Set to -1 for all GPUs.
1967# For example, when this is set to -1 and there are 4 GPUs available, all of them can be used for the training of a single model across a Dask cluster.
1968# Ignored if GPUs disabled or no GPUs on system.
1969# In multinode context, this refers to the per-node value.
1970# 
1971#num_gpus_per_hyperopt_dask = -1
1972
1973# Whether to use (and expect exists) xgbfi feature interactions for xgboost.
1974#use_xgboost_xgbfi = false
1975
1976# Which boosting types to enable for LightGBM (gbdt = boosted trees, rf_early_stopping = random forest with early stopping rf = random forest (no early stopping), dart = drop-out boosted trees with no early stopping
1977#enable_lightgbm_boosting_types = "['gbdt']"
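# Example override (illustrative): also allow dart mutations alongside boosted trees:
#enable_lightgbm_boosting_types = "['gbdt', 'dart']"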

# Whether to enable automatic class weighting for imbalanced multiclass problems. Can make probabilities worse, but improves confusion-matrix based scorers for rare classes without the need to manually calibrate probabilities or fine-tune the label creation process.
#enable_lightgbm_multiclass_balancing = "auto"

# Whether to enable LightGBM categorical feature support (runs in CPU mode even if GPUs enabled, and no MOJO built)
#enable_lightgbm_cat_support = false

# Whether to enable LightGBM linear_tree handling
# (only CPU mode currently, no L1 regularization -- mae objective, and no MOJO build).
# 
#enable_lightgbm_linear_tree = false

# Whether to enable LightGBM extra trees mode to help avoid overfitting
#enable_lightgbm_extra_trees = false

# basic: as fast as when no constraints applied, but over-constrains the predictions.
# intermediate: very slightly slower, but much less constraining while still holding monotonicity, and should be more accurate than basic.
# advanced: slower, but even more accurate than intermediate.
# 
#lightgbm_monotone_constraints_method = "intermediate"

# Forbids any monotone splits on the first x (rounded down) level(s) of the tree.
# The penalty applied to monotone splits on a given depth is a continuous,
# increasing function of the penalization parameter.
# https://lightgbm.readthedocs.io/en/latest/Parameters.html#monotone_penalty
# 
#lightgbm_monotone_penalty = 0.0
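# Example override (illustrative): use the more accurate (but slower) constraints
# method and forbid monotone splits on the first two tree levels:
#lightgbm_monotone_constraints_method = "advanced"
#lightgbm_monotone_penalty = 2.0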

# Whether to enable LightGBM CUDA implementation instead of OpenCL.
# CUDA with LightGBM only supported for Pascal+ (compute capability >=6.0)
#enable_lightgbm_cuda_support = false

# Whether to show constant models in iteration panel even when not best model.
#show_constant_model = false

#drop_constant_model_final_ensemble = true

#xgboost_rf_exact_threshold_num_rows_x_cols = 10000

# Select objectives allowed for XGBoost.
# Added to allowed mutations (the default reg:squarederror is in sample list 3 times)
# Note: logistic, tweedie, gamma, poisson are only valid for targets with positive values.
# Note: The objective relates to the form of the (regularized) loss function,
# used to determine the split with maximum information gain,
# while the metric is the non-regularized metric
# measured on the validation set (external or internally generated by DAI).
# 
#xgboost_reg_objectives = "['reg:squarederror']"

# Select metrics allowed for XGBoost.
# Added to allowed mutations (the default rmse and mae are in sample list twice).
# Note: tweedie, gamma, poisson are only valid for targets with positive values.
# 
#xgboost_reg_metrics = "['rmse', 'mae']"

# Select which binary metrics are allowed for XGBoost.
# Added to allowed mutations (all evenly sampled).
#xgboost_binary_metrics = "['logloss', 'auc', 'aucpr', 'error']"

# Select objectives allowed for LightGBM.
# Added to allowed mutations (the default mse is in sample list 2 times if selected).
# "binary" refers to logistic regression.
# Note: If choosing quantile/huber or fair and data is not normalized,
# the recommendation is to use params_lightgbm to specify a reasonable
# value of alpha (for quantile or huber) or fairc (for fair) to LightGBM.
# Note: mse is same as rmse, corresponding to L2 loss.  mae is L1 loss.
# Note: tweedie, gamma, poisson are only valid for targets with positive values.
# Note: The objective relates to the form of the (regularized) loss function,
# used to determine the split with maximum information gain,
# while the metric is the non-regularized metric
# measured on the validation set (external or internally generated by DAI).
# 
#lightgbm_reg_objectives = "['mse', 'mae']"

# Select metrics allowed for LightGBM.
# Added to allowed mutations (the default rmse is in sample list three times if selected).
# Note: If choosing huber or fair and data is not normalized,
# the recommendation is to use params_lightgbm to specify a reasonable
# value of alpha (for huber or quantile) or fairc (for fair) to LightGBM.
# Note: tweedie, gamma, poisson are only valid for targets with positive values.
# 
#lightgbm_reg_metrics = "['rmse', 'mse', 'mae']"

# Select binary objectives allowed for LightGBM.
# Added to allowed mutations (the default binary is in sample list 2 times if selected)
#lightgbm_binary_objectives = "['binary', 'xentropy']"

# Select which binary metrics are allowed for LightGBM.
# Added to allowed mutations (all evenly sampled).
#lightgbm_binary_metrics = "['binary', 'binary', 'auc']"

# Select which metrics are allowed for multiclass LightGBM.
# Added to allowed mutations (evenly sampled if selected).
#lightgbm_multi_metrics = "['multiclass', 'multi_error']"

# tweedie_variance_power parameters to try for XGBoostModel and LightGBMModel if tweedie is used.
# First value is default.
#tweedie_variance_power_list = "[1.5, 1.2, 1.9]"

# huber alpha parameters to try for LightGBMModel if huber is used.
# First value is default.
#huber_alpha_list = "[0.9, 0.3, 0.5, 0.6, 0.7, 0.8, 0.1, 0.99]"

# fair c parameters to try for LightGBMModel if fair is used.
# First value is default.
#fair_c_list = "[1.0, 0.1, 0.5, 0.9]"

# poisson max_delta_step parameters to try for LightGBMModel if poisson is used.
# First value is default.
#poisson_max_delta_step_list = "[0.7, 0.9, 0.5, 0.2]"

# quantile alpha parameters to try for LightGBMModel if quantile is used.
# First value is default.
#quantile_alpha = "[0.9, 0.95, 0.99, 0.6]"
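# Example override (illustrative): restrict LightGBM regression to quantile loss
# at the 90th percentile only:
#lightgbm_reg_objectives = "['quantile']"
#quantile_alpha = "[0.9]"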

# Default reg_lambda regularization for GLM.
#reg_lambda_glm_default = 0.0004

#lossguide_drop_factor = 4.0

#lossguide_max_depth_extend_factor = 8.0

# Parameters for LightGBM to override DAI parameters
# e.g. ``'eval_metric'`` instead of ``'metric'`` should be used
# e.g. ``params_lightgbm="{'objective': 'binary', 'n_estimators': 100, 'max_leaves': 64, 'random_state': 1234}"``
# e.g. ``params_lightgbm="{'n_estimators': 600, 'learning_rate': 0.1, 'reg_alpha': 0.0, 'reg_lambda': 0.5, 'gamma': 0, 'max_depth': 0, 'max_bin': 128, 'max_leaves': 256, 'scale_pos_weight': 1.0, 'max_delta_step': 3.469919910597877, 'min_child_weight': 1, 'subsample': 0.9, 'colsample_bytree': 0.3, 'tree_method': 'gpu_hist', 'grow_policy': 'lossguide', 'min_data_in_bin': 3, 'min_child_samples': 5, 'early_stopping_rounds': 20, 'num_classes': 2, 'objective': 'binary', 'eval_metric': 'binary', 'random_state': 987654, 'early_stopping_threshold': 0.01, 'monotonicity_constraints': False, 'silent': True, 'debug_verbose': 0, 'subsample_freq': 1}"``
# avoid including "system"-level parameters like ``'n_gpus': 1, 'gpu_id': 0, 'n_jobs': 1, 'booster': 'lightgbm'``
# also likely should avoid parameters like ``'objective': 'binary'``, unless one really knows what one is doing (e.g. alternative objectives)
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
# And see: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst
# Can also pass objective parameters if choosing (or in case automatically chosen) certain objectives:
# https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric-parameters
#params_lightgbm = "{}"

# Parameters for XGBoost to override DAI parameters
# similar parameters as LightGBM, since LightGBM parameters are transcribed from their XGBoost equivalents
# e.g. ``params_xgboost="{'n_estimators': 100, 'max_leaves': 64, 'max_depth': 0, 'random_state': 1234}"``
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
#params_xgboost = "{}"

# Like params_xgboost but for XGBoost random forest.
#params_xgboost_rf = "{}"

# Like params_xgboost but for XGBoost's dart method
#params_dart = "{}"

# Parameters for TensorFlow to override DAI parameters
# e.g. ``params_tensorflow="{'lr': 0.01, 'add_wide': False, 'add_attention': True, 'epochs': 30, 'layers': (100, 100), 'activation': 'selu', 'batch_size': 64, 'chunk_size': 1000, 'dropout': 0.3, 'strategy': '1cycle', 'l1': 0.0, 'l2': 0.0, 'ort_loss': 0.5, 'ort_loss_tau': 0.01, 'normalize_type': 'streaming'}"``
# See: https://keras.io/ , e.g. for activations: https://keras.io/activations/
# Example layers: ``(500, 500, 500), (100, 100, 100), (100, 100), (50, 50)``
# Strategies: ``'1cycle'`` or ``'one_shot'``, See: https://github.com/fastai/fastai
# ``'one_shot'`` is not allowed for ensembles.
# normalize_type: 'streaming' or 'global' (using sklearn StandardScaler)
# 
#params_tensorflow = "{}"

# Parameters for XGBoost's gblinear to override DAI parameters
# e.g. ``params_gblinear="{'n_estimators': 100}"``
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
#params_gblinear = "{}"

# Parameters for Decision Tree to override DAI parameters
# parameters should be given as XGBoost equivalents unless a unique LightGBM parameter
# e.g. ``'eval_metric'`` instead of ``'metric'`` should be used
# e.g. ``params_decision_tree="{'objective': 'binary', 'n_estimators': 100, 'max_leaves': 64, 'random_state': 1234}"``
# e.g. ``params_decision_tree="{'n_estimators': 1, 'learning_rate': 1, 'reg_alpha': 0.0, 'reg_lambda': 0.5, 'gamma': 0, 'max_depth': 0, 'max_bin': 128, 'max_leaves': 256, 'scale_pos_weight': 1.0, 'max_delta_step': 3.469919910597877, 'min_child_weight': 1, 'subsample': 0.9, 'colsample_bytree': 0.3, 'tree_method': 'gpu_hist', 'grow_policy': 'lossguide', 'min_data_in_bin': 3, 'min_child_samples': 5, 'early_stopping_rounds': 20, 'num_classes': 2, 'objective': 'binary', 'eval_metric': 'logloss', 'random_state': 987654, 'early_stopping_threshold': 0.01, 'monotonicity_constraints': False, 'silent': True, 'debug_verbose': 0, 'subsample_freq': 1}"``
# avoid including "system"-level parameters like ``'n_gpus': 1, 'gpu_id': 0, 'n_jobs': 1, 'booster': 'lightgbm'``
# also likely should avoid parameters like ``'objective': 'binary:logistic'``, unless one really knows what one is doing (e.g. alternative objectives)
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
# And see: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst
# Can also pass objective parameters if choosing (or in case automatically chosen) certain objectives:
# https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric-parameters
#params_decision_tree = "{}"

# Parameters for Rulefit to override DAI parameters
# e.g. ``params_rulefit="{'max_leaves': 64}"``
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
#params_rulefit = "{}"

# Parameters for FTRL to override DAI parameters
#params_ftrl = "{}"

# Parameters for GrowNet to override DAI parameters
#params_grownet = "{}"

# How to handle tomls like params_tune_lightgbm.
# override: For any key in the params_tune_ toml dict, use the list of values instead of DAI's list of values.
# override_and_first_as_default: Like override, but also use the first entry in the tuple/list (if present) as a replacement for (e.g.) params_lightgbm when using params_tune_lightgbm.
# exclusive: Only tune the keys in the params_tune_ toml dict, unless no keys are present.  Otherwise use DAI's default values.
# exclusive_and_first_as_default: Like exclusive, but with the same first-as-default behavior as override_and_first_as_default.
# To fully control hyperparameter tuning, either set "override" mode and include every hyperparameter with at least one value in each list within the dictionary, or choose "exclusive" and then rely upon DAI's unchanging default values for any keys not given.
# For custom recipes, one can use recipe_dict to pass hyperparameters; if a custom recipe uses the "get_one()" function and the user_tune passed contains the hyperparameter dictionary equivalent of the params_tune_ tomls, then this params_tune_mode will also work for custom recipes.
#params_tune_mode = "override_and_first_as_default"
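# Example override (illustrative): fully control LightGBM max_leaves tuning; with
# override_and_first_as_default, 32 becomes the default and the others are sampled:
#params_tune_mode = "override_and_first_as_default"
#params_tune_lightgbm = "{'max_leaves': [32, 64, 128]}"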

# Whether to adjust GBM trees, learning rate, and early_stopping_rounds for GBM models or recipes with _is_gbm=True.
# True: auto mode, which changes trees/LR/stopping if tune_learning_rate=false, early stopping is supported by the model, and the model is GBM or from a custom individual with the parameter in adjusted_params.
# False: disable any adjusting from tuning-evolution into the final model.
# Setting this to false is required if (e.g.) one changes params_lightgbm or params_tune_lightgbm and wants to preserve the tuning-evolution values into the final model.
# One should also set tune_learning_rate to true to tune the learning_rate, else it will be fixed to some single value.
#params_final_auto_adjust = true

# Dictionary of key:lists of values to use for LightGBM tuning, overrides DAI's choice per key
# e.g. ``params_tune_lightgbm="{'min_child_samples': [1,2,5,100,1000], 'min_data_in_bin': [1,2,3,10,100,1000]}"``
#params_tune_lightgbm = "{}"

# Like params_tune_lightgbm but for XGBoost
# e.g. ``params_tune_xgboost="{'max_leaves': [8, 16, 32, 64]}"``
#params_tune_xgboost = "{}"

# Like params_tune_lightgbm but for XGBoost random forest
# e.g. ``params_tune_xgboost_rf="{'max_leaves': [8, 16, 32, 64]}"``
#params_tune_xgboost_rf = "{}"

# Dictionary of key:lists of values to use for LightGBM Decision Tree tuning, overrides DAI's choice per key
# e.g. ``params_tune_decision_tree="{'min_child_samples': [1,2,5,100,1000], 'min_data_in_bin': [1,2,3,10,100,1000]}"``
#params_tune_decision_tree = "{}"

# Like params_tune_lightgbm but for XGBoost's Dart
# e.g. ``params_tune_dart="{'max_leaves': [8, 16, 32, 64]}"``
#params_tune_dart = "{}"

# Like params_tune_lightgbm but for TensorFlow
# e.g. ``params_tune_tensorflow="{'layers': [(10,10,10), (10, 10, 10, 10)]}"``
#params_tune_tensorflow = "{}"

# Like params_tune_lightgbm but for gblinear
# e.g. ``params_tune_gblinear="{'reg_lambda': [.01, .001, .0001, .0002]}"``
#params_tune_gblinear = "{}"

# Like params_tune_lightgbm but for rulefit
# e.g. ``params_tune_rulefit="{'max_depth': [4, 5, 6]}"``
#params_tune_rulefit = "{}"

# Like params_tune_lightgbm but for ftrl
#params_tune_ftrl = "{}"

# Like params_tune_lightgbm but for GrowNet
# e.g. ``params_tune_grownet="{'input_dropout': [0.2, 0.5]}"``
#params_tune_grownet = "{}"

# Whether to force max_leaves and max_depth to be 0 if grow_policy is depthwise and lossguide, respectively.
#params_tune_grow_policy_simple_trees = true

# Maximum number of GBM trees or GLM iterations. Can be reduced for lower accuracy and/or higher interpretability.
# Early stopping usually chooses fewer. Ignored if fixed_max_nestimators is > 0.
# 
#max_nestimators = 3000

# Fixed maximum number of GBM trees or GLM iterations. If > 0, ignores max_nestimators and disables automatic reduction
# due to lower accuracy or higher interpretability. Early stopping usually chooses fewer.
# 
#fixed_max_nestimators = -1

# LightGBM dart mode and normal rf mode do not use early stopping,
# and they will sample from these values for n_estimators.
# XGBoost Dart mode will also sample from these n_estimators.
# Also applies to XGBoost Dask models that do not yet support early stopping or callbacks.
# For default parameters it chooses the first value in the list, while mutations sample from the list.
# 
#n_estimators_list_no_early_stopping = "[50, 100, 150, 200, 250, 300]"

# Lower limit on learning rate for final ensemble GBM models.
# In some cases, the maximum number of trees/iterations is insufficient for the final learning rate,
# which can lead to early stopping never being triggered and poor final model performance.
# Then, one can try increasing the learning rate by raising this minimum,
# or one can try increasing the maximum number of trees/iterations.
# 
#min_learning_rate_final = 0.01

# Upper limit on learning rate for final ensemble GBM models
#max_learning_rate_final = 0.05

# Factor by which max_nestimators is reduced for tuning and feature evolution
#max_nestimators_feature_evolution_factor = 0.2

# Lower limit on learning rate for feature engineering GBM models
#min_learning_rate = 0.05

# Upper limit on learning rate for GBM models
# To override min_learning_rate and min_learning_rate_final, set this to a smaller value
# 
#max_learning_rate = 0.5
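# Example override (illustrative): force smaller learning rates throughout,
# trading longer runtime for potentially better generalization:
#min_learning_rate = 0.01
#max_learning_rate = 0.1
#min_learning_rate_final = 0.005
#max_learning_rate_final = 0.02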

# Whether to lock learning rate, tree count, and early stopping rounds for GBM algorithms to the final model values.
#lock_ga_to_final_trees = false

# Whether to tune learning rate for GBM algorithms (if not doing just a single final model).
# If tuning with Optuna, might help isolate the optimal learning rate.
# 
#tune_learning_rate = false

# Max. number of epochs for TensorFlow and FTRL models
#max_epochs = 50

# Number of epochs for TensorFlow when larger data size.
#max_epochs_tf_big_data = 5

# Maximum tree depth (and corresponding max max_leaves as 2**max_max_depth)
#max_max_depth = 12

# Default max_bin for tree methods
#default_max_bin = 256

# Default max_bin for LightGBM (64 recommended for GPU LightGBM for speed)
#default_lightgbm_max_bin = 249

# Maximum max_bin for tree features
#max_max_bin = 256

# Minimum max_bin for any tree
#min_max_bin = 32

# Amount of memory at which max_bin = 256 can handle 125 columns and max_bin = 32 can handle 1000 columns.
# As available system memory rises above this scale, proportionally more columns can be handled at higher max_bin.
# Currently set to 10GB
#scale_mem_for_max_bin = 10737418240

# Factor by which rf gets more depth than gbdt
#factor_rf = 1.25

# Whether TensorFlow will use all CPU cores, or if it will split among all transformers.  Only for transformers, not the TensorFlow model.
#tensorflow_use_all_cores = true

# Whether TensorFlow will use all CPU cores if reproducible is set, or if it will split among all transformers
#tensorflow_use_all_cores_even_if_reproducible_true = false

# Whether to disable TensorFlow memory optimizations. Can help fix tensorflow.python.framework.errors_impl.AlreadyExistsError
#tensorflow_disable_memory_optimization = true

# How many cores to use for each TensorFlow model, regardless if GPU or CPU based (0 = auto mode)
#tensorflow_cores = 0

# For TensorFlow models, maximum number of cores to use if tensorflow_cores=0 (auto mode), because the TensorFlow model is inefficient at using many cores.  See also max_fit_cores for all models.
#tensorflow_model_max_cores = 4

# How many cores to use for each Bert Model and Transformer, regardless if GPU or CPU based (0 = auto mode)
#bert_cores = 0

# Whether Bert will use all CPU cores, or if it will split among all transformers.  Only for transformers, not the Bert model.
#bert_use_all_cores = true

# For Bert models, maximum number of cores to use if bert_cores=0 (auto mode), because the Bert model is inefficient at using many cores.  See also max_fit_cores for all models.
#bert_model_max_cores = 8

# Max number of rules to be used for RuleFit models (-1 for all)
#rulefit_max_num_rules = -1

# Max tree depth for RuleFit models
#rulefit_max_tree_depth = 6

# Max number of trees for RuleFit models
#rulefit_max_num_trees = 500

# Enable One-Hot Encoding (which does binning to limit the number of bins to no more than 100 anyway) for categorical columns with fewer than this many unique values
# Set to 0 to disable
#one_hot_encoding_cardinality_threshold = 50

# Up to how many levels to choose one-hot encoding by default instead of other encodings; restricted to 10x fewer (down to 2 levels) when the number of columns eligible for OHE exceeds 500. Note the total number of bins is reduced for bigger data independently of this.
#one_hot_encoding_cardinality_threshold_default_use = 40

# Treat text columns also as categorical columns if the cardinality is <= this value.
# Set to 0 to treat text columns only as text.
#text_as_categorical_cardinality_threshold = 1000

# If num_as_cat is true, then treat numeric columns also as categorical columns if the cardinality is > this value.
# Setting to 0 allows all numerics to be treated as categorical if num_as_cat is True.
#numeric_as_categorical_cardinality_threshold = 2

# If num_as_cat is true, then treat numeric columns also as categorical columns to possibly one-hot encode if the cardinality is > this value.
# Setting to 0 allows all numerics to be treated as categorical to possibly one-hot encode if num_as_cat is True.
#numeric_as_ohe_categorical_cardinality_threshold = 2

#one_hot_encoding_show_actual_levels_in_features = false
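# Example override (illustrative): one-hot encode only small categoricals and treat
# low-cardinality text columns as categorical:
#one_hot_encoding_cardinality_threshold = 20
#text_as_categorical_cardinality_threshold = 100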

# Fixed ensemble_level
# -1 = auto, based upon ensemble_accuracy_switch, accuracy, size of data, etc.
# 0 = No ensemble, only final single model on validated iteration/tree count
# 1 = 1 model, multiple ensemble folds (cross-validation)
# >=2 = >=2 models, multiple ensemble folds (cross-validation)
# 
#fixed_ensemble_level = -1

# If enabled, use cross-validation to determine optimal parameters for the single final model,
# and to be able to create training holdout predictions.
#cross_validate_single_final_model = true

# Model to combine base model predictions, for experiments that create a final pipeline
# consisting of multiple base models.
# blender: Creates a linear blend with non-negative weights that add to 1 (blending) - recommended
# extra_trees: Creates a tree model to non-linearly combine the base models (stacking) - experimental; recommended to also enable cross_validate_meta_learner.
# neural_net: Creates a neural net model to non-linearly combine the base models (stacking) - experimental; recommended to also enable cross_validate_meta_learner.
# 
#ensemble_meta_learner = "blender"

# If enabled, use cross-validation to create an ensemble for the meta learner itself. Especially recommended for
# ``ensemble_meta_learner='extra_trees'``, to make unbiased training holdout predictions.
# Will disable MOJO if enabled. Not needed for ``ensemble_meta_learner='blender'``.
# 
#cross_validate_meta_learner = false
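# Example override (illustrative): a 3-model stacked ensemble combined by an
# extra_trees meta learner with the recommended cross-validated meta learner
# (note this disables the MOJO):
#fixed_ensemble_level = 3
#ensemble_meta_learner = "extra_trees"
#cross_validate_meta_learner = true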

# Number of models to tune during pre-evolution phase
# Can make this lower to avoid excessive tuning, or higher to do enhanced tuning.
# ``-1 : auto``
# 
#parameter_tuning_num_models = -1

# Number of models (out of all parameter_tuning_num_models) to have as SEQUENCE instead of random features/parameters.
# ``-1 : auto, use at least one default individual per model class tuned``
# 
#parameter_tuning_num_models_sequence = -1

# Number of models to add during tuning that cover other cases, like for TS having no TE on time column groups.
# ``-1 : auto, adds additional models to protect against overfit on high-gain training features.``
# 
#parameter_tuning_num_models_extra = -1

# Dictionary of model class name (keys) and number (values) of instances.
#num_tuning_instances = "{}"

#validate_meta_learner = true

#validate_meta_learner_extra = false

# Specify the fixed number of cross-validation folds (if >= 2) for feature evolution. (The actual number of splits allowed can be less and is determined at experiment run-time.)
#fixed_num_folds_evolution = -1

# Specify the fixed number of cross-validation folds (if >= 2) for the final model. (The actual number of splits allowed can be less and is determined at experiment run-time.)
#fixed_num_folds = -1

# Set "on" to force only the first fold for models - useful for quick runs regardless of data
#fixed_only_first_fold_model = "auto"

# Set the number of repeated cross-validation folds for feature evolution and final models (if > 0), 0 is default. Only for ensembles that do cross-validation (so no external validation and not time-series), not for single final models.
#fixed_fold_reps = 0
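# Example override (illustrative): quick runs with 3-fold cross-validation but only
# the first fold fitted per model:
#fixed_num_folds = 3
#fixed_only_first_fold_model = "on"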

#num_fold_ids_show = 10

#fold_scores_instability_warning_threshold = 0.25

# Upper limit on the number of rows x number of columns for feature evolution (applies to both training and validation/holdout splits).
# Feature evolution is the process that determines which features will be derived.
# Depending on accuracy settings, a fraction of this value will be used.
# 
#feature_evolution_data_size = 300000000

# Upper limit on the number of rows x number of columns for training the final pipeline.
# 
#final_pipeline_data_size = 1000000000

# Whether to automatically limit validation data size using feature_evolution_data_size (giving max_rows_feature_evolution shown in logs) for tuning-evolution, and using final_pipeline_data_size, max_validation_to_training_size_ratio_for_final_ensemble for the final model.
#limit_validation_size = true

# Smaller values can speed up final pipeline model training, as validation data is only used for early stopping.
# Note that final model predictions and scores will always be provided on the full dataset provided.
# 
#max_validation_to_training_size_ratio_for_final_ensemble = 2.0

# Ratio of minority to majority class of the target column beyond which stratified sampling is done for binary classification. Otherwise perform random sampling. Set to 0 to always do random sampling. Set to 1 to always do stratified sampling.
#force_stratified_splits_for_imbalanced_threshold_binary = 0.01

#force_stratified_splits_for_binary_max_rows = 1000000

# Specify whether to do stratified sampling for validation fold creation for iid regression problems. Otherwise perform random sampling.
#stratify_for_regression = true

# Sampling method for imbalanced binary classification problems. Choices are:
# "auto": sample both classes as needed, depending on data
# "over_under_sampling": over-sample the minority class and under-sample the majority class, depending on data
# "under_sampling": under-sample the majority class to reach class balance
# "off": do not perform any sampling
# 
#imbalance_sampling_method = "off"

# For smaller data, there's generally no benefit in using imbalanced sampling methods.
#imbalance_sampling_threshold_min_rows_original = 100000

# For imbalanced binary classification: ratio of majority to minority class equal to and above which to enable
# special imbalanced models with sampling techniques (specified by imbalance_sampling_method) to attempt to improve model performance.
# 
#imbalance_ratio_sampling_threshold = 5

# For heavily imbalanced binary classification: ratio of majority to minority class equal to and above which to enable only
# special imbalanced models on full original data, without upfront sampling.
# 
#heavy_imbalance_ratio_sampling_threshold = 25

# Special handling can include special models, special scorers, special feature engineering.
# 
#imbalance_ratio_multiclass_threshold = 5

# Special handling can include special models, special scorers, special feature engineering.
# 
#heavy_imbalance_ratio_multiclass_threshold = 25

# -1: automatic
#imbalance_sampling_number_of_bags = -1

# -1: automatic
#imbalance_sampling_max_number_of_bags = 10

# Only for shift/leakage/tuning/feature evolution models. Not used for final models. Final models can
# be limited by imbalance_sampling_max_number_of_bags.
#imbalance_sampling_max_number_of_bags_feature_evolution = 3

# Max. size of data sampled during imbalanced sampling (in terms of dataset size),
# controls number of bags (approximately). Only for imbalance_sampling_number_of_bags == -1.
#imbalance_sampling_max_multiple_data_size = 1.0

# Rank averaging can be helpful when ensembling diverse models when ranking metrics like AUC/Gini
# are optimized. No MOJO support yet.
#imbalance_sampling_rank_averaging = "auto"

# A value of 0.5 means that models/algorithms will be presented a balanced target class distribution
# after applying under/over-sampling techniques on the training data. Sometimes it makes sense to
# choose a smaller value like 0.1 or 0.01 when starting from an extremely imbalanced original target
# distribution. -1.0: automatic
#imbalance_sampling_target_minority_fraction = -1.0
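# Example override (illustrative): for a heavily imbalanced binary target, enable
# over/under-sampling and aim for a 10% minority fraction after sampling:
#imbalance_sampling_method = "over_under_sampling"
#imbalance_sampling_target_minority_fraction = 0.1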

# For binary classification: ratio of majority to minority class equal to and above which to notify
# of imbalance in GUI, to say slightly imbalanced.
# More than ``imbalance_ratio_sampling_threshold`` will say the problem is imbalanced.
# 
#imbalance_ratio_notification_threshold = 2.0

# List of possible bins for FTRL (largest is default best value)
#nbins_ftrl_list = "[1000000, 10000000, 100000000]"

# Samples the number of automatic FTRL interaction terms to no more than this value (for each of 2nd, 3rd, 4th order terms)
#ftrl_max_interaction_terms_per_degree = 10000

# List of possible bins for target encoding (first is default value)
#te_bin_list = "[25, 10, 100, 250]"

# List of possible bins for weight of evidence encoding (first is default value)
# If only one value is wanted: woe_bin_list = [2]
#woe_bin_list = "[25, 10, 100, 250]"

# List of possible bins for one-hot encoding (first is default value).  If left as default, the actual list is changed for given data size and dials.
#ohe_bin_list = "[10, 25, 50, 75, 100]"

# List of max possible number of bins for numeric binning (first is default value). If left as default, the actual list is changed for given data size and dials. The binner will automatically reduce the number of bins based on predictive power.
#binner_bin_list = "[5, 10, 20]"

# If dataset has more columns, then will check only first such columns. Set to 0 to disable.
#drop_redundant_columns_limit = 1000

# Whether to drop columns with constant values
#drop_constant_columns = true

# Whether to detect duplicate rows in training, validation and testing datasets. Done after doing type detection and dropping of redundant or missing columns across datasets, just before the experiment starts, still before leakage detection. Any further dropping of columns can change the amount of duplicate rows. Informative only; if you want to drop rows in training data, make sure to check the drop_duplicate_rows setting. Uses a sample size, given by detect_duplicate_rows_max_rows_x_cols.
#detect_duplicate_rows = true

#drop_duplicate_rows_timeout = 60

# Whether to drop duplicate rows in training data. Done at the start of Driverless AI, only considering columns to drop as given by the user, not considering validation or training datasets or leakage or redundant columns. Any further dropping of columns can change the amount of duplicate rows. Time limited by drop_duplicate_rows_timeout seconds.
# 'auto': Same as 'off'.
# 'weight': If duplicates, then convert dropped duplicates into a weight column for training.  Useful when duplicates were added to preserve some expected distribution of instances.  Only allowed if no weight column is present, else duplicates are just dropped.
# 'drop': Drop any duplicates, keeping only first instances.
# 'off': Do not drop any duplicates.  This may lead to over-estimation of accuracy.
#drop_duplicate_rows = "auto"
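The 'weight' behavior can be sketched as follows (illustrative only, not DAI's implementation): duplicate rows collapse into unique rows, and their multiplicity becomes a weight column, so the original distribution of instances is preserved.

```python
from collections import Counter

def dedupe_to_weights(rows):
    """Collapse duplicate rows, returning unique rows plus a weight column."""
    counts = Counter(rows)           # rows must be hashable (e.g. tuples)
    unique = list(counts)
    weights = [counts[r] for r in unique]
    return unique, weights

rows = [(1, "a"), (1, "a"), (2, "b"), (1, "a"), (3, "c")]
unique, weights = dedupe_to_weights(rows)
# (1, "a") occurs 3 times, so its weight is 3; total weight equals original row count
```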

# If > 0, then acts as sampling size for informative duplicate row detection. If set to 0, will do checks for all dataset sizes.
#detect_duplicate_rows_max_rows_x_cols = 10000000

# Whether to drop columns that appear to be an ID
#drop_id_columns = true

# Whether to avoid dropping any columns (original or derived)
#no_drop_features = false

# Direct control over columns to drop in bulk so can copy-paste large lists instead of selecting each one separately in GUI
#cols_to_drop = "[]"

#cols_to_drop_sanitized = "[]"

# Control over columns to group by for CVCatNumEncode Transformer, default is empty list that means DAI automatically searches all columns,
# selected randomly or by which have top variable importance.
# The CVCatNumEncode Transformer takes a list of categoricals (or these cols_to_group_by) and uses those columns
# as new features to perform aggregations on (agg_funcs_for_group_by).
#cols_to_group_by = "[]"

#cols_to_group_by_sanitized = "[]"

# Whether to sample from given features to group by (True) or to always group by all features (False) when using cols_to_group_by.
#sample_cols_to_group_by = false

# Aggregation functions to use for groupby operations for CVCatNumEncode Transformer, see also cols_to_group_by and sample_cols_to_group_by.
#agg_funcs_for_group_by = "['mean', 'sd', 'min', 'max', 'count']"

# Out of fold aggregations ensure less overfitting, but see less data in each fold.  For controlling how many folds used by CVCatNumEncode Transformer.
#folds_for_group_by = 5
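The kind of group-by aggregation these options control can be illustrated with a minimal sketch (not the actual CVCatNumEncode Transformer): for each category, aggregate a numeric column with a function like those in `agg_funcs_for_group_by`, then map the aggregate back onto each row as a new feature.

```python
from collections import defaultdict

def group_by_mean(categories, values):
    """Per-category mean mapped back onto each row (one of the agg functions)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for cat, val in zip(categories, values):
        sums[cat] += val
        counts[cat] += 1
    means = {cat: sums[cat] / counts[cat] for cat in sums}
    return [means[cat] for cat in categories]   # one new feature value per row

feature = group_by_mean(["a", "b", "a", "b"], [1.0, 10.0, 3.0, 30.0])
# group "a" mean = 2.0, group "b" mean = 20.0
```

In the real transformer the aggregation is computed out-of-fold (see `folds_for_group_by`) to reduce overfitting.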

# Control over columns to force in.  Forced-in features are handled by the most interpretable transformer allowed by experiment
# options, and they are never removed (although model may still assign 0 importance to them).
# Transformers used by default include:
# OriginalTransformer for numeric,
# CatOriginalTransformer or FrequencyTransformer for categorical,
# TextOriginalTransformer for text,
# DateTimeOriginalTransformer for date-times,
# DateOriginalTransformer for dates,
# ImageOriginalTransformer or ImageVectorizerTransformer for images,
# etc.
#cols_to_force_in = "[]"

#cols_to_force_in_sanitized = "[]"

# Strategy to apply when doing mutations on transformers.
# Sample mode is default, with tendency to sample transformer parameters.
# Batched mode tends to do multiple types of the same transformation together.
# Full mode does even more types of the same transformation together.
# 
#mutation_mode = "sample"

# 'baseline': Explore exemplar set of models with baselines as reference.
# 'random': Explore 10 random seeds for same setup.  Useful since nature of genetic algorithm is noisy and repeats might get better results, or one can ensemble the custom individuals from such repeats.
# 'line': Explore good model with all features and original features with all models.  Useful as first exploration.
# 'line_all': Like 'line', but enable all models and transformers possible instead of only what base experiment setup would have inferred.
# 'product': Explore one-by-one Cartesian product of each model and transformer.  Useful for exhaustive exploration.
#leaderboard_mode = "baseline"

# Controls whether users can launch an experiment in Leaderboard mode from the UI.
#leaderboard_off = false

# Allows control over default accuracy knob setting.
# If default models are too complex, set to -1 or -2, etc.
# If default models are not accurate enough, set to 1 or 2, etc.
# 
#default_knob_offset_accuracy = 0

# Allows control over default time knob setting.
# If default experiments are too slow, set to -1 or -2, etc.
# If default experiments finish too fast, set to 1 or 2, etc.
# 
#default_knob_offset_time = 0

# Allows control over default interpretability knob setting.
# If default models are too simple, set to -1 or -2, etc.
# If default models are too complex, set to 1 or 2, etc.
# 
#default_knob_offset_interpretability = 0

# Whether to enable checking text for shift, currently only via label encoding.
#shift_check_text = false

# Whether to use LightGBM random forest mode without early stopping for shift detection.
#use_rf_for_shift_if_have_lgbm = true

# Normalized training variable importance above which to check the feature for shift
# Useful to avoid checking likely unimportant features
#shift_key_features_varimp = 0.01

# Whether to only check certain features based upon the value of shift_key_features_varimp
#shift_check_reduced_features = true

# Number of trees to use to train model to check shift in distribution
# No larger than max_nestimators
#shift_trees = 100

# The value of max_bin to use for trees to use to train model to check shift in distribution
#shift_max_bin = 256

# The min. value of max_depth to use for trees to use to train model to check shift in distribution
#shift_min_max_depth = 4

# The max. value of max_depth to use for trees to use to train model to check shift in distribution
#shift_max_max_depth = 8

# If distribution shift detection is enabled, show features for which shift AUC is above this value
# (AUC of a binary classifier that predicts whether given feature value belongs to train or test data)
#detect_features_distribution_shift_threshold_auc = 0.55

# Minimum number of features to keep, keeping at least the least-shifted feature if set to 1
#drop_features_distribution_shift_min_features = 1

# Shift beyond which shows HIGH notification, else MEDIUM
#shift_high_notification_level = 0.8
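The idea behind per-feature shift detection can be sketched as follows (illustrative, not DAI code): label train rows 0 and test rows 1, then measure how well a single feature separates the two. Here the feature value itself serves directly as the classifier score and AUC is computed from it; DAI instead fits a tree model per feature.

```python
def auc(labels, scores):
    """AUC via pairwise comparison of positive vs negative scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

train_vals = [1.0, 2.0, 3.0, 4.0]
test_vals = [11.0, 12.0, 13.0, 14.0]     # strongly shifted feature
labels = [0] * len(train_vals) + [1] * len(test_vals)
shift_auc = auc(labels, train_vals + test_vals)
# shift_auc == 1.0, well above detect_features_distribution_shift_threshold_auc = 0.55
```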

# Whether to enable checking text for leakage, currently only via label encoding.
#leakage_check_text = true

# Normalized training variable importance (per 1 minus AUC/R2 to control for leaky varimp dominance) above which to check the feature for leakage
# Useful to avoid checking likely unimportant features
#leakage_key_features_varimp = 0.001

# Like leakage_key_features_varimp, but applies if early stopping is disabled, when one can trust multiple leaks to get uniform varimp.
#leakage_key_features_varimp_if_no_early_stopping = 0.05

# Whether to only check certain features based upon the value of leakage_key_features_varimp.  If any feature has AUC near 1, will consume all variable importance, even if another feature is also leaky.  So False is safest option, but True generally good if many columns.
#leakage_check_reduced_features = true

# Whether to use LightGBM random forest mode without early stopping for leakage detection.
#use_rf_for_leakage_if_have_lgbm = true

# Number of trees to use to train model to check for leakage
# No larger than max_nestimators
#leakage_trees = 100

# The value of max_bin to use for trees to use to train model to check for leakage
#leakage_max_bin = 256

# The min. value of max_depth to use for trees to use to train model to check for leakage
#leakage_min_max_depth = 6

# The max. value of max_depth to use for trees to use to train model to check for leakage
#leakage_max_max_depth = 8

# When leakage detection is enabled, if AUC (R2 for regression) on original data (label-encoded)
# is above or equal to this value, then trigger per-feature leakage detection
# 
#detect_features_leakage_threshold_auc = 0.95

# When leakage detection is enabled, show features for which AUC (R2 for regression,
# for whether that predictor/feature alone predicts the target) is above or equal to this value.
# Feature is dropped if AUC/R2 is above or equal to drop_features_leakage_threshold_auc
# 
#detect_features_per_feature_leakage_threshold_auc = 0.8

# Minimum number of features to keep, keeping at least the least-leaky feature if set to 1
#drop_features_leakage_min_features = 1

# Ratio of train to validation holdout when testing for leakage
#leakage_train_test_split = 0.25
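For regression, per-feature leakage detection boils down to asking how well a single feature alone predicts the target. A minimal sketch (illustrative, not DAI's method) uses the R² of a simple one-feature linear fit; a feature with R² at or above the thresholds above is flagged as leaky.

```python
def single_feature_r2(x, y):
    """R^2 of a simple linear regression of y on a single feature x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

target = [1.0, 2.0, 3.0, 4.0, 5.0]
leaky = [2.1, 4.1, 6.1, 8.1, 10.1]      # exactly target * 2 + 0.1: a leak
honest = [0.3, -1.0, 0.5, 0.2, -0.4]    # unrelated feature
# single_feature_r2(leaky, target) is ~1.0 (dropped); honest scores far lower
```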

# Whether to enable detailed traces (in GUI Trace)
#detailed_traces = false

# Whether to enable debug log level (in log files)
#debug_log = false

# Whether to add logging of system information such as CPU, GPU, disk space at the start of each experiment log. Same information is already logged in system logs.
#log_system_info_per_experiment = true

#check_system = true

#check_system_basic = true

# How close to the optimal value (usually 1 or 0) does the validation score need to be to be considered perfect (to stop the experiment)?
#abs_tol_for_perfect_score = 0.0001

# Timeout in seconds to wait for data ingestion.
#data_ingest_timeout = 86400.0

# How many seconds to allow mutate to take, nominally only takes a few seconds at most.  But on a busy system doing many individuals, might take longer.  Optuna sometimes live-lock hangs in scipy random distribution maker.
#mutate_timeout = 600

# Whether to trust GPU locking for submission of GPU jobs to limit memory usage.
# If False, then wait for GPU submissions to be less than number of GPUs,
# even if later jobs could be purely CPU jobs that did not need to wait.
# Only applicable if not restricting number of GPUs via num_gpus_per_experiment,
# else have to use resources instead of relying upon locking.
# 
#gpu_locking_trust_pool_submission = true

# Whether to steal GPU locks when process is neither on GPU PID list nor using CPU resources at all (e.g. sleeping).  Only steal from multi-GPU locks that are incomplete.  Prevents deadlocks in case multi-GPU model hangs.
#gpu_locking_free_dead = true

#tensorflow_allow_cpu_only = false

#check_pred_contribs_sum = false

#debug_daimodel_level = 0

#debug_debug_xgboost_splits = false

#log_predict_info = true

#log_fit_info = true

# Amount of time to stall (in seconds) before killing the job (assumes it hung). Reference time is scaled by train data shape of rows * cols to get used stalled_time_kill
#stalled_time_kill_ref = 440.0

# Amount of time between checks for some process taking a long time; every cycle the full process list will be dumped to console or experiment logs if possible.
#long_time_psdump = 1800

# Whether to dump ps every long_time_psdump
#do_psdump = false

# Whether to check every long_time_psdump seconds and send SIGUSR1 to all children to see where they may be stuck or taking a long time.
#livelock_signal = false

# Value to override number of sockets, in case DAI's determination is wrong, for non-trivial systems.  0 means auto.
#num_cpu_sockets_override = 0

# Value to override number of GPUs, in case DAI's determination is wrong, for non-trivial systems.  -1 means auto. Can also set min_num_cores_per_gpu=-1 to allow any number of GPUs for each experiment regardless of number of cores.
#num_gpus_override = -1

# Whether to show GPU usage only when locking.  'auto' means 'on' if num_gpus_override is different than actual total visible GPUs, else it means 'off'
#show_gpu_usage_only_if_locked = "auto"

# Show inapplicable models in preview, to be sure not missing models one could have used
#show_inapplicable_models_preview = false

# Show inapplicable transformers in preview, to be sure not missing transformers one could have used
#show_inapplicable_transformers_preview = false

# Show warnings for models (image auto, Dask multinode/multi-GPU) if conditions are met to use but not chosen, to avoid missing models that could benefit accuracy/performance
#show_warnings_preview = false

# Show warnings for models that have no transformers for certain features.
#show_warnings_preview_unused_map_features = true

# Up to how many input features to determine, during GUI/client preview, unused features. Too many slows preview down.
#max_cols_show_unused_features = 1000

# Up to how many input features to show transformers used for each input feature.
#max_cols_show_feature_transformer_mapping = 1000

# Up to how many input features to show, in preview, that are unused features.
#warning_unused_feature_show_max = 3

#interaction_finder_max_rows_x_cols = 200000.0

#interaction_finder_corr_threshold = 0.95

# Required GINI relative improvement for InteractionTransformer.
# If GINI is not better than this relative improvement compared to original features considered
# in the interaction, then the interaction is not returned.  If noisy data, and no clear signal
# in interactions but still want interactions, then can decrease this number.
#interaction_finder_gini_rel_improvement_threshold = 0.5

# Number of transformed Interactions to make as best out of many generated trial interactions.
#interaction_finder_return_limit = 5

# Whether to enable bootstrap sampling. Provides error bars to validation and test scores based on the standard error of the bootstrap mean.
#enable_bootstrap = true

# Minimum number of bootstrap samples to use for estimating score and its standard deviation
# Actual number of bootstrap samples will vary between the min and max,
# depending upon row count (more rows, fewer samples) and accuracy settings (higher accuracy, more samples)
# 
#min_bootstrap_samples = 1

# Maximum number of bootstrap samples to use for estimating score and its standard deviation
# Actual number of bootstrap samples will vary between the min and max,
# depending upon row count (more rows, fewer samples) and accuracy settings (higher accuracy, more samples)
# 
#max_bootstrap_samples = 100

# Minimum fraction of row size to take as sample size for bootstrap estimator
# Actual sample size used for bootstrap estimate will vary between the min and max,
# depending upon row count (more rows, smaller sample size) and accuracy settings (higher accuracy, larger sample size)
# 
#min_bootstrap_sample_size_factor = 1.0

# Maximum fraction of row size to take as sample size for bootstrap estimator
# Actual sample size used for bootstrap estimate will vary between the min and max,
# depending upon row count (more rows, smaller sample size) and accuracy settings (higher accuracy, larger sample size)
# 
#max_bootstrap_sample_size_factor = 10.0

# Seed to use for final model bootstrap sampling, -1 means use experiment-derived seed.
# E.g. one can retrain final model with different seed to get different final model error bars for scores.
# 
#bootstrap_final_seed = -1
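How bootstrap error bars for a score come about can be sketched as follows (illustrative, not DAI's implementation): resample the per-row holdout scores with replacement, score each resample, and report the mean plus/minus the standard error of the bootstrap mean.

```python
import random
import statistics

def bootstrap_score(per_row_scores, n_samples=100, seed=0):
    """Mean and standard error of the bootstrap mean of per-row scores."""
    rng = random.Random(seed)
    n = len(per_row_scores)
    means = []
    for _ in range(n_samples):
        sample = [per_row_scores[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    return statistics.mean(means), statistics.stdev(means)

# e.g. per-row squared errors from a holdout set (made-up numbers)
errors = [0.1, 0.4, 0.2, 0.9, 0.3, 0.2, 0.5, 0.1, 0.6, 0.2]
score, err_bar = bootstrap_score(errors)
# score is close to the plain mean (0.35); err_bar quantifies its uncertainty
```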

# Benford's law: mean absolute deviation threshold equal and above which integer valued columns are treated as categoricals too
#benford_mad_threshold_int = 0.03

# Benford's law: mean absolute deviation threshold equal and above which real valued columns are treated as categoricals too
#benford_mad_threshold_real = 0.1
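The underlying check can be sketched like this (illustrative only): compare the leading-digit distribution of a column against Benford's law and compute the mean absolute deviation (MAD). Columns far from Benford's distribution tend to be codes or labels rather than naturally occurring magnitudes, so they get treated as categorical.

```python
import math

def benford_mad(values):
    """Mean absolute deviation of leading-digit frequencies from Benford's law."""
    benford = [math.log10(1 + 1 / d) for d in range(1, 10)]
    counts = [0] * 9
    for v in values:
        digit = int(str(abs(v)).lstrip("0.")[0])   # first significant digit
        counts[digit - 1] += 1
    total = sum(counts)
    observed = [c / total for c in counts]
    return sum(abs(o - b) for o, b in zip(observed, benford)) / 9

ids = [111, 222, 333, 444, 555, 666, 777, 888, 999]  # uniform leading digits
mad = benford_mad(ids)
# uniform digits deviate strongly from Benford; mad exceeds the 0.03 threshold
```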

# Variable importance below which feature is dropped (with possible replacement found that is better)
# This also sets overall scale for lower interpretability settings.
# Set to lower value if ok with many weak features despite choosing high interpretability,
# or if see drop in performance due to the need for weak features.
# 
#varimp_threshold_at_interpretability_10 = 0.001

# Whether to avoid setting stabilize_varimp=false and stabilize_fs=false for time series experiments.
#allow_stabilize_varimp_for_ts = false

# Variable importance is used by genetic algorithm to decide which features are useful,
# so this can stabilize the feature selection by the genetic algorithm.
# This is by default disabled for time series experiments, which can have real diverse behavior in each split.
# But in some cases feature selection is improved in presence of highly shifted variables that are not handled
# by lag transformers and one can set allow_stabilize_varimp_for_ts=true.
# 
#stabilize_varimp = true

# Whether to take minimum (True) or mean (False) of delta improvement in score when aggregating feature selection scores across multiple folds/depths.
# Delta improvement of score corresponds to original metric minus metric of shuffled feature frame if maximizing metric,
# and corresponds to negative of such a score difference if minimizing.
# Feature selection by permutation importance considers the change in score after shuffling a feature, and using minimum operation
# ignores optimistic scores in favor of pessimistic scores when aggregating over folds.
# Note, if using tree methods, multiple depths may be fitted, in which case regardless of this toml setting,
# only features that are kept for all depths are kept by feature selection.
# If interpretability >= config toml value of fs_data_vary_for_interpretability, then half data (or setting of fs_data_frac)
# is used as another fit, in which case regardless of this toml setting,
# only features that are kept for all data sizes are kept by feature selection.
# Note: This is disabled for small data since arbitrary slices of small data can lead to disjoint features being important and only aggregated average behavior has signal.
# 
#stabilize_fs = true

# Whether final pipeline uses fixed features for some transformers that would normally
# perform search, such as InteractionsTransformer.
# Use what learned from tuning and evolution (True) or to freshly search for new features (False).
# This can give a more stable pipeline, especially for small data or when using interaction transformer
# as pretransformer in multi-layer pipeline.
# 
#stabilize_features = true

#fraction_std_bootstrap_ladder_factor = 0.01

#bootstrap_ladder_samples_limit = 10

#features_allowed_by_interpretability = "{1: 10000000, 2: 10000, 3: 1000, 4: 500, 5: 300, 6: 200, 7: 150, 8: 100, 9: 80, 10: 50, 11: 50, 12: 50, 13: 50}"

#nfeatures_max_threshold = 200

#rdelta_percent_score_penalty_per_feature_by_interpretability = "{1: 0.0, 2: 0.1, 3: 1.0, 4: 2.0, 5: 5.0, 6: 10.0, 7: 20.0, 8: 30.0, 9: 50.0, 10: 100.0, 11: 100.0, 12: 100.0, 13: 100.0}"

#drop_low_meta_weights = true

#meta_weight_allowed_by_interpretability = "{1: 1E-7, 2: 1E-5, 3: 1E-4, 4: 1E-3, 5: 1E-2, 6: 0.03, 7: 0.05, 8: 0.08, 9: 0.10, 10: 0.15, 11: 0.15, 12: 0.15, 13: 0.15}"

#meta_weight_allowed_for_reference = 1.0

#feature_cost_mean_interp_for_penalty = 5

#features_cost_per_interp = 0.25

#varimp_threshold_shift_report = 0.3

#apply_featuregene_limits_after_tuning = true

#remove_scored_0gain_genes_in_postprocessing_above_interpretability = 13

#remove_scored_0gain_genes_in_postprocessing_above_interpretability_final_population = 2

#remove_scored_by_threshold_genes_in_postprocessing_above_interpretability_final_population = 7

#show_full_pipeline_details = false

#num_transformed_features_per_pipeline_show = 10

#fs_data_vary_for_interpretability = 7

#fs_data_frac = 0.5

#many_columns_count = 400

#columns_count_interpretable = 200

#round_up_indivs_for_busy_gpus = true

#tuning_share_varimp = "best"

# Graphviz is an optional requirement for native installations (RPM/DEB/Tar-SH, outside of Docker) to convert .dot files into .png files for pipeline visualizations as part of experiment artifacts
#require_graphviz = true

# Unnormalized probability to add genes or instances of transformers with specific attributes.
# If no genes can be added, other mutations
# (mutating model hyperparameters, pruning genes, pruning features, etc.) are attempted.
# 
#prob_add_genes = 0.5

# Unnormalized probability, conditioned on prob_add_genes,
# to add genes or instances of transformers with specific attributes
# that have shown to be beneficial to other individuals within the population.
# 
#prob_addbest_genes = 0.5

# Unnormalized probability to prune genes or instances of transformers with specific attributes.
# If a variety of transformers with many attributes exists, default value is reasonable.
# However, if one has fixed set of transformers that should not change or no new transformer attributes
# can be added, then setting this to 0.0 is reasonable to avoid undesired loss of transformations.
# 
#prob_prune_genes = 0.5

# Unnormalized probability to change model hyperparameters.
# 
#prob_perturb_xgb = 0.25

# Unnormalized probability to prune features that have low variable importance, as opposed to pruning entire instances of genes/transformers when prob_prune_genes used.
# If prob_prune_genes=0.0 and prob_prune_by_features==0.0 and prob_prune_by_top_features==0.0, then genes/transformers and transformed features are only pruned if they are:
# 1) inconsistent with the genome
# 2) inconsistent with the column data types
# 3) had no signal (for interactions and cv_in_cv for target encoding)
# 4) transformation failed
# E.g. these toml settings are then ignored:
# 1) ngenes_max
# 2) limit_features_by_interpretability
# 3) varimp_threshold_at_interpretability_10
# 4) features_allowed_by_interpretability
# 5) remove_scored_0gain_genes_in_postprocessing_above_interpretability
# 6) nfeatures_max_threshold
# 7) features_cost_per_interp
# So this acts similar to no_drop_features, except that no_drop_features also applies to shift and leakage detection, and prevents dropping of constant and ID columns.
#prob_prune_by_features = 0.25

# Unnormalized probability to prune features that have high variable importance,
# in case they have high gain but negative performance on validation and would otherwise maintain poor validation scores.
# Similar to prob_prune_by_features but for high gain features.
#prob_prune_by_top_features = 0.25

# Maximum number of high gain features to prune for each mutation call, to control behavior of prob_prune_by_top_features.
#max_num_prune_by_top_features = 1

# Like prob_prune_genes but only for pretransformers, i.e. those transformers in layers except last layer that connects to model.
#prob_prune_pretransformer_genes = 0.5

# Like prob_prune_by_features but only for pretransformers, i.e. those transformers in layers except last layer that connects to model.
#prob_prune_pretransformer_by_features = 0.25

# Like prob_prune_by_top_features but only for pretransformers, i.e. those transformers in layers except last layer that connects to model.
#prob_prune_pretransformer_by_top_features = 0.25

# When doing restart, retrain, refit, reset these individual parameters to new toml values.
#override_individual_from_toml_list = "['prob_perturb_xgb', 'prob_add_genes', 'prob_addbest_genes', 'prob_prune_genes', 'prob_prune_by_features', 'prob_prune_by_top_features', 'prob_prune_pretransformer_genes', 'prob_prune_pretransformer_by_features', 'prob_prune_pretransformer_by_top_features']"

# Max. number of trees to use for all tree model predictions. For testing, when predictions don't matter. -1 means disabled.
#fast_approx_max_num_trees_ever = -1

# Max. number of trees to use for fast_approx=True (e.g., for AutoDoc/MLI).
#fast_approx_num_trees = 250

# Whether to speed up fast_approx=True further, by using only one fold out of all cross-validation folds (e.g., for AutoDoc/MLI).
#fast_approx_do_one_fold = true

# Whether to speed up fast_approx=True further, by using only one model out of all ensemble models (e.g., for AutoDoc/MLI).
#fast_approx_do_one_model = false

# Max. number of trees to use for fast_approx_contribs=True (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_num_trees = 50

# Whether to speed up fast_approx_contribs=True further, by using only one fold out of all cross-validation folds (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_do_one_fold = true

# Whether to speed up fast_approx_contribs=True further, by using only one model out of all ensemble models (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_do_one_model = true

# Approximate interval between logging of progress updates when making predictions. >=0 to enable, -1 to disable.
#prediction_logging_interval = 300

# Whether to use exploit-explore logic like DAI 1.8.x.  False will explore more.
#use_187_prob_logic = true

# Whether to enable cross-validated OneHotEncoding+LinearModel transformer
#enable_ohe_linear = false

#max_absolute_feature_expansion = 1000

#booster_for_fs_permute = "auto"

#model_class_name_for_fs_permute = "auto"

#switch_from_tree_to_lgbm_if_can = true

#model_class_name_for_shift = "auto"

#model_class_name_for_leakage = "auto"

#default_booster = "lightgbm"

#default_model_class_name = "LightGBMModel"

#num_as_cat_false_if_ohe = true

#no_ohe_try = true

# Number of classes above which to include TensorFlow (if TensorFlow is enabled),
# even if not used exclusively.
# For small data this is decreased by tensorflow_num_classes_small_data_factor,
# and for bigger data, this is increased by tensorflow_num_classes_big_data_reduction_factor.
#tensorflow_added_num_classes_switch = 5

# Number of classes above which to only use TensorFlow (if TensorFlow is enabled),
# instead of other models set on 'auto' (models set to 'on' are still used).
# Up to tensorflow_num_classes_switch_but_keep_lightgbm, keep LightGBM.
# If small data, this is increased by tensorflow_num_classes_small_data_factor.
#tensorflow_num_classes_switch = 10

#tensorflow_num_classes_switch_but_keep_lightgbm = 15

#tensorflow_num_classes_small_data_factor = 3

#tensorflow_num_classes_big_data_reduction_factor = 6

# Compute empirical prediction intervals (based on holdout predictions).
#prediction_intervals = true

# Confidence level for prediction intervals.
#prediction_intervals_alpha = 0.9
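Empirical prediction intervals from holdout residuals can be sketched like this (illustrative, not DAI's exact method): take the alpha-central quantiles of the residuals (actual minus predicted) and add them to a new prediction.

```python
def empirical_interval(residuals, prediction, alpha=0.9):
    """Prediction interval from the alpha-central empirical residual quantiles."""
    r = sorted(residuals)
    lo_idx = int((1 - alpha) / 2 * (len(r) - 1))
    hi_idx = int((1 + alpha) / 2 * (len(r) - 1))
    return prediction + r[lo_idx], prediction + r[hi_idx]

# made-up holdout residuals
residuals = [-2.0, -1.0, -0.5, -0.2, 0.0, 0.1, 0.3, 0.6, 1.2, 2.5]
low, high = empirical_interval(residuals, prediction=10.0, alpha=0.9)
# interval brackets the prediction using the lower/upper residual quantiles
```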
3061
3062# Appends one extra output column with predicted target class (after the per-class probabilities).
3063# Uses argmax for multiclass, and the threshold defined by the optimal scorer controlled by the
3064# 'threshold_scorer' expert setting for binary problems. This setting controls the training, validation and test
3065# set predictions (if applicable) that are created by the experiment. MOJO, scoring pipeline and client APIs
3066# control this behavior via their own version of this parameter.
3067#pred_labels = true
3068
3069# Class count above which do not use TextLin Transformer.
3070#textlin_num_classes_switch = 5
3071
3072#text_gene_dim_reduction_choices = "[50]"
3073
3074#text_gene_max_ngram = "[1, 2, 3]"
3075
3076# Max size (in tokens) of the vocabulary created during fitting of Tfidf/Count based text
3077# transformers (not CNN/BERT). If multiple values are provided, will use the first one for initial models, and use remaining
3078# values during parameter tuning and feature evolution. Values smaller than 10000 are recommended for speed,
3079# and a reasonable set of choices include: 100, 1000, 5000, 10000, 50000, 100000, 500000.
3080#text_transformers_max_vocabulary_size = "[1000, 5000]"
3081
3082# Enables caching of BERT embeddings by temporally saving the embedding vectors to the experiment directory. Set to -1 to cache all text, set to 0 to disable caching.
3083#number_of_texts_to_cache_in_bert_transformer = -1
3084
3085# Modify early stopping behavior for tree-based models (LightGBM, XGBoostGBM, CatBoost) such
3086# that training score (on training data, not holdout) and validation score differ no more than this absolute value
3087# (i.e., stop adding trees once abs(train_score - valid_score) > max_abs_score_delta_train_valid).
3088# Keep in mind that the meaning of this value depends on the chosen scorer and the dataset (i.e., 0.01 for
3089# LogLoss is different than 0.01 for MSE). Experimental option, only for expert use to keep model complexity low.
3090# To disable, set to 0.0
3091#max_abs_score_delta_train_valid = 0.0
3092
3093# Modify early stopping behavior for tree-based models (LightGBM, XGBoostGBM, CatBoost) such
3094# that training score (on training data, not holdout) and validation score differ no more than this relative value
3095# (i.e., stop adding trees once abs(train_score - valid_score) > max_rel_score_delta_train_valid * abs(train_score)).
3096# Keep in mind that the meaning of this value depends on the chosen scorer and the dataset (i.e., 0.01 for
3097# LogLoss is different than 0.01 for MSE). Experimental option, only for expert use to keep model complexity low.
3098# To disable, set to 0.0
3099#max_rel_score_delta_train_valid = 0.0
3100
3101# Whether to search for optimal lambda for given alpha for XGBoost GLM.
3102# If 'auto', disabled if training data has more rows * cols than final_pipeline_data_size or for multiclass experiments.
3103# Disabled always for ensemble_level = 0.
3104# Not always a good approach, can be slow for little payoff compared to grid search.
3105# 
3106#glm_lambda_search = "auto"
3107
3108# If XGBoost GLM lambda search is enabled, whether to do search by the eval metric (True)
3109# or using the actual DAI scorer (False).
3110#glm_lambda_search_by_eval_metric = false
3111
3112#gbm_early_stopping_rounds_min = 1
3113
3114#gbm_early_stopping_rounds_max = 10000000000
3115
3116# Whether to enable early stopping threshold for LightGBM, varying by accuracy.
3117# Stops training once validation score changes by less than the threshold.
3118# This leads to fewer trees, usually avoiding wasteful trees, but may lower accuracy.
3119# However, it may also improve generalization by avoiding fine-tuning to validation set.
3120# 0 means a value of 0 is used, i.e. disabled
3121# > 0 means non-automatic mode using that *relative* value, scaled by first tree results of the metric for any metric.
3122# -1 means always enable, but the threshold itself is automatic (lower the accuracy, the larger the threshold).
3123# -2 means fully automatic mode, i.e. disabled unless reduce_mojo_size is true.  If true, the lower the accuracy, the larger the threshold.
3124# NOTE: Automatic threshold is set so relative value of metric's min_delta in LightGBM's callback for early stopping is:
3125# if accuracy <= 1:
3126#     early_stopping_threshold = 1e-1
3127# elif accuracy <= 4:
3128#     early_stopping_threshold = 1e-2
3129# elif accuracy <= 7:
3130#     early_stopping_threshold = 1e-3
3131# elif accuracy <= 9:
3132#     early_stopping_threshold = 1e-4
3133# else:
3134#     early_stopping_threshold = 0
3135# 
3136#enable_early_stopping_threshold = -2.0
3137
3138#glm_optimal_refit = true
3139
3140# Max. number of top variable importances to save per iteration (GUI can only display a max. of 14)
3141#max_varimp_to_save = 100
3142
3143# Max. number of top variable importances to show in logs during feature evolution
3144#max_num_varimp_to_log = 10
3145
3146# Max. number of top variable importance shifts to show in logs and GUI after final model built
3147#max_num_varimp_shift_to_log = 10
3148
3149# Skipping just avoids the failed transformer.
3150# Sometimes python multiprocessing swallows exceptions,
3151# so skipping and logging exceptions is also a more reliable way to handle them.
3152# Recipe can raise h2oaicore.systemutils.IgnoreError to ignore error and avoid logging error.
3153# Features that fail are pruned from the individual.
3154# If that leaves no features in the individual, then backend tuning, feature/model tuning, final model building, etc.
3155# will still fail since DAI should not continue if all features are from a failed state.
3156# 
3157#skip_transformer_failures = true
3158
3159# Skipping just avoids the failed model.  Failures are logged depending upon detailed_skip_failure_messages_level.
3160# Recipe can raise h2oaicore.systemutils.IgnoreError to ignore error and avoid logging error.
3161# 
3162#skip_model_failures = true
3163
3164# Skipping just avoids the failed scorer if among many scorers.  Failures are logged depending upon detailed_skip_failure_messages_level.
3165# Recipe can raise h2oaicore.systemutils.IgnoreError to ignore error and avoid logging error.
3166# Default is True to avoid failing in, e.g., final model building due to a single scorer.
3167# 
3168#skip_scorer_failures = true
3169
3170# Skipping avoids the failed recipe.  Failures are logged depending upon detailed_skip_failure_messages_level.
3171# Default is False because runtime data recipes are one-time at start of experiment and expected to work by default.
3172# 
3173#skip_data_recipe_failures = false
3174
3175# Whether final model transformer failures can be skipped for layers above the first layer of a multi-layer pipeline.
3176#can_skip_final_upper_layer_failures = true
3177
3178# Verbosity level for logging failure messages of failed and then skipped transformers or models.
3179# Full failures always go to disk as *.stack files,
3180# which upon completion of experiment goes into details folder within experiment log zip file.
3181# 
3182#detailed_skip_failure_messages_level = 1
3183
3184# Whether to not only log errors of recipes (models and transformers) but also show a high-level notification in the GUI.
3185# 
3186#notify_failures = true
3187
3188# Instructions for 'Add to config.toml via toml string' in GUI expert page
3189# Self-referential toml parameter, for setting any other toml parameters as a string of tomls separated by
3190# \n (spaces around \n are ok).
3192# Useful when a toml parameter is not exposed in expert mode but per-experiment control is wanted.
3193# Setting this will override all other choices.
3194# In the expert page, each time expert options are saved, the new state is set without memory of any prior settings.
3195# The entered item is a fully compliant toml string that would be processed directly by toml.load().
3196# One should include 2 double quotes around the entire setting, or double quotes need to be escaped.
3197# One enters the text into the expert page as follows:
3198# e.g. ``enable_glm="off"
3199# enable_xgboost_gbm="off"
3200# enable_lightgbm="on"``
3201# e.g. ``""enable_glm="off"
3202# enable_xgboost_gbm="off"
3203# enable_lightgbm="off"
3204# enable_tensorflow="on"""``
3205# e.g. ``fixed_num_individuals=4``
3206# e.g. ``params_lightgbm="{'objective':'poisson'}"``
3207# e.g. ``""params_lightgbm="{'objective':'poisson'}"""``
3208# e.g. ``max_cores=10
3209# data_precision="float32"
3210# max_rows_feature_evolution=50000000000
3211# ensemble_accuracy_switch=11
3212# feature_engineering_effort=1
3213# target_transformer="identity"
3214# tournament_feature_style_accuracy_switch=5
3215# params_tensorflow="{'layers': (100, 100, 100, 100, 100, 100)}"``
3216# e.g. ``""max_cores=10
3217# data_precision="float32"
3218# max_rows_feature_evolution=50000000000
3219# ensemble_accuracy_switch=11
3220# feature_engineering_effort=1
3221# target_transformer="identity"
3222# tournament_feature_style_accuracy_switch=5
3223# params_tensorflow="{'layers': (100, 100, 100, 100, 100, 100)}"""``
3224# If you see: "toml.TomlDecodeError" then ensure toml is set correctly.
3225# When set in the expert page of an experiment, these changes only affect experiments and not the server.
3226# Usually should keep this as empty string in this toml file.
3227# 
3228#config_overrides = ""
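# Example (hypothetical values, for illustration only): a compliant override string as it
# could be set directly in this file, with individual tomls separated by \n:
# ``config_overrides = "enable_tensorflow=\"on\"\nmax_cores=8"``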
3229
3230# Whether to dump every scored individual's variable importance to a csv/tabulated/json file. Produces files like:
3231# individual_scored_id%d.iter%d.<hash>.features.txt for transformed features.
3232# individual_scored_id%d.iter%d.<hash>.features_orig.txt for original features.
3233# individual_scored_id%d.iter%d.<hash>.coefs.txt for absolute importance of transformed features.
3234# There are txt, tab.txt, and json formats for some files, and "best_" prefix means it is the best individual for that iteration
3235# The hash in the name matches the hash in the files produced by dump_modelparams_every_scored_indiv=true that can be used to track mutation history.
3236#dump_varimp_every_scored_indiv = false
3237
3238# Whether to dump every scored individual's model parameters to csv/tabulated/json file
3239# produces files like: individual_scored.params.[txt, csv, json].
3240# Each individual has a hash that matches the hash in the filenames produced if dump_varimp_every_scored_indiv=true,
3241# and the "unchanging hash" is the first parent hash (None if that individual is the first parent itself).
3242# These hashes can be used to track the history of the mutations.
3243# 
3244#dump_modelparams_every_scored_indiv = true
3245
3246# Number of features to show in the model dump of every scored individual
3247#dump_modelparams_every_scored_indiv_feature_count = 3
3248
3249# Number of past mutations to show in the model dump of every scored individual
3250#dump_modelparams_every_scored_indiv_mutation_count = 3
3251
3252# Whether to append to one file (false) or write separate files like individual_scored_id%d.iter%d*params* (true) when dumping model parameters for every scored individual
3253#dump_modelparams_separate_files = false
3254
3255# Whether to dump every scored fold's timing and feature info to a *timings*.txt file
3256# 
3257#dump_trans_timings = false
3258
3259# Whether to delete preview timings if transformer timings were written
3260#delete_preview_trans_timings = true
3261
3262# Attempt to create at most this many exemplars (actual rows behaving like cluster centroids) for the Aggregator
3263# algorithm in unsupervised experiment mode.
3264# 
3265#unsupervised_aggregator_n_exemplars = 100
3266
3267# Attempt to create at least this many clusters for clustering algorithm in unsupervised experiment mode.
3268# 
3269#unsupervised_clustering_min_clusters = 2
3270
3271# Attempt to create no more than this many clusters for clustering algorithm in unsupervised experiment mode.
3272# 
3273#unsupervised_clustering_max_clusters = 10
3274
3275#use_random_text_file = false
3276
3277#runtime_estimation_train_frame = ""
3278
3279#enable_bad_scorer = false
3280
3281#debug_col_dict_prefix = ""
3282
3283#return_early_debug_col_dict_prefix = false
3284
3285#return_early_debug_preview = false
3286
3287#wizard_random_attack = false
3288
3289#wizard_enable_back_button = true
3290
3291#wizard_deployment = ""
3292
3293#wizard_repro_level = -1
3294
3295#wizard_sample_size = 100000
3296
3297#wizard_model = "rf"
3298
3299# Maximum number of columns to start an experiment. This threshold exists to constrain the complexity and the length of Driverless AI's processes.
3300#wizard_max_cols = 100000
3301
3302# How many seconds to allow preview to take for Wizard.
3303#wizard_timeout_preview = 30
3304
3305# How many seconds to allow leakage detection to take for Wizard.
3306#wizard_timeout_leakage = 60
3307
3308# How many seconds to allow duplicate row detection to take for Wizard.
3309#wizard_timeout_dups = 30
3310
3311# How many seconds to allow variable importance calculation to take for Wizard.
3312#wizard_timeout_varimp = 30
3313
3314# How many seconds to allow dataframe schema calculation to take for Wizard.
3315#wizard_timeout_schema = 60
3316
3317#max_reorder_experiments = 100
3318
3319# Default upper bound on the number of experiments owned per user. Negative value means infinite quota.
3320#default_experiments_quota_per_user = -1
3321
3322# Dictionary of user:quota values for experiment quotas, overriding the above default for the specified set of users
3323# e.g: ``override_experiments_quota_for_users="{'user1':10,'user2':20,'user3':30}"`` to set user1 with 10 experiments quota,
3324# user2 with 20 experiments quota and user3 with 30 experiments quota.
3325# 
3326#override_experiments_quota_for_users = "{}"
3327
3328# authentication_method
3329# unvalidated : Accepts user id and password. Does not validate password.
3330# none: Does not ask for user id or password. Authenticated as admin.
3331# openid: Uses OpenID Connect provider for authentication. See additional OpenID settings below.
3332# oidc: Renewed OpenID Connect authentication using authorization code flow. See additional OpenID settings below.
3333# pam: Accepts user id and password. Validates user with operating system.
3334# ldap: Accepts user id and password. Validates against an ldap server. Look
3335# for additional settings under LDAP settings.
3336# local: Accepts a user id and password. Validated against an htpasswd file provided in local_htpasswd_file.
3337# ibm_spectrum_conductor: Authenticate with IBM conductor auth api.
3338# tls_certificate: Authenticate with Driverless by providing a TLS certificate.
3339# jwt: Authenticate by JWT obtained from the request metadata.
3340# 
3341#authentication_method = "unvalidated"
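# Example (hypothetical): to validate users against an LDAP server, one might set:
# ``authentication_method = "ldap"``
# together with the LDAP settings found later in this file.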
3342
3343# Additional authentication methods that will be enabled for the clients. Login forms for each method will be available on the ``/login/<authentication_method>`` path. Comma separated list.
3344#additional_authentication_methods = "[]"
3345
3346# The default amount of time in hours before a user is signed out and must log in again. This setting is used when a default timeout value is not provided by ``authentication_method``.
3347#authentication_default_timeout_hours = 72.0
3348
3349# When enabled, the user's session is automatically prolonged, even when they are not interacting directly with the application.
3350#authentication_gui_polling_prolongs_session = false
3351
3352# OpenID Connect Settings:
3353# Refer to the OpenID Connect Basic Client Implementation Guide for details on how OpenID authentication flow works
3354# https://openid.net/specs/openid-connect-basic-1_0.html
3355# base server URI to the OpenID Provider server (ex: https://oidp.ourdomain.com)
3356#auth_openid_provider_base_uri = ""
3357
3358# URI to pull OpenID config data from (you can extract most of required OpenID config from this url)
3359# usually located at: /auth/realms/master/.well-known/openid-configuration
3360#auth_openid_configuration_uri = ""
3361
3362# URI to start authentication flow
3363#auth_openid_auth_uri = ""
3364
3365# URI to make request for token after callback from OpenID server was received
3366#auth_openid_token_uri = ""
3367
3368# URI to get user information once access_token has been acquired (ex: list of groups user belongs to will be provided here)
3369#auth_openid_userinfo_uri = ""
3370
3371# URI to logout user
3372#auth_openid_logout_uri = ""
3373
3374# callback URI that the OpenID provider will use to send the 'authentication_code'
3375# This is the OpenID callback endpoint in Driverless AI. Most OpenID providers need this to be HTTPS.
3376# (ex. https://driverless.ourdomain.com/openid/callback)
3377#auth_openid_redirect_uri = ""
3378
3379# OAuth2 grant type (usually authorization_code for OpenID, can be access_token also)
3380#auth_openid_grant_type = ""
3381
3382# OAuth2 response type (usually code)
3383#auth_openid_response_type = ""
3384
3385# Client ID registered with OpenID provider
3386#auth_openid_client_id = ""
3387
3388# Client secret provided by OpenID provider when registering Client ID
3389#auth_openid_client_secret = ""
3390
3391# Scope of info (usually openid). Can be a list of more than one, space delimited, possible
3392# values listed at https://openid.net/specs/openid-connect-basic-1_0.html#Scopes
3393#auth_openid_scope = ""
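# Example (hypothetical endpoints and client values) of a minimal OpenID setup:
# ``auth_openid_provider_base_uri = "https://oidp.ourdomain.com"``
# ``auth_openid_redirect_uri = "https://driverless.ourdomain.com/openid/callback"``
# ``auth_openid_grant_type = "authorization_code"``
# ``auth_openid_response_type = "code"``
# ``auth_openid_client_id = "driverless-ai"``
# ``auth_openid_client_secret = "<secret-from-provider>"``
# ``auth_openid_scope = "openid profile"``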
3394
3395# What key in user_info JSON should we check to authorize user
3396#auth_openid_userinfo_auth_key = ""
3397
3398# What value should the key have in user_info JSON in order to authorize user
3399#auth_openid_userinfo_auth_value = ""
3400
3401# Key that specifies username in user_info JSON (we will use the value of this key as username in Driverless AI)
3402#auth_openid_userinfo_username_key = ""
3403
3404# Quote method from urllib.parse used to encode payload dict in Authentication Request
3405#auth_openid_urlencode_quote_via = "quote"
3406
3407# Key in Token Response JSON that holds the value for access token expiry
3408#auth_openid_access_token_expiry_key = "expires_in"
3409
3410# Key in Token Response JSON that holds the value for refresh token expiry
3411#auth_openid_refresh_token_expiry_key = "refresh_expires_in"
3412
3413# Expiration time in seconds for access token
3414#auth_openid_token_expiration_secs = 3600
3415
3416# Enables advanced matching for OpenID Connect authentication.
3417# When enabled, an ObjectPath (<http://objectpath.org/>) expression is used to
3418# evaluate the user identity.
3419# 
3420#auth_openid_use_objectpath_match = false
3421
3422# ObjectPath (<http://objectpath.org/>) expression that will be used
3423# to evaluate whether user is allowed to login into Driverless.
3424# Any expression that evaluates to True means user is allowed to log in.
3425# Examples:
3426# Simple claim equality: `$.our_claim is "our_value"`
3427# List of claims contains required value: `"expected_role" in @.roles`
3428# 
3429#auth_openid_use_objectpath_expression = ""
3430
3431# Sets the token introspection URL for OpenID Connect authentication (needs to be an absolute URL). Needs to be set when API token introspection is enabled. Is used to get the token TTL when set and the IDP does not provide the expires_in field in the token endpoint response.
3432#auth_openid_token_introspection_url = ""
3433
3434# Sets a URL to which the user is redirected after logging out, when set (needs to be an absolute URL).
3435#auth_openid_end_session_endpoint_url = ""
3436
3437# If set, server will use these scopes when it asks for the token on the login. (space separated list)
3438#auth_openid_default_scopes = ""
3439
3440# Specifies the source from which user identity and username is retrieved.
3441# Currently supported sources are:
3442# user_info: Retrieves username from UserInfo endpoint response
3443# id_token: Retrieves username from ID Token using
3444# `auth_openid_id_token_username_key` claim
3445# 
3446#auth_oidc_identity_source = "userinfo"
3447
3448# Claim of preferred username in a message holding the user identity, which will be used as the username in the application. The user identity source is specified by `auth_oidc_identity_source`, and can be e.g. UserInfo endpoint response or ID Token
3449#auth_oidc_username_claim = ""
3450
3451# OpenID-Connect Issuer URL, which is used for automatic provider info discovery. E.g. https://login.microsoftonline.com/<client-id>/v2.0
3452#auth_oidc_issuer_url = ""
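# Example (hypothetical issuer): with issuer-based discovery set, the token and
# introspection endpoint URLs below can usually be left empty:
# ``auth_oidc_issuer_url = "https://login.microsoftonline.com/<client-id>/v2.0"``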
3453
3454# OpenID-Connect Token endpoint URL. Setting this is optional and if it's empty, it'll be automatically set by provider info discovery.
3455#auth_oidc_token_endpoint_url = ""
3456
3457# OpenID-Connect Token introspection endpoint URL. Setting this is optional and if it's empty, it'll be automatically set by provider info discovery.
3458#auth_oidc_introspection_endpoint_url = ""
3459
3460# Absolute URL to which user is redirected, after they log out from the application, in case OIDC authentication is used. Usually this is absolute URL of DriverlessAI Login page e.g. https://1.2.3.4:12345/login
3461#auth_oidc_post_logout_url = ""
3462
3463# Key-value mapping of extra HTTP query parameters in an OIDC authorization request.
3464#auth_oidc_authorization_query_params = "{}"
3465
3466# When set to True, will skip cert verification.
3467#auth_oidc_skip_cert_verification = false
3468
3469# When set, this value is used as the location of the CA cert; this takes precedence over auth_oidc_skip_cert_verification.
3470#auth_oidc_ca_cert_location = ""
3471
3472# Enables option to use Bearer token for authentication with the RPC endpoint.
3473#api_token_introspection_enabled = false
3474
3475# Sets the method that is used to introspect the bearer token.
3476# OAUTH2_TOKEN_INTROSPECTION: Uses the OAuth 2.0 Token Introspection (RFC 7662)
3477# endpoint to introspect the bearer token.
3478# This is useful when 'openid' is used as the authentication method.
3479# Uses 'auth_openid_client_id' and 'auth_openid_client_secret' to
3480# authenticate with the authorization server and
3481# `auth_openid_token_introspection_url` to perform the introspection.
3482# 
3483#api_token_introspection_method = "OAUTH2_TOKEN_INTROSPECTION"
3484
3485# Sets the minimum set of scopes that the access token needs to have
3486# in order to pass the introspection. Space separated.
3487# This is passed to the introspection endpoint and also verified after the response
3488# for servers that don't enforce scopes.
3489# Keeping this empty turns the verification off.
3490# 
3491#api_token_oauth2_scopes = ""
3492
3493# Which field of the response returned by the token introspection endpoint should be used as a username.
3494#api_token_oauth2_username_field_name = "username"
3495
3496# Enables the option to initiate a PKCE flow from the UI in order to obtain tokens usable with Driverless clients
3497#oauth2_client_tokens_enabled = false
3498
3499# Sets up client id that will be used in the OAuth 2.0 Authorization Code Flow to obtain the tokens. Client needs to be public and be able to use PKCE with S256 code challenge.
3500#oauth2_client_tokens_client_id = ""
3501
3502# Sets up the absolute url to the authorize endpoint.
3503#oauth2_client_tokens_authorize_url = ""
3504
3505# Sets up the absolute url to the token endpoint.
3506#oauth2_client_tokens_token_url = ""
3507
3508# Sets up the absolute url to the token introspection endpoint. It's displayed in the UI so that clients can inspect the token expiration.
3509#oauth2_client_tokens_introspection_url = ""
3510
3511# Sets up the absolute redirect url where Driverless handles the redirect part of the Authorization Code Flow. This is typically <Driverless base url>/oauth2/client_token
3512#oauth2_client_tokens_redirect_url = ""
3513
3514# Sets up the scope for the requested tokens. Space separated list.
3515#oauth2_client_tokens_scope = "openid profile ai.h2o.storage"
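# Example (hypothetical values) enabling the UI-initiated PKCE flow:
# ``oauth2_client_tokens_enabled = true``
# ``oauth2_client_tokens_client_id = "driverless-public-client"``
# ``oauth2_client_tokens_redirect_url = "https://driverless.ourdomain.com/oauth2/client_token"``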
3516
3517# ldap server domain or ip
3518#ldap_server = ""
3519
3520# ldap server port
3521#ldap_port = ""
3522
3523# Complete DN of the LDAP bind user
3524#ldap_bind_dn = ""
3525
3526# Password for the LDAP bind
3527#ldap_bind_password = ""
3528
3529# Provide Cert file location
3530#ldap_tls_file = ""
3531
3532# Set to true to use SSL, false otherwise
3533#ldap_use_ssl = false
3534
3535# the location in the DIT where the search will start
3536#ldap_search_base = ""
3537
3538# A string that describes what you are searching for. You can use Python substitution to have this constructed dynamically (only {{DAI_USERNAME}} is supported)
3539#ldap_search_filter = ""
3540
3541# ldap attributes to return from search
3542#ldap_search_attributes = ""
3543
3544# specify key to find user name
3545#ldap_user_name_attribute = ""
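# Example (hypothetical directory values) of a minimal LDAP configuration:
# ``authentication_method = "ldap"``
# ``ldap_server = "ldap.ourdomain.com"``
# ``ldap_port = "389"``
# ``ldap_bind_dn = "cn=admin,dc=ourdomain,dc=com"``
# ``ldap_bind_password = "<bind-password>"``
# ``ldap_search_base = "ou=people,dc=ourdomain,dc=com"``
# ``ldap_search_filter = "(uid={{DAI_USERNAME}})"``
# ``ldap_user_name_attribute = "uid"``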
3546
3547# When using this recipe, this needs to be set to "1"
3548#ldap_recipe = "0"
3549
3550# Deprecated, do not use
3551#ldap_user_prefix = ""
3552
3553# Deprecated, use ldap_bind_dn
3554#ldap_search_user_id = ""
3555
3556# Deprecated, use ldap_bind_password
3557#ldap_search_password = ""
3558
3559# Deprecated, use ldap_search_base instead
3560#ldap_ou_dn = ""
3561
3562# Deprecated, use ldap_base_dn
3563#ldap_dc = ""
3564
3565# Deprecated, use ldap_search_base
3566#ldap_base_dn = ""
3567
3568# Deprecated, use ldap_search_filter
3569#ldap_base_filter = ""
3570
3571# Path to the CRL file that will be used to verify client certificate.
3572#auth_tls_crl_file = ""
3573
3574# What field of the subject will be used as the source for the username or other values used for further validation.
3575#auth_tls_subject_field = "CN"
3576
3577# Regular expression that will be used to parse subject field to obtain the username or other values used for further validation.
3578#auth_tls_field_parse_regexp = "(?P<username>.*)"
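# Example (hypothetical subject format): extract the username from a CN such as
# "jdoe/emp1234", keeping only the part before the slash:
# ``auth_tls_field_parse_regexp = "(?P<username>\w+)/.*"``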
3579
3580# Sets up the way the user identity is obtained
3581# REGEXP_ONLY: Will use 'auth_tls_subject_field' and 'auth_tls_field_parse_regexp'
3582# to extract the username from the client certificate.
3583# LDAP_LOOKUP: Will use LDAP server to lookup for the username.
3584# 'auth_tls_ldap_server', 'auth_tls_ldap_port',
3585# 'auth_tls_ldap_use_ssl', 'auth_tls_ldap_tls_file',
3586# 'auth_tls_ldap_bind_dn', 'auth_tls_ldap_bind_password'
3587# options are used to establish the connection with the LDAP server.
3588# 'auth_tls_subject_field' and 'auth_tls_field_parse_regexp'
3589# options are used to parse the certificate.
3590# 'auth_tls_ldap_search_base', 'auth_tls_ldap_search_filter', and
3591# 'auth_tls_ldap_username_attribute' options are used to do the
3592# lookup.
3593# 
3594#auth_tls_user_lookup = "REGEXP_ONLY"
3595
3596# Hostname or IP address of the LDAP server used with LDAP_LOOKUP with 'tls_certificate' authentication method.
3597#auth_tls_ldap_server = ""
3598
3599# Port of the LDAP server used with LDAP_LOOKUP with 'tls_certificate' authentication method.
3600#auth_tls_ldap_port = ""
3601
3602# Whether to use SSL when connecting to the LDAP server used with LDAP_LOOKUP with 'tls_certificate' authentication method.
3603#auth_tls_ldap_use_ssl = false
3604
3605# Path to the SSL certificate used with LDAP_LOOKUP with 'tls_certificate' authentication method.
3606#auth_tls_ldap_tls_file = ""
3607
3608# Complete DN of the LDAP bind user used with LDAP_LOOKUP with 'tls_certificate' authentication method.
3609#auth_tls_ldap_bind_dn = ""
3610
3611# Password for the LDAP bind used with LDAP_LOOKUP with 'tls_certificate' authentication method.
3612#auth_tls_ldap_bind_password = ""
3613
3614# Location in the DIT where the search will start used with LDAP_LOOKUP with 'tls_certificate' authentication method.
3615#auth_tls_ldap_search_base = ""
3616
3617# LDAP filter that will be used to lookup for the user
3618# with LDAP_LOOKUP with 'tls_certificate' authentication method.
3619# Can be built dynamically using the named capturing groups from the
3620# 'auth_tls_field_parse_regexp' for substitution.
3621# Example:
3622# ``auth_tls_field_parse_regexp="\w+ (?P<id>\d+)"``
3623# ``auth_tls_ldap_search_filter="(&(objectClass=person)(id={{id}}))"``
3624# 
3625#auth_tls_ldap_search_filter = ""
3626
3627# Specifies which LDAP record attribute will be used as the username with LDAP_LOOKUP with 'tls_certificate' authentication method.
3628#auth_tls_ldap_username_attribute = ""
3629
3630# Sets an optional additional lookup filter that is performed after the
3631# user is found. This can be used for example to check whether the user is a member of
3632# a particular group.
3633# The filter can be built dynamically from the attributes returned by the lookup.
3634# Authorization fails when the search does not return any entry. If one or more
3635# entries are returned, authorization succeeds.
3636# Example:
3637# ``auth_tls_field_parse_regexp="\w+ (?P<id>\d+)"``
3638# ``ldap_search_filter="(&(objectClass=person)(id={{id}}))"``
3639# ``auth_tls_ldap_authorization_lookup_filter="(&(objectClass=group)(member=uid={{uid}},dc=example,dc=com))"``
3640# If this option is empty no additional lookup is done and just a successful user
3641# lookup is enough to authorize the user.
3642# 
3643#auth_tls_ldap_authorization_lookup_filter = ""
3644
3645# Base DN where to start the Authorization lookup. Used when 'auth_tls_ldap_authorization_lookup_filter' is set.
3646#auth_tls_ldap_authorization_search_base = ""
3647
3648# Sets up the way the token is picked from the request
3649# COOKIE: Will use 'auth_jwt_cookie_name' cookie content parsed with
3650# 'auth_jwt_source_parse_regexp' to obtain the token content.
3651# HEADER: Will use 'auth_jwt_header_name' header value parsed with
3652# 'auth_jwt_source_parse_regexp' to obtain the token content.
3653# 
3654#auth_jwt_token_source = "HEADER"
3655
3656# Specifies the name of the cookie that will be used to obtain the JWT.
3657#auth_jwt_cookie_name = ""
3658
3659# Specifies the name of the HTTP header that will be used to obtain the JWT
3660#auth_jwt_header_name = ""
3661
3662# Regular expression that will be used to parse the JWT source. The expression is in Python syntax and must contain a named group 'token' capturing the token value.
3663#auth_jwt_source_parse_regexp = "(?P<token>.*)"
3664
3665# Which JWT claim will be used as username for Driverless.
3666#auth_jwt_username_claim_name = "sub"
3667
3668# Whether to verify the signature of the JWT.
3669#auth_jwt_verify = true
3670
3671# Signature algorithm that will be used to verify the signature according to RFC 7518.
3672#auth_jwt_algorithm = "HS256"
3673
3674# Specifies the secret content for HMAC or public key for RSA and DSA signature algorithms.
3675#auth_jwt_secret = ""
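# Example (hypothetical values) of JWT authentication from an Authorization header:
# ``authentication_method = "jwt"``
# ``auth_jwt_token_source = "HEADER"``
# ``auth_jwt_header_name = "Authorization"``
# ``auth_jwt_source_parse_regexp = "Bearer (?P<token>.*)"``
# ``auth_jwt_algorithm = "HS256"``
# ``auth_jwt_secret = "<shared-hmac-secret>"``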
3676
3677# Number of seconds for which a JWT can still be accepted after it has expired
3678#auth_jwt_exp_leeway_seconds = 0
3679
3680# List of accepted 'aud' claims for the JWTs. When empty, any audience is accepted
3681#auth_jwt_required_audience = "[]"
3682
3683# Value of the 'iss' claim that JWTs need to have in order to be accepted.
3684#auth_jwt_required_issuer = ""
3685
3686# Local password file
3687# Generating a htpasswd file: see syntax below
3688# ``htpasswd -B '<location_to_place_htpasswd_file>' '<username>'``
3689# note: -B forces use of bcrypt, a secure hashing method
3690#local_htpasswd_file = ""
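# Example (hypothetical path): pair with ``authentication_method = "local"``:
# ``local_htpasswd_file = "/etc/dai/htpasswd"``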
3691
3692# Specify the name of the report.
3693#autodoc_report_name = "report"
3694
3695# AutoDoc template path. Provide the full path to your custom AutoDoc template or leave as 'default' to generate the standard AutoDoc.
3696#autodoc_template = ""
3697
3698# Location of the additional AutoDoc templates
3699#autodoc_additional_template_folder = ""
3700
3701# Specify the AutoDoc output type.
3702#autodoc_output_type = "docx"
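# Example (hypothetical choice; 'docx' is the default):
# ``autodoc_output_type = "md"``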
3703
3704# Specify the type of sub-templates to use.
3705# Options are 'auto', 'docx' or 'md'.
3706#autodoc_subtemplate_type = "auto"
3707
3708# Specify the maximum number of classes in the confusion
3709# matrix.
3710#autodoc_max_cm_size = 10
3711
3712# Specify the number of top features to display in
3713# the document. Setting to -1 disables this restriction.
3714#autodoc_num_features = 50
3715
3716# Specify the minimum relative importance in order
3717# for a feature to be displayed. autodoc_min_relative_importance
3718# must be a float >= 0 and <= 1.
3719#autodoc_min_relative_importance = 0.003
3720
3721# Whether to compute permutation based feature
3722# importance.
3723#autodoc_include_permutation_feature_importance = false
3724
3725# Number of permutations to make per feature when computing
3726# feature importance.
3727#autodoc_feature_importance_num_perm = 1
3728
3729# Name of the scorer to be used to calculate feature
3730# importance. Leave blank to use the experiment's default scorer.
3731#autodoc_feature_importance_scorer = ""
3732
3733# The autodoc_pd_max_rows configuration controls the
3734# number of rows shown for the partial dependence plots (PDP) and Shapley
3735# values summary plot in the AutoDoc. Random sampling is used for
3736# datasets with more than the autodoc_pd_max_rows limit.
3737#autodoc_pd_max_rows = 10000
3738
3739# Maximum number of seconds Partial Dependency computation
3740# can take when generating report. Set to -1 for no time limit.
3741#autodoc_pd_max_runtime = 45
3742
3743# Whether to enable fast approximation for predictions that are needed for the
3744# generation of partial dependence plots. Can help when you want to create many PDP
3745# plots in a short time. The amount of approximation is controlled by fast_approx_num_trees,
3746# fast_approx_do_one_fold, fast_approx_do_one_model experiment expert settings.
3747# 
3748#autodoc_pd_fast_approx = true
3749
3750# Max number of unique values for integer/real columns to be treated as categoricals (test applies to first statistical_threshold_data_size_small rows only)
3751# Similar to max_int_as_cat_uniques used for experiment, but here used to control PDP making.
3752#autodoc_pd_max_int_as_cat_uniques = 50
3753
3754# Number of standard deviations outside of the range of
3755# a column to include in partial dependence plots. This shows how the
3756# model will react to data it has not seen before.
3757#autodoc_out_of_range = 3
3758
3759# Specify the number of rows to include in PDP and ICE plot
3760# if individual rows are not specified.
3761#autodoc_num_rows = 0
3762
3763# Whether to include population stability index if
3764# experiment is binary classification/regression.
3765#autodoc_population_stability_index = false
3766
3767# Number of quantiles to use for the population stability index.
3769#autodoc_population_stability_index_n_quantiles = 10
3770
3771# Whether to include prediction statistics information if
3772# experiment is binary classification/regression.
3773#autodoc_prediction_stats = false
3774
3775# Number of quantiles to use for prediction statistics.
3776#autodoc_prediction_stats_n_quantiles = 20
3777
3778# Whether to include response rates information if
3779# experiment is binary classification.
3780#autodoc_response_rate = false
3781
3782# Number of quantiles to use for response rates information.
3784#autodoc_response_rate_n_quantiles = 10
3785
3786# Whether to show the Gini Plot.
3787#autodoc_gini_plot = false
3788
3789# Show Shapley values results in the AutoDoc.
3790#autodoc_enable_shapley_values = true
3791
3792# The number of features in a KLIME global GLM coefficients
3793# table. Must be an integer greater than 0 or -1. To
3794# show all features, set to -1.
3795#autodoc_global_klime_num_features = 10
3796
3797# Set the number of KLIME global GLM coefficients tables. Set
3798# to 1 to show one table with coefficients sorted by absolute
3799# value. Set to 2 to show two tables: one with the top positive
3800# coefficients and one with the top negative coefficients.
3801#autodoc_global_klime_num_tables = 1
3802
3803# Number of features to be show in data summary. Value
3804# must be an integer. Values lower than 1, f.e. 0 or -1, indicate that
3805# all columns should be shown.
3806#autodoc_data_summary_col_num = -1
3807
3808# Whether to show all config settings. If False, only
3809# the changed settings (config overrides) are listed, otherwise all
3810# settings are listed.
3811#autodoc_list_all_config_settings = false
3812
3813# Line length of the keras model architecture summary. Must
3814# be an integer greater than 0 or -1. To use the default line length set
3815# value -1.
3816#autodoc_keras_summary_line_length = -1
3817
3818# Maximum number of lines shown for advanced transformer
3819# architecture in the Feature section. Note that the full architecture
3820# can be found in the Appendix.
3821#autodoc_transformer_architecture_max_lines = 30
3822
3823# Show full NLP/Image transformer architecture in
3824# the Appendix.
3825#autodoc_full_architecture_in_appendix = false
3826
3827# Specify whether to show the full glm coefficient
3828# table(s) in the appendix. coef_table_appendix_results_table must be
3829# a boolean: True to show tables in appendix, False to not show them
3830# .
3831#autodoc_coef_table_appendix_results_table = false
3832
3833# Set the number of models for which a glm coefficients
3834# table is shown in the AutoDoc. coef_table_num_models must
3835# be -1 or an integer >= 1 (-1 shows all models).
3836#autodoc_coef_table_num_models = 1
3837
3838# Set the number of folds per model for which a glm
3839# coefficients table is shown in the AutoDoc.
3840# coef_table_num_folds must be -1 or an integer >= 1
3841# (-1 shows all folds per model).
3842#autodoc_coef_table_num_folds = -1
3843
3844# Set the number of coefficients to show within a glm
3845# coefficients table in the AutoDoc. coef_table_num_coef, controls
3846# the number of rows shown in a glm table and must be -1 or
3847# an integer >= 1 (-1 shows all coefficients).
3848#autodoc_coef_table_num_coef = 50
3849
3850# Set the number of classes to show within a glm
3851# coefficients table in the AutoDoc. coef_table_num_classes controls
3852# the number of class-columns shown in a glm table and must be -1 or
3853# an integer >= 4 (-1 shows all classes).
3854#autodoc_coef_table_num_classes = 9
3855
3856# When histogram plots are available: The number of
3857# top (default 10) features for which to show histograms.
3858#autodoc_num_histogram_plots = 10

#pdp_max_threads = -1

# If true, forces AutoDoc to run only on the main server, not on remote workers, in case of a multi-node setup.
#autodoc_force_singlenode = false

# IP address of the autoviz process.
#vis_server_ip = "127.0.0.1"

# Port of the autoviz process.
#vis_server_port = 12346

# Maximum number of columns autoviz will work with.
# If the dataset has more columns than this number,
# autoviz will pick columns randomly, prioritizing numerical columns.
#
#autoviz_max_num_columns = 50

#autoviz_max_aggregated_rows = 500

# When enabled, the experiment will try to use feature transformations recommended by Autoviz.
#autoviz_enable_recommendations = true

# Key-value pairs of column names and the transformations that Autoviz recommended.
#autoviz_recommended_transformation = "{}"

#autoviz_enable_transformer_acceptance_tests = false

# Enable custom recipes.
#enable_custom_recipes = true

# Enable uploading of custom recipes from the local file system.
#enable_custom_recipes_upload = true

# Enable downloading of custom recipes from an external URL.
#enable_custom_recipes_from_url = true

# Allow uploaded recipe files to be zip archives containing custom recipe(s) in the root folder,
# while any other code or auxiliary files must be in a sub-folder.
#
#enable_custom_recipes_from_zip = true

#must_have_custom_transformers = false

#must_have_custom_transformers_2 = false

#must_have_custom_transformers_3 = false

#must_have_custom_models = false

#must_have_custom_scorers = false

# When set to true, enables downloading custom recipes' third-party packages from the web; otherwise the Python environment will be transferred from the main worker.
#enable_recreate_custom_recipes_env = true

#extra_migration_custom_recipes_missing_modules = false

# Include custom recipes in default inclusion lists (warning: enables all custom recipes).
#include_custom_recipes_by_default = false

#force_include_custom_recipes_by_default = false

# Whether to enable use of the H2O recipe server. In some cases, the recipe server (started at DAI startup) may enter an unstable state, and this might affect other experiments. You can then avoid triggering use of the recipe server by setting this to false.
#enable_h2o_recipes = true

# URL of the H2O instance for use by transformers, models, or scorers.
#h2o_recipes_url = "None"

# IP of the H2O instance for use by transformers, models, or scorers.
#h2o_recipes_ip = "None"

# Port of the H2O instance for use by transformers, models, or scorers. No other instances may be on that port or on the next port.
#h2o_recipes_port = 50361

# Name of the H2O instance for use by transformers, models, or scorers.
#h2o_recipes_name = "None"

# Number of threads for the H2O instance for use by transformers, models, or scorers. -1 for all.
#h2o_recipes_nthreads = 8

# Log level of the H2O instance for use by transformers, models, or scorers.
#h2o_recipes_log_level = "None"

# Maximum memory size of the H2O instance for use by transformers, models, or scorers.
#h2o_recipes_max_mem_size = "None"

# Minimum memory size of the H2O instance for use by transformers, models, or scorers.
#h2o_recipes_min_mem_size = "None"

# General user overrides of the kwargs dict to pass to h2o.init() for the recipe server.
#h2o_recipes_kwargs = "{}"

# Number of trials to give the h2o-3 recipe server to start.
#h2o_recipes_start_trials = 5

# Number of seconds to sleep before starting the h2o-3 recipe server.
#h2o_recipes_start_sleep0 = 1

# Number of seconds to sleep between trials of starting the h2o-3 recipe server.
#h2o_recipes_start_sleep = 5

# Lock the source for recipes to a specific GitHub repo.
# If true, then all custom recipes must come from the repo specified in the setting custom_recipes_git_repo.
#custom_recipes_lock_to_git_repo = false

# If custom_recipes_lock_to_git_repo is set to true, only this repo can be used to pull recipes from.
#custom_recipes_git_repo = "https://github.com/h2oai/driverlessai-recipes"

# Branch constraint for the recipe source repo. Any branch is allowed if unset or None.
#custom_recipes_git_branch = "None"

#custom_recipes_excluded_filenames_from_repo_download = "[]"

#allow_old_recipes_use_datadir_as_data_directory = true

# Internal helper to remember whether a recipe changed.
#last_recipe = ""

# Dictionary to control recipes for each experiment and particular custom recipes.
# E.g. if inserting into the GUI as a toml string, you can use:
# recipe_dict="{'key1': 2, 'key2': 'value2'}"
# E.g. if putting into config.toml as a dict, you can use:
# recipe_dict="{'key1': 2, 'key2': 'value2'}"
#
#recipe_dict = "{}"

# Dictionary to control some mutation parameters.
# E.g. if inserting into the GUI as a toml string, you can use:
# mutation_dict="{'key1': 2, 'key2': 'value2'}"
# E.g. if putting into config.toml as a dict, you can use:
# mutation_dict="{'key1': 2, 'key2': 'value2'}"
#
#mutation_dict = "{}"

#enable_custom_transformers = true

#enable_custom_pretransformers = true

#enable_custom_models = true

#enable_custom_scorers = true

#enable_custom_datas = true

#enable_custom_explainers = true

#enable_custom_individuals = true

#enable_connectors_recipes = true

# Whether to validate recipe names provided in inclusion lists, like included_models,
# or (if false) whether to just log a warning to the server logs and ignore any invalid recipe names.
#
#raise_on_invalid_included_list = false

#contrib_relative_directory = "contrib"

# Location of installed custom recipe packages (relative to data_directory).
# We will try to install packages dynamically, but you can also run (before or after the server is started),
# inside the running Docker instance if using Docker, or as the user the server runs as (e.g. the dai user) for deb/tar native installations:
# PYTHONPATH=<full tmp dir>/<contrib_env_relative_directory>/lib/python3.6/site-packages/ <path to dai>dai-env.sh python -m pip install --prefix=<full tmp dir>/<contrib_env_relative_directory> <packagename> --upgrade --upgrade-strategy only-if-needed --log-file pip_log_file.log
# where <path to dai> is /opt/h2oai/dai/ for native rpm/deb installations.
# Note that you can also install wheel files if <packagename> is the name of a wheel file or archive.
#
#contrib_env_relative_directory = "contrib/env"

# List of package versions to ignore. Useful when the version change is small and recipes are likely to still function with the old package version.
#
#ignore_package_version = "[]"

# List of package versions to remove if a conflict is encountered. Useful when you want a new version of a package and old recipes are likely to still function.
#
#clobber_package_version = "['catboost', 'h2o_featurestore']"

# Dictionary of package versions to swap if a conflict is encountered.
# Useful when you want a new version of a package and old recipes are likely to still function.
# Also useful when you do not need to use old versions of recipes even if they would no longer function.
#
#swap_package_version = "{'catboost==0.26.1': 'catboost==1.2.5', 'catboost==0.25.1': 'catboost==1.2.5', 'catboost==0.24.1': 'catboost==1.2.5', 'catboost==1.0.4': 'catboost==1.2.5', 'catboost==1.0.5': 'catboost==1.2.5', 'catboost==1.0.6': 'catboost==1.2.5', 'catboost': 'catboost==1.2.5'}"

# If a user uploads a recipe with changes to package versions,
# allow upgrade of package versions.
# If changes to DAI-protected packages are attempted, you can try using the pip_install_options toml with ['--no-deps'].
# Or, to entirely ignore DAI's versions of packages, you can try using the pip_install_options toml with ['--ignore-installed'].
# Any other experiments relying on recipes with such packages will be affected; use with caution.
#allow_version_change_user_packages = false

# Number of retries for the overall call to pip during pip install. Sometimes it is necessary to try twice.
#pip_install_overall_retries = 2

# pip install verbosity level (number of -v's given to pip, up to 3).
#pip_install_verbosity = 2

# pip install timeout in seconds. Sometimes internet issues mean it is better to fail faster.
#pip_install_timeout = 15

# pip install retry count.
#pip_install_retries = 5

# Whether to use the DAI constraint file to help pip handle versions. pip can make mistakes and try to install updated packages for no reason.
#pip_install_use_constraint = true

# pip install options: string of a list of other options, e.g. ['--proxy', 'http://user:password@proxyserver:port']
#pip_install_options = "[]"

# Whether to enable basic acceptance testing. Tests whether the state can be pickled, etc.
#enable_basic_acceptance_tests = true

# Whether acceptance tests should run for custom genes / models / scorers / etc.
#enable_acceptance_tests = true

#acceptance_tests_use_weather_data = false

#acceptance_tests_mojo_benchmark = false

# Whether to skip disabled recipes (true) or fail and show a GUI message (false).
#skip_disabled_recipes = false

# Minutes to wait until a recipe's acceptance testing is aborted. A recipe is rejected if acceptance
# testing is enabled and times out.
# You can also set the timeout for a specific recipe by having the class's static method
# acceptance_test_timeout return the number of minutes to wait before timing out acceptance testing.
# This timeout does not include the time to install required packages.
#
#acceptance_test_timeout = 20.0

# Whether to re-check recipes during server startup (if per_user_directories == false)
# or during user login (if per_user_directories == true).
# If any inconsistency develops, the bad recipe will be removed while re-doing acceptance testing. This process
# can make start-up take a lot longer for many recipes, but in LTS releases the risk of recipes becoming out of date
# is low. If set to false, acceptance re-testing during server start is disabled, but note that previews or experiments may fail if those inconsistent recipes are used.
# Such inconsistencies can occur when the API for recipes changes or more aggressive acceptance tests are performed.
#
#contrib_reload_and_recheck_server_start = true

# Whether to at least install packages required for recipes during server startup (if per_user_directories == false)
# or during user login (if per_user_directories == true).
# It is important to keep this true so that any later use of recipes (that have global packages installed) will work.
#
#contrib_install_packages_server_start = true

# Whether to re-check recipes after they are uploaded from the main server to a worker in multinode.
# It is expensive to do this for every task that has recipes.
#contrib_reload_and_recheck_worker_tasks = false

#data_recipe_isolate = true

# Space-separated string list of URLs for recipes that are loaded at user login time.
#server_recipe_url = ""

#num_rows_acceptance_test_custom_transformer = 200

#num_rows_acceptance_test_custom_model = 100

# List of recipes (in a dict, keyed by type) that are applicable for a given experiment. This is especially relevant
# for situations such as a new `experiment with same params`, where users should be able to
# use the same recipe versions as the parent experiment if they wish to.
#
#recipe_activation = "{'transformers': [], 'models': [], 'scorers': [], 'data': [], 'individuals': []}"

# File System Support
# upload : standard upload feature
# file : local file system/server file system
# hdfs : Hadoop file system, remember to configure the HDFS config folder path and keytab below
# dtap : Blue Data Tap file system, remember to configure the DTap section below
# s3 : Amazon S3, optionally configure secret and access key below
# gcs : Google Cloud Storage, remember to configure gcs_path_to_service_account_json below
# gbq : Google BigQuery, remember to configure gcs_path_to_service_account_json below
# minio : Minio Cloud Storage, remember to configure secret and access key below
# snow : Snowflake Data Warehouse, remember to configure Snowflake credentials below (account name, username, password)
# kdb : KDB+ Time Series Database, remember to configure KDB credentials below (hostname and port, optionally: username, password, classpath, and jvm_args)
# azrbs : Azure Blob Storage, remember to configure Azure credentials below (account name, account key)
# jdbc : JDBC Connector, remember to configure JDBC below (jdbc_app_configs)
# hive : Hive Connector, remember to configure Hive below (hive_app_configs)
# recipe_file : custom recipe file upload
# recipe_url : custom recipe upload via URL
# h2o_drive : H2O Drive, remember to configure h2o_drive_endpoint_url below
# feature_store : Feature Store, remember to configure feature_store_endpoint_url below
# databricks : Databricks Delta Table connector
#
#enabled_file_systems = "['upload', 'file', 'hdfs', 's3', 'recipe_file', 'recipe_url']"
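For example, an administrator exposing only local files and S3 could override the default list as follows. This is a hypothetical override; note that, as in the default above, the value is a quoted string wrapping a list:

```toml
# Hypothetical config.toml override: enable only the upload, file, and s3 connectors.
# Connectors omitted from this list are not offered in the UI.
enabled_file_systems = "['upload', 'file', 's3']"
```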

#max_files_listed = 100

# This option disables access to the DAI data_directory from the file browser.
#file_hide_data_directory = true

# Enable usage of path filters.
#file_path_filtering_enabled = false

# List of absolute path prefixes to restrict access to in the file system browser.
# First enable this feature by setting:
# file_path_filtering_enabled=true
# This feature can be used in the following ways (using a specific path or the logged-in user's directory):
# file_path_filter_include="['/data/stage']"
# file_path_filter_include="['/data/stage','/data/prod']"
# file_path_filter_include=/home/{{DAI_USERNAME}}/
# file_path_filter_include="['/home/{{DAI_USERNAME}}/','/data/stage','/data/prod']"
#
#file_path_filter_include = "[]"

# (Required) HDFS connector
# Specify the HDFS Auth Type; allowed options are:
# noauth : (default) No authentication needed
# principal : Authenticate with HDFS with a principal user (DEPRECATED - use the `keytab` auth type)
# keytab : Authenticate with a keytab (recommended). If running
# DAI as a service, then the Kerberos keytab needs to
# be owned by the DAI user.
# keytabimpersonation : Login with impersonation using a keytab
#hdfs_auth_type = "noauth"

# Kerberos app principal user. Required when hdfs_auth_type='keytab'; recommended otherwise.
#hdfs_app_principal_user = ""

# Deprecated - do not use; the login user is taken from the user name used at login.
#hdfs_app_login_user = ""

# JVM args for HDFS distributions; provide args separated by spaces, e.g.:
# -Djava.security.krb5.conf=<path>/krb5.conf
# -Dsun.security.krb5.debug=True
# -Dlog4j.configuration=file:///<path>log4j.properties
#hdfs_app_jvm_args = ""

# HDFS class path.
#hdfs_app_classpath = ""

# List of supported DFS schemas, e.g. "['hdfs://', 'maprfs://', 'swift://']".
# The supported schemas list is used as an initial check to ensure valid input to the connector.
#
#hdfs_app_supported_schemes = "['hdfs://', 'maprfs://', 'swift://']"

# Maximum number of files viewable in the connector UI. Set to a larger number to view more files.
#hdfs_max_files_listed = 100

# Starting HDFS path displayed in the UI HDFS browser.
#hdfs_init_path = "hdfs://"

# Starting HDFS path for artifact upload operations.
#hdfs_upload_init_path = "hdfs://"
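Taken together, a keytab-based HDFS setup might combine the settings above as follows. This is only a sketch: the principal and Kerberos config path are hypothetical, and the keytab file location itself is configured separately:

```toml
# Hypothetical keytab-based HDFS connector setup.
hdfs_auth_type = "keytab"
hdfs_app_principal_user = "dai/node1.example.com@EXAMPLE.COM"
# Point the JVM at the Kerberos configuration (example path).
hdfs_app_jvm_args = "-Djava.security.krb5.conf=/etc/krb5.conf"
```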

# Enables the multi-user mode for MapR integration, which allows a MapR ticket per user.
#enable_mapr_multi_user_mode = false

# Blue Data DTap connector settings are similar to HDFS connector settings.
# Specify the DTap Auth Type; allowed options are:
# noauth : No authentication needed
# principal : Authenticate with DTap with a principal user
# keytab : Authenticate with a keytab (recommended). If running
# DAI as a service, then the Kerberos keytab needs to
# be owned by the DAI user.
# keytabimpersonation : Login with impersonation using a keytab
# NOTE: "hdfs_app_classpath" and "core_site_xml_path" are both required to be set for the DTap connector.
#dtap_auth_type = "noauth"

# DTap (HDFS) config folder path; can contain multiple config files.
#dtap_config_path = ""

# Path of the principal keytab file. dtap_key_tab_path is deprecated; please use dtap_keytab_path.
#dtap_key_tab_path = ""

# Path of the principal keytab file.
#dtap_keytab_path = ""

# Kerberos app principal user (recommended).
#dtap_app_principal_user = ""

# Specify the user id of the current user here as user@realm.
#dtap_app_login_user = ""

# JVM args for DTap distributions; provide args separated by spaces.
#dtap_app_jvm_args = ""

# DTap (HDFS) class path. NOTE: set 'hdfs_app_classpath' as well.
#dtap_app_classpath = ""

# Starting DTap path displayed in the UI DTap browser.
#dtap_init_path = "dtap://"

# S3 Connector credentials
#aws_access_key_id = ""

# S3 Connector credentials
#aws_secret_access_key = ""

# S3 Connector credentials
#aws_role_arn = ""

# Which region to use when none is specified in the S3 URL.
# Ignored when aws_s3_endpoint_url is set.
#
#aws_default_region = ""

# Sets the endpoint URL that will be used to access S3.
#aws_s3_endpoint_url = ""

# If set to true, the S3 Connector will try to obtain credentials associated with
# the role attached to the EC2 instance.
#aws_use_ec2_role_credentials = false

# Starting S3 path displayed in the UI S3 browser.
#s3_init_path = "s3://"

# The S3 Connector will skip cert verification if this is set to true (mostly used for S3-like connectors, e.g. Ceph).
#s3_skip_cert_verification = false

# path/to/cert/bundle.pem - a filename of the CA cert bundle to use for the S3 connector.
#s3_connector_cert_location = ""

# GCS Connector credentials
# example (suggested) -- '/licenses/my_service_account_json.json'
#gcs_path_to_service_account_json = ""

# GCS Connector service account credentials in JSON; this configuration takes precedence over gcs_path_to_service_account_json.
#gcs_service_account_json = "{}"

# GCS Connector impersonated account.
#gbq_access_impersonated_account = ""

# Starting GCS path displayed in the UI GCS browser.
#gcs_init_path = "gs://"

# Space-separated list of OAuth2 scopes for the access token used to authenticate in Google Cloud Storage.
#gcs_access_token_scopes = ""

# When ``google_cloud_use_oauth`` is enabled, the Google Cloud client cannot automatically infer the default project, so it must be explicitly specified.
#gcs_default_project_id = ""

# Space-separated list of OAuth2 scopes for the access token used to authenticate in Google BigQuery.
#gbq_access_token_scopes = ""

# By default, the Driverless AI Google Cloud Storage and BigQuery connectors use a service account file to retrieve authentication credentials. When enabled, the Storage and BigQuery connectors will instead use OAuth2 user access tokens to authenticate in Google Cloud.
#google_cloud_use_oauth = false

# Minio Connector credentials
#minio_endpoint_url = ""

# Minio Connector credentials
#minio_access_key_id = ""

# Minio Connector credentials
#minio_secret_access_key = ""

# The Minio Connector will skip cert verification if this is set to true.
#minio_skip_cert_verification = false

# path/to/cert/bundle.pem - a filename of the CA cert bundle to use for the Minio connector.
#minio_connector_cert_location = ""

# Starting Minio path displayed in the UI Minio browser.
#minio_init_path = "/"

# H2O Drive server endpoint URL.
#h2o_drive_endpoint_url = ""

# Space-separated list of OpenID scopes for the access token used by the H2O Drive connector.
#h2o_drive_access_token_scopes = ""

# Maximum duration (in seconds) for a session with the H2O Drive.
#h2o_drive_session_duration = 10800

# Snowflake Connector credentials.
# Recommended: provide url, user, password.
# Optionally: provide account, user, password.
# Example URL: https://<snowflake_account>.<region>.snowflakecomputing.com
#snowflake_url = ""

# Snowflake Connector credentials
#snowflake_user = ""

# Snowflake Connector credentials
#snowflake_password = ""

# Snowflake Connector credentials
#snowflake_account = ""

# Snowflake Connector authenticator; can be used when Snowflake is using native SSO with Okta.
# E.g.: snowflake_authenticator = "https://<okta_account_name>.okta.com"
#
#snowflake_authenticator = ""

# Keycloak endpoint for retrieving external IdP tokens for Snowflake. (https://www.keycloak.org/docs/latest/server_admin/#retrieving-external-idp-tokens)
#snowflake_keycloak_broker_token_endpoint = ""

# Token type that should be used from the response of the Keycloak endpoint for retrieving external IdP tokens for Snowflake. See `snowflake_keycloak_broker_token_endpoint`.
#snowflake_keycloak_broker_token_type = "access_token"

# ID of the OAuth client configured in H2O Secure Store for authentication with Snowflake.
#snowflake_h2o_secure_store_oauth_client_id = ""

# Snowflake hostname to connect to when running Driverless AI in Snowpark Container Services.
#snowflake_host = ""

# Snowflake port to connect to when running Driverless AI in Snowpark Container Services.
#snowflake_port = ""

# Snowflake filepath that stores the token of the session when running
# Driverless AI in Snowpark Container Services.
# E.g.: snowflake_session_token_filepath = "/snowflake/session/token"
#
#snowflake_session_token_filepath = ""

# Setting to allow or disallow the Snowflake connector from using Snowflake stages during queries.
# true - permits the connector to use stages and generally improves performance. However,
# if the Snowflake user does not have permission to create/use stages, queries will end in errors.
# false - prevents the connector from using stages, so Snowflake users without permission
# to create/use stages will have successful queries; however, this may significantly degrade
# query performance.
#
#snowflake_allow_stages = true

# Sets the number of rows to be fetched by the Snowflake cursor at one time. This is only used if
# `snowflake_allow_stages` is set to false; it may help with performance depending on the type and size
# of data being queried.
#
#snowflake_batch_size = 10000
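As a sketch, a minimal username/password Snowflake setup might look like this; the account name, region, and values are hypothetical:

```toml
# Hypothetical Snowflake connector configuration.
snowflake_url = "https://myaccount.us-east-1.snowflakecomputing.com"
snowflake_user = "dai_service"
snowflake_password = "change-me"
```

For production, consider placing secrets such as snowflake_password in a keystore file (see the configuration override chain above) rather than in plain config.toml.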

# KDB Connector credentials
#kdb_user = ""

# KDB Connector credentials
#kdb_password = ""

# KDB Connector credentials
#kdb_hostname = ""

# KDB Connector credentials
#kdb_port = ""

# KDB Connector credentials
#kdb_app_classpath = ""

# KDB Connector credentials
#kdb_app_jvm_args = ""

# Account name for the Azure Blob Store Connector.
#azure_blob_account_name = ""

# Account key for the Azure Blob Store Connector.
#azure_blob_account_key = ""

# Connection string for the Azure Blob Store Connector.
#azure_connection_string = ""

# SAS token for the Azure Blob Store Connector.
#azure_sas_token = ""

# Starting Azure Blob Store path displayed in the UI Azure Blob Store browser.
#azure_blob_init_path = "https://"

# When enabled, the Azure Blob Store Connector will use an access token derived from the credentials received on login with OpenID Connect.
#azure_blob_use_access_token = false

# Configures the scopes for the access token used by the Azure Blob Store Connector when azure_blob_use_access_token is enabled (space-separated list).
#azure_blob_use_access_token_scopes = "https://storage.azure.com/.default"

# Sets the source of the access token for accessing the Azure Blob Store:
# KEYCLOAK: Will exchange the session access token for the federated
# refresh token with Keycloak and use it to obtain the access token
# directly with the Azure AD.
# SESSION: Will use the access token derived from the credentials
# received on login with OpenID Connect.
#
#azure_blob_use_access_token_source = "SESSION"

# Application (client) ID registered on Azure AD when the KEYCLOAK source is enabled.
#azure_blob_keycloak_aad_client_id = ""

# Application (client) secret when the KEYCLOAK source is enabled.
#azure_blob_keycloak_aad_client_secret = ""

# A URL that identifies a token authority. It should be of the format https://login.microsoftonline.com/your_tenant
#azure_blob_keycloak_aad_auth_uri = ""

# Keycloak endpoint for retrieving external IdP tokens (https://www.keycloak.org/docs/latest/server_admin/#retrieving-external-idp-tokens)
#azure_blob_keycloak_broker_token_endpoint = ""

# (DEPRECATED, use azure_blob_use_access_token and
# azure_blob_use_access_token_source="KEYCLOAK" instead.
# When enabled, only the DEPRECATED options azure_ad_client_id,
# azure_ad_client_secret, azure_ad_auth_uri and
# azure_keycloak_idp_token_endpoint will be effective.
# This is equivalent to setting
# azure_blob_use_access_token_source = "KEYCLOAK"
# and setting the azure_blob_keycloak_aad_client_id,
# azure_blob_keycloak_aad_client_secret,
# azure_blob_keycloak_aad_auth_uri and
# azure_blob_keycloak_broker_token_endpoint
# options.)
# If true, enables the Azure Blob Storage Connector to use Azure AD tokens
# obtained from Keycloak for auth.
#
#azure_enable_token_auth_aad = false

# (DEPRECATED, use azure_blob_keycloak_aad_client_id instead.) Application (client) ID registered on Azure AD.
#azure_ad_client_id = ""

# (DEPRECATED, use azure_blob_keycloak_aad_client_secret instead.) Application client secret.
#azure_ad_client_secret = ""

# (DEPRECATED, use azure_blob_keycloak_aad_auth_uri instead.) A URL that identifies a token authority. It should be of the format https://login.microsoftonline.com/your_tenant
#azure_ad_auth_uri = ""

# (DEPRECATED, use azure_blob_use_access_token_scopes instead.) Scopes requested to access a protected API (a resource).
#azure_ad_scopes = "[]"

# (DEPRECATED, use azure_blob_keycloak_broker_token_endpoint instead.) Keycloak endpoint for retrieving external IdP tokens (https://www.keycloak.org/docs/latest/server_admin/#retrieving-external-idp-tokens)
#azure_keycloak_idp_token_endpoint = ""

# ID of the application's Microsoft Entra tenant, also called its 'directory' ID.
# This is used for Azure Workload Identity.
#
#azure_workload_identity_tenant_id = ""

# The client ID of a Microsoft Entra app registration.
# This is used for Azure Workload Identity.
#
#azure_workload_identity_client_id = ""

# The path to a file containing a Kubernetes service account token that authenticates the identity.
# This is used for Azure Workload Identity.
#
#azure_workload_identity_token_file_path = ""

# Desired scopes for the access token when the Databricks connector is using
# Azure Workload Identity authentication. At least one scope should be specified.
# For more information about scopes, see https://learn.microsoft.com/entra/identity-platform/scopes-oidc.
#
#databricks_azure_workload_identity_scopes = ""

# Name of the Databricks workspace instance. Please refer to
# https://learn.microsoft.com/en-us/azure/databricks/workspace/workspace-details
# for how to obtain the name of your Databricks workspace instance.
#
#databricks_workspace_instance_name = ""

# Configuration for the JDBC Connector.
# JSON/Dictionary string with multiple keys.
# Format as a single line without using carriage returns (the following example is formatted for readability).
# Use triple quotation marks to ensure that the text is read as a single string.
# Example:
# '{
# "postgres": {
# "url": "jdbc:postgresql://ip address:port/postgres",
# "jarpath": "/path/to/postgres_driver.jar",
# "classpath": "org.postgresql.Driver"
# },
# "mysql": {
# "url": "mysql connection string",
# "jarpath": "/path/to/mysql_driver.jar",
# "classpath": "my.sql.classpath.Driver"
# }
# }'
#
#jdbc_app_configs = "{}"
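Collapsed to the single-line form that config.toml requires, the PostgreSQL part of the example above might look like this (host, port, and jar path are hypothetical); the triple quotation marks let the embedded double quotes be read as part of one string:

```toml
# Hypothetical single-line jdbc_app_configs value.
jdbc_app_configs = """{"postgres": {"url": "jdbc:postgresql://db.example.com:5432/postgres", "jarpath": "/opt/jdbc/postgresql.jar", "classpath": "org.postgresql.Driver"}}"""
```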

# Extra JVM args for JDBC connector
#jdbc_app_jvm_args = "-Xmx4g"

# Alternative classpath for JDBC connector
#jdbc_app_classpath = ""

# Configuration for Hive Connector.
# Note that inputs are similar to configuring HDFS connectivity.
# important keys:
# * hive_conf_path - path to hive configuration, may have multiple files. typically: hive-site.xml, hdfs-site.xml, etc
# * auth_type - one of `noauth`, `keytab`, `keytabimpersonation` for kerberos authentication
# * keytab_path - path to the kerberos keytab to use for authentication, can be "" if using `noauth` auth_type
# * principal_user - Kerberos app principal user. Required when using auth_type `keytab` or `keytabimpersonation`
# JSON/Dictionary String with multiple keys. Example:
# '{
# "hive_connection_1": {
# "hive_conf_path": "/path/to/hive/conf",
# "auth_type": "one of ['noauth', 'keytab', 'keytabimpersonation']",
# "keytab_path": "/path/to/<filename>.keytab",
# "principal_user": "hive/localhost@EXAMPLE.COM"
# },
# "hive_connection_2": {
# "hive_conf_path": "/path/to/hive/conf_2",
# "auth_type": "one of ['noauth', 'keytab', 'keytabimpersonation']",
# "keytab_path": "/path/to/<filename_2>.keytab",
# "principal_user": "my_user/localhost@EXAMPLE.COM"
# }
# }'
#
#hive_app_configs = "{}"
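A concrete single-connection sketch, assuming keytab authentication. The connection name, configuration path, and keytab path are placeholders; the triple quotes let the JSON live in one config.toml string:

```toml
# Hypothetical single Hive connection using keytab authentication.
hive_app_configs = """{"hive_prod": {"hive_conf_path": "/etc/hive/conf", "auth_type": "keytab", "keytab_path": "/etc/security/keytabs/dai.keytab", "principal_user": "hive/localhost@EXAMPLE.COM"}}"""
```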

# Extra JVM args for hive connector
#hive_app_jvm_args = "-Xmx4g"

# Alternative classpath for hive connector. Can be used to add additional jar files to classpath.
#hive_app_classpath = ""

# Replace all downloads on the experiment page with exports, and allow users to push to the artifact store configured with artifacts_store
#enable_artifacts_upload = false

# Artifacts store.
# file_system: stores artifacts on a file system directory denoted by artifacts_file_system_directory.
# s3: stores artifacts to an S3 bucket.
# bitbucket: stores data into a Bitbucket repository.
# azure: stores data into Azure Blob Store.
# hdfs: stores data into a Hadoop distributed file system location.
#
#artifacts_store = "file_system"
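For example, a minimal sketch routing experiment artifacts to S3 (the bucket name is a placeholder):

```toml
# Hypothetical setup exporting experiment artifacts to an S3 bucket.
enable_artifacts_upload = true
artifacts_store = "s3"
artifacts_s3_bucket = "my-dai-artifacts"
```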

# Decide whether to skip cert verification for Bitbucket when using a repo with HTTPS
#bitbucket_skip_cert_verification = false

# Local temporary directory to clone artifacts to, relative to data_directory
#bitbucket_tmp_relative_dir = "local_git_tmp"

# File system location where artifacts will be copied in case artifacts_store is set to file_system
#artifacts_file_system_directory = "tmp"

# AWS S3 bucket used for experiment artifact export.
#artifacts_s3_bucket = ""

# Azure Blob Store credentials used for experiment artifact export
#artifacts_azure_blob_account_name = ""

# Azure Blob Store credentials used for experiment artifact export
#artifacts_azure_blob_account_key = ""

# Azure Blob Store connection string used for experiment artifact export
#artifacts_azure_connection_string = ""

# Azure Blob Store SAS token used for experiment artifact export
#artifacts_azure_sas_token = ""

# Git auth user
#artifacts_git_user = "git"

# Git auth password
#artifacts_git_password = ""

# Git repo where artifacts will be pushed upon upload
#artifacts_git_repo = ""

# Git branch on the remote repo where artifacts are pushed
#artifacts_git_branch = "dev"

# File location for the ssh private key used for git authentication
#artifacts_git_ssh_private_key_file_location = ""

# Feature Store server endpoint URL
#feature_store_endpoint_url = ""

# Enable TLS communication between DAI and the Feature Store server
#feature_store_enable_tls = false

# Path to the client certificate to authenticate with the Feature Store server. This is only effective when feature_store_enable_tls=True.
#feature_store_tls_cert_path = ""

# A list of access token scopes used by the Feature Store connector to authenticate. (Space separated list)
#feature_store_access_token_scopes = ""

# When defined, will be used as an alternative recipe implementation for the FeatureStore connector.
#feature_store_custom_recipe_location = ""

# If enabled, GPT functionalities such as summarization will be available. If the `openai_api_secret_key` config is provided, the OpenAI API will be used. Make sure this does not break your internal policy.
#enable_gpt = false

# OpenAI API secret key. Beware that if this config is set and `enable_gpt` is `true`, some metadata about datasets and experiments will be sent to OpenAI (during dataset and experiment summarization). Make sure that passing such data to OpenAI does not break your internal policy.
#openai_api_secret_key = ""

# OpenAI model to use.
#openai_api_model = "gpt-4"

# h2oGPT URL endpoint that will be used for GPT-related purposes (e.g. summarization). If both `h2ogpt_url` and `openai_api_secret_key` are provided, only the h2oGPT URL will be used.
#h2ogpt_url = ""

# The h2oGPT key required for specific h2oGPT URLs, enabling authorized access for GPT-related tasks like summarization.
#h2ogpt_key = ""

# Name of the h2oGPT model that should be used. If not specified, the default model in h2oGPT will be used.
#h2ogpt_model_name = ""
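Putting the GPT settings together, a sketch that enables summarization against an internal h2oGPT endpoint (the URL and key are placeholders):

```toml
# Hypothetical setup pointing GPT features at an internal h2oGPT deployment.
enable_gpt = true
h2ogpt_url = "https://h2ogpt.internal.example.com"
h2ogpt_key = "replace-with-your-key"
```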

# Default AWS credentials to be used for scorer deployments.
#deployment_aws_access_key_id = ""

# Default AWS credentials to be used for scorer deployments.
#deployment_aws_secret_access_key = ""

# AWS S3 bucket to be used for scorer deployments.
#deployment_aws_bucket_name = ""

# Approximate upper limit of time for Triton to take to compute latency and throughput performance numbers when performing 'Benchmark' operations for a deployment. Higher values result in more accurate performance numbers.
#triton_benchmark_runtime = 5

# Approximate upper limit of time for Triton to take to compute latency and throughput performance numbers after loading up the deployment, per model. Higher values result in more accurate performance numbers.
#triton_quick_test_runtime = 2

# Number of Triton deployments to show per page of the Deploy Wizard
#deploy_wizard_num_per_page = 10

# Whether to allow user to change non-server toml parameters per experiment in expert page.
#allow_config_overrides_in_expert_page = true

# Maximum number of columns in each head and tail to log when ingesting data or running experiment on data.
#max_cols_log_headtail = 1000

# Maximum number of columns in each head and tail to show in GUI, useful when head or tail has all necessary columns, but too many for UI or web server to handle.
# -1 means no limit.
# A reasonable value is 500, after which web server or browser can become overloaded and use too much memory.
# Some values of column counts in UI may not show up correctly, and some dataset details functions may not work.
# To select (from GUI or client) any columns as being target, weight column, fold column, time column, time column groups, or dropped columns, the dataset should have those columns within the selected head or tail set of columns.
#max_cols_gui_headtail = 1000

# Supported file formats (file name endings must match for files to show up in file browser)
#supported_file_types = "['csv', 'tsv', 'txt', 'dat', 'tgz', 'gz', 'bz2', 'zip', 'xz', 'xls', 'xlsx', 'jay', 'feather', 'bin', 'arff', 'parquet', 'pkl', 'orc', 'avro']"

# Supported file formats of data recipe files (file name endings must match for files to show up in file browser)
#recipe_supported_file_types = "['py', 'pyc', 'zip']"
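Settings like `supported_file_types` hold a Python-style list serialized as a string. A sketch of one safe way to edit such a value programmatically (this helper is not part of DAI; the appended extension is hypothetical):

```python
import ast

# The string form used in config.toml for list-valued settings.
supported_file_types = "['csv', 'tsv', 'parquet']"

# ast.literal_eval safely parses the Python-literal syntax back into a list
# without executing arbitrary code (unlike eval).
types = ast.literal_eval(supported_file_types)
types.append("json")  # hypothetical extra extension

# Serialize back to the string form expected by config.toml.
new_value = str(types)
print(new_value)  # → ['csv', 'tsv', 'parquet', 'json']
```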

# By default, only supported file types (based on the file extensions listed above) will be listed for import into DAI.
# Some data pipelines generate parquet files without any extensions. Enabling the below option will cause files
# without an extension to be listed in the file import dialog.
# DAI will import files without extensions as parquet files; if a file cannot be imported, an error is generated.
#
#list_files_without_extensions = false

# Allow using browser localstorage, to improve UX.
#allow_localstorage = true

# Allow original dataset columns to be present in downloaded predictions CSV
#allow_orig_cols_in_predictions = true

# Allow the browser to store e.g. login credentials in login form (set to false for higher security)
#allow_form_autocomplete = true

# Enable Projects workspace (alpha version, for evaluation)
#enable_projects = true

# Default application language - options are 'en', 'ja', 'cn', 'ko'
#app_language = "en"

# If true, Logout button is not visible in the GUI.
#disablelogout = false

# Local path to the location of the Driverless AI Python Client. If empty, will download from s3
#python_client_path = ""

# If disabled, server won't verify if WHL package specified in `python_client_path` is a valid DAI python client. Default True
#python_client_verify_integrity = true

# When enabled, creating a new experiment requires specifying an experiment name
#gui_require_experiment_name = false

# When disabled, Deploy option will be disabled on finished experiment page
#gui_enable_deploy_button = true

# Display experiment tour
#enable_gui_product_tour = true

# Whether user can download dataset as csv file
#enable_dataset_downloading = true

# If enabled, user can export experiment as a Zip file
#enable_experiment_export = true

# If enabled, user can import experiments exported as Zip files from Driverless AI
#enable_experiment_import = true

# (EXPERIMENTAL) If enabled, user can launch experiment via new `Predict Wizard` options, which navigates to the new Nitro wizard.
#enable_experiment_wizard = true

# (EXPERIMENTAL) If enabled, user can do joins via new `Join Wizard` options, which navigates to the new Nitro wizard.
#enable_join_wizard = true

# URL address of the H2O AI link
#hac_link_url = "https://www.h2o.ai/freetrial/?utm_source=dai&ref=dai"

#show_all_filesystems = false

# Switches Driverless AI to use the H2O.ai License Management Server to manage licenses/permission to use software
#enable_license_manager = false

# Address at which to communicate with the H2O.ai License Management Server.
# Requires `enable_license_manager` (above) to be set to true.
# Format: {http/https}://{ip address}:{port number}
#
#license_manager_address = "http://127.0.0.1:9999"

# Name of license manager project that Driverless AI will attempt to retrieve leases from.
# NOTE: requires an active license within the License Manager Server to function properly
#
#license_manager_project_name = "default"
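A sketch of the three settings needed to point DAI at a license manager on another host (the address and project name are placeholders):

```toml
# Hypothetical license manager setup.
enable_license_manager = true
license_manager_address = "https://license.internal.example.com:9999"
license_manager_project_name = "dai-prod"
```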

# Number of milliseconds a lease for users will be expected to last,
# if using the H2O.ai License Manager server, before the lease REQUIRES renewal.
# Default: 3600000 (1 hour) = 1 hour * 60 min / hour * 60 sec / min * 1000 milliseconds / sec
#
#license_manager_lease_duration = 3600000

# Number of milliseconds a lease for Driverless AI worker nodes will be expected to last,
# if using the H2O.ai License Manager server, before the lease REQUIRES renewal.
# Default: 21600000 (6 hours) = 6 hours * 60 min / hour * 60 sec / min * 1000 milliseconds / sec
#
#license_manager_worker_lease_duration = 21600000

# To be used only if the License Manager server is started with HTTPS.
# Accepts a boolean: true/false, or a path to a file/directory. Denotes whether or not to attempt
# SSL certificate verification when making a request to the License Manager server.
# True: attempt ssl certificate verification, will fail if certificates are self-signed
# False: skip ssl certificate verification.
# /path/to/cert/directory: load certificates <cert.pem> in directory and use those for certificate verification
# Behaves in the same manner as the python requests package:
# https://requests.readthedocs.io/en/latest/user/advanced/#ssl-cert-verification
#
#license_manager_ssl_certs = "true"

# Amount of time that Driverless AI workers will keep retrying to start up and obtain a lease from
# the license manager before timing out. Timing out causes worker startup to fail.
#
#license_manager_worker_startup_timeout = 3600000

# Emergency setting that will allow Driverless AI to run even if there are issues communicating with,
# or obtaining leases from, the License Manager server.
# This is an encoded string that can be obtained from either the license manager UI or the logs of the license
# manager server.
#
#license_manager_dry_run_token = ""

# Choose LIME method to be used for creation of surrogate models.
#mli_lime_method = "k-LIME"

# Choose whether surrogate models should be built for original or transformed features.
#mli_use_raw_features = true

# Choose whether time series based surrogate models should be built for original features.
#mli_ts_use_raw_features = false

# Choose whether to run all explainers on the sampled dataset.
#mli_sample = true

# Set maximum number of features for which to build Surrogate Partial Dependence Plot. Use -1 to calculate Surrogate Partial Dependence Plot for all features.
#mli_vars_to_pdp = 10

# Set the number of cross-validation folds for surrogate models.
#mli_nfolds = 3

# Set the number of columns to bin in case of quantile binning.
#mli_qbin_count = 0

# Number of threads for H2O instance for use by MLI.
#h2o_mli_nthreads = 8

# Use this option to disable MOJO scoring pipeline. Scoring pipeline is chosen automatically (from MOJO and Python pipelines) by default. For certain models, the MOJO vs. Python choice can impact pipeline performance and robustness.
#mli_enable_mojo_scorer = true

# When the number of rows is above this limit, sample for MLI scoring of UI data.
#mli_sample_above_for_scoring = 1000000

# When the number of rows is above this limit, sample for MLI training of surrogate models.
#mli_sample_above_for_training = 100000

# The sample size, number of rows, used for MLI surrogate models.
#mli_sample_size = 100000

# Number of bins for quantile binning.
#mli_num_quantiles = 10

# Number of trees for Random Forest surrogate model.
#mli_drf_num_trees = 100

# Speed up predictions with a fast approximation (can reduce the number of trees or cross-validation folds).
#mli_fast_approx = true

# Maximum number of interpreter status cache entries.
#mli_interpreter_status_cache_size = 1000

# Max depth for Random Forest surrogate model.
#mli_drf_max_depth = 20

# Not only sample training, but also sample scoring.
#mli_sample_training = true

# Regularization strength for k-LIME GLMs.
#klime_lambda = "[1e-06, 1e-08]"

# Regularization distribution between L1 and L2 for k-LIME GLMs.
#klime_alpha = 0.0

# Max cardinality for numeric variables in surrogate models to be considered categorical.
#mli_max_numeric_enum_cardinality = 25

# Maximum number of features allowed for k-LIME k-means clustering.
#mli_max_number_cluster_vars = 6

# Use all columns for k-LIME k-means clustering (this will override `mli_max_number_cluster_vars` if set to `True`).
#use_all_columns_klime_kmeans = false

# Strict version check for MLI
#mli_strict_version_check = true

# MLI cloud name
#mli_cloud_name = ""

# Compute original model ICE using per feature's bin predictions (true) or use "one frame" strategy (false).
#mli_ice_per_bin_strategy = false

# By default, DIA will run for categorical columns with cardinality <= mli_dia_default_max_cardinality.
#mli_dia_default_max_cardinality = 10

# By default, DIA will run for categorical columns with cardinality >= mli_dia_default_min_cardinality.
#mli_dia_default_min_cardinality = 2

# When the number of rows is above this limit, sample for the MLI transformed Shapley calculation.
#mli_shapley_sample_size = 100000

# Enable MLI keeper which ensures efficient use of filesystem/memory/DB by MLI.
#enable_mli_keeper = true

# Enable MLI Sensitivity Analysis
#enable_mli_sa = true

# Enable priority-queue based explainer execution. Priority queues restrict available system resources and prevent system over-utilization. Interpretation execution time might be (significantly) slower.
#enable_mli_priority_queues = true

# Explainers are run sequentially by default. This option can be used to run all explainers in parallel, which can - depending on hardware strength and the number of explainers - decrease interpretation duration. Consider explainer dependencies, random explainer order, and hardware over-utilization.
#mli_sequential_task_execution = true

# When the number of rows is above this limit, sample for Disparate Impact Analysis.
#mli_dia_sample_size = 100000

# When the number of rows is above this limit, sample for Partial Dependence Plot.
#mli_pd_sample_size = 25000

# Use dynamic switching between Partial Dependence Plot numeric and categorical binning and UI chart selection in case of features which were used both as numeric and categorical by the experiment.
#mli_pd_numcat_num_chart = true

# If 'mli_pd_numcat_num_chart' is enabled, then use numeric binning and chart if feature unique values count is bigger than threshold, else use categorical binning and chart.
#mli_pd_numcat_threshold = 11

# In New Interpretation screen, show only datasets which can be used to explain a selected model. This can slow down the server significantly.
#new_mli_list_only_explainable_datasets = false

# Enable async/await-based non-blocking MLI API
#enable_mli_async_api = true

# Enable main chart aggregator in Sensitivity Analysis
#enable_mli_sa_main_chart_aggregator = true

# When to sample for Sensitivity Analysis (number of rows after sampling).
#mli_sa_sampling_limit = 500000

# Run main chart aggregator in Sensitivity Analysis when the number of dataset instances is bigger than given limit.
#mli_sa_main_chart_aggregator_limit = 1000

# Use predict_safe() (true) or predict_base() (false) in MLI (PD, ICE, SA, ...).
#mli_predict_safe = false

# Number of max retries should the surrogate model fail to build.
#mli_max_surrogate_retries = 5

# Allow use of symlinks (instead of file copy) by MLI explainer procedures.
#enable_mli_symlinks = true

# Fraction of memory to allocate for h2o MLI jar
#h2o_mli_fraction_memory = 0.45

# Add TOML string to Driverless AI server config.toml configuration file.
#mli_custom = ""

# To exclude e.g. Sensitivity Analysis explainer use: excluded_mli_explainers=['h2oaicore.mli.byor.recipes.sa_explainer.SaExplainer'].
#excluded_mli_explainers = "[]"

# Enable RPC API performance monitor.
#enable_ws_perfmon = false

# Number of parallel workers when scoring using MOJO in Kernel Explainer.
#mli_kernel_explainer_workers = 4

# Use Kernel Explainer to obtain Shapley values for original features.
#mli_run_kernel_explainer = false

# Sample input dataset for Kernel Explainer.
#mli_kernel_explainer_sample = true

# Sample size for input dataset passed to Kernel Explainer.
#mli_kernel_explainer_sample_size = 1000

# 'auto' or int. Number of times to re-evaluate the model when explaining each prediction. More samples lead to lower variance estimates of the SHAP values. The 'auto' setting uses nsamples = 2 * X.shape[1] + 2048. This setting is disabled by default and DAI determines the right number internally.
#mli_kernel_explainer_nsamples = "auto"

# 'num_features(int)', 'auto' (default for now, but deprecated), 'aic', 'bic', or float. The l1 regularization to use for feature selection (the estimation procedure is based on a debiased lasso). The 'auto' option currently uses aic when less than 20% of the possible sample space is enumerated, otherwise it uses no regularization. THE BEHAVIOR OF 'auto' WILL CHANGE in a future version to be based on 'num_features' instead of AIC. The aic and bic options use the AIC and BIC rules for regularization. Using 'num_features(int)' selects a fixed number of top features. Passing a float directly sets the alpha parameter of the sklearn.linear_model.Lasso model used for feature selection.
#mli_kernel_explainer_l1_reg = "aic"

# Max runtime for Kernel Explainer in seconds. Default is 900, which equates to 15 minutes. Setting this parameter to -1 means to honor the Kernel Shapley sample size provided regardless of max runtime.
#mli_kernel_explainer_max_runtime = 900

# Tokenizer used to extract tokens from text columns for MLI.
#mli_nlp_tokenizer = "tfidf"

# Number of tokens used for MLI NLP explanations. -1 means all.
#mli_nlp_top_n = 20

# Maximum number of records used by MLI NLP explainers.
#mli_nlp_sample_limit = 10000

# Minimum number of documents in which a token has to appear. Integer means absolute count, float means percentage.
#mli_nlp_min_df = 3

# Maximum number of documents in which a token has to appear. Integer means absolute count, float means percentage.
#mli_nlp_max_df = 0.9

# The minimum value in the ngram range. The tokenizer will generate all possible tokens in the (mli_nlp_min_ngram, mli_nlp_max_ngram) range.
#mli_nlp_min_ngram = 1

# The maximum value in the ngram range. The tokenizer will generate all possible tokens in the (mli_nlp_min_ngram, mli_nlp_max_ngram) range.
#mli_nlp_max_ngram = 1

# Mode used to choose N tokens for MLI NLP.
# "top" chooses N top tokens.
# "bottom" chooses N bottom tokens.
# "top-bottom" chooses math.floor(N/2) top and math.ceil(N/2) bottom tokens.
# "linspace" chooses N evenly spaced out tokens.
#mli_nlp_min_token_mode = "top"

# The number of top tokens to be used as features when building token based feature importance.
#mli_nlp_tokenizer_max_features = -1

# The number of top tokens to be used as features when computing text LOCO.
#mli_nlp_loco_max_features = -1

# The tokenizer method to use when tokenizing a dataset for surrogate models. Can either choose 'TF-IDF' or 'Linear Model + TF-IDF', which first runs TF-IDF to get tokens and then fits a linear model between the tokens and the target to get importances of tokens, which are based on coefficients of the linear model. Default is 'Linear Model + TF-IDF'. Only applies to NLP models.
#mli_nlp_surrogate_tokenizer = "Linear Model + TF-IDF"

# The number of top tokens to be used as features when building surrogate models. Only applies to NLP models.
#mli_nlp_surrogate_tokens = 100

# Ignore stop words for MLI NLP.
#mli_nlp_use_stop_words = true

# List of words to filter out before generation of text tokens, which are passed to MLI NLP LOCO and surrogate models (if enabled). Default is 'english'. Pass in custom stop-words as a list, e.g., ['great', 'good'].
#mli_nlp_stop_words = "english"

# Append passed in list of custom stop words to default 'english' stop words.
#mli_nlp_append_to_english_stop_words = false

# Enable MLI for image experiments.
#mli_image_enable = true

# The maximum number of rows allowed to get the local explanation result; increasing the value may jeopardize overall performance. Change the value only if necessary.
#mli_max_explain_rows = 500

# The maximum number of rows allowed to get the NLP token importance result; increasing the value may consume too much memory and negatively impact the performance. Change the value only if necessary.
#mli_nlp_max_tokens_rows = 50

# The minimum number of rows to enable parallel execution for NLP local explanations calculation.
#mli_nlp_min_parallel_rows = 10

# Run legacy defaults in addition to current default explainers in MLI.
#mli_run_legacy_defaults = false

# Run explainers sequentially for one given MLI job.
#mli_run_explainers_sequentially = false

# Set dask CUDA/RAPIDS cluster settings for single node workers.
# Additional environment variables can be set, see: https://dask-cuda.readthedocs.io/en/latest/ucx.html#dask-scheduler
# e.g. for ucx use: {} dict version of: dict(n_workers=None, threads_per_worker=1, processes=True, memory_limit='auto', device_memory_limit=None, CUDA_VISIBLE_DEVICES=None, data=None, local_directory=None, protocol='ucx', enable_tcp_over_ucx=True, enable_infiniband=False, enable_nvlink=False, enable_rdmacm=False, ucx_net_devices='auto', rmm_pool_size='1GB')
# WARNING: Do not add arguments like {'n_workers': 1, 'processes': True, 'threads_per_worker': 1} as this will lead to hangs; the cuda cluster handles this itself.
#
#dask_cuda_cluster_kwargs = "{'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}"

# Set dask cluster settings for single node workers.
#
#dask_cluster_kwargs = "{'n_workers': 1, 'processes': True, 'threads_per_worker': 1, 'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}"
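The dask settings above are Python dict literals serialized as strings. Before editing one, it can help to confirm the edited string still parses; a sketch (this check is not a DAI utility):

```python
import ast

# The dict-literal string form used by dask_cluster_kwargs in config.toml.
dask_cluster_kwargs = ("{'n_workers': 1, 'processes': True, 'threads_per_worker': 1, "
                       "'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}")

# ast.literal_eval verifies the string is a well-formed Python literal;
# it raises ValueError/SyntaxError on a malformed edit instead of failing later.
kwargs = ast.literal_eval(dask_cluster_kwargs)
assert isinstance(kwargs, dict)
print(kwargs["protocol"])  # → tcp
```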

# Whether to start dask workers on this multinode worker.
#
#start_dask_worker = true

# Set dask scheduler env.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_scheduler_env = "{}"

# Set dask cuda scheduler env.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_cuda_scheduler_env = "{}"

# Set dask scheduler options.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_scheduler_options = ""

# Set dask cuda scheduler options.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_cuda_scheduler_options = ""

# Set dask worker env.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_worker_env = "{'NCCL_P2P_DISABLE': '1', 'NCCL_DEBUG': 'WARN'}"

# Set dask worker options.
# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_worker_options = "--memory-limit 0.95"

# Set dask cuda worker options.
# Similar options as dask_cuda_cluster_kwargs.
# See https://dask-cuda.readthedocs.io/en/latest/ucx.html#launching-scheduler-workers-and-clients-separately
# "--rmm-pool-size 1GB" can be set to give 1GB to RMM for more efficient rapids
#
#dask_cuda_worker_options = "--memory-limit 0.95"

# Set dask cuda worker env.
# See: https://dask-cuda.readthedocs.io/en/latest/ucx.html#launching-scheduler-workers-and-clients-separately
# https://ucx-py.readthedocs.io/en/latest/dask.html
#
#dask_cuda_worker_env = "{}"

# See https://docs.dask.org/en/latest/setup/cli.html
# e.g. ucx is optimal, while tcp is most reliable
#
#dask_protocol = "tcp"

# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_server_port = 8786

# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_dashboard_port = 8787

# See https://docs.dask.org/en/latest/setup/cli.html
# e.g. ucx is optimal, while tcp is most reliable
#
#dask_cuda_protocol = "tcp"

# See https://docs.dask.org/en/latest/setup/cli.html
# port + 1 is used for dask dashboard
#
#dask_cuda_server_port = 8790

# See https://docs.dask.org/en/latest/setup/cli.html
#
#dask_cuda_dashboard_port = 8791

# If empty string, auto-detect IP capable of reaching network.
# Required to be set if using worker_mode=multinode.
#
#dask_server_ip = ""

# Number of processes per dask (not cuda-GPU) worker.
# If -1, uses dask default of cpu count + 1 + nprocs.
# If -2, uses DAI default of total number of physical cores.  Recommended for heavy feature engineering.
# If 1, assumes tasks are mostly multi-threaded and can use entire node per task.  Recommended for heavy multinode model training.
# Only applicable to dask (not dask_cuda) workers
#
#dask_worker_nprocs = 1

# Number of threads per process for dask workers
#dask_worker_nthreads = 1

# Number of threads per process for dask_cuda workers
# If -2, uses DAI default of physical cores per GPU,
# since must have 1 worker/GPU only.
#
#dask_cuda_worker_nthreads = -2

# See https://github.com/dask/dask-lightgbm
#
#lightgbm_listen_port = 12400

# Whether to enable jupyter server
#enable_jupyter_server = false

# Port for jupyter server
#jupyter_server_port = 8889

# Whether to enable jupyter server browser
#enable_jupyter_server_browser = false

# Whether to allow root access to jupyter server browser
#enable_jupyter_server_browser_root = false

# Hostname (or IP address) of remote Triton inference service (outside of DAI), to be used when auto_deploy_triton_scoring_pipeline
# and make_triton_scoring_pipeline are not disabled. If set, check triton_model_repository_dir_remote and triton_server_params_remote as well.
#
#triton_host_remote = ""

# Path to model repository directory for remote Triton inference server outside of Driverless AI. All Triton deployments for all users are stored in this directory. Requires write access to this directory from Driverless AI (shared file system). This setting is optional. If not provided, each model deployment will be uploaded over the gRPC protocol.
#triton_model_repository_dir_remote = ""

# Parameters to connect to remote Triton server, only used if triton_host_remote and
# triton_model_repository_dir_remote are set.
# Note: 'model-control-mode' needs to be set to 'explicit' in order to allow DAI to upload models to the remote
# triton server.
#
#triton_server_params_remote = "{'http-port': 8000, 'grpc-port': 8001, 'metrics-port': 8002, 'model-control-mode': 'explicit'}"
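A sketch combining the remote Triton settings above (hostname and repository path are placeholders; the server parameters keep the documented defaults with explicit model control):

```toml
# Hypothetical remote Triton inference server reachable from DAI.
triton_host_remote = "triton.internal.example.com"
triton_model_repository_dir_remote = "/mnt/shared/triton-models"
triton_server_params_remote = "{'http-port': 8000, 'grpc-port': 8001, 'metrics-port': 8002, 'model-control-mode': 'explicit'}"
```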

#triton_log_level = 0

#triton_model_reload_on_startup_count = 0

#triton_clean_up_temp_python_env_on_startup = true

# When set to true, CPU executors will strictly run just CPU tasks.
#multinode_enable_strict_queue_policy = false

# Controls whether CPU tasks can run on GPU machines.
#multinode_enable_cpu_tasks_on_gpu_machines = true

# Storage medium to be used to exchange data between main server and remote worker nodes.
#multinode_storage_medium = "minio"

# How long-running tasks are scheduled.
# multiprocessing: forks the current process immediately.
# singlenode:      shares the task through redis and needs a worker running.
# multinode:       same as singlenode, and also shares the data through minio
# and allows workers to run on different machines.
#
#worker_mode = "singlenode"
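A minimal multinode sketch tying `worker_mode` to the redis and minio settings that follow (the IP address is a placeholder; credentials are intentionally omitted here):

```toml
# Hypothetical multinode main-server setup.
worker_mode = "multinode"
redis_ip = "10.0.0.5"
redis_port = 6379
main_server_minio_address = "10.0.0.5:9001"
```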
5177
5178# Redis settings
5179#redis_ip = "127.0.0.1"
5180
5181# Redis settings
5182#redis_port = 6379
5183
5184# Redis database. Each DAI instance running on the redis server should have unique integer.
5185#redis_db = 0
5186
5187# Redis password. Will be randomly generated main server startup, and by default it will show up in config file uncommented.If you are running more than one DriverlessAI instance per system, make sure each and every instance is connected to its own redis queue.
5188#main_server_redis_password = "PlWUjvEJSiWu9j0aopOyL5KwqnrKtyWVoZHunqxr"
5189
5190# If set to true, the config will get encrypted before it gets saved into the Redis database.
5191#redis_encrypt_config = false
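
# Example (hypothetical addresses): a multinode setup where workers reach the
# main server's Redis queue on a private network, each DAI instance on its own db:
#   worker_mode = "multinode"
#   redis_ip = "10.0.0.5"
#   redis_port = 6379
#   redis_db = 0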

# The port that MinIO will listen on. This only takes effect if the current system is a multinode main server.
#local_minio_port = 9001

# Location of the main server's minio server.
#main_server_minio_address = "127.0.0.1:9001"

# Access key of the main server's minio server.
#main_server_minio_access_key_id = "GMCSE2K2T3RV6YEHJUYW"

# Secret access key of the main server's minio server.
#main_server_minio_secret_access_key = "JFxmXvE/W1AaqwgyPxAUFsJZRnDWUaeQciZJUe9H"

# Name of the minio bucket used for file synchronization.
#main_server_minio_bucket = "h2oai"

# S3 global access key.
#main_server_s3_access_key_id = "access_key"

# S3 global secret access key.
#main_server_s3_secret_access_key = "secret_access_key"

# S3 bucket.
#main_server_s3_bucket = "h2oai-multinode-tests"

# Maximum number of local tasks processed at once, limited to no more than the total number of physical (not virtual) cores divided by two (minimum of 1).
#worker_local_processors = 32

# A concurrency limit for the 3 priority queues, only enabled when worker_remote_processors is greater than 0.
#worker_priority_queues_processors = 4

# A timeout before which a scheduled task is bumped up in priority.
#worker_priority_queues_time_check = 30

# Maximum number of remote tasks processed at once. If the value is set to -1, the system will automatically pick a reasonable limit depending on the number of available virtual CPU cores.
#worker_remote_processors = -1

# If worker_remote_processors >= 3, the factor by which each task reduces threads, used by various packages like datatable, lightgbm, xgboost, etc.
#worker_remote_processors_max_threads_reduction_factor = 0.7

# Temporary file system location for multinode data transfer. This has to be an absolute path with equivalent configuration on both the main server and remote workers.
#multinode_tmpfs = ""

# When set to true, will use 'multinode_tmpfs' as the datasets store.
#multinode_store_datasets_in_tmpfs = false

# How often the server should extract results from the redis queue, in milliseconds.
#redis_result_queue_polling_interval = 100

# Sleep time for the worker loop.
#worker_sleep = 0.1

# For how many seconds the worker should wait for the main server minio bucket before it fails.
#main_server_minio_bucket_ping_timeout = 180

# How long the worker should wait on redis db initialization, in seconds.
#worker_start_timeout = 30

#worker_no_main_server_wait_time = 1800

#worker_no_main_server_wait_time_with_hard_assert = 30

# For how many seconds the worker may be unresponsive before it is marked unhealthy.
#worker_healthy_response_period = 300

# Whether to enable a priority queue for worker nodes to schedule experiments.
# 
#enable_experiments_priority_queue = false

# Exposes the Driverless AI base version when enabled.
#expose_server_version = true

# https settings
# You can make a self-signed certificate for testing with the following commands:
# sudo openssl req -x509 -newkey rsa:4096 -keyout private_key.pem -out cert.pem -days 3650 -nodes -subj '/O=Driverless AI'
# sudo chown dai:dai cert.pem private_key.pem
# sudo chmod 600 cert.pem private_key.pem
# sudo mv cert.pem private_key.pem /etc/dai
#enable_https = false

# https settings
#ssl_key_file = "/etc/dai/private_key.pem"

# https settings
#ssl_crt_file = "/etc/dai/cert.pem"

# https settings
# Passphrase for the ssl_key_file;
# either use this setting or ssl_key_passphrase_file,
# or neither if no passphrase is used.
#ssl_key_passphrase = ""

# https settings
# Passphrase file for the ssl_key_file;
# either use this setting or ssl_key_passphrase,
# or neither if no passphrase is used.
#ssl_key_passphrase_file = ""
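
# Example: enable HTTPS, assuming the certificate and key were created with the
# openssl commands above and moved to /etc/dai:
#   enable_https = true
#   ssl_key_file = "/etc/dai/private_key.pem"
#   ssl_crt_file = "/etc/dai/cert.pem"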

# SSL TLS
#ssl_no_sslv2 = true

# SSL TLS
#ssl_no_sslv3 = true

# SSL TLS
#ssl_no_tlsv1 = true

# SSL TLS
#ssl_no_tlsv1_1 = true

# SSL TLS
#ssl_no_tlsv1_2 = false

# SSL TLS
#ssl_no_tlsv1_3 = false

# https settings
# Sets the client verification mode.
# CERT_NONE: The client does not need to provide a certificate, and if it does, any
# verification errors are ignored.
# CERT_OPTIONAL: The client does not need to provide a certificate, and if it does,
# the certificate is verified against the set up CA chains.
# CERT_REQUIRED: The client needs to provide a certificate, which is then
# verified.
# You'll need to set 'ssl_client_key_file' and 'ssl_client_crt_file'
# when this mode is selected, so that Driverless AI is able to verify
# its own callback requests.
# 
#ssl_client_verify_mode = "CERT_NONE"

# https settings
# Path to the Certification Authority certificate file. This certificate will be
# used to verify the client certificate when client authentication is turned on.
# If this is not set, clients are verified using default system certificates.
# 
#ssl_ca_file = ""

# https settings
# Path to the private key that Driverless AI will use to authenticate itself when
# CERT_REQUIRED mode is set.
# 
#ssl_client_key_file = ""

# https settings
# Path to the client certificate that Driverless AI will use to authenticate itself
# when CERT_REQUIRED mode is set.
# 
#ssl_client_crt_file = ""

# If enabled, the webserver will serve xsrf cookies and verify their validity upon every POST request.
#enable_xsrf_protection = true

# Sets the `SameSite` attribute for the `_xsrf` cookie; options are "Lax", "Strict", or "".
#xsrf_cookie_samesite = ""

#enable_secure_cookies = false

# When enabled, each authenticated access will be verified by comparing the IP address of the session initiator with the IP address of the current request.
#verify_session_ip = false

# Enables automatic detection of forbidden/dangerous constructs in custom recipes.
#custom_recipe_security_analysis_enabled = false

# List of modules that can be imported in custom recipes. The default empty list means all modules are allowed except for banlisted ones.
#custom_recipe_import_allowlist = "[]"

# List of modules that cannot be imported in custom recipes.
#custom_recipe_import_banlist = "['shlex', 'plumbum', 'pexpect', 'envoy', 'commands', 'fabric', 'subprocess', 'os.system', 'system']"

# Regex pattern list of calls which are allowed in custom recipes.
# An empty list means everything (except for the banlist) is allowed.
# E.g. if only `os.path.*` is in the allowlist, a custom recipe can only call methods
# from the `os.path` module and the built-in ones.
# 
#custom_recipe_method_call_allowlist = "[]"

# Regex pattern list of calls which need to be rejected in custom recipes.
# E.g. if `os.system` is in the banlist, a custom recipe cannot call `os.system()`.
# If `socket.*` is in the banlist, a recipe cannot call any method of the socket module, such as
# `socket.socket()` or any `socket.a.b.c()`.
# 
#custom_recipe_method_call_banlist = "['os\\.system', 'socket\\..*', 'subprocess.*', 'os.spawn.*']"

# List of regex patterns representing dangerous sequences/constructs
# which could be harmful to the whole system and should be banned from code.
# 
#custom_recipe_dangerous_patterns = "['rm -rf', 'rm -fr']"
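
# Example (hypothetical policy, not a default): lock custom recipes down to
# numpy, pandas, and datatable imports while keeping the default call banlist:
#   custom_recipe_security_analysis_enabled = true
#   custom_recipe_import_allowlist = "['numpy', 'pandas', 'datatable']"
#   custom_recipe_method_call_banlist = "['os\\.system', 'socket\\..*', 'subprocess.*', 'os.spawn.*']"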

# If enabled, a user can log in from 2 browsers (scripts) at the same time.
#allow_concurrent_sessions = true

# Extra HTTP headers.
#extra_http_headers = "{}"

# By default, Driverless AI issues cookies with the HttpOnly and Secure attributes (morsels) enabled. In addition to that, the SameSite attribute is set to 'Lax', as it's a default in modern browsers. This config overrides the default key/value pairs (morsels).
#http_cookie_attributes = "{'samesite': 'Lax'}"

# Enable column imputation.
#enable_imputation = false

# Adds an advanced settings panel to experiment setup, which allows creating
# custom features and more.
# 
#enable_advanced_features_experiment = false

# Specifies whether Driverless AI uses H2O Storage or H2O Entity Server for
# a shared entities backend.
# h2o-storage: Uses legacy H2O Storage.
# entity-server: Uses the new HAIC Entity Server.
# 
#h2o_storage_mode = "h2o-storage"

# Address of the H2O Storage endpoint. Keep empty to use local storage only.
#h2o_storage_address = ""

# Whether to use remote projects stored in H2O Storage instead of local projects.
#h2o_storage_projects_enabled = false

# Whether the channel to the storage should be encrypted.
#h2o_storage_tls_enabled = true

# Path to the certification authority certificate that the H2O Storage server identity will be checked against.
#h2o_storage_tls_ca_path = ""

# Path to the client certificate to authenticate with the H2O Storage server.
#h2o_storage_tls_cert_path = ""

# Path to the client key to authenticate with the H2O Storage server.
#h2o_storage_tls_key_path = ""

# UUID of a Storage project to use instead of the remote HOME folder.
#h2o_storage_internal_default_project_id = ""

# Deadline for RPC calls with H2O Storage, in seconds. Sets the maximum number of seconds that Driverless AI waits for an RPC call to complete before it cancels it.
#h2o_storage_rpc_deadline_seconds = 60

# Deadline for RPC bytestream calls with H2O Storage, in seconds. Sets the maximum number of seconds that Driverless AI waits for an RPC call to complete before it cancels it. This value is used for uploading and downloading artifacts.
#h2o_storage_rpc_bytestream_deadline_seconds = 7200

# The Storage client manages its own access tokens, derived from the refresh token received on user login. When this option is set, an access token with the scopes defined here is requested (space-separated list).
#h2o_storage_oauth2_scopes = ""

# Maximum message size of an RPC request in bytes. Requests larger than this limit will fail.
#h2o_storage_message_size_limit = 1048576000

# If `h2o_mlops_ui_url` is provided alongside `enable_storage`, DAI is able to redirect the user to the MLOps app upon clicking the Deploy button.
#h2o_mlops_ui_url = ""

# If `feature_store_ui_url` is provided alongside `enable_file_systems`, DAI is able to redirect the user to the Feature Store app upon clicking the Feature Store button.
#feature_store_ui_url = ""

# H2O Secure Store server endpoint URL.
#h2o_secure_store_endpoint_url = ""

# Enable TLS communication between DAI and the H2O Secure Store server.
#h2o_secure_store_enable_tls = true

# Path to the client certificate to authenticate with the H2O Secure Store server. This is only effective when h2o_secure_store_enable_tls=True.
#h2o_secure_store_tls_cert_path = ""

# Keystore file that contains secure config.toml items like passwords, secret keys, etc. The keystore is managed by the h2oai.keystore tool.
#keystore_file = ""

# Verbosity of logging
# 0: quiet   (CRITICAL, ERROR, WARNING)
# 1: default (CRITICAL, ERROR, WARNING, INFO, DATA)
# 2: verbose (CRITICAL, ERROR, WARNING, INFO, DATA, DEBUG)
# Affects the server and all experiments.
#log_level = 1

# Whether to collect relevant server logs (h2oai_server.log, dai.log from systemctl or docker, and the h2o log).
# Useful when sending logs to H2O.ai.
#collect_server_logs_in_experiment_logs = false

# When set, will migrate all user entities to the defined user upon startup; this is mostly useful during
# instance migration via H2O's AIEM/Steam.
#migrate_all_entities_to_user = ""

# Whether to have all user content isolated into a directory for each user.
# If set to False, all users' content lives in a single common directory,
# recipes are shared, and the brain folder for restart/refit is shared.
# If set to True, each user has a separate folder for all user tasks,
# recipes are isolated to each user, and the brain folder for restart/refit is
# only for the specific user.
# Migration from False to True or back to False is allowed for
# all experiment content accessible by GUI or python client,
# all recipes, and starting an experiment with the same settings, restart, or refit.
# However, after switching to per-user mode, the common brain folder is no longer used.
# 
#per_user_directories = true

# List of file names to ignore during dataset import. Any files with names listed here will be skipped when
# DAI creates a dataset. For example, if a directory contains 3 files [data_1.csv, data_2.csv, _SUCCESS],
# DAI will only attempt to create a dataset using files data_1.csv and data_2.csv; the _SUCCESS file will be ignored.
# The default is to ignore _SUCCESS files, which are commonly created when exporting data from Hadoop.
# 
#data_import_ignore_file_names = "['_SUCCESS']"

# For data import from a directory (multiple files), allow column types to differ and perform upcast during import.
#data_import_upcast_multi_file = false

# If set to true, will explode columns with list data type when importing parquet files.
#data_import_explode_list_type_columns_in_parquet = false

# List of file types that Driverless AI should attempt to import data as IF no file extension exists in the file name.
# If no file extension is provided, Driverless AI will attempt to import the data starting with the first type
# in the defined list. Default ["parquet", "orc"].
# Example: 'test.csv' (file extension exists) vs 'test' (file extension DOES NOT exist).
# NOTE: see the supported_file_types configuration option for more details on supported file types.
# 
#files_without_extensions_expected_types = "['parquet', 'orc']"

# do_not_log_list: add configurations that you do not wish to be recorded in logs here. They will still be stored in experiment information so child experiments can behave consistently.
#do_not_log_list = "['cols_to_drop', 'cols_to_drop_sanitized', 'cols_to_group_by', 'cols_to_group_by_sanitized', 'cols_to_force_in', 'cols_to_force_in_sanitized', 'do_not_log_list', 'do_not_store_list', 'pytorch_nlp_pretrained_s3_access_key_id', 'pytorch_nlp_pretrained_s3_secret_access_key', 'auth_openid_end_session_endpoint_url']"

# do_not_store_list: add configurations that you do not wish to be stored at all here. These will not be remembered across experiments, so this is not applicable to data-science-related items that could be controlled by a user. These items are automatically not logged.
#do_not_store_list = "['artifacts_git_password', 'auth_jwt_secret', 'auth_openid_client_id', 'auth_openid_client_secret', 'auth_openid_userinfo_auth_key', 'auth_openid_userinfo_auth_value', 'auth_openid_userinfo_username_key', 'auth_tls_ldap_bind_password', 'aws_access_key_id', 'aws_secret_access_key', 'azure_blob_account_key', 'azure_blob_account_name', 'azure_connection_string', 'azure_sas_token', 'deployment_aws_access_key_id', 'deployment_aws_secret_access_key', 'gcs_path_to_service_account_json', 'gcs_service_account_json', 'kaggle_key', 'kaggle_username', 'kdb_password', 'kdb_user', 'ldap_bind_password', 'ldap_search_password', 'local_htpasswd_file', 'main_server_minio_access_key_id', 'main_server_minio_secret_access_key', 'main_server_redis_password', 'minio_access_key_id', 'minio_endpoint_url', 'minio_secret_access_key', 'main_server_s3_access_key_id', 'main_server_s3_secret_access_key', 'snowflake_account', 'snowflake_password', 'snowflake_authenticator', 'snowflake_url', 'snowflake_user', 'custom_recipe_security_analysis_enabled', 'custom_recipe_import_allowlist', 'custom_recipe_import_banlist', 'custom_recipe_method_call_allowlist', 'custom_recipe_method_call_banlist', 'custom_recipe_dangerous_patterns', 'azure_ad_client_secret', 'azure_blob_keycloak_aad_client_secret', 'artifacts_azure_blob_account_name', 'artifacts_azure_blob_account_key', 'artifacts_azure_connection_string', 'artifacts_azure_sas_token', 'tensorflow_nlp_pretrained_s3_access_key_id', 'tensorflow_nlp_pretrained_s3_secret_access_key', 'ssl_key_passphrase', 'jdbc_app_configs', 'openai_api_secret_key', 'h2ogpt_key']"

# Memory limit in bytes for datatable to use during parsing of CSV files. -1 for unlimited, 0 for automatic, >0 for a constraint.
#datatable_parse_max_memory_bytes = -1

# Delimiter/separator to use when parsing tabular text files like CSV. Automatic if empty. Must be provided at system start.
#datatable_separator = ""

# Whether to enable pinging of system status during DAI data ingestion.
#ping_load_data_file = false

# Period between checks of DAI status. Should be small enough to avoid slowing the parent that stops the ping process.
#ping_sleep_period = 0.5

# Precision of how data is stored.
# 'datatable' keeps original datatable storage types (i.e. bool, int, float32, float64) (experimental).
# 'float32' is best for speed, 'float64' is best for accuracy or very large input values, 'datatable' is best for memory.
# 'float32' allows numbers up to about +-3E38 with relative error of about 1E-7.
# 'float64' allows numbers up to about +-1E308 with relative error of about 1E-16.
# Some calculations, like GLM standardization, can only handle up to sqrt() of these maximums for data values,
# so GLM with 32-bit precision can only handle values up to about 1E19 before standardization generates inf values.
# If you see "Best individual has invalid score" you may require higher precision.
#data_precision = "float32"
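
# Example (illustrative choice): for input values beyond about 1E19 (e.g. very
# large IDs or monetary totals), switch both stores to 64-bit precision:
#   data_precision = "float64"
#   transformer_precision = "float64"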

# Precision of most data transformers (same options and notes as data_precision).
# Useful for higher precision in transformers with numerous operations that can accumulate error.
# Also useful if you want faster performance for transformers but otherwise want data stored in high precision.
#transformer_precision = "float32"

# Whether to change ulimit soft limits up to hard limits (for the DAI server app, which is not a generic user app).
# Prevents resource limit problems in some cases.
# Restricted to no more than limit_nofile and limit_nproc for those resources.
#ulimit_up_to_hard_limit = true

#disable_core_files = false

# Open file limit.
# Below should be consistent with start-dai.sh.
#limit_nofile = 131071

# Thread count limit.
# Below should be consistent with start-dai.sh.
#limit_nproc = 16384

# Whether to compute the training, validation, and test correlation matrix (table and heatmap pdf) and save it to disk.
# alpha: WARNING: currently single threaded and quadratically slow for many columns.
#compute_correlation = false

# Whether to dump a correlation heatmap to disk.
#produce_correlation_heatmap = false

# Value at which to report high correlation between original features.
#high_correlation_value_to_report = 0.95

# If True, experiments aborted by a server restart will automatically restart and continue upon user login.
#restart_experiments_after_shutdown = false

# When an environment variable is set to a toml value, consider that an override of any toml value. Experiments remember toml values for scoring, and this treats any environment variable set as equivalent to putting OVERRIDE_ in front of the environment key.
#any_env_overrides = false

# Include byte order mark (BOM) when writing CSV files. Required to support UTF-8 encoding in Excel.
#datatable_bom_csv = false

# Whether to enable debug prints (to console/stdout/stderr), e.g. showing up in dai*.log or dai*.txt type files.
#debug_print = false

# Level (0-4) for debug prints (to console/stdout/stderr), e.g. showing up in dai*.log or dai*.txt type files. 1-2 is normal; 4 would lead to highly excessive debug output and is not recommended in production.
#debug_print_level = 0

#return_quickly_autodl_testing = false

#return_quickly_autodl_testing2 = false

#return_before_final_model = false

# Whether to check if config.toml keys are valid and fail if not valid.
#check_invalid_config_toml_keys = true

#predict_safe_trials = 2

#fit_safe_trials = 2

#allow_no_pid_host = true

#enable_autodl_system_insights = true

#enable_deleting_autodl_system_insights_finished_experiments = true

#main_logger_with_experiment_ids = true

# Reduce memory usage during final ensemble feature engineering (1 uses the most memory; larger values use less memory).
#final_munging_memory_reduction_factor = 2

# How much more memory a typical transformer needs than the input data.
# Can be increased if, e.g., final model munging uses too much memory due to parallel operations.
#munging_memory_overhead_factor = 5

#per_transformer_segfault_protection_ga = false

#per_transformer_segfault_protection_final = false

# How often to check resources (disk, memory, cpu) to see if submission needs to be stalled.
#submit_resource_wait_period = 10

# Stall submission of subprocesses if system CPU usage is higher than this threshold in percent (set to 100 to disable). A reasonable number is 90.0 if activated.
#stall_subprocess_submission_cpu_threshold_pct = 100

# Restrict/stall submission of subprocesses if the DAI fork count (across all experiments) per unit of the ulimit nproc soft limit is higher than this threshold in percent (set to -1 to disable, 0 for minimal forking). A reasonable number is 90.0 if activated.
#stall_subprocess_submission_dai_fork_threshold_pct = -1.0

# Restrict/stall submission of subprocesses if the experiment fork count per unit of the ulimit nproc soft limit is higher than this threshold in percent (set to -1 to disable, 0 for minimal forking). A reasonable number is 90.0 if activated. For small data this leads to an overhead of about 0.1s per task submitted due to checks, so for scoring it can slow things down for tests.
#stall_subprocess_submission_experiment_fork_threshold_pct = -1.0

# Whether to restrict pool workers even if not used, by reducing the number of pool workers available. Good if there is a really huge number of experiments; otherwise, it is best to have all pool workers ready and only stall submission of tasks, so the system can adapt to a multi-experiment environment.
#restrict_initpool_by_memory = true

# Whether to terminate experiments if the available system memory falls below memory_limit_gb_terminate.
#terminate_experiment_if_memory_low = false

# Memory in GB below which the experiment will be terminated if terminate_experiment_if_memory_low=true.
#memory_limit_gb_terminate = 5

# A fraction with valid values between 0.1 and 1.0 that determines the disk usage quota for a user. This quota is checked during dataset import and experiment runs.
#users_disk_usage_quota = 1.0

# Path to use for the scoring directory, relative to the run path.
#scoring_data_directory = "tmp"

#num_models_for_resume_graph = 1000

# Internal helper to remember whether exclusive mode was changed.
#last_exclusive_mode = ""

#mojo_acceptance_test_errors_fatal = true

#mojo_acceptance_test_errors_shap_fatal = true

#mojo_acceptance_test_orig_shap = true

# Which MOJO runtimes should be tested as part of the mini acceptance tests.
#mojo_acceptance_test_mojo_types = "['C++', 'Java']"

# Create MOJO for the feature engineering pipeline only (no predictions).
#make_mojo_scoring_pipeline_for_features_only = false

# Replaces target encoding features by their input columns. Instead of CVTE_Age:Income:Zip, this will create Age:Income:Zip. Only applies when make_mojo_scoring_pipeline_for_features_only is enabled.
#mojo_replace_target_encoding_with_grouped_input_cols = false

# Use the pipeline to generate transformed features when making predictions, bypassing the model that usually converts transformed features into predictions.
#predictions_as_transform_only = false

# If set to true, will make sure only the current instance can access its database.
#enable_single_instance_db_access = true

# DCGM daemon address; DCGM has to be in standalone mode on the remote/local host.
#dcgm_daemon_address = "127.0.0.1"

# Deprecated - maps to enable_pytorch_nlp_transformer and enable_pytorch_nlp_model in 1.10.2+.
#enable_pytorch_nlp = "auto"

# How long to wait per GPU for tensorflow/torch to run during system checks.
#check_timeout_per_gpu = 20

# Whether to fail start-up if GPU checks cannot run successfully.
#gpu_exit_if_fails = true

#how_started = ""

#wizard_state = ""

# Whether to enable pushing telemetry events to a configured telemetry receiver in 'telemetry_plugins_dir'.
#enable_telemetry = false

# Directory to scan for telemetry recipes.
#telemetry_plugins_dir = "./telemetry_plugins"

# Whether to enable TLS to communicate with the H2O.ai Telemetry Service.
#h2o_telemetry_tls_enabled = false

# Timeout value when communicating with the H2O.ai Telemetry Service.
#h2o_telemetry_rpc_deadline_seconds = 60

# H2O.ai Telemetry Service address in H2O.ai Cloud.
#h2o_telemetry_address = ""

# H2O.ai Telemetry Service access token file location.
#h2o_telemetry_service_token_location = ""

# TLS CA path when communicating with the H2O.ai Telemetry Service.
#h2o_telemetry_tls_ca_path = ""

# TLS certificate path when communicating with the H2O.ai Telemetry Service.
#h2o_telemetry_tls_cert_path = ""

# TLS key path when communicating with the H2O.ai Telemetry Service.
#h2o_telemetry_tls_key_path = ""

# Enable the time series lag-based recipe with lag transformers. If disabled, the same train-test gap and periods are used, but no lag transformers are enabled. Since the set of feature transformations is quite limited without lag transformers, consider setting enable_time_unaware_transformers to true in that case, in order to treat the problem more like an IID type problem.
#time_series_recipe = true

# Whether causal splits are used when time_series_recipe is false, or whether to use the same train-gap-test splits when lag transformers are disabled (default behavior). For train-test gap, period, etc. to be used when the lag-based recipe is disabled, this must be false.
#time_series_causal_split_recipe = false

# Whether to use lag transformers when using causal splits for validation
# (as occurs when not using the time-based lag recipe).
# If there are no time group columns, lag transformers will still use the time column as the sole time group column.
# 
#use_lags_if_causal_recipe = false

# 'diverse': explore a diverse set of models built using various expert settings. Note that it's possible to rerun another such diverse leaderboard on top of the best-performing model(s), which will effectively help you compose these expert settings.
# 'sliding_window': If the forecast horizon is N periods, create a separate model for each of the (gap, horizon) pairs of (0,n), (n,n), (2*n,n), ..., (2*N-1, n) in units of time periods.
# The number of periods to predict per model, n, is controlled by the expert setting 'time_series_leaderboard_periods_per_model', which defaults to 1.
#time_series_leaderboard_mode = "diverse"

# Fine control to limit the number of models built in 'sliding_window' mode. Larger values lead to fewer models.
#time_series_leaderboard_periods_per_model = 1

# Whether to create larger validation splits that are not bound to the length of the forecast horizon.
#time_series_merge_splits = true

# Maximum ratio of training data samples used for validation across splits when larger validation splits are created.
#merge_splits_max_valid_ratio = -1.0

# Whether to keep a fixed-size train timespan across time-based splits.
# That leads to roughly the same amount of train samples in every split.
# 
#fixed_size_train_timespan = false

# Provide date or datetime timestamps (in the same format as the time column) for custom training and validation splits like this: "tr_start1, tr_end1, va_start1, va_end1, ..., tr_startN, tr_endN, va_startN, va_endN"
#time_series_validation_fold_split_datetime_boundaries = ""
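
# Example (hypothetical dates, for a time column formatted as %Y-%m-%d): two
# custom splits, given as "tr_start1, tr_end1, va_start1, va_end1, tr_start2, tr_end2, va_start2, va_end2":
#   time_series_validation_fold_split_datetime_boundaries = "2019-01-01, 2019-06-30, 2019-07-01, 2019-09-30, 2019-01-01, 2019-09-30, 2019-10-01, 2019-12-31"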

# Set a fixed number of time-based splits for internal model validation (the actual number of splits allowed can be less and is determined at experiment run-time).
#time_series_validation_splits = -1

# Maximum overlap between two time-based splits. Higher values increase the number of possible splits.
#time_series_splits_max_overlap = 0.5

# Earliest allowed datetime (in %Y%m%d format) for which to allow automatic conversion of integers to a time column during parsing. For example, 2010 or 201004 or 20100402 or 201004022312 can be converted to a valid date/datetime, but 1000 or 100004 or 10000402 or 10004022313 can not, and neither can 201000 or 20100500 etc.
#min_ymd_timestamp = 19000101

# Latest allowed datetime (in %Y%m%d format) for which to allow automatic conversion of integers to a time column during parsing. For example, 2010 or 201004 or 20100402 can be converted to a valid date/datetime, but 3000 or 300004 or 30000402 or 30004022313 can not, and neither can 201000 or 20100500 etc.
#max_ymd_timestamp = 21000101

# Maximum number of data samples (randomly selected rows) for date/datetime format detection.
#max_rows_datetime_format_detection = 100000

# Manually disables certain datetime formats during data ingest and experiments.
# For example, ['%y'] will avoid parsing columns that contain '00', '01', '02' string values as a date column.
# 
#disallowed_datetime_formats = "['%y']"

# Whether to use the datetime cache.
#use_datetime_cache = true

# Minimum number of rows required to utilize the datetime cache.
#datetime_cache_min_rows = 10000

# Automatically generate is-holiday features from date columns.
#holiday_features = true

#holiday_country = ""

# List of countries for which to look up the holiday calendar and to generate is-Holiday features.
#holiday_countries = "['UnitedStates', 'UnitedKingdom', 'EuropeanCentralBank', 'Germany', 'Mexico', 'Japan']"

# Max. sample size for automatic determination of time series train/valid split properties, only if a time column is selected.
#max_time_series_properties_sample_size = 250000

# Maximum number of lag sizes to use for lag-based time-series experiments. These are sampled from if sample_lag_sizes==True, else all are taken (-1 == automatic).
#max_lag_sizes = 30
5788#max_lag_sizes = 30
5789
5790# Minimum required autocorrelation threshold for a lag to be considered for feature engineering
5791#min_lag_autocorrelation = 0.1
5792
5793# How many samples of lag sizes to use for a single time group (single time series signal)
5794#max_signal_lag_sizes = 100
5795
5796# If enabled, sample from a set of possible lag sizes (e.g., lags=[1, 4, 8]) for each lag-based transformer, to no more than max_sampled_lag_sizes lags. Can help reduce overall model complexity and size, esp. when many unavailable columns for prediction.
5797#sample_lag_sizes = false
5798
5799# If sample_lag_sizes is enabled, sample from a set of possible lag sizes (e.g., lags=[1, 4, 8]) for each lag-based transformer, to no more than max_sampled_lag_sizes lags. Can help reduce overall model complexity and size. Defaults to -1 (auto), in which case it's the same as the feature interaction depth controlled by max_feature_interaction_depth.
5800#max_sampled_lag_sizes = -1
5801
5802# Override lags to be used
5803# e.g. [7, 14, 21] # this exact list
5804# e.g. 21 # produce from 1 to 21
5805# e.g. 21:3 produce from 1 to 21 in step of 3
5806# e.g. 5-21 produce from 5 to 21
5807# e.g. 5-21:3 produce from 5 to 21 in step of 3
5808# 
5809#override_lag_sizes = "[]"
5810
5811# Override lags to be used for features that are not known ahead of time
5812# e.g. [7, 14, 21] # this exact list
5813# e.g. 21 # produce from 1 to 21
5814# e.g. 21:3 produce from 1 to 21 in step of 3
5815# e.g. 5-21 produce from 5 to 21
5816# e.g. 5-21:3 produce from 5 to 21 in step of 3
5817# 
5818#override_ufapt_lag_sizes = "[]"
5819
5820# Override lags to be used for features that are known ahead of time
5821# e.g. [7, 14, 21] # this exact list
5822# e.g. 21 # produce from 1 to 21
5823# e.g. 21:3 produce from 1 to 21 in step of 3
5824# e.g. 5-21 produce from 5 to 21
5825# e.g. 5-21:3 produce from 5 to 21 in step of 3
5826# 
5827#override_non_ufapt_lag_sizes = "[]"
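
A minimal sketch of the override syntax described above, assuming the string-quoted form used for the defaults in this file:

```toml
# Use exactly these lags for all lag-based transformers
override_lag_sizes = "[7, 14, 21]"

# Equivalent range forms (per the comments above):
# override_lag_sizes = "21"      # lags 1..21
# override_lag_sizes = "21:3"    # lags 1..21 in steps of 3
# override_lag_sizes = "5-21:3"  # lags 5..21 in steps of 3
```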

# Smallest considered lag size
#min_lag_size = -1

# Whether to enable feature engineering based on the selected time column, e.g. Date~weekday.
#allow_time_column_as_feature = true

# Whether to enable an integer time column to be used as a numeric feature.
# If using the time series recipe, using the time column (numeric time stamps) as an input feature can lead to a model that
# memorizes the actual time stamps instead of features that generalize to the future.
#
#allow_time_column_as_numeric_feature = false

# Allowed date or date-time transformations.
# Date transformers include: year, quarter, month, week, weekday, day, dayofyear, num.
# Date transformers also include: hour, minute, second.
# Features in DAI will show up as get_ + transformation name.
# E.g. num is a direct numeric value representing the floating point value of time,
# which can lead to over-fitting if used on IID problems. So it is turned off by default.
#datetime_funcs = "['year', 'quarter', 'month', 'week', 'weekday', 'day', 'dayofyear', 'hour', 'minute', 'second']"
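
For instance, to restrict engineered date features to coarse calendar units only (a sketch using transformer names from the default list above):

```toml
datetime_funcs = "['year', 'quarter', 'month', 'week']"
```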

# Whether to filter out date and date-time transformations that lead to unseen values in the future.
#
#filter_datetime_funcs = true

# Whether to consider time groups columns (tgc) as standalone features.
# Note that 'time_column' is treated separately via 'Allow to engineer features from time column'.
# Note that tgc_allow_target_encoding independently controls if time column groups are target encoded.
# Use allowed_coltypes_for_tgc_as_features for control per feature type.
#
#allow_tgc_as_features = true

# Which time groups columns (tgc) feature types to consider as standalone features,
# if the corresponding flag "Consider time groups columns as standalone features" is set to true.
# E.g. all column types would be ["numeric", "categorical", "ohe_categorical", "datetime", "date", "text"]
# Note that 'time_column' is treated separately via 'Allow to engineer features from time column'.
# Note that if the lag-based time series recipe is disabled, then all tgc are allowed features.
#
#allowed_coltypes_for_tgc_as_features = "['numeric', 'categorical', 'ohe_categorical', 'datetime', 'date', 'text']"

# Whether various transformers (clustering, truncated SVD) are enabled,
# that otherwise would be disabled for time series due to
# the potential to overfit by leaking across time within the fit of each fold.
#
#enable_time_unaware_transformers = "auto"

# Whether to group by all time groups columns for creating lag features, instead of sampling from them
#tgc_only_use_all_groups = true

# Whether to allow target encoding of time groups. This can be useful if there are many groups.
# Note that allow_tgc_as_features independently controls if tgc are treated as normal features.
# 'auto': Choose CV by default.
# 'CV': Enable out-of-fold and CV-in-CV (if enabled) encoding
# 'simple': Simple memorized targets per group.
# 'off': Disable.
# Only relevant for time series experiments that have at least one time column group apart from the time column.
#tgc_allow_target_encoding = "auto"

# If allow_tgc_as_features is true or tgc_allow_target_encoding is true, whether to try both possibilities to see which does better during tuning. Safer than forcing one way or the other.
#tgc_allow_features_and_target_encoding_auto_tune = true

# Enable creation of holdout predictions on training data
# using moving windows (useful for MLI, but can be slow)
#time_series_holdout_preds = true

# Max number of splits used for creating the final time-series model's holdout/backtesting predictions. With the default value '-1', the same number of splits as during model validation will be used. Use 'time_series_validation_splits' to control the number of time-based splits used for model validation.
#time_series_max_holdout_splits = -1

#single_model_vs_cv_score_reldiff = 0.05

#single_model_vs_cv_score_reldiff2 = 0.0

# Whether to blend ensembles in link space, so that the inverse link function can be applied to get predictions after blending.
# This allows Shapley values to sum up to the final predictions, after applying the inverse link function:
# preds = inverse_link(blend(base learner predictions in link space))
#       = inverse_link(sum(blend(base learner shapley values in link space)))
#       = inverse_link(sum(ensemble shapley values in link space))
# For binary classification, this is only supported if inverse_link = logistic = 1/(1+exp(-x)).
# For multiclass classification, this is only supported if inverse_link = softmax = exp(x)/sum(exp(x)).
# For regression, this behavior happens naturally if all base learners use the identity link function; otherwise it is not possible.
#blend_in_link_space = true

# Whether to speed up time-series holdout predictions for back-testing on training data (used for MLI and metrics calculation). Can be slightly less accurate.
#mli_ts_fast_approx = false

# Whether to speed up Shapley values for time-series holdout predictions for back-testing on training data (used for MLI). Can be slightly less accurate.
#mli_ts_fast_approx_contribs = true

# Enable creation of Shapley values for holdout predictions on training data
# using moving windows (useful for MLI, but can be slow), at the time of the experiment. If disabled, MLI will
# generate Shapley values on demand.
#mli_ts_holdout_contribs = true

# Values of 5 or more can improve generalization by more aggressive dropping of least important features. Set to 1 to disable.
#time_series_min_interpretability = 5

# Dropout mode for lag features in order to achieve an equal n.a.-ratio between train and validation/test. The independent mode performs a simple feature-wise dropout, whereas the dependent one takes lag-size dependencies per sample/row into account.
#lags_dropout = "dependent"

# Normalized probability of choosing to lag non-targets relative to targets (-1.0 = auto)
#prob_lag_non_targets = -1.0

# Method to create rolling test set predictions, if the forecast horizon is shorter than the time span of the test set. One can choose between test time augmentation (TTA) and a successive refitting of the final pipeline.
#rolling_test_method = "tta"

#rolling_test_method_max_splits = 1000

# Apply TTA in one pass instead of using rolling windows for internal validation split predictions. Note: Setting this to 'false' leads to significantly longer runtimes.
#fast_tta_internal = true

# Apply TTA in one pass instead of using rolling windows for test set predictions. This only applies if the forecast horizon is shorter than the time span of the test set. Note: Setting this to 'false' leads to significantly longer runtimes.
#fast_tta_test = true

# Probability for a new Lags/EWMA gene to use default lags (determined by frequency/gap/horizon, independent of data) (-1.0 = auto)
#prob_default_lags = -1.0

# Unnormalized probability of choosing other lag time-series transformers based on interactions (-1.0 = auto)
#prob_lagsinteraction = -1.0

# Unnormalized probability of choosing other lag time-series transformers based on aggregations (-1.0 = auto)
#prob_lagsaggregates = -1.0

# Time series centering or detrending transformation. The free parameter(s) of the trend model are fitted, the trend is removed from the target signal, and the pipeline is fitted on the residuals. Predictions are made by adding back the trend. Note: Can be cascaded with 'Time series lag-based target transformation', but is mutually exclusive with regular target transformations. The robust centering or linear detrending variants use RANSAC to achieve a higher tolerance w.r.t. outliers. The Epidemic target transformer uses the SEIR model: https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SEIR_model
#ts_target_trafo = "none"

# Dictionary to control the Epidemic SEIRD model for de-trending of the target per time series group.
# Note: The target column must correspond to I(t), the infected cases as a function of time.
# For each training split and time series group, the SEIRD model is fitted to the target signal (by optimizing
# the free parameters shown below for each time series group).
# Then, the SEIRD model's value is subtracted from the training response, and the residuals are passed to
# the feature engineering and modeling pipeline. For predictions, the SEIRD model's value is added to the residual
# predictions from the pipeline, for each time series group.
# Note: Careful selection of the bounds for the free parameters N, beta, gamma, delta, alpha, rho, lockdown,
# beta_decay, beta_decay_rate is extremely important for good results.
# - S(t) : susceptible/healthy/not immune
# - E(t) : exposed/not yet infectious
# - I(t) : infectious/active <= target column
# - R(t) : recovered/immune
# - D(t) : deceased
# ### Free parameters:
# - N : total population, N=S+E+I+R+D
# - beta : rate of exposure (S -> E)
# - gamma : rate of recovering (I -> R)
# - delta : incubation period
# - alpha : fatality rate
# - rho : rate at which people die
# - lockdown : day of lockdown (-1 => no lockdown)
# - beta_decay : beta decay due to lockdown
# - beta_decay_rate : speed of beta decay
# ### Dynamics:
# if lockdown >= 0:
# beta_min = beta * (1 - beta_decay)
# beta = (beta - beta_min) / (1 + np.exp(-beta_decay_rate * (-t + lockdown))) + beta_min
# dSdt = -beta * S * I / N
# dEdt = beta * S * I / N - delta * E
# dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
# dRdt = (1 - alpha) * gamma * I
# dDdt = alpha * rho * I
# Provide lower/upper bounds for each parameter you want to control the bounds for. Valid parameters are:
# N_min, N_max, beta_min, beta_max, gamma_min, gamma_max, delta_min, delta_max, alpha_min, alpha_max,
# rho_min, rho_max, lockdown_min, lockdown_max, beta_decay_min, beta_decay_max,
# beta_decay_rate_min, beta_decay_rate_max. You can change any subset of parameters, e.g.,
# ts_target_trafo_epidemic_params_dict="{'N_min': 1000, 'beta_max': 0.2}"
# To get the SEIR model (in cases where death rates are very low, this can speed up calculations significantly):
# set alpha_min=alpha_max=rho_min=rho_max=beta_decay_rate_min=beta_decay_rate_max=0, lockdown_min=lockdown_max=-1.
#
#ts_target_trafo_epidemic_params_dict = "{}"
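
A sketch of the two bounds-dictionary examples given in the comment above, as they would appear uncommented in config.toml:

```toml
# Bound only a subset of the SEIRD free parameters
ts_target_trafo_epidemic_params_dict = "{'N_min': 1000, 'beta_max': 0.2}"

# Reduce SEIRD to SEIR (useful when death rates are very low):
# ts_target_trafo_epidemic_params_dict = "{'alpha_min': 0, 'alpha_max': 0, 'rho_min': 0, 'rho_max': 0, 'beta_decay_rate_min': 0, 'beta_decay_rate_max': 0, 'lockdown_min': -1, 'lockdown_max': -1}"
```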

#ts_target_trafo_epidemic_target = "I"

# Time series lag-based target transformation. One can choose between the difference and the ratio of the current and a lagged target. The corresponding lag size can be set via 'Target transformation lag size'. Note: Can be cascaded with 'Time series target transformation', but is mutually exclusive with regular target transformations.
#ts_lag_target_trafo = "none"

# Lag size used for the time series target transformation. See setting 'Time series lag-based target transformation'. -1 => smallest valid value = prediction periods + gap (automatically adjusted by DAI if too small).
#ts_target_trafo_lag_size = -1

# Maximum number of columns sent from the UI to the backend in order to auto-detect TGC
#tgc_via_ui_max_ncols = 10

# Maximum frequency of duplicated timestamps for TGC detection
#tgc_dup_tolerance = 0.01

# Timeout in seconds for time-series properties detection in the UI.
#timeseries_split_suggestion_timeout = 30.0

# Weight time-series model scores by split number raised to this power.
# E.g. use 1.0 to weight the split closest to the horizon by a factor
# that is the number of splits larger than the oldest split.
# Applies to tuning models and final back-testing models.
# If 0.0 (default) is used, the median function is used, else the mean is used.
#
#timeseries_recency_weight_power = 0.0

# Every *.toml file in this directory is read and processed the same way as the main config file.
#user_config_directory = ""

# IP address for the procsy process.
#procsy_ip = "127.0.0.1"

# Port for the procsy process.
#procsy_port = 12347

# Request timeout (in seconds) for the procsy process.
#procsy_timeout = 3600

# IP address for use by MLI.
#h2o_ip = "127.0.0.1"

# Port of the H2O instance for use by MLI. Each H2O node has an internal port (web port+1, so by default port 12349) for internal node-to-node communication.
#h2o_port = 12348

# IP address for the Driverless AI HTTP server.
#ip = "127.0.0.1"

# Port for the Driverless AI HTTP server.
#port = 12345

# A list of two integers indicating the port range to search over, and dynamically find an open port to bind to (e.g., [11111,20000]).
#port_range = "[]"
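
As a sketch, exposing the HTTP server on all network interfaces (binding to 0.0.0.0 is an assumption for your deployment; the default above is 127.0.0.1):

```toml
ip = "0.0.0.0"   # assumed: listen on all interfaces
port = 12345

# Or search a range for a free port instead of a fixed one:
# port_range = "[11111, 20000]"
```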

# Strict version check for DAI
#strict_version_check = true

# File upload limit (default 100GB)
#max_file_upload_size = 104857600000

# Data directory. All application data and files related to datasets and
# experiments are stored in this directory.
#data_directory = "./tmp"

# Sets a custom path for the master.db. Use this to store the database outside the data directory,
# which can improve performance if the data directory is on a slow drive.
#db_path = ""

# Datasets directory. If set, it denotes the location from which all
# datasets are read and into which they are written. Typically this location should be
# on an external file system to allow more granular control over just the datasets volume.
# If empty, defaults to data_directory.
#datasets_directory = ""
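
A hedged layout example for the storage settings above (all paths are hypothetical placeholders, not recommendations):

```toml
data_directory = "/opt/dai/data"               # hypothetical path
datasets_directory = "/mnt/shared/datasets"    # hypothetical external volume
db_path = "/var/lib/dai/master.db"             # hypothetical fast local drive
```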

# Path to the directory where the logs of the HDFS, Hive, JDBC, and KDB+ data connectors will be saved.
#data_connectors_logs_directory = "./tmp"

# Subdirectory within data_directory to store server logs.
#server_logs_sub_directory = "server_logs"

# Subdirectory within data_directory to store pid files for controlling kill/stop of DAI servers.
#pid_sub_directory = "pids"

# Path to the directory which will be used to store MapR tickets when MapR multi-user mode is enabled.
# This is applicable only when enable_mapr_multi_user_mode is set to true.
#
#mapr_tickets_directory = "./tmp/mapr-tickets"

# MapR tickets duration in minutes. If set to -1, the default value is used
# (not specified in the maprlogin command); otherwise the specified configuration
# value is used, but no less than one day.
#
#mapr_tickets_duration_minutes = -1

# Whether to delete all temporary uploaded files (left over from failed uploads) at server start.
#
#remove_uploads_temp_files_server_start = true

# Whether to run through the entire data directory and remove all temporary files.
# Can lead to slow start-up time if there is a large number (much greater than 100) of experiments.
#
#remove_temp_files_server_start = false

# Whether to delete temporary files after an experiment is aborted/cancelled.
#
#remove_temp_files_aborted_experiments = true

# Whether to opt in to usage statistics and bug reporting
#usage_stats_opt_in = true

# Configurations for an HDFS data source
# Path of the HDFS core-site.xml
# core_site_xml_path is deprecated, please use hdfs_config_path
#core_site_xml_path = ""

# (Required) HDFS config folder path. Can contain multiple config files.
#hdfs_config_path = ""

# Path of the principal keytab file. Required when hdfs_auth_type='principal'.
# key_tab_path is deprecated, please use hdfs_keytab_path
#
#key_tab_path = ""

# Path of the principal keytab file. Required when hdfs_auth_type='principal'.
#
#hdfs_keytab_path = ""
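
A minimal HDFS connector sketch using the non-deprecated settings above (both paths are hypothetical, and the keytab only applies when hdfs_auth_type='principal'):

```toml
hdfs_config_path = "/etc/hadoop/conf"          # hypothetical config folder
hdfs_keytab_path = "/etc/security/dai.keytab"  # hypothetical keytab for hdfs_auth_type='principal'
```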

# Whether to delete the preview cache on server exit
#preview_cache_upon_server_exit = true

# When this setting is enabled, any user can see all tasks running in the system, including their owner and an identification key. If this setting is turned off, users can see only their own tasks.
#all_tasks_visible_to_users = true

# When enabled, the server exposes the Health API at /apis/health/v1, which provides a system overview and utilization statistics
#enable_health_api = true

#notification_url = "https://s3.amazonaws.com/ai.h2o.notifications/dai_notifications_prod.json"

# When enabled, the notification scripts will inherit
# the parent process's (DriverlessAI) environment variables.
#
#listeners_inherit_env_variables = false

# Notification scripts
# - the variable points to the location of a script which is executed at a given event in the experiment lifecycle
# - the script should have the executable flag enabled
# - use of an absolute path is suggested
# The on-experiment-start notification script location
#listeners_experiment_start = ""

# The on-experiment-finished notification script location
#listeners_experiment_done = ""

# The on-experiment-import notification script location
#listeners_experiment_import_done = ""

# Notification script triggered when building of the MOJO pipeline for an experiment is
# finished. The value should be an absolute path to an executable script.
#
#listeners_mojo_done = ""

# Notification script triggered when rendering of AutoDoc for an experiment is
# finished. The value should be an absolute path to an executable script.
#
#listeners_autodoc_done = ""

# Notification script triggered when building of the Python scoring pipeline
# for an experiment is finished.
# The value should be an absolute path to an executable script.
#
#listeners_scoring_pipeline_done = ""

# Notification script triggered when the experiment and all its artifacts selected
# at the beginning of the experiment are finished building.
# The value should be an absolute path to an executable script.
#
#listeners_experiment_artifacts_done = ""
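
Wiring up lifecycle hooks might look like the following sketch (script paths are hypothetical; per the comments above they must be absolute paths to executable scripts):

```toml
listeners_experiment_start = "/opt/dai/hooks/on_experiment_start.sh"  # hypothetical path
listeners_experiment_done = "/opt/dai/hooks/on_experiment_done.sh"    # hypothetical path
listeners_inherit_env_variables = true  # let the hooks see the DAI process environment
```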

# Whether to run a quick performance benchmark at start of application
#enable_quick_benchmark = true

# Whether to run an extended performance benchmark at start of application
#enable_extended_benchmark = false

# Scaling factor for the number of rows for the extended performance benchmark. For rigorous performance benchmarking,
# values of 1 or larger are recommended.
#extended_benchmark_scale_num_rows = 0.1

# Number of columns for the extended performance benchmark.
#extended_benchmark_num_cols = 20

# Seconds to allow for testing memory bandwidth by generating numpy frames
#benchmark_memory_timeout = 2

# Maximum portion of total VM memory to use for the numpy memory benchmark
#benchmark_memory_vm_fraction = 0.25

# Maximum number of columns to use for the numpy memory benchmark
#benchmark_memory_max_cols = 1500

# Whether to run quick startup checks at start of application
#enable_startup_checks = true

# Application ID override, which should uniquely identify the instance
#application_id = ""

# After how many seconds to abort MLI recipe execution plans or recipe compatibility checks.
# This blocks the main server from all activities, so a long timeout is not desired, especially in case of hanging processes,
# while a short timeout can too often lead to aborts on a busy system.
#
#main_server_fork_timeout = 10.0

# After how many days audit log records are removed.
# Set to 0 to disable removal of old records.
#
#audit_log_retention_period = 5

# Time to wait after performing a cleanup of temporary files for in-browser dataset upload.
#
#dataset_tmp_upload_file_retention_time_min = 5