Using the config.toml File

The config.toml file uses the TOML v0.5.0 file format. Administrators can customize various aspects of a Driverless AI (DAI) environment by editing the config.toml file before starting DAI.

Note

For information on configuration security, see Configuration Security.

Configuration Override Chain

The configuration engine reads and overrides variables in the following order:

  1. Driverless AI defaults: These are stored in a Python config module.

  2. config.toml - Place this file in a folder or mount it in a Docker container and specify the path in the “DRIVERLESS_AI_CONFIG_FILE” environment variable.

  3. Keystore file - Set the keystore_file parameter in the config.toml file or the environment variable “DRIVERLESS_AI_KEYSTORE_FILE” to point to a valid DAI keystore file generated using the h2oai.keystore tool. If an environment variable is set, the value in the config.toml for keystore_file is overridden.

  4. Environment variable - Configuration variables can also be provided as environment variables. They must have the prefix DRIVERLESS_AI_ followed by the variable name in all caps. For example, “authentication_method” can be provided as “DRIVERLESS_AI_AUTHENTICATION_METHOD”. Setting environment variables overrides values from the keystore file.
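The override chain can be exercised from a shell before starting DAI. The following sketch uses only the variable names described above; the config path and the "ldap" value are illustrative assumptions, not required settings:

```shell
# 2. Point DAI at an edited config.toml (example path; adjust to your mount)
export DRIVERLESS_AI_CONFIG_FILE="/config/config.toml"

# 4. Override a single variable via the environment; this takes precedence
#    over both config.toml and the keystore file when DAI starts.
#    Prefix with DRIVERLESS_AI_ and upper-case the variable name:
export DRIVERLESS_AI_AUTHENTICATION_METHOD="ldap"  # overrides authentication_method
```

Because environment variables sit at the end of the chain, this is a convenient way to change one setting per deployment without maintaining multiple config.toml files.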

  1. Copy the config.toml file from inside the Docker image to your local filesystem.

                # Make a config directory
                mkdir config

                # Copy the config.toml file to the new config directory.
                docker run --runtime=nvidia \
                  --pid=host \
                  --rm \
                  --init \
                  -u `id -u`:`id -g` \
                  -v `pwd`/config:/config \
                  --entrypoint bash \
                  h2oai/dai-ubi8-x86_64:2.4.0-cuda11.8.0.xx \
                  -c "cp /etc/dai/config.toml /config"
  2. Edit the desired variables in the config.toml file. Save your changes when you are done.

  3. Start DAI with the DRIVERLESS_AI_CONFIG_FILE environment variable. Ensure that this environment variable points to the location of the edited config.toml file so that the software can locate the configuration file.

                docker run --runtime=nvidia \
                  --pid=host \
                  --init \
                  --rm \
                  --shm-size=2g --cap-add=SYS_NICE --ulimit nofile=131071:131071 --ulimit nproc=16384:16384 \
                  -u `id -u`:`id -g` \
                  -p 12345:12345 \
                  -e DRIVERLESS_AI_CONFIG_FILE="/config/config.toml" \
                  -v `pwd`/config:/config \
                  -v `pwd`/data:/data \
                  -v `pwd`/log:/log \
                  -v `pwd`/license:/license \
                  -v `pwd`/tmp:/tmp \
                  h2oai/dai-ubi8-x86_64:2.4.0-cuda11.8.0.xx

Sample config.toml File

The following is a copy of the standard config.toml file included with this version of DAI. The sections that follow provide examples of setting environment variables, data connectors, authentication methods, and notifications.


##############################################################################
#                        DRIVERLESS AI CONFIGURATION FILE
#
# Comments:
# This file is authored in TOML (see https://github.com/toml-lang/toml)
#
# Config Override Chain
# Configuration variables for Driverless AI can be provided in several ways,
# the config engine reads and overrides variables in the following order
#
# 1. h2oai/config/config.toml
# [internal not visible to users]
#
# 2. config.toml
# [place file in a folder/mount file in docker container and provide path
# in "DRIVERLESS_AI_CONFIG_FILE" environment variable]
#
# 3. Keystore file
# [set keystore_file parameter in config.toml, or environment variable
# "DRIVERLESS_AI_KEYSTORE_FILE" to point to a valid DAI keystore file
# generated using h2oai.keystore tool]
#
# 4. Environment variable
# [configuration variables can also be provided as environment variables
# they must have the prefix "DRIVERLESS_AI_" followed by
# variable name in caps e.g "authentication_method" can be provided as
# "DRIVERLESS_AI_AUTHENTICATION_METHOD"]
##############################################################################
# If the experiment is not done after this many minutes, stop feature engineering and model tuning as soon as possible and proceed with building the final modeling pipeline and deployment artifacts, independent of model score convergence or pre-determined number of iterations. Only active if not in reproducible mode. Depending on the data and experiment settings, overall experiment runtime can differ significantly from this setting.
#max_runtime_minutes = 1440

# if non-zero, then set max_runtime_minutes automatically to min(max_runtime_minutes, max(min_auto_runtime_minutes, runtime estimate)) when enable_preview_time_estimate is true, so that the preview performs a best estimate of the runtime.  Set to zero to disable runtime estimate being used to constrain runtime of experiment.
#min_auto_runtime_minutes = 60

# Whether to tune max_runtime_minutes based upon final number of base models, so try to trigger start of final model in order to better ensure the entire experiment stops before max_runtime_minutes. Note: If the time given is short enough that tuning models are reduced below final model expectations, the final model may be shorter than expected, leading to an overall shorter experiment time.
#max_runtime_minutes_smart = true

# If the experiment is not done after this many minutes, push the abort button. Preserves experiment artifacts made so far for summary and log zip files, but no further artifacts are made.
#max_runtime_minutes_until_abort = 10080

# If reproducible is set, then experiment and all artifacts are reproducible, however then experiments may take arbitrarily long for a given choice of dials, features, and models.
# Setting this to False allows the experiment to complete after a fixed time, with all aspects of the model and feature building reproducible and seeded, but the overall experiment behavior will not necessarily be reproducible if later iterations would have been used in final model building.
# This should be set to True if every seeded experiment of exact same setup needs to generate the exact same final model, regardless of duration.
#strict_reproducible_for_max_runtime = true

# Uses model built on large number of experiments to estimate runtime.  It can be inaccurate in cases that were not trained on.
#enable_preview_time_estimate = true

# Uses model built on large number of experiments to estimate mojo size.  It can be inaccurate in cases that were not trained on.
#enable_preview_mojo_size_estimate = true

# Uses model built on large number of experiments to estimate max cpu memory.  It can be inaccurate in cases that were not trained on.
#enable_preview_cpu_memory_estimate = true

#enable_preview_time_estimate_rough = false

# If the experiment is not done by this time, push the abort button. Accepts time in format given by time_abort_format (defaults to %Y-%m-%d %H:%M:%S) assuming a time zone set by time_abort_timezone (defaults to UTC). One can also give integer seconds since 1970-01-01 00:00:00 UTC. Applies to time on a DAI worker that runs experiments. Preserves experiment artifacts made so far for summary and log zip files, but no further artifacts are made. NOTE: If you start a new experiment with the same parameters, restart, or refit, this absolute time will apply to such experiments or set of leaderboard experiments.
#time_abort = ""

# Any format is allowed as accepted by datetime.strptime.
#time_abort_format = "%Y-%m-%d %H:%M:%S"

# Any time zone in format accepted by datetime.strptime.
#time_abort_timezone = "UTC"

# Whether to delete all directories and files matching experiment pattern when calling do_delete_model (True),
# or whether to just delete directories (False).  False can be used to preserve experiment logs that do
# not take up much space.
#
#delete_model_dirs_and_files = true

# Whether to delete all directories and files matching dataset pattern when calling do_delete_dataset (True),
# or whether to just delete directories (False).  False can be used to preserve dataset logs that do
# not take up much space.
#
#delete_data_dirs_and_files = true

# # Recipe type
# ## Recipes override any GUI settings
# - **'auto'**: all models and features automatically determined by experiment settings, toml settings, and feature_engineering_effort
# - **'compliant'** : like 'auto' except:
# - *interpretability=10* (to avoid complexity, overrides GUI or python client choice for interpretability)
# - *enable_glm='on'* (rest 'off', to avoid complexity and be compatible with algorithms supported by MLI)
# - *fixed_ensemble_level=0*: Don't use any ensemble
# - *feature_brain_level=0*: No feature brain used (to ensure every restart is identical)
# - *max_feature_interaction_depth=1*: interaction depth is set to 1 (no multi-feature interactions to avoid complexity)
# - *target_transformer='identity'*: for regression (to avoid complexity)
# - *check_distribution_shift_drop='off'*: Don't use distribution shift between train, valid, and test to drop features (bit risky without fine-tuning)
# - **'monotonic_gbm'** : like 'auto' except:
# - *monotonicity_constraints_interpretability_switch=1*: enable monotonicity constraints
# - *self.config.monotonicity_constraints_correlation_threshold = 0.01*: see below
# - *monotonicity_constraints_drop_low_correlation_features=true*: drop features that aren't correlated with target by at least 0.01 (specified by parameter above)
# - *fixed_ensemble_level=0*: Don't use any ensemble (to avoid complexity)
# - *included_models=['LightGBMModel']*
# - *included_transformers=['OriginalTransformer']*: only original (numeric) features will be used
# - *feature_brain_level=0*: No feature brain used (to ensure every restart is identical)
# - *monotonicity_constraints_log_level='high'*
# - *autodoc_pd_max_runtime=-1*: no timeout for PDP creation in AutoDoc
# - **'kaggle'** : like 'auto' except:
# - external validation set is concatenated with train set, with target marked as missing
# - test set is concatenated with train set, with target marked as missing
# - transformers that do not use the target are allowed to fit_transform across entire train + validation + test
# - several config toml expert options open-up limits (e.g. more numerics are treated as categoricals)
# - Note: If plentiful memory, can:
# - choose kaggle mode and then change fixed_feature_interaction_depth to large negative number,
# otherwise the number of features given to each transformer is limited to 50 by default
# - choose mutation_mode = "full", so even more types of transformations are done at once per transformer
# - **'nlp_model'**: Only enables NLP models that process pure text
# - **'nlp_transformer'**: Only enables NLP transformers that process pure text, while any model type is allowed
# - **'image_model'**: Only enables Image models that process pure images
# - **'image_transformer'**: Only enables Image transformers that process pure images, while any model type is allowed
# - **'unsupervised'**: Only enables unsupervised transformers, models and scorers
# - **'gpus_max'**: Maximize use of GPUs (e.g. use XGBoost, rapids, Optuna hyperparameter search, etc.)
# - **'more_overfit_protection'**: Potentially improve overfit, esp. for small data, by disabling target encoding and making GA behave like final model for tree counts and learning rate
# - **'feature_store_mojo'**: Creates a MOJO to be used as transformer in the H2O Feature Store, to augment data on a row-by-row level based on Driverless AI's feature engineering. Only includes transformers that don't depend on the target, since features like target encoding need to be created at model fitting time to avoid data leakage. And features like lags need to be created from the raw data, they can't be computed with a row-by-row MOJO transformer.
# Each pipeline building recipe mode can be chosen, and then fine-tuned using each expert settings.  Changing the
# pipeline building recipe will reset all pipeline building recipe options back to default and then re-apply the
# specific rules for the new mode, which will undo any fine-tuning of expert options that are part of pipeline building
# recipe rules.
# If you choose to do a new/continued/refitted/retrained experiment from a parent experiment, the recipe rules are not re-applied
# and any fine-tuning is preserved.  To reset recipe behavior, one can switch between 'auto' and the desired mode.  This
# way the new child experiment will use the default settings for the chosen recipe.
#recipe = "auto"

# Whether to treat model like UnsupervisedModel, so that one specifies each scorer, pretransformer, and transformer in expert panel like one would do for supervised experiments.
# Otherwise (False), custom unsupervised models will assume the model itself specified these.
# If the unsupervised model chosen has _included_transformers, _included_pretransformers, and _included_scorers selected, this should be set to False (default) else should be set to True.
# Then if one wants the unsupervised model to only produce 1 gene-transformer, then the custom unsupervised model can have:
# _ngenes_max = 1
# _ngenes_max_by_layer = [1000, 1]
# The 1000 for the pretransformer layer just means that layer can have any number of genes.  Choose 1 if you expect a single instance of the pretransformer to be all one needs, e.g. consumes input features fully and produces complete useful output features.
#
#custom_unsupervised_expert_mode = false

# Whether to enable genetic algorithm for selection and hyper-parameter tuning of features and models.
# - If disabled ('off'), will go directly to final pipeline training (using default feature engineering and feature selection).
# - 'auto' is same as 'on' unless pure NLP or Image experiment.
# - "Optuna": Uses DAI genetic algorithm for feature engineering, but model hyperparameters are tuned with Optuna.
# - In the Optuna case, the scores shown in the iteration panel are the best score and trial scores.
# - Optuna mode currently only uses Optuna for XGBoost, LightGBM, and CatBoost (custom recipe).
# - If Pruner is enabled, as is default, Optuna mode disables mutations of eval_metric so pruning uses same metric across trials to compare properly.
# Currently not supported when pre_transformers or multi-layer pipeline used, which must go through at least one round of tuning or evolution.
#
#enable_genetic_algorithm = "auto"

# How much effort to spend on feature engineering (-1...10)
# Heuristic combination of various developer-level toml parameters
# -1  : auto (5, except 1 for wide data in order to limit engineering)
# 0   : keep only numeric features, only model tuning during evolution
# 1   : keep only numeric features and frequency-encoded categoricals, only model tuning during evolution
# 2   : Like #1 but instead just no Text features.  Some feature tuning before evolution.
# 3   : Like #5 but only tuning during evolution.  Mixed tuning of features and model parameters.
# 4   : Like #5, but slightly more focused on model tuning
# 5   : Default.  Balanced feature-model tuning
# 6-7 : Like #5, but slightly more focused on feature engineering
# 8   : Like #6-7, but even more focused on feature engineering with high feature generation rate, no feature dropping even if high interpretability
# 9-10: Like #8, but no model tuning during feature evolution
#
#feature_engineering_effort = -1

# Whether to enable train/valid and train/test distribution shift detection ('auto'/'on'/'off').
# By default, LightGBMModel is used for shift detection if possible, unless it is turned off in model
# expert panel, and then only the models selected in recipe list will be used.
#
#check_distribution_shift = "auto"

# Whether to enable train/test distribution shift detection ('auto'/'on'/'off') for final model transformed features.
# By default, LightGBMModel is used for shift detection if possible, unless it is turned off in model
# expert panel, and then only the models selected in recipe list will be used.
#
#check_distribution_shift_transformed = "auto"

# Whether to drop high-shift features ('auto'/'on'/'off').  Auto disables for time series.
#check_distribution_shift_drop = "auto"

# If distribution shift detection is enabled, drop features (except ID, text, date/datetime, time, weight) for
# which shift AUC, GINI, or Spearman correlation is above this value
# (e.g. AUC of a binary classifier that predicts whether given feature value
# belongs to train or test data)
#
#drop_features_distribution_shift_threshold_auc = 0.999

# Specify whether to check leakage for each feature (``on`` or ``off``).
# If a fold column is used, this option checks leakage without using the fold column.
# By default, LightGBM Model is used for leakage detection when possible, unless it is
# turned off in the Model Expert Settings tab, in which case only the models selected with
# the ``included_models`` option are used. Note that this option is always disabled for time
# series experiments.
#
#check_leakage = "auto"

# If leakage detection is enabled,
# drop features for which AUC (R2 for regression), GINI,
# or Spearman correlation is above this value.
# If fold column present, features are not dropped,
# because leakage test applies without fold column used.
#
#drop_features_leakage_threshold_auc = 0.999

# Max number of rows x number of columns to trigger (stratified) sampling for leakage checks
#
#leakage_max_data_size = 10000000

# Specify the maximum number of features to use and show in importance tables.
# When Interpretability is set higher than 1,
# transformed or original features with lower importance than the top max_features_importance features are always removed.
# Feature importances of transformed or original features correspondingly will be pruned.
# Higher values can lead to lower performance and larger disk space used for datasets with more than 100k columns.
#
#max_features_importance = 100000

# Whether to create the Python scoring pipeline at the end of each experiment.
#make_python_scoring_pipeline = "auto"

# Whether to create the MOJO scoring pipeline at the end of each experiment. If set to "auto", will attempt to
# create it if possible (without dropping capabilities). If set to "on", might need to drop some models,
# transformers or custom recipes.
#
#make_mojo_scoring_pipeline = "auto"

# Whether to create a C++ MOJO based Triton scoring pipeline at the end of each experiment. If set to "auto", will attempt to
# create it if possible (without dropping capabilities). If set to "on", might need to drop some models,
# transformers or custom recipes. Requires make_mojo_scoring_pipeline != "off".
#
#make_triton_scoring_pipeline = "off"

# Whether to automatically deploy the model to the Triton inference server at the end of each experiment.
# "remote" will deploy to the remote Triton inference server to location provided by triton_host_remote (and optionally, triton_model_repository_dir_remote).
# "off" requires manual action (Deploy wizard or Python client or manual transfer of exported Triton directory from Deploy wizard) to deploy the model to Triton.
#
#auto_deploy_triton_scoring_pipeline = "off"

# Test remote Triton deployments during creation of MOJO pipeline. Requires triton_host_remote to be configured and make_triton_scoring_pipeline to be enabled.
#triton_mini_acceptance_test_remote = true

#triton_client_timeout_testing = 300

#test_triton_when_making_mojo_pipeline_only = false

# Perform timing and accuracy benchmarks for Injected MOJO scoring vs Python scoring. This is for full scoring data, and can be slow. This also requires hard asserts. Doesn't force MOJO scoring by itself, so depends on mojo_for_predictions='on' if want full coverage.
#mojo_for_predictions_benchmark = true

# Fail hard if MOJO scoring is this many times slower than Python scoring.
#mojo_for_predictions_benchmark_slower_than_python_threshold = 10

# Fail hard if MOJO scoring is slower than Python scoring by a factor specified by mojo_for_predictions_benchmark_slower_than_python_threshold, but only if have at least this many rows. To reduce false positives.
#mojo_for_predictions_benchmark_slower_than_python_min_rows = 100

# Fail hard if MOJO scoring is slower than Python scoring by a factor specified by mojo_for_predictions_benchmark_slower_than_python_threshold, but only if takes at least this many seconds. To reduce false positives.
#mojo_for_predictions_benchmark_slower_than_python_min_seconds = 2.0

# Inject MOJO into fitted Python state if mini acceptance test passes, so can use C++ MOJO runtime when calling predict(enable_mojo=True, IS_SCORER=True, ...). Prerequisite for mojo_for_predictions='on' or 'auto'.
#inject_mojo_for_predictions = true

# Use MOJO for making fast low-latency predictions after experiment has finished (when applicable, for AutoDoc/Diagnostics/Predictions/MLI and standalone Python scoring via scorer.zip). For 'auto', only use MOJO if number of rows is equal or below mojo_for_predictions_max_rows. For larger frames, it can be faster to use the Python backend since used libraries are more likely already vectorized.
#mojo_for_predictions = "auto"

# For smaller datasets, the single-threaded but low latency C++ MOJO runtime can lead to significantly faster scoring times than the regular in-Driverless AI Python scoring environment. If enable_mojo=True is passed to the predict API, and the MOJO exists and is applicable, then use the MOJO runtime for datasets that have fewer or equal number of rows than this threshold. MLI/AutoDoc set enable_mojo=True by default, so this setting applies. This setting is only used if mojo_for_predictions is 'auto'.
#mojo_for_predictions_max_rows = 10000

# Batch size (in rows) for C++ MOJO predictions. Only when enable_mojo=True is passed to the predict API, and when the MOJO is applicable (e.g., fewer rows than mojo_for_predictions_max_rows). Larger values can lead to faster scoring, but use more memory.
#mojo_for_predictions_batch_size = 100

# Relative tolerance for mini MOJO acceptance test. If Python/C++ MOJO differs more than this from Python, won't use MOJO inside Python for later scoring. Only applicable if mojo_for_predictions=True. Disabled if <= 0.
#mojo_acceptance_test_rtol = 0.0

# Absolute tolerance for mini MOJO acceptance test (for regression/Shapley, will be scaled by max(abs(preds)). If Python/C++ MOJO differs more than this from Python, won't use MOJO inside Python for later scoring. Only applicable if mojo_for_predictions=True. Disabled if <= 0.
#mojo_acceptance_test_atol = 0.0

# Whether to attempt to reduce the size of the MOJO scoring pipeline. A smaller MOJO will also lead to
# less memory footprint during scoring. It is achieved by reducing some other settings like interaction depth, and
# hence can affect the predictive accuracy of the model.
#
#reduce_mojo_size = false

# Whether to create the pipeline visualization at the end of each experiment.
# Uses MOJO to show pipeline, input features, transformers, model, and outputs of model.  MOJO-capable tree models show first tree.
#make_pipeline_visualization = "auto"

# Whether to create the python pipeline visualization at the end of each experiment.
# Each feature and transformer includes a variable importance at end in brackets.
# Only done when forced on, and artifacts as png files will appear in summary zip.
# Each experiment has files per individual in final population:
# 1) preprune_False_0.0 : Before final pruning, without any additional variable importance threshold pruning
# 2) preprune_True_0.0 : Before final pruning, with additional variable importance <=0.0 pruning
# 3) postprune_False_0.0 : After final pruning, without any additional variable importance threshold pruning
# 4) postprune_True_0.0 : After final pruning, with additional variable importance <=0.0 pruning
# 5) posttournament_False_0.0 : After final pruning and tournament, without any additional variable importance threshold pruning
# 6) posttournament_True_0.0 : After final pruning and tournament, with additional variable importance <=0.0 pruning
# 1-5 are done with 'on' while 'auto' only does 6 corresponding to the final post-pruned individuals.
# Even post pruning, some features have zero importance, because only those genes that have value+variance in
# variable importance of value=0.0 get pruned.  GA can have many folds with positive variance
# for a gene, and those are not removed in case they are useful features for final model.
# If small mojo option is chosen (reduce_mojo_size True), then the variance of feature gain is ignored
# for which genes and features are pruned as well as for what appears in the graph.
#
#make_python_pipeline_visualization = "auto"

# Whether to create the experiment AutoDoc after end of experiment.
#
#make_autoreport = true

#max_cols_make_autoreport_automatically = 1000

#max_cols_make_pipeline_visualization_automatically = 5000

# Pass environment variables from running Driverless AI instance to Python scoring pipeline for
# deprecated models, when they are used to make predictions. Use with caution.
# If config.toml overrides are set by env vars, and they differ from what the experiment's env
# looked like when it was trained, then unexpected consequences can occur. Enable this only to
# override certain well-controlled settings like the port for H2O-3 custom recipe server.
#
#pass_env_to_deprecated_python_scoring = false

#transformer_description_line_length = -1

# Whether to measure the MOJO scoring latency at the time of MOJO creation.
#benchmark_mojo_latency = "auto"

# Max size of pipeline.mojo file (in MB) for automatic mode of MOJO scoring latency measurement
#benchmark_mojo_latency_auto_size_limit = 2048

# If MOJO creation times out at end of experiment, can still make MOJO from the GUI or from the R/Py clients (timeout doesn't apply there).
#mojo_building_timeout = 1800.0

# If MOJO visualization creation times out at end of experiment, MOJO is still created if possible within the time limit specified by mojo_building_timeout.
#mojo_vis_building_timeout = 600.0

# If MOJO creation is too slow, increase this value. Higher values can finish faster, but use more memory.
# If MOJO creation fails due to an out-of-memory error, reduce this value to 1.
# Set to -1 for all physical cores.
#
#mojo_building_parallelism = -1

# Size in bytes that all pickled and compressed base models have to satisfy to use parallel MOJO building.
# For large base models, parallel MOJO building can use too much memory.
# Only used if final_fitted_model_per_model_fold_files is true.
#
#mojo_building_parallelism_base_model_size_limit = 100000000

# Whether to show model and pipeline sizes in logs.
# If 'auto', then not done if more than 10 base models+folds, because expect not concerned with size.
#show_pipeline_sizes = "auto"

# safe: assume might be running another experiment on same node
# moderate: assume not running any other experiments or tasks on same node, but still only use physical core count
# max: assume not running anything else on node at all except the experiment
# If multinode is enabled, this option has no effect, unless worker_remote_processors=1 when it will still be applied.
# Each exclusive mode can be chosen, and then fine-tuned using each expert settings.  Changing the
# exclusive mode will reset all exclusive mode related options back to default and then re-apply the
# specific rules for the new mode, which will undo any fine-tuning of expert options that are part of exclusive mode rules.
# If you choose to do a new/continued/refitted/retrained experiment from a parent experiment, all the mode rules are not re-applied
# and any fine-tuning is preserved.  To reset mode behavior, one can switch between 'safe' and the desired mode.   This
# way the new child experiment will use the default system resources for the chosen mode.
#
#exclusive_mode = "safe"

# Maximum number of workers for Driverless AI server pool (only 1 needed currently)
#max_workers = 1

# Max number of CPU cores to use for the whole system. Set to <= 0 to use all (physical) cores.
# If the number of ``worker_remote_processors`` is set to a value >= 3, the number of cores will be reduced
# by the ratio (``worker_remote_processors_max_threads_reduction_factor`` * ``worker_remote_processors``)
# to avoid overloading the system when too many remote tasks are processed at once.
# One can also set environment variable 'OMP_NUM_THREADS' to number of cores to use for OpenMP
# (e.g., in bash: 'export OMP_NUM_THREADS=32' and 'export OPENBLAS_NUM_THREADS=32').
#
#max_cores = 0

# Max number of CPU cores to use across all of DAI experiments and tasks.
# -1 is all available, with stall_subprocess_submission_dai_fork_threshold_count=0 means restricted to core count.
#
#max_cores_dai = -1

# Number of virtual cores per physical core (0: auto mode, >=1 use that integer value).  If >=1, the reported physical cores in logs will match the virtual cores divided by this value.
#virtual_cores_per_physical_core = 0

# Minimum number of virtual cores per physical core. Only applies if virtual cores != physical cores. Can help situations like Intel i9 13900 with 24 physical cores and only 32 virtual cores. So better to limit physical cores to 16.
#min_virtual_cores_per_physical_core_if_unequal = 2

# Number of physical cores to assume are present (0: auto, >=1 use that integer value).
# If for some reason DAI does not automatically figure out physical cores correctly,
# one can override with this value.  Some systems, especially virtualized, do not always provide
# correct information about the virtual cores, physical cores, sockets, etc.
#override_physical_cores = 0

# Number of virtual cores to assume are present (0: auto, >=1 use that integer value).
# If for some reason DAI does not automatically figure out virtual cores correctly,
# or only a portion of the system is to be used, one can override with this value.
# Some systems, especially virtualized, do not always provide
# correct information about the virtual cores, physical cores, sockets, etc.
#override_virtual_cores = 0

# Whether to treat data as a small recipe in terms of work, by spreading many small tasks across many cores instead of forcing GPUs, for models that support it via the static var _use_single_core_if_many.  'auto' looks at _use_single_core_if_many for models and data size, 'on' forces, 'off' disables.
#small_data_recipe_work = "auto"

# Stall submission of tasks if the total DAI fork count exceeds this count (-1 to disable, 0 to automatically use max_cores_dai).
#stall_subprocess_submission_dai_fork_threshold_count = 0

# Stall submission of tasks if available system memory is less than this threshold in percent (set to 0 to disable).
# As available memory approaches this threshold, the number of workers in any pool of workers is linearly reduced down to 1.
#
#stall_subprocess_submission_mem_threshold_pct = 2

# Whether to set the automatic number of cores by physical (True) or logical (False) count.
# Using all logical cores can lead to poor performance due to cache thrashing.
#
#max_cores_by_physical = true

# Absolute limit to core count.
#max_cores_limit = 200

# Control maximum number of cores to use for a model's fit call (0 = all physical cores, >= 1 = use that count).
#max_fit_cores = 10

# Control maximum number of cores to use for scoring across all chosen scorers (0 = auto).
#parallel_score_max_workers = 0

# Whether to use the full multinode distributed cluster (True) or single-node dask (False).
# In some cases, using the entire cluster can be inefficient.  E.g. several DGX nodes can be more efficient
# if used one DGX at a time for medium-sized data.
#
#use_dask_cluster = true

# Control maximum number of cores to use for a model's predict call (0 = all physical cores, >= 1 = use that count).
#max_predict_cores = 0

# Factor by which to reduce physical cores, to use for post-model experiment tasks like autoreport, MLI, etc.
#max_predict_cores_in_dai_reduce_factor = 4

# Maximum number of cores to use for post-model experiment tasks like autoreport, MLI, etc.
#max_max_predict_cores_in_dai = 10

# Control maximum number of cores to use for a model's transform and predict calls when doing operations inside the DAI-MLI GUI and R/Py clients.
# The main experiment and other tasks like MLI and autoreport have separate queues.  Main experiments run at most worker_remote_processors tasks (limited by cores if in auto mode),
# while other tasks run at most worker_local_processors tasks (limited by cores if in auto mode) at the same time,
# so many small tasks can add up.  To prevent overloading the system, the defaults are conservative.  However, if most of the activity involves autoreport or MLI, and no model experiments
# are running, it may be safe to increase this value to something larger than 4.
# -1  : Auto mode.  Up to physical cores divided by 4, up to a maximum of 10.
#  0  : all physical cores.
# >= 1: use that count.
#
#max_predict_cores_in_dai = -1

# Control number of workers used in CPU mode for tuning (0 = socket count, -1 = all physical cores, >= 1 = use that count).  More workers are more parallel, but models learn less from each other.
#batch_cpu_tuning_max_workers = 0

# Control number of workers used in CPU mode for training (0 = socket count, -1 = all physical cores, >= 1 = use that count).
#cpu_max_workers = 0

# Expected maximum number of forks, used to ensure datatable doesn't overload the system. If actual use exceeds this value, the system will start to slow down.
#assumed_simultaneous_dt_forks_munging = 3

# Expected maximum number of forks when computing statistics during ingestion, used to ensure datatable doesn't overload the system.
#assumed_simultaneous_dt_forks_stats_openblas = 1

# Maximum number of threads for datatable for munging.
#max_max_dt_threads_munging = 4

# Expected maximum number of threads for datatable, no matter how many more cores are present.
#max_max_dt_threads_stats_openblas = 8

# Maximum number of threads for datatable for reading/writing files.
#max_max_dt_threads_readwrite = 4

# Maximum parallel workers for final model building.
# 0 means automatic, >=1 means limit to no more than that number of parallel jobs.
# Can be required if some transformer or model uses more than the expected amount of memory.
# Ways to reduce final model building memory usage, e.g. set one or more of these and retrain the final model:
# 1) Increase munging_memory_overhead_factor to 10
# 2) Increase final_munging_memory_reduction_factor to 10
# 3) Lower max_workers_final_munging to 1
# 4) Lower max_workers_final_base_models to 1
# 5) Lower max_cores to, e.g., 1/2 or 1/4 of physical cores.
#max_workers_final_base_models = 0

# Maximum parallel workers for final per-model munging.
# 0 means automatic, >=1 means limit to no more than that number of parallel jobs.
# Can be required if some transformer uses more than the expected amount of memory.
#max_workers_final_munging = 0

# Minimum number of threads for datatable (and OpenMP) during data munging (per process).
# datatable is the main data munging tool used within Driverless AI (source:
# https://github.com/h2oai/datatable)
#
#min_dt_threads_munging = 1

# Like min_dt_threads_munging, but for final pipeline munging.
#min_dt_threads_final_munging = 1

# Maximum number of threads for datatable during data munging (per process) (0 = all, -1 = auto).
# With multiple forks, threads are distributed across forks.
#max_dt_threads_munging = -1

# Maximum number of threads for datatable during data reading and writing (per process) (0 = all, -1 = auto).
# With multiple forks, threads are distributed across forks.
#max_dt_threads_readwrite = -1

# Maximum number of threads for datatable stats and openblas (per process) (0 = all, -1 = auto).
# With multiple forks, threads are distributed across forks.
#max_dt_threads_stats_openblas = -1

# Maximum number of threads for datatable during TS properties preview panel computations.
#max_dt_threads_do_timeseries_split_suggestion = 1

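To keep datatable from monopolizing a shared host, the thread caps above can be lowered together. A sketch with illustrative values only:

```toml
# Illustrative: conservative datatable thread caps for a shared server.
max_max_dt_threads_munging = 2
max_max_dt_threads_readwrite = 2
max_max_dt_threads_stats_openblas = 4
```
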
# Number of GPUs to use per experiment for the training task.  Set to -1 for all GPUs.
# An experiment will generate many different models.
# Currently num_gpus_per_experiment != -1 disables GPU locking, so it is only recommended for
# single experiments and single users.
# Ignored if GPUs are disabled or no GPUs are on the system.
# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# In a multinode context when using dask, this refers to the per-node value.
# For ImageAutoModel, this refers to the total number of GPUs used for that entire model type,
# since there is only one model type for the entire experiment.
# E.g. with 4 GPUs, to run 2 ImageAuto experiments on 2 GPUs each, set
# num_gpus_per_experiment to 2 for each experiment; each experiment then uses only 2 of the 4 GPUs.
#
#num_gpus_per_experiment = -1

# Number of CPU cores per GPU. Limits the number of GPUs in order to have sufficient cores per GPU.
# Set to -1 to disable, -2 for auto mode.
# In auto mode, if lightgbm_use_gpu is 'auto' or 'off', then min_num_cores_per_gpu=1, else min_num_cores_per_gpu=2, due to lightgbm requiring more cores even when using GPUs.
#min_num_cores_per_gpu = -2

# Number of GPUs to use per model training task.  Set to -1 for all GPUs.
# For example, when this is set to -1 and there are 4 GPUs available, all of them can be used for the training of a single model.
# Only applicable currently to the image auto pipeline building recipe or Dask models with more than one GPU or more than one node.
# Ignored if GPUs are disabled or no GPUs are on the system.
# For ImageAutoModel, the maximum of num_gpus_per_model and num_gpus_per_experiment (all GPUs if -1) is taken.
# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# In a multinode context when using Dask, this refers to the per-node value.
#
#num_gpus_per_model = 1

# Number of GPUs to use for predict for models and transform for transformers when running outside of fit/fit_transform.
# -1 means all, 0 means no GPUs, >= 1 means that many GPUs, up to the visible limit.
# If predict/transform are called in the same process as fit/fit_transform, the number of GPUs will match,
# while new processes will use this count for the number of GPUs for applicable models/transformers.
# Exception: TensorFlow (abandoned since 2.4.0), PyTorch models/transformers, and RAPIDS (abandoned since 1.11) always predict on GPU if GPUs exist.
# RAPIDS requires the Python scoring package also be used on GPUs.
# In a multinode context when using Dask, this refers to the per-node value.
#
#num_gpus_for_prediction = 0

# Which gpu_id to start with.
# -1 : auto mode.  E.g. 2 experiments can each set num_gpus_per_experiment to 2 and use 4 GPUs.
# If using CUDA_VISIBLE_DEVICES=... to control GPUs (preferred method), gpu_id=0 is the
# first in that restricted list of devices.
# E.g. if CUDA_VISIBLE_DEVICES='4,5' then gpu_id_start=0 will refer to
# device #4.
# E.g. from expert mode, to run 2 experiments, each on a distinct GPU out of 2 GPUs:
# Experiment#1: num_gpus_per_model=1, num_gpus_per_experiment=1, gpu_id_start=0
# Experiment#2: num_gpus_per_model=1, num_gpus_per_experiment=1, gpu_id_start=1
# E.g. from expert mode, to run 2 experiments, each on 4 distinct GPUs out of 8 GPUs:
# Experiment#1: num_gpus_per_model=1, num_gpus_per_experiment=4, gpu_id_start=0
# Experiment#2: num_gpus_per_model=1, num_gpus_per_experiment=4, gpu_id_start=4
# E.g. like just above, but now running each model on all 4 GPUs:
# Experiment#1: num_gpus_per_model=4, num_gpus_per_experiment=4, gpu_id_start=0
# Experiment#2: num_gpus_per_model=4, num_gpus_per_experiment=4, gpu_id_start=4
# If num_gpus_per_model != 1, global GPU locking is disabled
# (because underlying algorithms don't support arbitrary gpu ids, only sequential ids),
# so the above must be set up correctly to avoid overlap across all experiments by all users.
# More info at: https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#gpu-isolation
# Note that GPU selection does not wrap, so gpu_id_start + num_gpus_per_model must be less than the number of visible GPUs.
#
#gpu_id_start = -1

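The examples above are typically set per experiment via expert settings; as a config.toml default, a sketch with illustrative values might look like:

```toml
# Illustrative: give every experiment 2 GPUs, starting from the first
# device visible under CUDA_VISIBLE_DEVICES.
num_gpus_per_experiment = 2
num_gpus_per_model = 1
gpu_id_start = 0
```
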
# Whether to reduce features until the model does not fail.
# Currently for non-dask XGBoost models (i.e. GLMModel, XGBoostGBMModel, XGBoostDartModel, XGBoostRFModel),
# during normal fit or when using Optuna.
# Primarily useful for GPU OOM.
# If XGBoost runs out of GPU memory, this is detected, and
# (regardless of the setting of skip_model_failures)
# feature selection is performed using XGBoost on subsets of features.
# The dataset is progressively reduced by a factor of 2, with more models to cover all features.
# This splitting continues until no failure occurs.
# Then all sub-models are used to estimate variable importance by absolute information gain,
# in order to decide which features to include.
# Finally, a single model with the most important features
# is built using the feature count that did not lead to OOM.
# For 'auto', this option is set to 'off' when a reproducible experiment is enabled,
# because the OOM condition can change for the same experiment seed.
# Reduction is only done on features and not on rows for the feature selection step.
#
#allow_reduce_features_when_failure = "auto"

# With allow_reduce_features_when_failure, this controls how many repeats of sub-models
# are used for feature selection.  A single repeat only has each sub-model
# consider a single sub-set of features, while repeats shuffle which
# features are considered, allowing more chance to find important interactions.
# More repeats can lead to higher accuracy.
# The cost of this option is proportional to the repeat count.
#
#reduce_repeats_when_failure = 1

# With allow_reduce_features_when_failure, this controls the fraction of features
# treated as an anchor that are fixed for all sub-models.
# Each repeat gets new anchors.
# For tuning and evolution, the probability depends
# upon any prior importance (if present) from other individuals,
# while the final model uses uniform probability for anchor features.
#
#fraction_anchor_reduce_features_when_failure = 0.1

# Error strings from XGBoost that are used to trigger re-fit on reduced sub-models.
# See allow_reduce_features_when_failure.
#
#xgboost_reduce_on_errors_list = "['Memory allocation error on worker', 'out of memory', 'XGBDefaultDeviceAllocatorImpl', 'invalid configuration argument', 'Requested memory']"

# Error strings from LightGBM that are used to trigger re-fit on reduced sub-models.
# See allow_reduce_features_when_failure.
#
#lightgbm_reduce_on_errors_list = "['Out of Host Memory']"

# LightGBM does not significantly benefit from GPUs, unlike other tools such as XGBoost or Bert/Image models.
# Each experiment will try to use all GPUs, and on systems with many cores and GPUs,
# this leads to many experiments running at once, all trying to lock the GPU for use,
# leaving the cores heavily under-utilized.  So by default, DAI always uses CPU for LightGBM, unless 'on' is specified.
#lightgbm_use_gpu = "auto"

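For example, to make the failure-driven feature reduction described above more aggressive, and to override the CPU default for LightGBM (values illustrative only):

```toml
# Illustrative: always reduce features on XGBoost OOM, with two shuffled
# repeats for more robust sub-model feature selection.
allow_reduce_features_when_failure = "on"
reduce_repeats_when_failure = 2
# Force LightGBM onto GPUs despite the CPU default.
lightgbm_use_gpu = "on"
```
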
# Kaggle username for automatic submission and scoring of test set predictions.
# See https://github.com/Kaggle/kaggle-api#api-credentials for details on how to obtain Kaggle API credentials.
#
#kaggle_username = ""

# Kaggle key for automatic submission and scoring of test set predictions.
# See https://github.com/Kaggle/kaggle-api#api-credentials for details on how to obtain Kaggle API credentials.
#
#kaggle_key = ""

# Max. number of seconds to wait for a Kaggle API call to return scores for given predictions.
#kaggle_timeout = 120

#kaggle_keep_submission = false

# If provided, can extend the list to arbitrary (and potentially future) Kaggle competitions to make
# submissions for. Only used if kaggle_key and kaggle_username are provided.
# Provide a quoted comma-separated list of tuples (target column name, number of test rows, competition, metric) like this:
# kaggle_competitions='("target", 200000, "santander-customer-transaction-prediction", "AUC"), ("TARGET", 75818, "santander-customer-satisfaction", "AUC")'
#
#kaggle_competitions = ""

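Putting the Kaggle settings together, a sketch with placeholder credentials (substitute your own API values; never commit real keys to version control):

```toml
# Placeholder values only -- use your own Kaggle API credentials.
kaggle_username = "your-kaggle-username"
kaggle_key = "your-kaggle-api-key"
# Allow up to 5 minutes for scores to come back.
kaggle_timeout = 300
```
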
# Period (in seconds) of ping by the Driverless AI server to each experiment
# (in order to get logger info like disk space and memory usage).
# 0 means don't print anything.
#ping_period = 60

# Whether to enable ping of system status during DAI experiments.
#ping_autodl = true

# Minimum amount of disk space in GB needed to run experiments.
# Experiments will fail if this limit is crossed.
# This limit exists because Driverless AI needs to generate data for model training,
# feature engineering, documentation, and other such processes.
#disk_limit_gb = 5

# Minimum amount of disk space in GB required before stalling the forking of new processes during an experiment.
#stall_disk_limit_gb = 1

# Minimum amount of system memory in GB needed to start experiments.
# As with disk space, a certain amount of system memory is needed to run some basic
# operations.
#memory_limit_gb = 5

# Minimum number of rows needed to run experiments (values lower than 100 might not work).
# A minimum threshold is set to ensure there is enough data to create a statistically
# reliable model and avoid other small-data related failures.
#
#min_num_rows = 100

# Minimum required number of rows (in the training data) for each class label for classification problems.
#min_rows_per_class = 5

# Minimum required number of rows for each split when generating validation samples.
#min_rows_per_split = 5

# Level of reproducibility desired (for the same data and same inputs).
# Only active if 'reproducible' mode is enabled (GUI button enabled or a seed is set from the client API).
# Supported levels are:
# reproducibility_level = 1 for same experiment results as long as same O/S, same CPU(s), and same GPU(s)
# reproducibility_level = 2 for same experiment results as long as same O/S, same CPU architecture, and same GPU architecture
# reproducibility_level = 3 for same experiment results as long as same O/S, same CPU architecture, not using GPUs
# reproducibility_level = 4 for same experiment results as long as same O/S (best effort)
#
#reproducibility_level = 1

# Seed for the random number generator to make experiments reproducible, to a certain reproducibility level (see above).
# Only active if 'reproducible' mode is enabled (GUI button enabled or a seed is set from the client API).
#
#seed = 1234

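For instance, to pin results across machines that share an O/S and CPU architecture (GPUs excluded), per the levels above. Note that reproducible mode must still be enabled per experiment; the values here are illustrative:

```toml
# Illustrative: CPU-only reproducibility across same O/S and CPU architecture.
reproducibility_level = 3
seed = 42
```
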
# The list of values that should be interpreted as missing values during data import.
# This applies to both numeric and string columns. Note that the dataset must be reloaded after applying changes to this config via the expert settings.
# Also note that 'nan' is always interpreted as a missing value for numeric columns.
#missing_values = "['', '?', 'None', 'nan', 'NA', 'N/A', 'unknown', 'inf', '-inf', '1.7976931348623157e+308', '-1.7976931348623157e+308']"

# Whether to impute (to mean) for GLM on training data.
#glm_nan_impute_training_data = false

# Whether to impute (to mean) for GLM on validation data.
#glm_nan_impute_validation_data = false

# Whether to impute (to mean) for GLM on prediction data (required for consistency with MOJO).
#glm_nan_impute_prediction_data = true

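As an example, dataset-specific sentinel codes such as "-999" (a common missing-data convention, used here purely for illustration) can be added to the list above; remember to reload the dataset afterwards:

```toml
# Illustrative: extend the defaults with dataset-specific sentinels.
missing_values = "['', '?', 'None', 'nan', 'NA', 'N/A', 'unknown', '-999', 'missing']"
```
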
# Internal threshold for number of rows x number of columns to trigger certain statistical
# techniques (small data recipe, like including one-hot encoding for all model types, and a smaller learning rate)
# to increase model accuracy.
#statistical_threshold_data_size_small = 100000

# Internal threshold for number of rows x number of columns to trigger certain statistical
# techniques (fewer genes created, removal of high max_depth for tree models, etc.) that can speed up modeling.
# Also controls the maximum rows used in training the final model,
# by sampling statistical_threshold_data_size_large / (number of columns) rows.
#statistical_threshold_data_size_large = 500000000

# Internal threshold for number of rows x number of columns to trigger sampling for auxiliary data uses,
# like imbalanced data set detection and bootstrap scoring sample size and iterations.
#aux_threshold_data_size_large = 10000000

# Internal threshold for the set-based method for sampling without replacement.
# Can be 10x faster than the internal optimized np_random_choice method, and
# up to 30x faster than np.random.choice, to sample 250k rows from 1B rows, etc.
#set_method_sampling_row_limit = 5000000

# Internal threshold for number of rows x number of columns to trigger certain changes in performance:
# fewer threads if beyond the large value, to help avoid OOM or unnecessary slowdowns;
# fewer threads if lower than the small value, to avoid excess forking of tasks.
#performance_threshold_data_size_small = 100000

# Internal threshold for number of rows x number of columns to trigger certain changes in performance:
# fewer threads if beyond the large value, to help avoid OOM or unnecessary slowdowns;
# fewer threads if lower than the small value, to avoid excess forking of tasks.
#performance_threshold_data_size_large = 100000000

# Threshold for number of rows x number of columns to trigger GPU as the default for models like XGBoost GBM.
#gpu_default_threshold_data_size_large = 1000000

# Maximum fraction of mismatched columns to allow between train and either valid or test.  Beyond this value the experiment will fail with an invalid data error.
#max_relative_cols_mismatch_allowed = 0.5

# Enable various rules to handle wide (Num. columns > Num. rows) datasets ('auto'/'on'/'off').  Setting 'on' forces the rules to be enabled regardless of the number of columns.
#enable_wide_rules = "auto"

# If columns > wide_factor * rows, then enable wide rules if 'auto'.  For columns > rows, random forest is always enabled.
#wide_factor = 5.0

# Maximum number of columns to start an experiment. This threshold exists to constrain the complexity and length of Driverless AI's processes.
#max_cols = 10000000

# Largest number of rows to use for column stats; otherwise sample randomly.
#max_rows_col_stats = 1000000

# Largest number of rows to use for CV-in-CV for target encoding when doing the gini scoring test.
#max_rows_cv_in_cv_gini = 100000

# Largest number of rows to use for constant model fit; otherwise sample randomly.
#max_rows_constant_model = 1000000

# Largest number of rows to use for final ensemble base model fold scores; otherwise sample randomly.
#max_rows_final_ensemble_base_model_fold_scores = 1000000

# Largest number of rows to use for the final ensemble blender for regression and binary (scaled down linearly by the number of classes for multiclass with >= 10 classes); otherwise sample randomly.
#max_rows_final_blender = 1000000

# Smallest number of rows (or the number of rows, if fewer) to use for the final ensemble blender.
#min_rows_final_blender = 10000

# Largest number of rows to use for the final training score (no holdout); otherwise sample randomly.
#max_rows_final_train_score = 5000000

# Largest number of rows to use for final ROC, lift-gains, confusion matrix, residual, and actual vs. predicted plots; otherwise sample randomly.
#max_rows_final_roccmconf = 1000000

# Largest number of rows to use for final holdout scores; otherwise sample randomly.
#max_rows_final_holdout_score = 5000000

# Largest number of rows to use for final holdout bootstrap scores; otherwise sample randomly.
#max_rows_final_holdout_bootstrap_score = 1000000

# Whether to obtain permutation feature importance on original features for reporting in logs and the summary zip file
# (as files with pattern fs_*.json or fs_*.tab.txt).
# This computes feature importance on a single un-tuned model
# (typically LightGBM with pre-defined un-tuned hyperparameters)
# and a simple set of features (encoding typically is frequency encoding or target encoding).
# Features with low importance are automatically dropped if there are many original features,
# or a model with feature selection by permutation importance is created, if interpretability is high enough, in order to see if it gives a better score.
# One can manually drop low-importance features, but this can be risky as transformers or hyperparameters might recover
# their usefulness.
# Permutation importance is obtained by:
# 1) Transforming categoricals to frequency or target encoding features.
# 2) Fitting that model on many folds, different data sizes, and slightly varying hyperparameters.
# 3) Predicting on that model for each feature where each feature has its data shuffled.
# 4) Computing the score on each shuffled prediction.
# 5) Computing the difference between the unshuffled score and the shuffled score to arrive at a delta score.
# 6) The delta score becomes the variable importance once normalized by the maximum.
# Positive delta scores indicate the feature helped the model score,
# while negative delta scores indicate the feature hurt the model score.
# The normalized scores are stored in the fs_normalized_* files in the summary zip.
# The unnormalized scores (actual delta scores) are stored in the fs_unnormalized_* files in the summary zip.
# AutoDoc has similar functionality for providing permutation importance on original features:
# it takes the specific final model of an experiment and runs the training data set through permutation importance to get original importance,
# so shuffling of original features is performed and the full pipeline is computed on each shuffled set of original features.
#
#orig_features_fs_report = false

# Maximum number of rows when doing permutation feature importance, reduced by (stratified) random sampling.
#
#max_rows_fs = 500000

#max_rows_leak = 100000

# How many workers to use for feature selection by permutation for the predict phase.
# (0 = auto, > 0: min of DAI value and this value, < 0: exactly the negative of this value)
#
#max_workers_fs = 0

# How many workers to use for shift and leakage checks if using LightGBM on CPU.
# (0 = auto, > 0: min of DAI value and this value, < 0: exactly the negative of this value)
#
#max_workers_shift_leak = 0

# Maximum number of columns selected out of the original set of columns, using feature selection.
# The selection is based upon how well target encoding (or frequency encoding if not available) performs on categoricals and on numerics treated as categoricals.
# This is useful to reduce the final model complexity. First the best
# [max_orig_cols_selected] are found through feature selection methods, and then
# these features are used in feature evolution (to derive other features) and in modelling.
#
#max_orig_cols_selected = 10000000

# Maximum number of numeric columns selected, above which feature selection is performed;
# same as max_orig_cols_selected but for numeric columns.
#max_orig_numeric_cols_selected = 10000000

#max_orig_nonnumeric_cols_selected_default = 300

# Maximum number of non-numeric columns selected, above which feature selection is performed on all features. Same as max_orig_numeric_cols_selected but for categorical columns.
# If set to -1, auto mode uses max_orig_nonnumeric_cols_selected_default, which for small data can be increased up to 10x larger.
#
#max_orig_nonnumeric_cols_selected = -1

# The factor times max_orig_cols_selected by which column selection is based upon no target encoding and no treating of numericals as categoricals,
# in order to limit the performance cost of feature engineering.
#max_orig_cols_selected_simple_factor = 2

# Like max_orig_cols_selected, but the column count above which a special individual with reduced original columns is added.
#
#fs_orig_cols_selected = 10000000

# Like max_orig_numeric_cols_selected, but applicable to the special individual with reduced original columns.
# A separate individual in the genetic algorithm is created by doing feature selection by permutation importance on original features.
#
#fs_orig_numeric_cols_selected = 10000000

# Like max_orig_nonnumeric_cols_selected, but applicable to the special individual with reduced original columns.
# A separate individual in the genetic algorithm is created by doing feature selection by permutation importance on original features.
#
#fs_orig_nonnumeric_cols_selected = 200

# Like max_orig_cols_selected_simple_factor, but applicable to the special individual with reduced original columns.
#fs_orig_cols_selected_simple_factor = 2

#predict_shuffle_inside_model = true

#use_native_cats_for_lgbm_fs = true

#orig_stddev_max_cols = 1000

# Maximum allowed fraction of unique values for integer and categorical columns (otherwise the column is treated as an ID column and dropped).
#max_relative_cardinality = 0.95

# Maximum allowed number of unique values for integer and categorical columns (otherwise the column is treated as an ID column and dropped).
#max_absolute_cardinality = 1000000

# Whether to treat some numerical features as categorical.
# For instance, sometimes an integer column may not represent a numerical feature but
# represent different numerical codes instead.
# Disabling this is very restrictive, since then even columns with few levels that happen to be numerical
# in value will not be encoded as categoricals.
#
#num_as_cat = true

# Max number of unique values for integer/real columns to be treated as categoricals (the test applies to the first statistical_threshold_data_size_small rows only).
#max_int_as_cat_uniques = 50

# Max number of unique values for integer/real columns to be treated as categoricals (the test applies to the first statistical_threshold_data_size_small rows only). Applies to an integer or real numerical feature that violates Benford's law, and so is ID-like but not entirely an ID.
#max_int_as_cat_uniques_if_not_benford = 10000

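For example, to widen the numeric-as-categorical treatment described above so that higher-cardinality integer codes are still encoded as categoricals (threshold value illustrative):

```toml
# Illustrative: treat integer code columns with up to 100 unique values
# as categoricals.
num_as_cat = true
max_int_as_cat_uniques = 100
```
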
# When the fraction of non-numeric (and non-missing) values is less than or equal to this value, consider the
# column numeric. Can help with minor data quality issues for experimentation; > 0 is not recommended for production,
# since type inconsistencies can occur. Note: replaces non-numeric values with missing values
# at the start of the experiment, so some information is lost, but the column is now treated as numeric, which can help.
# If < 0, then disabled.
# If == 0, then if the number of rows <= max_rows_col_stats, convert any column of strings of numbers to numeric type.
#
#max_fraction_invalid_numeric = 0.0

# Number of folds for models used during the feature engineering process.
# Increasing this will put a lower fraction of data into validation and more into training
# (e.g., num_folds=3 means 67%/33% training/validation splits).
# The actual value will vary for small or big data cases.
#
#num_folds = 3

#fold_balancing_repeats_times_rows = 100000000.0

#max_fold_balancing_repeats = 10

#fixed_split_seed = 0

#show_fold_stats = true

# For multiclass problems only. Whether to allow different sets of target classes across (cross-)validation
# fold splits. Especially important when passing a fold column that isn't balanced w.r.t. class distribution.
#
#allow_different_classes_across_fold_splits = true

# Accuracy setting at or above which full cross-validation (multiple folds) is enabled during feature evolution,
# as opposed to only a single holdout split (e.g. 2/3 train and 1/3 validation holdout).
#
#full_cv_accuracy_switch = 9

# Accuracy setting at or above which a stacked ensemble is enabled as the final model.
# Stacking commences at the end of the feature evolution process.
# It quite often leads to better model performance, but it does increase the complexity
# and execution time of the final model.
#
#ensemble_accuracy_switch = 5

# Number of fold splits to use for ensemble_level >= 2.
# The ensemble modelling may require predictions to be made on out-of-fold samples,
# hence the data needs to be split on different folds to generate these predictions.
# Fewer folds (like 2 or 3) normally create more stable models, but may be less accurate.
# More folds can reach higher accuracy at the expense of more time, but the performance
# may be less stable when there is not enough training data (i.e. higher chance of overfitting).
# The actual value will vary for small or big data cases.
#
#num_ensemble_folds = 4

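To illustrate the ensemble controls above, a sketch that enables stacking at a lower accuracy setting and adds one extra fold (values illustrative, not tuned recommendations):

```toml
# Illustrative: stack final models from accuracy 4 upward, using 5 folds.
ensemble_accuracy_switch = 4
num_ensemble_folds = 5
```
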
 943# Includes pickles of (train_idx, valid_idx) tuples (numpy row indices for original training data)
 944# for all internal validation folds in the experiment summary zip. For debugging.
 945# Saves both feature engineering folds (validation_train_valid_split_fold_*.pickle) and
 946# final ensemble folds (ensemble_train_valid_split_fold_*.pickle) when no validation dataset is provided.
 947# 
 948#save_validation_splits = false
 949
 950# Number of repeats for each fold for all validation
 951# (modified slightly for small or big data cases)
 952# 
 953#fold_reps = 1
 954
 955#max_num_classes_hard_limit = 10000
 956
# Maximum number of classes to allow for a classification problem.
# A high number of classes may make certain processes of Driverless AI time-consuming.
# Memory requirements also increase with a higher number of classes.
# 
#max_num_classes = 1000

# Maximum number of classes to compute ROC and CM for,
# beyond which the roc_reduce_type choice for reduction is applied.
# Too many classes can take much longer than model building time.
# 
#max_num_classes_compute_roc = 200

# Maximum number of classes to show in GUI for confusion matrix, showing first max_num_classes_client_and_gui labels.
# Beyond 6 classes the diagnostics launched from the GUI are visually truncated.
# This will only modify client/GUI-launched diagnostics if changed in config.toml and the server is restarted,
# while this value can be changed in expert settings to control experiment plots.
# 
#max_num_classes_client_and_gui = 10

# If there are too many classes when computing the ROC,
# reduce by "rows" by randomly sampling rows,
# or reduce by truncating classes to no more than max_num_classes_compute_roc.
# If there are sufficient rows for the class count, reduction by rows is possible.
# 
#roc_reduce_type = "rows"
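For a problem with very many classes, the class-limit and ROC settings above work together. A sketch, with illustrative values only:

```toml
# Illustrative multiclass settings; adjust to your class count and hardware.
max_num_classes = 2000              # allow up to 2000 classes
max_num_classes_compute_roc = 100   # compute full ROC/CM for at most 100 classes
roc_reduce_type = "rows"            # beyond that, reduce by sampling rows
```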

#min_roc_sample_size = 1

# Maximum number of rows to obtain confusion matrix related plots during feature evolution.
# Does not limit final model calculation.
# 
#max_rows_cm_ga = 500000

# Number of actuals vs. predicted data points to use to generate the relevant
# plot/graph shown at the right part of the screen within an experiment.
#num_actuals_vs_predicted = 100
 993
 994# Whether to use feature_brain results even if running new experiments.
 995# Feature brain can be risky with some types of changes to experiment setup.
 996# Even rescoring may be insufficient, so by default this is False.
 997# For example, one experiment may have training=external validation by accident, and get high score,
 998# and while feature_brain_reset_score='on' means we will rescore, it will have already seen
 999# during training the external validation and leak that data as part of what it learned from.
1000# If this is False, feature_brain_level just sets possible models to use and logs/notifies,
1001# but does not use these feature brain cached models.
1002# 
1003#use_feature_brain_new_experiments = false

# Whether to reuse dataset schema, such as data types set in UI for each column, from parent experiment ('on') or to ignore original dataset schema and only use new schema ('off').
# resume_data_schema=True is a basic form of data lineage, but it may not be desirable if data columns changed to incompatible data types like int to string.
# 'auto': for restart, retrain final pipeline, or refit best models, default is to resume data schema, but new experiments would not by default reuse old schema.
# 'on': force reuse of data schema from parent experiment if possible
# 'off': don't reuse data schema in any case.
# The reuse of the column schema can also be disabled by:
# in UI: selecting Parent Experiment as None
# in client: setting resume_experiment_id to None
#resume_data_schema = "auto"

#resume_data_schema_old_logic = false

# Whether to show (or use) results from H2O.ai brain: the local caching and smart re-use of prior experiments,
# in order to generate more useful features and models for new experiments.
# See use_feature_brain_new_experiments for how new experiments by default do not use brain cache.
# It can also be used to control checkpointing for experiments that have been paused or interrupted.
# DAI will use H2O.ai brain cache if cache file has
# a) any matching column names and types for a similar experiment type
# b) exactly matching classes
# c) exactly matching class labels
# d) matching basic time series choices
# e) interpretability of cache equal or lower
# f) main model (booster) allowed by new experiment.
# Level of brain to use (for chosen level, where higher levels will also do all lower level operations automatically)
# -1 = Don't use any brain cache and don't write any cache
# 0 = Don't use any brain cache but still write cache
# Use case: Want to save model for later use, but want current model to be built without any brain models
# 1 = smart checkpoint from latest best individual model
# Use case: Want to use latest matching model, but match can be loose, so needs caution
# 2 = smart checkpoint from H2O.ai brain cache of individual best models
# Use case: DAI scans through H2O.ai brain cache for best models to restart from
# 3 = smart checkpoint like level #1, but for entire population.  Tune only if brain population is of insufficient size
# (will re-score entire population in single iteration, so appears to take longer to complete first iteration)
# 4 = smart checkpoint like level #2, but for entire population.  Tune only if brain population is of insufficient size
# (will re-score entire population in single iteration, so appears to take longer to complete first iteration)
# 5 = like #4, but will scan over entire brain cache of populations to get best scored individuals
# (can be slower due to brain cache scanning if big cache)
# 1000 + feature_brain_level (above positive values) = use resumed_experiment_id and actual feature_brain_level,
# to use other specific experiment as base for individuals or population,
# instead of sampling from any old experiments
# GUI has 4 options and corresponding settings:
# 1) New Experiment: Uses feature brain level default of 2
# 2) New Experiment With Same Settings: Re-uses the same feature brain level as parent experiment
# 3) Restart From Last Checkpoint: Resets feature brain level to 1003 and sets experiment ID to resume from
# (continued genetic algorithm iterations)
# 4) Retrain Final Pipeline: Like Restart, but also time=0, so skips any tuning and heads straight to final model
# (assumes there was at least one tuning iteration in parent experiment)
# Other use cases:
# a) Restart on different data: Use same column names and fewer or more rows (applicable to 1 - 5)
# b) Re-fit only final pipeline: Like (a), but choose time=1 and feature_brain_level=3 - 5
# c) Restart with more columns: Add columns, so model builds upon old model built from old column names (1 - 5)
# d) Restart with focus on model tuning: Restart, then select feature_engineering_effort = 3 in expert settings
# e) Retrain final model but ignore any original features except those in final pipeline (normal retrain but set brain_add_features_for_new_columns=false)
# Notes:
# 1) In all cases, we first check the resumed experiment id if given, and then the brain cache
# 2) For Restart cases, may want to set min_dai_iterations to non-zero to force delayed early stopping, else there may not be enough iterations to find a better model.
# 3) A "New Experiment With Same Settings" of a Restart will use feature_brain_level=1003 for default Restart mode (revert to 2, or even 0, to start a fresh experiment otherwise)
#feature_brain_level = 2
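As a sketch of the restart-style behavior described above, set via config rather than the GUI (this assumes the parent experiment is supplied as the resumed experiment through the UI or client):

```toml
# Illustrative: restart from a specific parent experiment.
# 1003 = 1000 + level 3 (smart checkpoint for the entire population).
feature_brain_level = 1003
```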

# Whether to smartly keep score to avoid re-munging/retraining/rescoring steps for brain models ('auto'); always
# force all steps for all brain imports ('on'); or never rescore ('off').
# 'auto' only rescores if differences between the current and previous experiments warrant it (e.g., column or metric changes).
# 'on' is useful when smart similarity checking is not reliable enough.
# 'off' is useful when you want to reuse the same features and model for the final model refit, despite changes in seed or other features
# that might change the outcome if rescored before reaching the final model.
# If set to 'off', no limits are applied to features during brain ingestion,
# while you can set brain_add_features_for_new_columns to false if you want to ignore any new columns in the data.
# Additionally, any unscored individuals loaded from the parent experiment are not rescored during refit or retrain.
# You can also set refit_same_best_individual to True if you want the same best individual (highest-scored model and features) to be used
# regardless of any scoring changes.
# 
#feature_brain_reset_score = "auto"

#enable_strict_confict_key_check_for_brain = true

#allow_change_layer_count_brain = false

# Relative number of columns that must match between current reference individual and brain individual.
# 0.0: perfect match
# 1.0: all columns are different, worst match
# e.g. 0.1 implies no more than 10% of columns mismatch between reference set of columns and brain individual.
# 
#brain_maximum_diff_score = 0.1

# Maximum number of brain individuals pulled from H2O.ai brain cache for feature_brain_level=1, 2
#max_num_brain_indivs = 3

# Save feature brain iterations every iter_num % feature_brain_iterations_save_every_iteration == 0, to be able to restart/refit with which_iteration_brain >= 0
# 0 means disable
# 
#feature_brain_save_every_iteration = 0

# When doing restart or re-fit type feature_brain_level with resumed_experiment_id, choose which iteration to start from, instead of only last best
# -1 means just use last best
# Usage:
# 1) Run one experiment with feature_brain_iterations_save_every_iteration=1 or some other number
# 2) Identify which iteration brain dump one wants to restart/refit from
# 3) Restart/Refit from original experiment, setting which_iteration_brain to that number in expert settings
# Note: If restarting from a tuning iteration, this will pull in the entire scored tuning population and use that for feature evolution
# 
#which_iteration_brain = -1

# When doing re-fit from feature brain, if columns or features change, the population of individuals used to refit from may change the order of which was best,
# leading to a better result being chosen (False case).  But sometimes one wants to see the exact same model/features with only one feature added,
# and then would need to set this to the True case.
# E.g. if refitting with just 1 extra column and interpretability=1, then the final model will have the same features,
# with one more engineered feature applied to that new original feature.
# 
#refit_same_best_individual = false

# When doing restart or re-fit of experiment from feature brain,
# sometimes the user might change data significantly and then warrant
# redoing reduction of original features by feature selection, shift detection, and leakage detection.
# However, in other cases, if data and all options are nearly (or exactly) identical, then these
# steps might change the features slightly (e.g. due to random seed if not setting reproducible mode),
# leading to changes in features and the model that is refitted.  By default, restart and refit avoid
# these steps, assuming data and experiment setup have not changed significantly.
# If check_distribution_shift is forced to on (instead of auto), then this option is ignored.
# In order to ensure the exact same final pipeline is fitted, one should also set:
# 1) brain_add_features_for_new_columns false
# 2) refit_same_best_individual true
# 3) feature_brain_reset_score 'off'
# 4) force_model_restart_to_defaults false
# The score will still be reset if the experiment metric chosen changes,
# but changes to the scored model and features will be more frozen in place.
# 
#restart_refit_redo_origfs_shift_leak = "[]"
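The four settings listed in the comment above, written out as a config.toml fragment for an exact refit (a sketch; it also assumes the data and experiment setup are otherwise unchanged):

```toml
# Freeze the refitted pipeline as much as possible (illustrative).
brain_add_features_for_new_columns = false
refit_same_best_individual = true
feature_brain_reset_score = "off"
force_model_restart_to_defaults = false
```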

# Directory, relative to data_directory, to store H2O.ai brain meta model files
#brain_rel_dir = "H2O.ai_brain"

# Maximum size in GB the brain will store.
# We reserve this space to save data in order to ensure we can retrieve an experiment if
# for any reason it gets interrupted.
# -1: unlimited
# >=0: number of GB to limit brain to
#brain_max_size_GB = 20

# Whether to take any new columns and add additional features to the pipeline, even if doing retrain of final model.
# In some cases, one might have a new dataset but only want to keep the same pipeline regardless of new columns,
# in which case one sets this to False.  For example, new data might lead to new dropped features,
# due to shift or leak detection.  To avoid a change of feature set, one can disable all dropping of columns,
# but set this to False to avoid adding any columns as new features,
# so the pipeline is perfectly preserved when changing data.
# 
#brain_add_features_for_new_columns = true

# If restarting/refitting and the original model class is no longer available, be conservative
# and go back to defaults for that model class.  If False, then try to keep the original hyperparameters,
# which can fail to work in general.
# 
#force_model_restart_to_defaults = true

# Whether to enable early stopping.
# Early stopping refers to stopping the feature evolution/engineering process
# when there is no performance uplift after a certain number of iterations.
# After early stopping has been triggered, Driverless AI will initiate the ensemble
# process if selected.
#early_stopping = true

# Whether to enable early stopping per individual.
# Each individual in the genetic algorithm will stop early if no improvement,
# and it will no longer be mutated.
# Instead, the best individual will be additionally mutated.
#early_stopping_per_individual = true

# Minimum number of Driverless AI iterations to stop the feature evolution/engineering
# process even if score is not improving. Driverless AI needs to run for at least that many
# iterations before deciding to stop. It can be seen as a safeguard against suboptimal (early)
# convergence.
# 
#min_dai_iterations = 0

# Maximum features per model (and each model within the final model if ensemble) kept.
# Keeps top variable importance features, prunes rest away, after each scoring.
# Final ensemble will exclude any pruned-away features and only train on kept features,
# but may contain a few new features due to fitting on different data view (e.g. new clusters).
# Final scoring pipeline will exclude any pruned-away features,
# but may contain a few new features due to fitting on different data view (e.g. new clusters).
# -1 means no restrictions except internally-determined memory and interpretability restrictions.
# Notes:
# * If interpretability > remove_scored_0gain_genes_in_postprocessing_above_interpretability, then
# every GA iteration post-processes features down to this value just after scoring them.  Otherwise,
# only mutations of scored individuals will be pruned (until the final model where limits are strictly applied).
# * If ngenes_max is not also limited, then some individuals will have more genes and features until
# pruned by mutation or by preparation for final model.
# * E.g. to generally limit every iteration to exactly 1 feature, one must set nfeatures_max=ngenes_max=1
# and remove_scored_0gain_genes_in_postprocessing_above_interpretability=0, but the genetic algorithm
# will have a harder time finding good features.
# 
#nfeatures_max = -1
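The single-feature example from the note above, as a config.toml fragment (illustrative only; as the note warns, this deliberately handicaps the genetic algorithm):

```toml
# Limit every iteration to exactly one engineered feature (illustrative).
nfeatures_max = 1
ngenes_max = 1
remove_scored_0gain_genes_in_postprocessing_above_interpretability = 0
```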

# Maximum genes (transformer instances) per model (and each model within the final model if ensemble) kept.
# Controls number of genes before features are scored, so just randomly samples genes if pruning occurs.
# If restriction occurs after scoring features, then aggregated gene importances are used for pruning genes.
# Instances include all possible transformers, including the original transformer for numeric features.
# -1 means no restrictions except internally-determined memory and interpretability restrictions
# 
#ngenes_max = -1

# Like ngenes_max but controls minimum number of genes.
#ngenes_min = -1

# Like nfeatures_max but controls the minimum number of features.
# Useful when DAI generates too few engineered features by default and you want it to create more.
# This is especially useful when the dataset has few input features, causing Driverless AI to behave conservatively and generate fewer transformed features.
# For example, if only the target encoding transformer is selected, increasing this value allows DAI to explore more possible input features.
#nfeatures_min = -1

# Whether to limit feature counts by interpretability setting via features_allowed_by_interpretability
#limit_features_by_interpretability = true

# Whether to use out-of-fold predictions of Word-based CNN Torch models as transformers for NLP if Torch enabled
#enable_textcnn = "auto"

# Whether to use out-of-fold predictions of Word-based Bi-GRU Torch models as transformers for NLP if Torch enabled
#enable_textbigru = "auto"

# Whether to use out-of-fold predictions of Character-level CNN Torch models as transformers for NLP if Torch enabled
#enable_charcnn = "auto"

# Whether to use pretrained PyTorch models (BERT Transformer) as transformers for NLP tasks. Fits a linear model on top of pretrained embeddings. Requires internet connection. Default of 'auto' means disabled. To enable, set to 'on'. GPU(s) are highly recommended. Reduce string_col_as_text_min_relative_cardinality closer to 0.0 and string_col_as_text_threshold closer to 0.0 to force a string column to be treated as text despite a low number of uniques.
#enable_pytorch_nlp_transformer = "auto"

# More rows can slow down the fitting process. Recommended values are less than 100000.
#pytorch_nlp_transformer_max_rows_linear_model = 50000

# Whether to use pretrained PyTorch models and fine-tune them for NLP tasks. Requires internet connection. Default of 'auto' means disabled. To enable, set to 'on'. These models only use the first text column, and can be slow to train. GPU(s) are highly recommended. Set string_col_as_text_min_relative_cardinality=0.0 to force a string column to be treated as text despite a low number of uniques.
#enable_pytorch_nlp_model = "auto"

# Select which pretrained PyTorch NLP model(s) to use. Non-default ones might have no MOJO support. Requires internet connection. Only if PyTorch models or transformers for NLP are set to 'on'.
#pytorch_nlp_pretrained_models = "['bert-base-uncased', 'distilbert-base-uncased', 'bert-base-multilingual-cased']"

# Max. number of epochs for Torch models for making NLP features
#pytorch_max_epochs_nlp = 2

# Path to pretrained embeddings for Torch NLP models; can be a path in the local file system or an S3 location (s3://).
# For example, download and unzip https://nlp.stanford.edu/data/glove.6B.zip
# nlp_pretrained_embeddings_file_path = /path/on/server/to/glove.6B.300d.txt
# 
#nlp_pretrained_embeddings_file_path = ""

#nlp_pretrained_s3_access_key_id = ""

#nlp_pretrained_s3_secret_access_key = ""

# Allow training of all weights of the neural network graph, including the pretrained embedding layer weights. If disabled, then the embedding layer is frozen, but all other weights are still fine-tuned.
#nlp_pretrained_embeddings_trainable = false

#bert_migration_timeout_secs = 600

#enable_bert_transformer_acceptance_test = false

#enable_bert_model_acceptance_test = false

# Whether to parallelize tokenization for BERT Models/Transformers.
#pytorch_tokenizer_parallel = true

# Number of epochs for fine-tuning of PyTorch NLP models. Larger values can increase accuracy but take longer to train.
#pytorch_nlp_fine_tuning_num_epochs = -1

# Batch size for PyTorch NLP models. Larger models and larger batch sizes will use more memory.
#pytorch_nlp_fine_tuning_batch_size = -1

# Maximum sequence length (padding length) for PyTorch NLP models. Larger models and larger padding lengths will use more memory.
#pytorch_nlp_fine_tuning_padding_length = -1

# Path to pretrained PyTorch NLP models. Note that this can be either a path in the local file system
# (/path/on/server/to/bert_models_folder), a URL, or an S3 location (s3://).
# To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/bert_models.zip,
# then unzip and store it in a directory on the instance where DAI is installed.
# ``pytorch_nlp_pretrained_models_dir=/path/on/server/to/bert_models_folder``
# 
#pytorch_nlp_pretrained_models_dir = ""
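A sketch of enabling the BERT-based NLP transformer with locally stored models for an air-gapped install (the folder path is the placeholder from the comment above, not a real path):

```toml
# Illustrative air-gapped NLP setup; the path is a placeholder.
enable_pytorch_nlp_transformer = "on"
pytorch_nlp_pretrained_models_dir = "/path/on/server/to/bert_models_folder"
```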

#pytorch_nlp_pretrained_s3_access_key_id = ""

#pytorch_nlp_pretrained_s3_secret_access_key = ""

# Fraction of text columns out of all features to be considered a text-dominated problem
#text_fraction_for_text_dominated_problem = 0.3

# Fraction of text transformers to all transformers above which to trigger a text-dominated problem
#text_transformer_fraction_for_text_dominated_problem = 0.3

# Whether to reduce options for text-dominated models to reduce expense, e.g. disable ensemble, disable genetic algorithm, single identity target encoder for classification, etc.
#text_dominated_limit_tuning = true

# Whether to reduce options for image-dominated models to reduce expense, e.g. disable ensemble, disable genetic algorithm, single identity target encoder for classification, etc.
#image_dominated_limit_tuning = true

# Threshold for average string-is-text score as determined by internal heuristics.
# It decides when a string column will be treated as text (for an NLP problem) or just as
# a standard categorical variable.
# Higher values will favor string columns as categoricals, lower values will favor string columns as text.
# Set string_col_as_text_min_relative_cardinality=0.0 to force a string column to be treated as text despite a low number of uniques.
#string_col_as_text_threshold = 0.3

# Threshold for string columns to be treated as text during preview - should be less than string_col_as_text_threshold to allow data with first 20 rows that don't look like text to still work for Text-only transformers (0.0 - text, 1.0 - string)
#string_col_as_text_threshold_preview = 0.1

# Minimum fraction of unique values for string columns to be considered as possible text (otherwise categorical)
#string_col_as_text_min_relative_cardinality = 0.1

# Minimum number of uniques for string columns to be considered as possible text (if not already)
#string_col_as_text_min_absolute_cardinality = 10000
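Putting the text-detection knobs together: per the comments above, string columns can be forced toward text treatment regardless of cardinality with overrides like these (illustrative):

```toml
# Force string columns to be treated as text despite a low number of uniques (illustrative).
string_col_as_text_threshold = 0.0
string_col_as_text_min_relative_cardinality = 0.0
```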

# If disabled, require 2 or more alphanumeric characters for a token in Text (Count and TF/IDF) transformers; otherwise create tokens out of single alphanumeric characters. True means that 'Street 3' is tokenized into 'Street' and '3', while False means that it's tokenized into 'Street'.
#tokenize_single_chars = true

# Supported image types. URIs with these endings will be considered as image paths (local or remote).
#supported_image_types = "['jpg', 'jpeg', 'png', 'bmp', 'ppm', 'tif', 'tiff', 'JPG', 'JPEG', 'PNG', 'BMP', 'PPM', 'TIF', 'TIFF']"

# Whether to create absolute paths for images when importing datasets containing images. Can facilitate testing or re-use of frames for scoring.
#image_paths_absolute = false

# Whether to use pretrained deep learning models for processing of image data as part of the feature engineering pipeline. A column of URIs to images (jpg, png, etc.) will be converted to a numeric representation using ImageNet-pretrained deep learning models. If no GPUs are found, then this must be set to 'on' to enable.
#enable_image_transformer = "auto"

# Supported ImageNet pretrained architectures for Image V2 Transformer. Non-default ones will require internet access to download pretrained models from H2O S3 buckets (To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_2_3_0.zip and unzip inside image_pretrained_models_dir).
#image_transformer_pretrained_models = "['levit']"

# Dimensionality of feature (embedding) space created by Image V2 Transformer. If more than one is selected, multiple transformers can be active at the same time.
#image_transformer_vectorization_output_dimension = "[100]"

# Enable fine-tuning of the ImageNet pretrained models used for the Image V2 Transformer. Enabling this will slow down training, but should increase accuracy.
#image_transformer_fine_tune = false

# Number of epochs for fine-tuning of ImageNet pretrained models used for the Image V2 Transformer.
#image_transformer_fine_tuning_num_epochs = 2

# The list of possible image augmentations to apply while fine-tuning the ImageNet pretrained models used for the Image V2 Transformer. Details about individual augmentations can be found here: https://albumentations.ai/docs/. Note: Does not apply to tf_efficientnetv2, as the recommended transformers from huggingface will be used.
#default_image_augmentations = "['HorizontalFlip']"

# Batch size for Image V2 Transformer. Larger architectures and larger batch sizes will use more memory. Note: Driverless AI will automatically find the most appropriate batch size if set to -1 (or non-positive).
#image_transformer_batch_size = -1

# Path to pretrained image models.
# To get all models, download http://s3.amazonaws.com/artifacts.h2o.ai/releases/ai/h2o/pretrained/dai_image_models_2_3_0.zip,
# then extract it in a directory on the instance where Driverless AI is installed.
# 
#image_pretrained_models_dir = "./pretrained/image/"
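A sketch of an image-feature setup with fine-tuning enabled, keeping pretrained models in the default local directory (all values illustrative):

```toml
# Illustrative Image V2 Transformer settings.
enable_image_transformer = "on"
image_transformer_fine_tune = true
image_transformer_fine_tuning_num_epochs = 3
image_pretrained_models_dir = "./pretrained/image/"
```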

# Max. number of seconds to wait for image download if images are provided by URL
#image_download_timeout = 60

# Maximum fraction of missing elements in a string column for it to be considered as possible image paths (URIs)
#string_col_as_image_max_missing_fraction = 0.1

# Fraction of (unique) image URIs that need to have valid endings (as defined by string_col_as_image_valid_types) for a string column to be considered as image data
#string_col_as_image_min_valid_types_fraction = 0.8

# Whether to use GPU(s), if available, to transform images into embeddings with Image V2 Transformer. Can lead to significant speedups.
#image_transformer_use_gpu = true

# Nominally, the time dial controls the search space, with higher time trying more options, but any keys present in this dictionary will override the automatic choices.
# e.g. ``params_image_auto_search_space="{'augmentation': ['safe'], 'crop_strategy': ['Resize'], 'optimizer': ['AdamW'], 'dropout': [0.1], 'epochs_per_stage': [5], 'warmup_epochs': [0], 'mixup': [0.0], 'cutmix': [0.0], 'global_pool': ['avg'], 'learning_rate': [3e-4]}"``
# Options, e.g. used for time>=8
# # Overfit Protection Options:
# 'augmentation': ``["safe", "semi_safe", "hard"]``
# 'crop_strategy': ``["Resize", "RandomResizedCropSoft", "RandomResizedCropHard"]``
# 'dropout': ``[0.1, 0.3, 0.5]``
# # Global Pool Options:
# avgmax -- sum of AVG and MAX poolings
# catavgmax -- concatenation of AVG and MAX poolings
# https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/adaptive_avgmax_pool.py
# ``'global_pool': ['avg', 'avgmax', 'catavgmax']``
# # Regression: No MixUp and CutMix:
# ``'mixup': [0.0]``
# ``'cutmix': [0.0]``
# # Classification: Beta distribution coeff to generate weights for MixUp:
# ``'mixup': [0.0, 0.4, 1.0, 3.0]``
# ``'cutmix': [0.0, 0.4, 1.0, 3.0]``
# # Optimization Options:
# ``'epochs_per_stage': [5, 10, 15]``  # from 40 to 135 epochs
# ``'warmup_epochs': [0, 0.5, 1]``
# ``'optimizer': ["AdamW", "SGD"]``
# ``'learning_rate': [1e-3, 3e-4, 1e-4]``
#params_image_auto_search_space = "{}"

# Nominally, the accuracy dial controls the architectures considered if this is left empty,
# but one can choose specific ones.  The options in the list are ordered by complexity.
#image_auto_arch = "[]"

# Any images smaller than this are upscaled to the minimum.  Default is 64, but it can be as small as 32 given the pooling layers used.
#image_auto_min_shape = 64

# 0 means automatic based upon time dial of min(1, time//2).
#image_auto_num_final_models = 0

# 0 means automatic based upon time dial of max(4 * (time - 1), 2).
#image_auto_num_models = 0

# 0 means automatic based upon time dial of time + 1 if time < 6 else time - 1.
#image_auto_num_stages = 0

# 0 means automatic based upon time dial or number of models and stages
# set by image_auto_num_models and image_auto_num_stages.
#image_auto_iterations = 0

# 0.0 means automatic based upon the current stage, where stage 0 uses half, stage 1 uses 3/4, and stage 2 uses the full image.
# One can pass 1.0 to override and always use the full image.  0.5 would mean use half.
#image_auto_shape_factor = 0.0

# Control maximum number of cores to use for image auto model parallel data management. 0 will disable mp: https://pytorch-lightning.readthedocs.io/en/latest/guides/speed.html
#max_image_auto_ddp_cores = 10

# Percentile value cutoff of input text token lengths for NLP deep learning models
#text_dl_token_pad_percentile = 99

# Maximum token length of input text to be used in NLP deep learning models
#text_dl_token_pad_max = 512
1417
1418# Interpretability setting equal and above which will use automatic monotonicity constraints in
1419# XGBoostGBM/LightGBM/DecisionTree models.
1420# 
#monotonicity_constraints_interpretability_switch = 7

# For models that support monotonicity constraints, and if enabled, show automatically determined monotonicity constraints for each feature going into the model based on its correlation with the target. 'low' shows only the monotonicity constraint direction. 'medium' shows the correlation of positively and negatively constrained features. 'high' shows all correlation values.
#monotonicity_constraints_log_level = "medium"

# Threshold, of Pearson product-moment correlation coefficient between numerical or encoded transformed
# feature and target, above which (below the negative of which) positive (negative) monotonicity
# is enforced for XGBoostGBM, LightGBM and DecisionTree models.
# Enabled when interpretability >= monotonicity_constraints_interpretability_switch config toml value.
# Only if monotonicity_constraints_dict is not provided.
#
#monotonicity_constraints_correlation_threshold = 0.1

# If enabled, only monotonic features with +1/-1 constraints will be passed to the model(s), and features
# without monotonicity constraints (0, as set by monotonicity_constraints_dict or determined automatically)
# will be dropped. Otherwise all features will be in the model.
# Only active when interpretability >= monotonicity_constraints_interpretability_switch or
# monotonicity_constraints_dict is provided.
#
#monotonicity_constraints_drop_low_correlation_features = false

# Manual override for monotonicity constraints. Mapping of original numeric features to the desired constraint
# (1 for positive, -1 for negative, or 0 to disable; True can be set for automatic handling, False is the same as 0).
# Features that are not listed here are treated automatically,
# and so get no constraint (i.e., 0) if interpretability < monotonicity_constraints_interpretability_switch;
# otherwise the constraint is automatically determined from the correlation between each feature and the target.
# Example: {'PAY_0': -1, 'PAY_2': -1, 'AGE': -1, 'BILL_AMT1': 1, 'PAY_AMT1': -1}
#
#monotonicity_constraints_dict = "{}"
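
# Example (hypothetical values, not defaults; feature names taken from the example above):
# to enforce monotonicity at any interpretability setting and pin the direction for two
# features, one might uncomment and edit:
# monotonicity_constraints_interpretability_switch = 1
# monotonicity_constraints_dict = "{'AGE': -1, 'BILL_AMT1': 1}"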

# Exploring feature interactions can be important for gaining better predictive performance.
# The interaction can take multiple forms (i.e. feature1 + feature2 or feature1 * feature2 + ... featureN)
# Although certain machine learning algorithms (like tree-based methods) can do well at
# capturing these interactions as part of their training process, explicitly generating them may
# still help them (or other algorithms) yield better performance.
# The depth of the interaction level (as in "up to" how many features may be combined at
# once to create one single feature) can be specified to control the complexity of the
# feature engineering process.  For transformers that use both numeric and categorical features, this constrains
# the number of each type, not the total number. Higher values might be able to make more predictive models
# at the expense of time (-1 means automatic).
#
#max_feature_interaction_depth = -1

# Instead of sampling from min to max (up to max_feature_interaction_depth unless all specified)
# columns allowed for each transformer (0), choose a fixed non-zero number of columns to use.
# Can be set to the number of columns to use all columns for each transformer, if allowed by each transformer.
# -n can be chosen to do a 50/50 mix of sampled and fixed n features.
#
#fixed_feature_interaction_depth = 0
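
# Example (hypothetical): to limit engineered interaction features to pairwise
# combinations, one might uncomment and edit:
# max_feature_interaction_depth = 2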

# Accuracy setting equal to or above which enables tuning of model parameters.
# Only applicable if parameter_tuning_num_models=-1 (auto)
#tune_parameters_accuracy_switch = 3

# Accuracy setting equal to or above which enables tuning of the target transform for regression.
# This is useful for time series when, instead of predicting the actual target value, it
# might be better to predict a transformed target variable like sqrt(target) or log(target)
# as a means to control for outliers.
#tune_target_transform_accuracy_switch = 5

# Select a target transformation for regression problems. Must be one of: ['auto',
# 'identity', 'identity_noclip', 'center', 'standardize', 'unit_box', 'log', 'log_noclip', 'square',
# 'sqrt', 'double_sqrt', 'inverse', 'anscombe', 'logit', 'sigmoid'].
# If set to 'auto', will automatically pick the best target transformer (if accuracy is set to
# tune_target_transform_accuracy_switch or larger, considering the interpretability level of each target transformer),
# otherwise will fall back to 'identity_noclip' (easiest to interpret, Shapley values are in the original space, etc.).
# All transformers except for 'center', 'standardize', 'identity_noclip' and 'log_noclip' perform clipping
# to constrain the predictions to the domain of the target in the training data. Use 'center', 'standardize',
# 'identity_noclip' or 'log_noclip' to disable clipping and to allow predictions outside of the target domain observed in
# the training data (for parametric models or custom models that support extrapolation).
#
#target_transformer = "auto"

# Select the list of target transformers to use for tuning. Only for target_transformer='auto' and accuracy >= tune_target_transform_accuracy_switch.
#
#target_transformer_tuning_choices = "['identity', 'identity_noclip', 'center', 'standardize', 'unit_box', 'log', 'square', 'sqrt', 'double_sqrt', 'anscombe', 'logit', 'sigmoid']"
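
# Example (hypothetical): for a skewed regression target where predictions outside
# the observed target range should be allowed, one might set:
# target_transformer = "log_noclip"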

# Tournament style (method to decide which models are best at each iteration)
# 'auto' : Choose based upon accuracy and interpretability
# 'uniform' : all individuals in the population compete to win as best (can lead to all, e.g., LightGBM models in the final ensemble, which may not improve ensemble performance due to lack of diversity)
# 'model' : individuals with the same model type compete (good if multiple models do well but some models that do not do as well still contribute to improving the ensemble)
# 'feature' : individuals with similar feature types compete (good if target encoding, frequency encoding, and other feature sets lead to good results)
# 'fullstack' : Choose among optimal model and feature types
# 'model' and 'feature' styles preserve at least one winner for each type (and so 2 total individuals of each type after mutation)
# For each case, a round robin approach is used to choose the best scores among the types of models to choose from.
# If enable_genetic_algorithm=='Optuna', then every individual is self-mutated without any tournament
# during the genetic algorithm.  The tournament is only used to prune down individuals for, e.g.,
# tuning -> evolution and evolution -> final model.
#
#tournament_style = "auto"
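
# Example (hypothetical): to preserve model-type diversity in the final ensemble,
# one could force the per-model-type tournament described above:
# tournament_style = "model"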

# Interpretability above which the 'uniform' tournament style is used
#tournament_uniform_style_interpretability_switch = 8

# Accuracy below which the uniform style is used if tournament_style = 'auto' (regardless of other accuracy tournament style switch values)
#tournament_uniform_style_accuracy_switch = 6

# Accuracy equal to or above which uses model style if tournament_style = 'auto'
#tournament_model_style_accuracy_switch = 6

# Accuracy equal to or above which uses feature style if tournament_style = 'auto'
#tournament_feature_style_accuracy_switch = 13

# Accuracy equal to or above which uses fullstack style if tournament_style = 'auto'
#tournament_fullstack_style_accuracy_switch = 13

# Whether to use the penalized score for the GA tournament or the actual score
#tournament_use_feature_penalized_score = true

# Whether to keep poor scores for small data (<10k rows) in case exploration finds a good model.
# If true, sets tournament_remove_poor_scores_before_evolution_model_factor=1.1,
# tournament_remove_worse_than_constant_before_evolution=false,
# tournament_keep_absolute_ok_scores_before_evolution_model_factor=1.1,
# tournament_remove_poor_scores_before_final_model_factor=1.1, and
# tournament_remove_worse_than_constant_before_final_model=true
#tournament_keep_poor_scores_for_small_data = true

# Factor (compared to best score plus each score) beyond which to drop poorly scoring models before evolution.
# This is useful in cases when poorly scoring models take a long time to train.
#tournament_remove_poor_scores_before_evolution_model_factor = 0.7

# Before evolution (after tuning), whether to remove models that are worse than a constant prediction model (optimized to the scorer)
#tournament_remove_worse_than_constant_before_evolution = true

# Before evolution (after tuning), where on a scale of 0 (perfect) to 1 (constant model) to keep ok scores by absolute value.
#tournament_keep_absolute_ok_scores_before_evolution_model_factor = 0.2

# Factor (compared to best score) beyond which to drop poorly scoring models before building the final ensemble.  This is useful in cases when poorly scoring models take a long time to train.
#tournament_remove_poor_scores_before_final_model_factor = 0.3

# Before the final model (after evolution), whether to remove models that are worse than a constant prediction model (optimized to the scorer)
#tournament_remove_worse_than_constant_before_final_model = true
# Driverless AI uses a genetic algorithm (GA) to find the best features, best models and
# best hyperparameters for these models. The GA facilitates getting good results while not
# requiring one to run/try every possible model/feature/parameter. This version of GA has
# reinforcement learning elements - it uses a form of exploration-exploitation to reach
# optimum solutions. This means it will capitalize on models/features/parameters that seem
# to be working well and continue to exploit them even more, while allowing some room for
# trying new (and semi-random) models/features/parameters to avoid settling on a local
# minimum.
# These models/features/parameters tried are what we call individuals of a population. More
# individuals mean more models/features/parameters are tried and compete to find the best
# ones.
#num_individuals = 2

# Set a fixed number of individuals (if > 0) - useful to compare different hardware configurations.  If 3 individuals should be preserved in the GA race, choose 6, since 1 mutatable loser is needed per surviving individual.
#fixed_num_individuals = 0

#max_fold_reps_hard_limit = 20

# Number of unique targets or fold counts after which to switch to faster/simpler non-natural sorting and printouts
#sanitize_natural_sort_limit = 1000

# Number of fold ids to report cardinality for, both most common (head) and least common (tail)
#head_tail_fold_id_report_length = 30

# Whether target encoding (CV target encoding, weight of evidence, etc.) could be enabled
# Target encoding refers to several different feature transformations (primarily focused on
# categorical data) that aim to represent the feature using information from the actual
# target variable. A simple example is to use the mean of the target to replace each
# unique category of a categorical feature. These types of features can be very predictive,
# but they are prone to overfitting and require more memory, as they need to store mappings of
# the unique categories and the target values.
#
#enable_target_encoding = "auto"
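
# Example (hypothetical): to rule out target-encoding features entirely (e.g., to avoid
# the overfitting and memory costs described above), one might set:
# enable_target_encoding = "off"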

# For target encoding, whether a model is used to compute Ginis for checking the sanity of the transformer. Requires cvte_cv_in_cv to be enabled. If enabled, CV-in-CV isn't done in case the check fails.
#cvte_cv_in_cv_use_model = false

# For target encoding,
# whether an outer level of cross-fold validation is performed,
# in cases when GINI is detected to flip sign (or have an inconsistent sign for weight of evidence)
# between fit_transform on training, transform on training, and transform on validation data.
# The degree to which GINI is poor is also used to perform fold-averaging of look-up tables instead
# of using global look-up tables.
#
#cvte_cv_in_cv = true

# For target encoding,
# when an outer level of cross-fold validation is performed,
# increase the number of outer folds or abort target encoding when GINI between feature and target
# is not close between fit_transform on training, transform on training, and transform on validation data.
#
#cv_in_cv_overconfidence_protection = "auto"

#cv_in_cv_overconfidence_protection_factor = 3.0

#enable_lexilabel_encoding = "off"

#enable_isolation_forest = "off"

# Whether one-hot encoding could be enabled.  If auto, then only applied for small data and GLM.
#enable_one_hot_encoding = "auto"

# Limit the number of output features (total number of bins) created by all BinnerTransformers based on this
# value, scaled by accuracy, interpretability and dataset size. 0 means unlimited.
#binner_cardinality_limiter = 50

# Whether simple binning of numeric features should be enabled by default. If auto, then only for
# GLM/FTRL/GrowNet for time-series or for interpretability >= 6. Binning can help linear (or simple)
# models by exposing more signal for features that are not linearly correlated with the target. Note that
# NumCatTransformer and NumToCatTransformer already do binning, but also perform target encoding, which makes them
# less interpretable. The BinnerTransformer is more interpretable, and also works for time series.
#enable_binning = "auto"

# Tree uses XGBoost to find optimal split points for binning of numeric features.
# Quantile uses quantile-based binning. Might fall back to quantile-based if there are too many classes or
# not enough unique values.
#binner_bin_method = "['tree']"

# If enabled, will attempt to reduce the number of bins during binning of numeric features.
# Applies to both tree-based and quantile-based bins.
#binner_minimize_bins = true

# Given a set of bins (cut points along min...max), the encoding scheme converts the original
# numeric feature values into the values of the output columns (one column per bin, and one extra bin for
# missing values, if any).
# Piecewise linear is 0 left of the bin, 1 right of the bin, and grows linearly from 0 to 1 inside the bin.
# Binary is 1 inside the bin and 0 outside the bin. Missing value bin encoding is always binary, either 0 or 1.
# If there are no missing values in the data, then there is no missing value bin.
# Piecewise linear helps to encode growing values and keeps smooth transitions across the bin
# boundaries, while binary is best suited for detecting specific values in the data.
# Both are great at providing features to models that otherwise lack non-linear pattern detection.
#binner_encoding = "['piecewise_linear', 'binary']"

# If enabled (default), include the original feature value as an output feature for the BinnerTransformer.
# This ensures that the BinnerTransformer never has less signal than the OriginalTransformer, since they can
# be chosen exclusively.
#
#binner_include_original = true
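
# Example (hypothetical): to force binning on, with simple binary bins suited to
# detecting specific values, one might set:
# enable_binning = "on"
# binner_encoding = "['binary']"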

#isolation_forest_nestimators = 200

# Transformer display names to indicate which transformers to use in an experiment.
# More information on these transformers can be viewed here:
# http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/transformations.html
# This section allows including/excluding these transformations and may be useful when
# simpler (more interpretable) models are sought at the expense of accuracy
# (choices made here apply independent of the interpretability setting).
# for multi-class: '['NumCatTETransformer', 'TextLinModelTransformer',
# 'FrequentTransformer', 'CVTargetEncodeTransformer', 'ClusterDistTransformer',
# 'WeightOfEvidenceTransformer', 'TruncSVDNumTransformer', 'CVCatNumEncodeTransformer',
# 'DatesTransformer', 'TextTransformer', 'OriginalTransformer',
# 'NumToCatWoETransformer', 'NumToCatTETransformer', 'ClusterTETransformer',
# 'InteractionsTransformer']'
# for regression/binary: '['TextTransformer', 'ClusterDistTransformer',
# 'OriginalTransformer', 'TextLinModelTransformer', 'NumToCatTETransformer',
# 'DatesTransformer', 'WeightOfEvidenceTransformer', 'InteractionsTransformer',
# 'FrequentTransformer', 'CVTargetEncodeTransformer', 'NumCatTETransformer',
# 'NumToCatWoETransformer', 'TruncSVDNumTransformer', 'ClusterTETransformer',
# 'CVCatNumEncodeTransformer']'
# This list appears in the experiment logs (search for 'Transformers used')
#
#included_transformers = "[]"

# Auxiliary to included_transformers
# e.g. to disable all Target Encoding: excluded_transformers =
# '['NumCatTETransformer', 'CVTargetEncodeF', 'NumToCatTETransformer',
# 'ClusterTETransformer']'.
# Does not affect transformers used for preprocessing with included_pretransformers.
#
#excluded_transformers = "[]"
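
# Example (hypothetical, using transformer names from the lists above): to disable
# all target-encoding transformers, one might set:
# excluded_transformers = "['NumCatTETransformer', 'CVTargetEncodeTransformer', 'NumToCatTETransformer', 'ClusterTETransformer']"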

# Exclude list of genes (i.e. genes (built on top of transformers) to not use,
# independent of the interpretability setting)
# Some transformers are used by multiple genes, so this allows different control over feature engineering
# for multi-class: '['InteractionsGene', 'WeightOfEvidenceGene',
# 'NumToCatTargetEncodeSingleGene', 'OriginalGene', 'TextGene', 'FrequentGene',
# 'NumToCatWeightOfEvidenceGene', 'NumToCatWeightOfEvidenceMonotonicGene',
# 'CvTargetEncodeSingleGene', 'DateGene', 'NumToCatTargetEncodeMultiGene',
# 'DateTimeGene', 'TextLinRegressorGene', 'ClusterIDTargetEncodeSingleGene',
# 'CvCatNumEncodeGene', 'TruncSvdNumGene', 'ClusterIDTargetEncodeMultiGene',
# 'NumCatTargetEncodeMultiGene', 'CvTargetEncodeMultiGene', 'TextLinClassifierGene',
# 'NumCatTargetEncodeSingleGene', 'ClusterDistGene']'
# for regression/binary: '['CvTargetEncodeSingleGene', 'NumToCatTargetEncodeSingleGene',
# 'CvCatNumEncodeGene', 'ClusterIDTargetEncodeSingleGene', 'TextLinRegressorGene',
# 'CvTargetEncodeMultiGene', 'ClusterDistGene', 'OriginalGene', 'DateGene',
# 'ClusterIDTargetEncodeMultiGene', 'NumToCatTargetEncodeMultiGene',
# 'NumCatTargetEncodeMultiGene', 'TextLinClassifierGene', 'WeightOfEvidenceGene',
# 'FrequentGene', 'TruncSvdNumGene', 'InteractionsGene', 'TextGene',
# 'DateTimeGene', 'NumToCatWeightOfEvidenceGene',
# 'NumToCatWeightOfEvidenceMonotonicGene', 'NumCatTargetEncodeSingleGene']'
# This list appears in the experiment logs (search for 'Genes used')
# e.g. to disable the interactions gene, use:  excluded_genes =
# '['InteractionsGene']'.
# Does not affect transformers used for preprocessing with included_pretransformers.
#
#excluded_genes = "[]"

# "Include specific models" lets you choose the set of models that will be considered during experiment training. The
# individual model settings and their AUTO / ON / OFF values mean the following: AUTO lets the internal decision mechanisms determine
# whether the model should be used during training; ON will try to force the use of the model; OFF turns the model
# off during training (it is the equivalent of deselecting the model in the "Include specific models" picker).
#
#included_models = "[]"

# Auxiliary to included_models
#excluded_models = "[]"
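
# Example (hypothetical model display names; check the experiment preview or logs
# for the exact names in your version): to restrict training to LightGBM and
# XGBoost GBM only, one might set:
# included_models = "['LightGBMModel', 'XGBoostGBMModel']"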

#included_scorers = "[]"

# Select transformers to be used for preprocessing before other transformers operate.
# Pre-processing transformers can potentially take any original features and output
# arbitrary features, which will then be used by the normal layer of transformers,
# whose selection is controlled by the toml included_transformers or via the GUI
# "Include specific transformers".
# Notes:
# 1) preprocessing transformers (and all other layers of transformers) are part of the python and (if applicable) mojo scoring packages.
# 2) any BYOR transformer recipe or native DAI transformer can be used as a preprocessing transformer.
# So, e.g., a preprocessing transformer can do interactions, string concatenations, or date extractions as a preprocessing step,
# and the next layer of Date and DateTime transformers will use that as input data.
# Caveats:
# 1) one cannot currently do a time-series experiment on a time_column that hasn't yet been made (setup of the experiment only knows about original data, not transformed data).
# However, one can use a run-time data recipe to (e.g.) convert a float date-time into a string date-time, and this will
# be used by DAI's Date and DateTime transformers as well as auto-detection of time series.
# 2) in order to do a time series experiment with the GUI/client auto-selecting groups, periods, etc., the dataset
# must have the time column and groups prepared ahead of the experiment by the user or via a one-time data recipe.
#
#included_pretransformers = "[]"

# Auxiliary to included_pretransformers
#excluded_pretransformers = "[]"

#include_all_as_pretransformers_if_none_selected = false

#force_include_all_as_pretransformers_if_none_selected = false

# Number of full pipeline layers
# (not including the preprocessing layer when included_pretransformers is not empty).
#
#num_pipeline_layers = 1

# There are 2 kinds of data recipes:
# 1) one that adds a new dataset or modifies a dataset outside the experiment by file/url (pre-experiment data recipe)
# 2) one that modifies a dataset during the experiment and python scoring (run-time data recipe)
# This list applies to the 2nd case.  One can use the same data recipe code for either case, but note:
# A) the 1st case can make any new data, but is not part of the scoring package.
# B) the 2nd case modifies data during the experiment, so it needs some original dataset.
# The recipe can still create all new features, as long as it has the same *name* for:
# target, weight_column, fold_column, time_column, time group columns.
#
#included_datas = "[]"

# Auxiliary to included_datas
#excluded_datas = "[]"

# Custom individuals to use in the experiment.
# DAI stores most information about model type, model hyperparameters, data science types for input features, transformers used, and transformer parameters in an Individual Recipe (an object that is evolved by mutation within the context of DAI's genetic algorithm).
# Every completed experiment auto-generates python code for the experiment that corresponds to the individual(s) used to build the final model.  This auto-generated python code can be edited offline and uploaded as a recipe, or it can be edited within the custom recipe management editor and saved.  This allows code-first access to a significant portion of DAI's internal transformer and model generation.
# Choices are:
# * Empty means all individuals are freshly generated and treated by DAI's AutoML as a container of model and transformer choices.
# * Recipe display names of custom individuals, usually chosen via the UI.  If the number of included custom individuals is less than DAI would need, then the remaining individuals are freshly generated.
# The expert experiment-level option fixed_num_individuals can be used to enforce how many individuals to use in the evolution stage.
# The expert experiment-level option fixed_ensemble_level can be used to enforce how many individuals (each with one base model) will be used in the final model.
# These individuals act in a similar way as the feature brain acts for restart and retrain/refit, and one can retrain/refit custom individuals (i.e. skip the tuning and evolution stages) to use them in building a final model.
# See the toml make_python_code for more details.
#included_individuals = "[]"

# Auxiliary to included_individuals
#excluded_individuals = "[]"

# Whether to generate python code for the best individuals of the experiment.
# This python code contains a CustomIndividual class that is a recipe that can be edited and customized.  The CustomIndividual class itself can also be customized for expert use.
# By default, 'auto' means on.
# At the end of an experiment, the summary zip contains auto-generated python code for the individuals used in the experiment, including the last best population (best_population_indivXX.py where XX iterates over the population), the last best individual (best_individual.py), and the final base models (final_indivYY.py where YY iterates over the final base models).
# The summary zip also contains an example_indiv.py file that generates other transformers that may be useful but did not happen to be used in the experiment.
# In addition, the GUI and python client allow one to generate custom individuals from an aborted or finished experiment.
# For finished experiments, this will provide a zip file containing the final_indivYY.py files, and for aborted experiments this will contain the best population and best individual files.
# See included_individuals for more details.
#make_python_code = "auto"

# Whether to generate json code for the best individuals of the experiment.
# This json code contains the essential attributes from the internal DAI
# individual class.  Reading the json code as a recipe is not supported.
# By default, 'auto' means off.
#
#make_json_code = "auto"

# Maximum number of genes to make for the example auto-generated custom individual,
# called example_indiv.py in the summary zip file.
#
#python_code_ngenes_max = 100

# Minimum number of genes to make for the example auto-generated custom individual,
# called example_indiv.py in the summary zip file.
#
#python_code_ngenes_min = 100

# Select the scorer to optimize the binary probability threshold that is used in related confusion-matrix-based scorers that are trivial to optimize otherwise: Precision, Recall, FalsePositiveRate, FalseDiscoveryRate, FalseOmissionRate, TrueNegativeRate, FalseNegativeRate, NegativePredictiveValue. Use F1 if the target class matters more, and MCC if all classes are equally important. AUTO will try to sync the threshold scorer with the scorer used for the experiment, otherwise it falls back to F1. The optimized threshold is also used for creating labels in addition to probabilities in MOJO/Python scorers.
#threshold_scorer = "AUTO"

# Auxiliary to included_scorers
#excluded_scorers = "[]"

# Whether to enable constant models ('auto'/'on'/'off')
#enable_constant_model = "auto"

# Whether to enable Decision Tree models ('auto'/'on'/'off').  'auto' disables the decision tree unless it is the only non-constant model chosen.
#enable_decision_tree = "auto"

# Whether to enable GLM models ('auto'/'on'/'off')
#enable_glm = "auto"

# Whether to enable XGBoost GBM models ('auto'/'on'/'off')
#enable_xgboost_gbm = "auto"

# Whether to enable LightGBM models ('auto'/'on'/'off')
#enable_lightgbm = "auto"

# Whether to enable PyTorch-based GrowNet models ('auto'/'on'/'off')
#enable_grownet = "auto"

# Whether to enable FTRL (follow the regularized leader) model support ('auto'/'on'/'off')
#enable_ftrl = "auto"

# Whether to enable RuleFit support (beta version, no mojo) ('auto'/'on'/'off')
#enable_rulefit = "auto"

# Whether to enable automatic addition of zero-inflated models for regression problems whose zero-inflated target values meet certain conditions: y >= 0, y.std() > y.mean()
#enable_zero_inflated_models = "auto"
# Whether to use dask_cudf even for 1 GPU.  If False, will use plain cudf.
#use_dask_for_1_gpu = false

# Number of retries for dask fit to protect against known xgboost issues https://github.com/dmlc/xgboost/issues/6272 https://github.com/dmlc/xgboost/issues/6551
#dask_retrials_allreduce_empty_issue = 5

# Whether to enable XGBoost RF mode without early stopping.
# Disabled unless switched on.
#
#enable_xgboost_rf = "auto"

# Whether to enable the dask_cudf (multi-GPU) version of XGBoost GBM/RF.
# Disabled unless switched on.
# Only applicable for a single final model without early stopping.  No Shapley values possible.
#
#enable_xgboost_gbm_dask = "auto"

# Whether to enable multi-node LightGBM.
# Disabled unless switched on.
#
#enable_lightgbm_dask = "auto"

# If num_inner_hyperopt_trials_prefinal > 0,
# then whether to do hyperparameter tuning during leakage/shift detection.
# Might be useful to find non-trivial leakage/shift, but usually not necessary.
#
#hyperopt_shift_leak = false

# If num_inner_hyperopt_trials_prefinal > 0,
# then whether to do hyperparameter tuning during leakage/shift detection,
# when checking each column.
#
#hyperopt_shift_leak_per_column = false
# Number of trials for Optuna hyperparameter optimization for tuning and evolution models.
# 0 means no trials.
# For small data, 100 is a reasonable choice,
# while for larger data smaller values are reasonable if results are needed quickly.
# If using RAPIDS or DASK, hyperparameter optimization keeps the data on the GPU the entire time.
# Currently applies to XGBoost GBM/Dart and LightGBM.
# Useful when there is high overhead of DAI outside the inner model fit/predict,
# so this tunes without that overhead.
# However, it can overfit on a single fold when doing tuning or evolution,
# and if using CV, then averaging the fold hyperparameters can lead to unexpected results.
#
#num_inner_hyperopt_trials_prefinal = 0

# Number of trials for Optuna hyperparameter optimization for final models.
# 0 means no trials.
# For small data, 100 is a reasonable choice,
# while for larger data smaller values are reasonable if results are needed quickly.
# Applies to the final model only, even if num_inner_hyperopt_trials=0.
# If using RAPIDS or DASK, hyperparameter optimization keeps the data on the GPU the entire time.
# Currently applies to XGBoost GBM/Dart and LightGBM.
# Useful when there is high overhead of DAI outside the inner model fit/predict,
# so this tunes without that overhead.
# However, for the final model each fold is independently optimized and can overfit on each fold,
# after which predictions are averaged
# (so there is no issue with averaging hyperparameters, as when doing CV with tuning or evolution).
#
#num_inner_hyperopt_trials_final = 0
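
# Example (hypothetical): for small data, enable 100 Optuna trials for the final
# model only, optimizing just the best individual in the ensemble:
# num_inner_hyperopt_trials_final = 100
# num_hyperopt_individuals_final = 1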

# Number of individuals in the final model (all folds/repeats for a given base model) to
# optimize with Optuna hyperparameter tuning.
# -1 means all.
# 0 is the same as choosing no Optuna trials.
# It might only be beneficial to optimize the hyperparameters of the best individual (i.e. a value of 1) in the ensemble.
#
#num_hyperopt_individuals_final = -1

# Optuna Pruner to use (applicable to XGBoost and LightGBM, which support Optuna callbacks).  To disable, choose None.
#optuna_pruner = "MedianPruner"

# Set Optuna constructor arguments for the particular applicable pruners.
# https://optuna.readthedocs.io/en/stable/reference/pruners.html
#
#optuna_pruner_kwargs = "{'n_startup_trials': 5, 'n_warmup_steps': 20, 'interval_steps': 20, 'percentile': 25.0, 'min_resource': 'auto', 'max_resource': 'auto', 'reduction_factor': 4, 'min_early_stopping_rate': 0, 'n_brackets': 4, 'min_early_stopping_rate_low': 0, 'upper': 1.0, 'lower': 0.0}"

# Optuna Sampler to use (applicable to XGBoost and LightGBM, which support Optuna callbacks).
#optuna_sampler = "TPESampler"

# Set Optuna constructor arguments for the particular applicable samplers.
# https://optuna.readthedocs.io/en/stable/reference/samplers.html
#
#optuna_sampler_kwargs = "{}"
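
# Example (hypothetical): switch to Optuna's HyperbandPruner with a custom reduction
# factor (see the Optuna pruners documentation linked above for valid arguments):
# optuna_pruner = "HyperbandPruner"
# optuna_pruner_kwargs = "{'reduction_factor': 3}"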

# Whether to enable Optuna's XGBoost Pruning callback to abort unpromising runs.  Not done if tuning the learning rate.
#enable_xgboost_hyperopt_callback = true

# Whether to enable Optuna's LightGBM Pruning callback to abort unpromising runs.  Not done if tuning the learning rate.
#enable_lightgbm_hyperopt_callback = true

# Whether to enable XGBoost Dart models ('auto'/'on'/'off')
#enable_xgboost_dart = "auto"

# Whether to enable the dask_cudf (multi-GPU) version of XGBoost Dart.
# Disabled unless switched on.
# If there is only 1 GPU, then dask_cudf is only used if use_dask_for_1_gpu is True.
# Only applicable for a single final model without early stopping.  No Shapley values possible.
#
#enable_xgboost_dart_dask = "auto"

# Whether to enable the dask_cudf (multi-GPU) version of XGBoost RF.
# Disabled unless switched on.
# If there is only 1 GPU, then dask_cudf is only used if use_dask_for_1_gpu is True.
# Only applicable for a single final model without early stopping.  No Shapley values possible.
#
#enable_xgboost_rf_dask = "auto"

# Number of GPUs to use per model hyperopt training task.  Set to -1 for all GPUs.
# For example, when this is set to -1 and there are 4 GPUs available, all of them can be used for the training of a single model across a Dask cluster.
# Ignored if GPUs are disabled or there are no GPUs on the system.
# In a multinode context, this refers to the per-node value.
#
#num_gpus_per_hyperopt_dask = -1

# Whether to use (and expect to exist) xgbfi feature interactions for xgboost.
#use_xgboost_xgbfi = false

# Which boosting types to enable for LightGBM (gbdt = boosted trees, rf_early_stopping = random forest with early stopping, rf = random forest (no early stopping), dart = drop-out boosted trees with no early stopping)
#enable_lightgbm_boosting_types = "['gbdt']"
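
# Example (hypothetical): to also allow the dart and random forest (no early stopping)
# boosting modes for LightGBM, one might set:
# enable_lightgbm_boosting_types = "['gbdt', 'dart', 'rf']"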

# Whether to enable automatic class weighting for imbalanced multiclass problems. Can make worse probabilities, but improve confusion-matrix based scorers for rare classes without the need to manually calibrate probabilities or fine-tune the label creation process.
#enable_lightgbm_multiclass_balancing = "auto"

# Whether to enable LightGBM categorical feature support (runs in CPU mode even if GPUs enabled, and no MOJO built)
#enable_lightgbm_cat_support = false

# Whether to enable LightGBM linear_tree handling
# (only CPU mode currently, no L1 regularization -- mae objective, and no MOJO build).
#
#enable_lightgbm_linear_tree = false

# Whether to enable LightGBM extra trees mode to help avoid overfitting
#enable_lightgbm_extra_trees = false

# basic: as fast as when no constraints applied, but over-constrains the predictions.
# intermediate: very slightly slower, but much less constraining while still holding monotonicity and should be more accurate than basic.
# advanced: slower, but even more accurate than intermediate.
#
#lightgbm_monotone_constraints_method = "intermediate"

# Forbids any monotone splits on the first x (rounded down) level(s) of the tree.
# The penalty applied to monotone splits on a given depth is a continuous,
# increasing function of the penalization parameter.
# https://lightgbm.readthedocs.io/en/latest/Parameters.html#monotone_penalty
#
#lightgbm_monotone_penalty = 0.0

# Whether to enable LightGBM CUDA implementation instead of OpenCL.
# CUDA with LightGBM only supported for Pascal+ (compute capability >=6.0)
#enable_lightgbm_cuda_support = false

# Whether to show constant models in iteration panel even when not best model.
#show_constant_model = false

#drop_constant_model_final_ensemble = true

#xgboost_rf_exact_threshold_num_rows_x_cols = 10000

# Select objectives allowed for XGBoost.
# Added to allowed mutations (the default reg:squarederror is in sample list 3 times)
# Note: logistic, tweedie, gamma, poisson are only valid for targets with positive values.
# Note: The objective relates to the form of the (regularized) loss function,
# used to determine the split with maximum information gain,
# while the metric is the non-regularized metric
# measured on the validation set (external or internally generated by DAI).
#
#xgboost_reg_objectives = "['reg:squarederror']"

# Select metrics allowed for XGBoost.
# Added to allowed mutations (the default rmse and mae are in sample list twice).
# Note: tweedie, gamma, poisson are only valid for targets with positive values.
#
#xgboost_reg_metrics = "['rmse', 'mae']"

# Select which binary metrics are allowed for XGBoost.
# Added to allowed mutations (all evenly sampled).
#xgboost_binary_metrics = "['logloss', 'auc', 'aucpr', 'error']"

# Select objectives allowed for LightGBM.
# Added to allowed mutations (the default mse is in sample list 2 times if selected).
# "binary" refers to logistic regression.
# Note: If choose quantile/huber or fair and data is not normalized,
# recommendation is to use params_lightgbm to specify reasonable
# value of alpha (for quantile or huber) or fairc (for fair) to LightGBM.
# Note: mse is same as rmse, corresponding to L2 loss.  mae is L1 loss.
# Note: tweedie, gamma, poisson are only valid for targets with positive values.
# Note: The objective relates to the form of the (regularized) loss function,
# used to determine the split with maximum information gain,
# while the metric is the non-regularized metric
# measured on the validation set (external or internally generated by DAI).
#
#lightgbm_reg_objectives = "['mse', 'mae']"

# Select metrics allowed for LightGBM.
# Added to allowed mutations (the default rmse is in sample list three times if selected).
# Note: If choose huber or fair and data is not normalized,
# recommendation is to use params_lightgbm to specify reasonable
# value of alpha (for huber or quantile) or fairc (for fair) to LightGBM.
# Note: tweedie, gamma, poisson are only valid for targets with positive values.
#
#lightgbm_reg_metrics = "['rmse', 'mse', 'mae']"

# Select objectives allowed for LightGBM.
# Added to allowed mutations (the default binary is in sample list 2 times if selected)
#lightgbm_binary_objectives = "['binary', 'xentropy']"

# Select which binary metrics allowed for LightGBM.
# Added to allowed mutations (all evenly sampled).
#lightgbm_binary_metrics = "['binary', 'binary', 'auc']"

# Select which metrics allowed for multiclass LightGBM.
# Added to allowed mutations (evenly sampled if selected).
#lightgbm_multi_metrics = "['multiclass', 'multi_error']"

# tweedie_variance_power parameters to try for XGBoostModel and LightGBMModel if tweedie is used.
# First value is default.
#tweedie_variance_power_list = "[1.5, 1.2, 1.9]"

# huber parameters to try for LightGBMModel if huber is used.
# First value is default.
#huber_alpha_list = "[0.9, 0.3, 0.5, 0.6, 0.7, 0.8, 0.1, 0.99]"

# fair c parameters to try for LightGBMModel if fair is used.
# First value is default.
#fair_c_list = "[1.0, 0.1, 0.5, 0.9]"

# poisson max_delta_step parameters to try for LightGBMModel if poisson is used.
# First value is default.
#poisson_max_delta_step_list = "[0.7, 0.9, 0.5, 0.2]"

# quantile alpha parameters to try for LightGBMModel if quantile is used.
# First value is default.
#quantile_alpha = "[0.9, 0.95, 0.99, 0.6]"

# Default reg_lambda regularization for GLM.
#reg_lambda_glm_default = 0.0004

#lossguide_drop_factor = 4.0

#lossguide_max_depth_extend_factor = 8.0

# Parameters for LightGBM to override DAI parameters
# e.g. ``'eval_metric'`` instead of ``'metric'`` should be used
# e.g. ``params_lightgbm="{'objective': 'binary', 'n_estimators': 100, 'max_leaves': 64, 'random_state': 1234}"``
# e.g. ``params_lightgbm="{'n_estimators': 600, 'learning_rate': 0.1, 'reg_alpha': 0.0, 'reg_lambda': 0.5, 'gamma': 0, 'max_depth': 0, 'max_bin': 128, 'max_leaves': 256, 'scale_pos_weight': 1.0, 'max_delta_step': 3.469919910597877, 'min_child_weight': 1, 'subsample': 0.9, 'colsample_bytree': 0.3, 'tree_method': 'gpu_hist', 'grow_policy': 'lossguide', 'min_data_in_bin': 3, 'min_child_samples': 5, 'early_stopping_rounds': 20, 'num_classes': 2, 'objective': 'binary', 'eval_metric': 'binary', 'random_state': 987654, 'early_stopping_threshold': 0.01, 'monotonicity_constraints': False, 'silent': True, 'debug_verbose': 0, 'subsample_freq': 1}"``
# avoid including "system"-level parameters like ``'n_gpus': 1, 'gpu_id': 0, 'n_jobs': 1, 'booster': 'lightgbm'``
# also likely should avoid parameters like: 'objective': 'binary', unless one really knows what one is doing (e.g. alternative objectives)
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
# And see: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst
# Can also pass objective parameters if choose (or in case automatically chosen) certain objectives
# https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric-parameters
#params_lightgbm = "{}"
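#
# Illustrative example only (not a tuned recommendation): to force smaller,
# more regularized LightGBM models, one could uncomment and set, e.g.:
#   params_lightgbm = "{'max_leaves': 32, 'min_child_samples': 20, 'reg_lambda': 2.0}"
# Any keys given here override DAI's own choices for those keys only.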

# Parameters for XGBoost to override DAI parameters
# similar parameters as LightGBM since LightGBM parameters are transcribed from XGBoost equivalent versions
# e.g. ``params_xgboost="{'n_estimators': 100, 'max_leaves': 64, 'max_depth': 0, 'random_state': 1234}"``
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
#params_xgboost = "{}"

# Like params_xgboost but for XGBoost random forest.
#params_xgboost_rf = "{}"

# Like params_xgboost but for XGBoost's dart method
#params_dart = "{}"

# Parameters for XGBoost's gblinear to override DAI parameters
# e.g. ``params_gblinear="{'n_estimators': 100}"``
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
#params_gblinear = "{}"

# Parameters for Decision Tree to override DAI parameters
# parameters should be given as XGBoost equivalent unless unique LightGBM parameter
# e.g. ``'eval_metric'`` instead of ``'metric'`` should be used
# e.g. ``params_decision_tree="{'objective': 'binary', 'n_estimators': 100, 'max_leaves': 64, 'random_state': 1234}"``
# e.g. ``params_decision_tree="{'n_estimators': 1, 'learning_rate': 1, 'reg_alpha': 0.0, 'reg_lambda': 0.5, 'gamma': 0, 'max_depth': 0, 'max_bin': 128, 'max_leaves': 256, 'scale_pos_weight': 1.0, 'max_delta_step': 3.469919910597877, 'min_child_weight': 1, 'subsample': 0.9, 'colsample_bytree': 0.3, 'tree_method': 'gpu_hist', 'grow_policy': 'lossguide', 'min_data_in_bin': 3, 'min_child_samples': 5, 'early_stopping_rounds': 20, 'num_classes': 2, 'objective': 'binary', 'eval_metric': 'logloss', 'random_state': 987654, 'early_stopping_threshold': 0.01, 'monotonicity_constraints': False, 'silent': True, 'debug_verbose': 0, 'subsample_freq': 1}"``
# avoid including "system"-level parameters like ``'n_gpus': 1, 'gpu_id': 0, 'n_jobs': 1, 'booster': 'lightgbm'``
# also likely should avoid parameters like: ``'objective': 'binary:logistic'``, unless one really knows what one is doing (e.g. alternative objectives)
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
# And see: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst
# Can also pass objective parameters if choose (or in case automatically chosen) certain objectives
# https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric-parameters
#params_decision_tree = "{}"

# Parameters for Rulefit to override DAI parameters
# e.g. ``params_rulefit="{'max_leaves': 64}"``
# See: https://xgboost.readthedocs.io/en/latest/parameter.html
#params_rulefit = "{}"

# Parameters for FTRL to override DAI parameters
#params_ftrl = "{}"

# Parameters for GrowNet to override DAI parameters
#params_grownet = "{}"

# How to handle tomls like params_tune_lightgbm.
# override: For any key in the params_tune_ toml dict, use the list of values instead of DAI's list of values.
# override_and_first_as_default: like override, but also use the first entry in the tuple/list (if present) as the replacement default for (e.g.) params_lightgbm when using params_tune_lightgbm.
# exclusive: Only tune the keys in the params_tune_ toml dict, unless no keys are present.  Otherwise use DAI's default values.
# exclusive_and_first_as_default: Like exclusive, but with the same first-as-default behavior as override_and_first_as_default.
# To fully control hyperparameter tuning, either set "override" mode and include every hyperparameter with at least one value in each list within the dictionary, or choose "exclusive" and rely upon DAI's unchanging default values for any keys not given.
# For custom recipes, one can use recipe_dict to pass hyperparameters; if a custom recipe uses the "get_one()" function and the user_tune passed contains the hyperparameter dictionary equivalent of the params_tune_ tomls, then params_tune_mode also works for custom recipes.
#params_tune_mode = "override_and_first_as_default"
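#
# Illustrative example only: to restrict tuning to two hyperparameters and make
# the first listed value of each the default, one could uncomment and set, e.g.:
#   params_tune_mode = "exclusive_and_first_as_default"
#   params_tune_lightgbm = "{'min_child_samples': [10, 1, 100], 'max_leaves': [64, 16, 256]}"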

# Whether to adjust GBM trees, learning rate, and early_stopping_rounds for GBM models or recipes with _is_gbm=True.
# True: auto mode, which changes trees/LR/stopping if tune_learning_rate=false and early stopping is supported by the model and the model is GBM or from a custom individual with the parameter in adjusted_params.
# False: disable any adjusting from tuning-evolution into the final model.
# Setting this to false is required if (e.g.) one changes params_lightgbm or params_tune_lightgbm and wants to preserve the tuning-evolution values into the final model.
# One should also set tune_learning_rate to true to tune the learning_rate, else it will be fixed to some single value.
#params_final_auto_adjust = true

# Dictionary of key:lists of values to use for LightGBM tuning, overrides DAI's choice per key
# e.g. ``params_tune_lightgbm="{'min_child_samples': [1,2,5,100,1000], 'min_data_in_bin': [1,2,3,10,100,1000]}"``
#params_tune_lightgbm = "{}"

# Like params_tune_lightgbm but for XGBoost
# e.g. ``params_tune_xgboost="{'max_leaves': [8, 16, 32, 64]}"``
#params_tune_xgboost = "{}"

# Like params_tune_lightgbm but for XGBoost random forest
# e.g. ``params_tune_xgboost_rf="{'max_leaves': [8, 16, 32, 64]}"``
#params_tune_xgboost_rf = "{}"

# Dictionary of key:lists of values to use for LightGBM Decision Tree tuning, overrides DAI's choice per key
# e.g. ``params_tune_decision_tree="{'min_child_samples': [1,2,5,100,1000], 'min_data_in_bin': [1,2,3,10,100,1000]}"``
#params_tune_decision_tree = "{}"

# Like params_tune_lightgbm but for XGBoost's Dart
# e.g. ``params_tune_dart="{'max_leaves': [8, 16, 32, 64]}"``
#params_tune_dart = "{}"

# [DEPRECATED] Like params_tune_lightgbm but for TensorFlow
# e.g. ``params_tune_tensorflow="{'layers': [(10,10,10), (10, 10, 10, 10)]}"``
#params_tune_tensorflow = "{}"

# Like params_tune_lightgbm but for gblinear
# e.g. ``params_tune_gblinear="{'reg_lambda': [.01, .001, .0001, .0002]}"``
#params_tune_gblinear = "{}"

# Like params_tune_lightgbm but for rulefit
# e.g. ``params_tune_rulefit="{'max_depth': [4, 5, 6]}"``
#params_tune_rulefit = "{}"

# Like params_tune_lightgbm but for ftrl
#params_tune_ftrl = "{}"

# Like params_tune_lightgbm but for GrowNet
# e.g. ``params_tune_grownet="{'input_dropout': [0.2, 0.5]}"``
#params_tune_grownet = "{}"

# Whether to force max_leaves and max_depth to be 0 if grow_policy is depthwise and lossguide, respectively.
#params_tune_grow_policy_simple_trees = true

# Maximum number of GBM trees or GLM iterations. Can be reduced for lower accuracy and/or higher interpretability.
# Early-stopping usually chooses less. Ignored if fixed_max_nestimators is > 0.
#
#max_nestimators = 3000

# Fixed maximum number of GBM trees or GLM iterations. If > 0, ignores max_nestimators and disables automatic reduction
# due to lower accuracy or higher interpretability. Early-stopping usually chooses less.
#
#fixed_max_nestimators = -1

# LightGBM dart mode and normal rf mode do not use early stopping,
# and they will sample from these values for n_estimators.
# XGBoost Dart mode will also sample from these n_estimators.
# Also applies to XGBoost Dask models that do not yet support early stopping or callbacks.
# For default parameters it chooses first value in list, while mutations sample from the list.
#
#n_estimators_list_no_early_stopping = "[50, 100, 150, 200, 250, 300]"

# Lower limit on learning rate for final ensemble GBM models.
# In some cases, the maximum number of trees/iterations is insufficient for the final learning rate,
# which can prevent early stopping from being triggered and lead to poor final model performance.
# In that case, one can try increasing the learning rate by raising this minimum,
# or one can try increasing the maximum number of trees/iterations.
#
#min_learning_rate_final = 0.01

# Upper limit on learning rate for final ensemble GBM models
#max_learning_rate_final = 0.05

# factor by which max_nestimators is reduced for tuning and feature evolution
#max_nestimators_feature_evolution_factor = 0.2

# Lower limit on learning rate for feature engineering GBM models
#min_learning_rate = 0.05

# Upper limit on learning rate for GBM models
# If want to override min_learning_rate and min_learning_rate_final, set this to smaller value
#
#max_learning_rate = 0.5
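#
# Illustrative example only: for quicker, less accurate experiments one could
# narrow the tree and learning-rate budget by uncommenting and setting, e.g.:
#   max_nestimators = 500
#   min_learning_rate = 0.1
# Early stopping then typically chooses fewer than 500 trees at the higher rate.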

# Whether to lock learning rate, tree count, early stopping rounds for GBM algorithms to the final model values.
#lock_ga_to_final_trees = false

# Whether to tune learning rate for GBM algorithms (if not doing just single final model).
# If tuning with Optuna, might help isolate optimal learning rate.
#
#tune_learning_rate = false

# Max. number of epochs for FTRL models
#max_epochs = 50

# Maximum tree depth (and corresponding max max_leaves as 2**max_max_depth)
#max_max_depth = 12

# Default max_bin for tree methods
#default_max_bin = 256

# Default max_bin for LightGBM (64 recommended for GPU LightGBM for speed)
#default_lightgbm_max_bin = 249

# Maximum max_bin for tree features
#max_max_bin = 256

# Minimum max_bin for any tree
#min_max_bin = 32

# Amount of memory at which max_bin = 256 can handle 125 columns and max_bin = 32 can handle 1000 columns
# As available memory on the system goes above this scale, proportionally more columns can be handled at higher max_bin
# Currently set to 10GB
#scale_mem_for_max_bin = 10737418240

# Factor by which rf gets more depth than gbdt
#factor_rf = 1.25

# For Pytorch Image fitting including both models and transformers. See also max_fit_cores for all models.
#image_max_cores = 4

# How many cores to use for each Bert Model and Transformer, regardless if GPU or CPU based (0 = auto mode)
#bert_cores = 0

# Whether Bert will use all CPU cores, or if it will split among all transformers.  Only for transformers, not Bert model.
#bert_use_all_cores = true

# For Bert models, maximum number of cores to use if bert_cores=0 (auto mode), because Bert model is inefficient at using many cores.  See also max_fit_cores for all models.
#bert_model_max_cores = 8

# Max number of rules to be used for RuleFit models (-1 for all)
#rulefit_max_num_rules = -1

# Max tree depth for RuleFit models
#rulefit_max_tree_depth = 6

# Max number of trees for RuleFit models
#rulefit_max_num_trees = 500

# Enable One-Hot-Encoding (which bins values so that the number of bins is no more than 100 anyway) for categorical columns with fewer than this many unique values
# Set to 0 to disable
#one_hot_encoding_cardinality_threshold = 50

# How many levels to choose one-hot by default instead of other encodings, restricted down to 10x less (down to 2 levels) when number of columns able to be used with OHE exceeds 500. Note the total number of bins is reduced if bigger data independently of this.
#one_hot_encoding_cardinality_threshold_default_use = 40

# Treat text columns also as categorical columns if the cardinality is <= this value.
# Set to 0 to treat text columns only as text.
#text_as_categorical_cardinality_threshold = 1000

# If num_as_cat is true, then treat numeric columns also as categorical columns if the cardinality is > this value.
# Setting to 0 allows all numeric to be treated as categorical if num_as_cat is True.
#numeric_as_categorical_cardinality_threshold = 2

# If num_as_cat is true, then treat numeric columns also as categorical columns to possibly one-hot encode if the cardinality is > this value.
# Setting to 0 allows all numeric to be treated as categorical to possibly one-hot encode if num_as_cat is True.
#numeric_as_ohe_categorical_cardinality_threshold = 2

#one_hot_encoding_show_actual_levels_in_features = false

# Fixed ensemble_level
# -1 = auto, based upon ensemble_accuracy_switch, accuracy, size of data, etc.
# 0 = No ensemble, only final single model on validated iteration/tree count
# 1 = 1 model, multiple ensemble folds (cross-validation)
# >=2 = >=2 models, multiple ensemble folds (cross-validation)
#
#fixed_ensemble_level = -1

# If enabled, use cross-validation to determine optimal parameters for single final model,
# and to be able to create training holdout predictions.
#cross_validate_single_final_model = true

# Model to combine base model predictions, for experiments that create a final pipeline
# consisting of multiple base models.
# blender: Creates a linear blend with non-negative weights that add to 1 (blending) - recommended
# extra_trees: Creates a tree model to non-linearly combine the base models (stacking) - experimental, and recommended to also enable cross_validate_meta_learner.
# neural_net: Creates a neural net model to non-linearly combine the base models (stacking) - experimental, and recommended to also enable cross_validate_meta_learner.
#
#ensemble_meta_learner = "blender"
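#
# Illustrative example only: to experiment with a stacked (non-linear) ensemble
# while keeping training holdout predictions unbiased, one could uncomment and set, e.g.:
#   ensemble_meta_learner = "extra_trees"
#   cross_validate_meta_learner = true
# Note that enabling cross_validate_meta_learner disables the MOJO.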

# If enabled, use cross-validation to create an ensemble for the meta learner itself. Especially recommended for
# ``ensemble_meta_learner='extra_trees'``, to make unbiased training holdout predictions.
# Will disable MOJO if enabled. Not needed for ``ensemble_meta_learner='blender'``.
#
#cross_validate_meta_learner = false

# Number of models to tune during pre-evolution phase
# Can make this lower to avoid excessive tuning, or make higher to do enhanced tuning.
# ``-1 : auto``
#
#parameter_tuning_num_models = -1

# Number of models (out of all parameter_tuning_num_models) to have as SEQUENCE instead of random features/parameters.
# ``-1 : auto, use at least one default individual per model class tuned``
#
#parameter_tuning_num_models_sequence = -1

# Number of models to add during tuning that cover other cases, like for TS having no TE on time column groups.
# ``-1 : auto, adds additional models to protect against overfit on high-gain training features.``
#
#parameter_tuning_num_models_extra = -1

# Dictionary of model class name (keys) and number (values) of instances.
#num_tuning_instances = "{}"

#validate_meta_learner = true

#validate_meta_learner_extra = false

# Specify the fixed number of cross-validation folds (if >= 2) for feature evolution. (The actual number of splits allowed can be less and is determined at experiment run-time).
#fixed_num_folds_evolution = -1

# Specify the fixed number of cross-validation folds (if >= 2) for the final model. (The actual number of splits allowed can be less and is determined at experiment run-time).
#fixed_num_folds = -1

# set "on" to force only first fold for models - useful for quick runs regardless of data
#fixed_only_first_fold_model = "auto"

# Set the number of repeated cross-validation folds for feature evolution and final models (if > 0), 0 is default. Only for ensembles that do cross-validation (so no external validation and not time-series), not for single final models.
#fixed_fold_reps = 0

#num_fold_ids_show = 10

#fold_scores_instability_warning_threshold = 0.25

# Upper limit on the number of rows x number of columns for feature evolution (applies to both training and validation/holdout splits)
# feature evolution is the process that determines which features will be derived.
# Depending on accuracy settings, a fraction of this value will be used
#
#feature_evolution_data_size = 300000000

# Upper limit on the number of rows x number of columns for training final pipeline.
#
#final_pipeline_data_size = 1000000000

# Whether to automatically limit validation data size using feature_evolution_data_size (giving max_rows_feature_evolution shown in logs) for tuning-evolution, and using final_pipeline_data_size, max_validation_to_training_size_ratio_for_final_ensemble for final model.
#limit_validation_size = true

# Smaller values can speed up final pipeline model training, as validation data is only used for early stopping.
# Note that final model predictions and scores will always be provided on the full dataset provided.
#
#max_validation_to_training_size_ratio_for_final_ensemble = 2.0

# Ratio of minority to majority class of the target column beyond which stratified sampling is done for binary classification. Otherwise perform random sampling. Set to 0 to always do random sampling. Set to 1 to always do stratified sampling.
#force_stratified_splits_for_imbalanced_threshold_binary = 0.01

#force_stratified_splits_for_binary_max_rows = 1000000

# Specify whether to do stratified sampling for validation fold creation for iid regression problems. Otherwise perform random sampling.
#stratify_for_regression = true

# Sampling method for imbalanced binary classification problems. Choices are:
# "auto": sample both classes as needed, depending on data
# "over_under_sampling": over-sample the minority class and under-sample the majority class, depending on data
# "under_sampling": under-sample the majority class to reach class balance
# "off": do not perform any sampling
#
#imbalance_sampling_method = "off"
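#
# Illustrative example only: to under-sample the majority class for a heavily
# imbalanced binary problem on sufficiently large data, one could uncomment and set, e.g.:
#   imbalance_sampling_method = "under_sampling"
#   imbalance_ratio_sampling_threshold = 5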

# For smaller data, there's generally no benefit in using imbalanced sampling methods.
#imbalance_sampling_threshold_min_rows_original = 100000

# For imbalanced binary classification: ratio of majority to minority class equal and above which to enable
# special imbalanced models with sampling techniques (specified by imbalance_sampling_method) to attempt to improve model performance.
#
#imbalance_ratio_sampling_threshold = 5

# For heavily imbalanced binary classification: ratio of majority to minority class equal and above which to enable only
# special imbalanced models on full original data, without upfront sampling.
#
#heavy_imbalance_ratio_sampling_threshold = 25

# Special handling can include special models, special scorers, special feature engineering.
#
#imbalance_ratio_multiclass_threshold = 5

# Special handling can include special models, special scorers, special feature engineering.
#
#heavy_imbalance_ratio_multiclass_threshold = 25

# -1: automatic
#imbalance_sampling_number_of_bags = -1

# -1: automatic
#imbalance_sampling_max_number_of_bags = 10

# Only for shift/leakage/tuning/feature evolution models. Not used for final models. Final models can
# be limited by imbalance_sampling_max_number_of_bags.
#imbalance_sampling_max_number_of_bags_feature_evolution = 3

# Max. size of data sampled during imbalanced sampling (in terms of dataset size),
# controls number of bags (approximately). Only for imbalance_sampling_number_of_bags == -1.
#imbalance_sampling_max_multiple_data_size = 1.0

# Rank averaging can be helpful when ensembling diverse models when ranking metrics like AUC/Gini
# are optimized. No MOJO support yet.
#imbalance_sampling_rank_averaging = "auto"

# A value of 0.5 means that models/algorithms will be presented a balanced target class distribution
# after applying under/over-sampling techniques on the training data. Sometimes it makes sense to
# choose a smaller value like 0.1 or 0.01 when starting from an extremely imbalanced original target
# distribution. -1.0: automatic
#imbalance_sampling_target_minority_fraction = -1.0

# For binary classification: ratio of majority to minority class equal and above which to notify
# of imbalance in GUI to say slightly imbalanced.
# More than ``imbalance_ratio_sampling_threshold`` will say problem is imbalanced.
#
#imbalance_ratio_notification_threshold = 2.0

# List of possible bins for FTRL (largest is default best value)
#nbins_ftrl_list = "[1000000, 10000000, 100000000]"

# Samples the number of automatic FTRL interactions terms to no more than this value (for each of 2nd, 3rd, 4th order terms)
#ftrl_max_interaction_terms_per_degree = 10000

# List of possible bins for target encoding (first is default value)
#te_bin_list = "[25, 10, 100, 250]"

# List of possible bins for weight of evidence encoding (first is default value)
# If only want one value: woe_bin_list = [2]
#woe_bin_list = "[25, 10, 100, 250]"

# List of possible bins for one-hot encoding (first is default value).  If left as default, the actual list is changed for given data size and dials.
#ohe_bin_list = "[10, 25, 50, 75, 100]"

# List of max possible number of bins for numeric binning (first is default value). If left as default, the actual list is changed for given data size and dials. The binner will automatically reduce the number of bins based on predictive power.
#binner_bin_list = "[5, 10, 20]"

# If dataset has more columns, then will check only first such columns. Set to 0 to disable.
#drop_redundant_columns_limit = 1000

# Whether to drop columns with constant values
#drop_constant_columns = true

# Whether to detect duplicate rows in training, validation and testing datasets. Done after doing type detection and dropping of redundant or missing columns across datasets, just before the experiment starts, still before leakage detection. Any further dropping of columns can change the amount of duplicate rows. Informative only, if want to drop rows in training data, make sure to check the drop_duplicate_rows setting. Uses a sample size, given by detect_duplicate_rows_max_rows_x_cols.
#detect_duplicate_rows = true

#drop_duplicate_rows_timeout = 60

# Whether to drop duplicate rows in training data. Done at the start of Driverless AI, only considering columns to drop as given by the user, not considering validation or training datasets or leakage or redundant columns. Any further dropping of columns can change the amount of duplicate rows. Time limited by drop_duplicate_rows_timeout seconds.
# 'auto': 'off'
# 'weight': If duplicates, then convert dropped duplicates into a weight column for training.  Useful when duplicates are added to preserve some distribution of instances expected.  Only allowed if no weight column is present, else duplicates are just dropped.
# 'drop': Drop any duplicates, keeping only first instances.
# 'off': Do not drop any duplicates.  This may lead to over-estimation of accuracy.
#drop_duplicate_rows = "auto"
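#
# Illustrative example only: to deduplicate training rows while keeping their
# frequency information as sample weights, one could uncomment and set, e.g.:
#   drop_duplicate_rows = "weight"
#   drop_duplicate_rows_timeout = 120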
2496
2497# If > 0, then acts as sampling size for informative duplicate row detection. If set to 0, will do checks for all dataset sizes.
2498#detect_duplicate_rows_max_rows_x_cols = 10000000
2499
2500# Whether to drop columns that appear to be an ID
2501#drop_id_columns = true
2502
2503# Whether to avoid dropping any columns (original or derived)
2504#no_drop_features = false
2505
2506# Direct control over columns to drop in bulk so can copy-paste large lists instead of selecting each one separately in GUI
2507#cols_to_drop = "[]"

#cols_to_drop_sanitized = "[]"

# Control over columns to group by for the CVCatNumEncode Transformer. Default is an empty list, which means DAI automatically searches all columns,
# selected randomly or by which have top variable importance.
# The CVCatNumEncode Transformer takes a list of categoricals (or these cols_to_group_by) and uses those columns
# as new features to perform aggregations on (agg_funcs_for_group_by).
#cols_to_group_by = "[]"

#cols_to_group_by_sanitized = "[]"

# Whether to sample from given features to group by (True) or to always group by all features (False) when using cols_to_group_by.
#sample_cols_to_group_by = false

# Aggregation functions to use for groupby operations for the CVCatNumEncode Transformer, see also cols_to_group_by and sample_cols_to_group_by.
#agg_funcs_for_group_by = "['mean', 'sd', 'min', 'max', 'count']"
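
# Example (illustrative): force grouping on two known categoricals with a reduced
# set of aggregations:
#   cols_to_group_by = "['state', 'product_line']"
#   agg_funcs_for_group_by = "['mean', 'min', 'max', 'count']"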

# Out of fold aggregations ensure less overfitting, but see less data in each fold.  Controls how many folds are used by the CVCatNumEncode Transformer.
#folds_for_group_by = 5

# Control over columns to force in.  Forced-in features are handled by the most interpretable transformer allowed by experiment
# options, and they are never removed (although the model may still assign them 0 importance).
# Transformers used by default include:
# OriginalTransformer for numeric,
# CatOriginalTransformer or FrequencyTransformer for categorical,
# TextOriginalTransformer for text,
# DateTimeOriginalTransformer for date-times,
# DateOriginalTransformer for dates,
# ImageOriginalTransformer, ImageVectorizerV2Transformer for images,
# etc.
#cols_to_force_in = "[]"
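
# Example (illustrative column names): guarantee two features survive feature
# selection via their most interpretable transformers:
#   cols_to_force_in = "['age', 'income']"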

#cols_to_force_in_sanitized = "[]"

# Strategy to apply when doing mutations on transformers.
# Sample mode is the default, with a tendency to sample transformer parameters.
# Batched mode tends to do multiple types of the same transformation together.
# Full mode does even more types of the same transformation together.
#
#mutation_mode = "sample"

# 'baseline': Explore exemplar set of models with baselines as reference.
# 'random': Explore 10 random seeds for the same setup.  Useful since the genetic algorithm is noisy by nature and repeats might get better results, or one can ensemble the custom individuals from such repeats.
# 'line': Explore good model with all features and original features with all models.  Useful as first exploration.
# 'line_all': Like 'line', but enable all models and transformers possible instead of only what the base experiment setup would have inferred.
# 'product': Explore one-by-one Cartesian product of each model and transformer.  Useful for exhaustive exploration.
#leaderboard_mode = "baseline"
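
# Example (illustrative): repeat the same setup across random seeds to gauge
# genetic-algorithm noise:
#   leaderboard_mode = "random"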

# Controls whether users can launch an experiment in Leaderboard mode from the UI.
#leaderboard_off = false

# Allows control over default accuracy knob setting.
# If default models are too complex, set to -1 or -2, etc.
# If default models are not accurate enough, set to 1 or 2, etc.
#
#default_knob_offset_accuracy = 0

# Allows control over default time knob setting.
# If default experiments are too slow, set to -1 or -2, etc.
# If default experiments finish too fast, set to 1 or 2, etc.
#
#default_knob_offset_time = 0

# Allows control over default interpretability knob setting.
# If default models are too simple, set to -1 or -2, etc.
# If default models are too complex, set to 1 or 2, etc.
#
#default_knob_offset_interpretability = 0
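
# Example (illustrative): make default experiments simpler and faster while
# keeping interpretability at its default:
#   default_knob_offset_accuracy = -1
#   default_knob_offset_time = -1
#   default_knob_offset_interpretability = 0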

# Whether to enable checking text for shift, currently only via label encoding.
#shift_check_text = false

# Whether to use LightGBM random forest mode without early stopping for shift detection.
#use_rf_for_shift_if_have_lgbm = true

# Normalized training variable importance above which to check the feature for shift.
# Useful to avoid checking likely unimportant features.
#shift_key_features_varimp = 0.01

# Whether to only check certain features based upon the value of shift_key_features_varimp
#shift_check_reduced_features = true

# Number of trees used to train the model that checks for shift in distribution.
# No larger than max_nestimators.
#shift_trees = 100

# The value of max_bin to use for the trees used to check for shift in distribution
#shift_max_bin = 256

# The min. value of max_depth to use for the trees used to check for shift in distribution
#shift_min_max_depth = 4

# The max. value of max_depth to use for the trees used to check for shift in distribution
#shift_max_max_depth = 8

# If distribution shift detection is enabled, show features for which shift AUC is above this value
# (AUC of a binary classifier that predicts whether a given feature value belongs to train or test data)
#detect_features_distribution_shift_threshold_auc = 0.55
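
# Example (illustrative): make shift detection cheaper while reporting only
# clearly shifted features:
#   shift_trees = 50
#   detect_features_distribution_shift_threshold_auc = 0.6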

# Minimum number of features to keep, keeping the least shifted feature at least if 1
#drop_features_distribution_shift_min_features = 1

# Shift beyond which shows HIGH notification, else MEDIUM
#shift_high_notification_level = 0.8

# Whether to enable checking text for leakage, currently only via label encoding.
#leakage_check_text = true

# Normalized training variable importance (per 1 minus AUC/R2, to control for leaky varimp dominance) above which to check the feature for leakage.
# Useful to avoid checking likely unimportant features.
#leakage_key_features_varimp = 0.001

# Like leakage_key_features_varimp, but applies if early stopping is disabled, when one can trust multiple leaks to get uniform varimp.
#leakage_key_features_varimp_if_no_early_stopping = 0.05

# Whether to only check certain features based upon the value of leakage_key_features_varimp.  If any feature has AUC near 1, it will consume all variable importance, even if another feature is also leaky.  So False is the safest option, but True is generally good if there are many columns.
#leakage_check_reduced_features = true

# Whether to use LightGBM random forest mode without early stopping for leakage detection.
#use_rf_for_leakage_if_have_lgbm = true

# Number of trees used to train the model that checks for leakage.
# No larger than max_nestimators.
#leakage_trees = 100

# The value of max_bin to use for the trees used to check for leakage
#leakage_max_bin = 256

# The min. value of max_depth to use for the trees used to check for leakage
#leakage_min_max_depth = 6

# The max. value of max_depth to use for the trees used to check for leakage
#leakage_max_max_depth = 8

# When leakage detection is enabled, if AUC (R2 for regression) on original data (label-encoded)
# is above or equal to this value, then trigger per-feature leakage detection
#
#detect_features_leakage_threshold_auc = 0.95

# When leakage detection is enabled, show features for which AUC (R2 for regression,
# for whether that predictor/feature alone predicts the target) is above or equal to this value.
# Feature is dropped if AUC/R2 is above or equal to drop_features_leakage_threshold_auc
#
#detect_features_per_feature_leakage_threshold_auc = 0.8

# Minimum number of features to keep, keeping the least leaky feature at least if 1
#drop_features_leakage_min_features = 1

# Ratio of train to validation holdout when testing for leakage
#leakage_train_test_split = 0.25
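
# Example (illustrative): flag leakage earlier by lowering both trigger thresholds:
#   detect_features_leakage_threshold_auc = 0.9
#   detect_features_per_feature_leakage_threshold_auc = 0.75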

# Whether to enable detailed traces (in GUI Trace)
#detailed_traces = false

# Whether to enable debug log level (in log files)
#debug_log = false

# Whether to add logging of system information such as CPU, GPU, and disk space at the start of each experiment log. The same information is already logged in system logs.
#log_system_info_per_experiment = true

#check_system = true

#check_system_basic = true

# How close to the optimal value (usually 1 or 0) does the validation score need to be to be considered perfect (and stop the experiment)?
#abs_tol_for_perfect_score = 0.0001

# Timeout in seconds to wait for data ingestion.
#data_ingest_timeout = 86400.0

# How many seconds to allow a mutate operation to take; nominally it only takes a few seconds at most.  But on a busy system doing many individuals, it might take longer.  Optuna sometimes hangs in a livelock in scipy's random distribution maker.
#mutate_timeout = 600

# Whether to trust GPU locking for submission of GPU jobs to limit memory usage.
# If False, then wait for GPU submissions to be fewer than the number of GPUs,
# even if later jobs could be purely CPU jobs that did not need to wait.
# Only applicable if not restricting the number of GPUs via num_gpus_per_experiment,
# else one has to use resources instead of relying upon locking.
#
#gpu_locking_trust_pool_submission = true

# Whether to steal GPU locks when a process is neither on the GPU PID list nor using CPU resources at all (e.g. sleeping).  Only steal from multi-GPU locks that are incomplete.  Prevents deadlocks in case a multi-GPU model hangs.
#gpu_locking_free_dead = true

#check_pred_contribs_sum = false

#debug_daimodel_level = 0

#debug_debug_xgboost_splits = false

#log_predict_info = true

#log_fit_info = true

# Amount of time to stall (in seconds) before killing the job (assumes it hung). This reference time is scaled by the train data shape of rows * cols to get the stalled time actually used.
#stalled_time_kill_ref = 440.0

# Amount of time between checks for some process taking a long time; every cycle the full process list will be dumped to the console or experiment logs if possible.
#long_time_psdump = 1800

# Whether to dump ps every long_time_psdump
#do_psdump = false

# Whether to check every long_time_psdump seconds and send SIGUSR1 to all children to see where they may be stuck or taking a long time.
#livelock_signal = false

# Value to override the number of sockets, in case DAI's determination is wrong, for non-trivial systems.  0 means auto.
#num_cpu_sockets_override = 0

# Value to override the number of GPUs, in case DAI's determination is wrong, for non-trivial systems.  -1 means auto.  Can also set min_num_cores_per_gpu=-1 to allow any number of GPUs for each experiment regardless of the number of cores.
#num_gpus_override = -1

# Whether to show GPU usage only when locking.  'auto' means 'on' if num_gpus_override is different than the actual total visible GPUs, else it means 'off'
#show_gpu_usage_only_if_locked = "auto"

# Show inapplicable models in preview, to be sure not to miss models one could have used
#show_inapplicable_models_preview = false

# Show inapplicable transformers in preview, to be sure not to miss transformers one could have used
#show_inapplicable_transformers_preview = false

# Show warnings for models (image auto, Dask multinode/multi-GPU) if conditions are met to use them but they are not chosen, to avoid missing models that could benefit accuracy/performance
#show_warnings_preview = false

# Show warnings for models that have no transformers for certain features.
#show_warnings_preview_unused_map_features = true

# Up to how many input features to consider when determining unused features during GUI/client preview. Too many slows the preview down.
#max_cols_show_unused_features = 1000

# Up to how many input features for which to show the transformers used per input feature.
#max_cols_show_feature_transformer_mapping = 1000

# Up to how many unused input features to show in the preview.
#warning_unused_feature_show_max = 3

#interaction_finder_max_rows_x_cols = 200000.0

#interaction_finder_corr_threshold = 0.95

# Required GINI relative improvement for InteractionTransformer.
# If GINI is not better than this relative improvement compared to the original features considered
# in the interaction, then the interaction is not returned.  If data is noisy and there is no clear signal
# in interactions but interactions are still wanted, then this number can be decreased.
#interaction_finder_gini_rel_improvement_threshold = 0.5

# Number of transformed Interactions to make, as the best out of many generated trial interactions.
#interaction_finder_return_limit = 5

# Whether to enable bootstrap sampling. Provides error bars for validation and test scores based on the standard error of the bootstrap mean.
#enable_bootstrap = true

# Minimum number of bootstrap samples to use for estimating score and its standard deviation
# Actual number of bootstrap samples will vary between the min and max,
# depending upon row count (more rows, fewer samples) and accuracy settings (higher accuracy, more samples)
#
#min_bootstrap_samples = 1

# Maximum number of bootstrap samples to use for estimating score and its standard deviation
# Actual number of bootstrap samples will vary between the min and max,
# depending upon row count (more rows, fewer samples) and accuracy settings (higher accuracy, more samples)
#
#max_bootstrap_samples = 100

# Minimum fraction of row size to take as sample size for bootstrap estimator
# Actual sample size used for bootstrap estimate will vary between the min and max,
# depending upon row count (more rows, smaller sample size) and accuracy settings (higher accuracy, larger sample size)
#
#min_bootstrap_sample_size_factor = 1.0

# Maximum fraction of row size to take as sample size for bootstrap estimator
# Actual sample size used for bootstrap estimate will vary between the min and max,
# depending upon row count (more rows, smaller sample size) and accuracy settings (higher accuracy, larger sample size)
#
#max_bootstrap_sample_size_factor = 10.0

# Seed to use for final model bootstrap sampling, -1 means use experiment-derived seed.
# E.g. one can retrain the final model with a different seed to get different final model error bars for scores.
#
#bootstrap_final_seed = -1
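
# Example (illustrative): tighter error bars via more bootstrap samples, with a
# fixed seed so repeated final-model refits are comparable:
#   min_bootstrap_samples = 5
#   max_bootstrap_samples = 200
#   bootstrap_final_seed = 1234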

# Benford's law: mean absolute deviance threshold at or above which integer-valued columns are treated as categoricals too
#benford_mad_threshold_int = 0.03

# Benford's law: mean absolute deviance threshold at or above which real-valued columns are treated as categoricals too
#benford_mad_threshold_real = 0.1

# Variable importance below which a feature is dropped (with possible replacement found that is better)
# This also sets the overall scale for lower interpretability settings.
# Set to a lower value if many weak features are acceptable despite choosing high interpretability,
# or if a drop in performance is seen due to the need for weak features.
#
#varimp_threshold_at_interpretability_10 = 0.001

# Whether to avoid setting stabilize_varimp=false and stabilize_fs=false for time series experiments.
#allow_stabilize_varimp_for_ts = false

# Variable importance is used by the genetic algorithm to decide which features are useful,
# so this can stabilize the feature selection by the genetic algorithm.
# This is by default disabled for time series experiments, which can have truly diverse behavior in each split.
# But in some cases feature selection is improved in the presence of highly shifted variables that are not handled
# by lag transformers, and one can set allow_stabilize_varimp_for_ts=true.
#
#stabilize_varimp = true

# Whether to take the minimum (True) or mean (False) of delta improvement in score when aggregating feature selection scores across multiple folds/depths.
# Delta improvement of score corresponds to the original metric minus the metric of the shuffled feature frame if maximizing the metric,
# and corresponds to the negative of such a score difference if minimizing.
# Feature selection by permutation importance considers the change in score after shuffling a feature, and using the minimum operation
# ignores optimistic scores in favor of pessimistic scores when aggregating over folds.
# Note, if using tree methods, multiple depths may be fitted, in which case regardless of this toml setting,
# only features that are kept for all depths are kept by feature selection.
# If interpretability >= config toml value of fs_data_vary_for_interpretability, then half the data (or the setting of fs_data_frac)
# is used as another fit, in which case regardless of this toml setting,
# only features that are kept for all data sizes are kept by feature selection.
# Note: This is disabled for small data, since arbitrary slices of small data can lead to disjoint features being important and only aggregated average behavior has signal.
#
#stabilize_fs = true

# Whether the final pipeline uses fixed features for some transformers that would normally
# perform search, such as InteractionsTransformer.
# Use what was learned from tuning and evolution (True) or freshly search for new features (False).
# This can give a more stable pipeline, especially for small data or when using the interaction transformer
# as a pretransformer in a multi-layer pipeline.
#
#stabilize_features = true

#fraction_std_bootstrap_ladder_factor = 0.01

#bootstrap_ladder_samples_limit = 10

#features_allowed_by_interpretability = "{1: 10000000, 2: 10000, 3: 1000, 4: 500, 5: 300, 6: 200, 7: 150, 8: 100, 9: 80, 10: 50, 11: 50, 12: 50, 13: 50}"

#nfeatures_max_threshold = 200

#rdelta_percent_score_penalty_per_feature_by_interpretability = "{1: 0.0, 2: 0.1, 3: 1.0, 4: 2.0, 5: 5.0, 6: 10.0, 7: 20.0, 8: 30.0, 9: 50.0, 10: 100.0, 11: 100.0, 12: 100.0, 13: 100.0}"

#drop_low_meta_weights = true

#meta_weight_allowed_by_interpretability = "{1: 1E-7, 2: 1E-5, 3: 1E-4, 4: 1E-3, 5: 1E-2, 6: 0.03, 7: 0.05, 8: 0.08, 9: 0.10, 10: 0.15, 11: 0.15, 12: 0.15, 13: 0.15}"

#meta_weight_allowed_for_reference = 1.0

#feature_cost_mean_interp_for_penalty = 5

#features_cost_per_interp = 0.25

#varimp_threshold_shift_report = 0.3

#apply_featuregene_limits_after_tuning = true

#remove_scored_0gain_genes_in_postprocessing_above_interpretability = 13

#remove_scored_0gain_genes_in_postprocessing_above_interpretability_final_population = 2

#remove_scored_by_threshold_genes_in_postprocessing_above_interpretability_final_population = 7

#show_full_pipeline_details = false

#num_transformed_features_per_pipeline_show = 10

#fs_data_vary_for_interpretability = 7

#fs_data_frac = 0.5

#many_columns_count = 400

#columns_count_interpretable = 200

#round_up_indivs_for_busy_gpus = true

#tuning_share_varimp = "best"

# Graphviz is an optional requirement for native installations (RPM/DEB/Tar-SH, outside of Docker) to convert .dot files into .png files for pipeline visualizations as part of experiment artifacts
#require_graphviz = true

# Unnormalized probability to add genes or instances of transformers with specific attributes.
# If no genes can be added, other mutations
# (mutating model hyperparameters, pruning genes, pruning features, etc.) are attempted.
#
#prob_add_genes = 0.5

# Unnormalized probability, conditioned on prob_add_genes,
# to add genes or instances of transformers with specific attributes
# that have been shown to be beneficial to other individuals within the population.
#
#prob_addbest_genes = 0.5

# Unnormalized probability to prune genes or instances of transformers with specific attributes.
# If a variety of transformers with many attributes exists, the default value is reasonable.
# However, if one has a fixed set of transformers that should not change, or no new transformer attributes
# can be added, then setting this to 0.0 is reasonable to avoid undesired loss of transformations.
#
#prob_prune_genes = 0.5

# Unnormalized probability to change model hyperparameters.
#
#prob_perturb_xgb = 0.25

# Unnormalized probability to prune features that have low variable importance, as opposed to pruning entire instances of genes/transformers when prob_prune_genes is used.
# If prob_prune_genes=0.0 and prob_prune_by_features==0.0 and prob_prune_by_top_features==0.0, then genes/transformers and transformed features are only pruned if they are:
# 1) inconsistent with the genome
# 2) inconsistent with the column data types
# 3) had no signal (for interactions and cv_in_cv for target encoding)
# 4) transformation failed
# E.g. these toml settings are then ignored:
# 1) ngenes_max
# 2) limit_features_by_interpretability
# 3) varimp_threshold_at_interpretability_10
# 4) features_allowed_by_interpretability
# 5) remove_scored_0gain_genes_in_postprocessing_above_interpretability
# 6) nfeatures_max_threshold
# 7) features_cost_per_interp
# So this acts similar to no_drop_features, except no_drop_features also applies to shift and leakage detection, and constant or ID columns are not dropped.
#prob_prune_by_features = 0.25

# Unnormalized probability to prune features that have high variable importance,
# in case they have high gain but negative performance on validation and would otherwise maintain poor validation scores.
# Similar to prob_prune_by_features but for high gain features.
#prob_prune_by_top_features = 0.25

# Maximum number of high gain features to prune for each mutation call, to control the behavior of prob_prune_by_top_features.
#max_num_prune_by_top_features = 1

# Like prob_prune_genes but only for pretransformers, i.e. those transformers in layers other than the last layer that connects to the model.
#prob_prune_pretransformer_genes = 0.5

# Like prob_prune_by_features but only for pretransformers, i.e. those transformers in layers other than the last layer that connects to the model.
#prob_prune_pretransformer_by_features = 0.25

# Like prob_prune_by_top_features but only for pretransformers, i.e. those transformers in layers other than the last layer that connects to the model.
#prob_prune_pretransformer_by_top_features = 0.25

# When doing restart, retrain, or refit, reset these individual parameters to the new toml values.
#override_individual_from_toml_list = "['prob_perturb_xgb', 'prob_add_genes', 'prob_addbest_genes', 'prob_prune_genes', 'prob_prune_by_features', 'prob_prune_by_top_features', 'prob_prune_pretransformer_genes', 'prob_prune_pretransformer_by_features', 'prob_prune_pretransformer_by_top_features']"

# Max. number of trees to use for all tree model predictions. For testing, when predictions don't matter. -1 means disabled.
#fast_approx_max_num_trees_ever = -1

# Max. number of trees to use for fast_approx=True (e.g., for AutoDoc/MLI).
#fast_approx_num_trees = 250

# Whether to speed up fast_approx=True further, by using only one fold out of all cross-validation folds (e.g., for AutoDoc/MLI).
#fast_approx_do_one_fold = true

# Whether to speed up fast_approx=True further, by using only one model out of all ensemble models (e.g., for AutoDoc/MLI).
#fast_approx_do_one_model = false

# Max. number of trees to use for fast_approx_contribs=True (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_num_trees = 50

# Whether to speed up fast_approx_contribs=True further, by using only one fold out of all cross-validation folds (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_do_one_fold = true

# Whether to speed up fast_approx_contribs=True further, by using only one model out of all ensemble models (e.g., for 'Fast Approximation' in GUI when making Shapley predictions, and for AutoDoc/MLI).
#fast_approx_contribs_do_one_model = true

# Approximate interval between logging of progress updates when making predictions. >=0 to enable, -1 to disable.
#prediction_logging_interval = 300

# Whether to use exploit-explore logic like DAI 1.8.x.  False will explore more.
#use_187_prob_logic = true

# Whether to enable cross-validated OneHotEncoding+LinearModel transformer
#enable_ohe_linear = false

#max_absolute_feature_expansion = 1000

#booster_for_fs_permute = "auto"

#model_class_name_for_fs_permute = "auto"

#switch_from_tree_to_lgbm_if_can = true

#model_class_name_for_shift = "auto"

#model_class_name_for_leakage = "auto"

#default_booster = "lightgbm"

#default_model_class_name = "LightGBMModel"

#num_as_cat_false_if_ohe = true

#no_ohe_try = true

# Compute empirical prediction intervals (based on holdout predictions).
#prediction_intervals = true

# Confidence level for prediction intervals.
#prediction_intervals_alpha = 0.9

# DISCLAIMER: THIS IS AN EXPERIMENTAL FEATURE, USE AT YOUR OWN RISK.
# The new methods simulate error propagation over future predictions across horizons; this is not intended to be a realistic model prediction pattern, as the model is not trained in an AR (auto-regressive) fashion.
# error_propagation: Assume a normal distribution with std sigma, then set bands as y_hat +/- z * sigma, inflating with horizon. Good when residuals are roughly Gaussian, but relies on a correct inflation model;
# bootstrap_simulation: Resample historical residuals (with replacement) to simulate future errors, take simulation percentiles per group/horizon and add to y_hat. Captures skew/heavy tails without parametric assumptions, but is computationally expensive and performance can drift if the error distribution shifts;
# monte_carlo_simulation: Fit a parametric error model (Gaussian), simulate many error draws, take percentiles to add to y_hat. Smoother and more stable than bootstrap with limited data, but risks misspecification if the chosen distribution is wrong.
#prediction_intervals_simulation_method = ""

# DISCLAIMER: THIS IS AN EXPERIMENTAL FEATURE, USE AT YOUR OWN RISK.
# Sample size to simulate future errors, used by ``bootstrap_simulation`` and ``monte_carlo_simulation``.
#prediction_intervals_sampling_errors = 1000

# DISCLAIMER: THIS IS AN EXPERIMENTAL FEATURE, USE AT YOUR OWN RISK.
# A heuristic approach that greatly reduces the memory cost of the expensive join and group operations while remaining horizon-aware.
# Note: If buckets is 1, then only the fixed median of the entire horizon will be utilized, thus no effect of horizon at all.
# If buckets is <= 0, then all horizons will be considered.
# It is highly recommended to tune this parameter for the best tradeoff, as the experiment may become unstable and subject to failure due to memory/CPU exhaustion, depending on the size of the training data.
#prediction_intervals_bin_horizon = 0

# DISCLAIMER: THIS IS AN EXPERIMENTAL FEATURE, USE AT YOUR OWN RISK.
# Controls the spread of error accumulation over the horizon:
# If == 1.0: intervals use the raw residual standard deviation, and growth is based on the strong assumption of independent, constant residuals;
# If > 1.0: widens intervals (more conservative). Useful if your residuals underestimate true predictive uncertainty;
# If < 1.0 (default 0.9): narrows intervals (sharper). Useful if the raw variance + growth is too pessimistic for your data.
#prediction_interval_monte_carlo_calibration_ratio = 0.9
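
# Example (illustrative, experimental feature): bootstrap-based interval simulation
# with horizons binned into 4 buckets to limit memory:
#   prediction_intervals_simulation_method = "bootstrap_simulation"
#   prediction_intervals_sampling_errors = 1000
#   prediction_intervals_bin_horizon = 4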

# Appends one extra output column with the predicted target class (after the per-class probabilities).
# Uses argmax for multiclass, and the threshold defined by the optimal scorer controlled by the
# 'threshold_scorer' expert setting for binary problems. This setting controls the training, validation and test
# set predictions (if applicable) that are created by the experiment. MOJO, scoring pipeline and client APIs
# control this behavior via their own version of this parameter.
#pred_labels = true

# Class count above which the TextLin Transformer is not used.
#textlin_num_classes_switch = 5

#text_gene_dim_reduction_choices = "[50]"

#text_gene_max_ngram = "[1, 2, 3]"

# Max size (in tokens) of the vocabulary created during fitting of Tfidf/Count/Comatrix based text
# transformers (not CNN/BERT). If multiple values are provided, the first one is used for initial models, and the remaining
# values are used during parameter tuning and feature evolution. Values smaller than 10000 are recommended for speed,
# and a reasonable set of choices includes: 100, 1000, 5000, 10000, 50000, 100000, 500000.
# Note: If force_enable_text_comatrix_preprocess is set to True, then only a selective set of top vocabularies will be used due to computational and memory complexity.
#text_transformers_max_vocabulary_size = "[1000, 5000]"

# Enables caching of BERT embeddings by temporarily saving the embedding vectors to the experiment directory. Set to -1 to cache all text, set to 0 to disable caching.
#number_of_texts_to_cache_in_bert_transformer = -1

# Modify early stopping behavior for tree-based models (LightGBM, XGBoostGBM, CatBoost) such
# that training score (on training data, not holdout) and validation score differ no more than this absolute value
# (i.e., stop adding trees once abs(train_score - valid_score) > max_abs_score_delta_train_valid).
# Keep in mind that the meaning of this value depends on the chosen scorer and the dataset (i.e., 0.01 for
# LogLoss is different than 0.01 for MSE). Experimental option, only for expert use to keep model complexity low.
# To disable, set to 0.0
#max_abs_score_delta_train_valid = 0.0

# Modify early stopping behavior for tree-based models (LightGBM, XGBoostGBM, CatBoost) such
# that training score (on training data, not holdout) and validation score differ no more than this relative value
# (i.e., stop adding trees once abs(train_score - valid_score) > max_rel_score_delta_train_valid * abs(train_score)).
# Keep in mind that the meaning of this value depends on the chosen scorer and the dataset (i.e., 0.01 for
# LogLoss is different than 0.01 for MSE). Experimental option, only for expert use to keep model complexity low.
# To disable, set to 0.0
#max_rel_score_delta_train_valid = 0.0
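
# Example (illustrative, expert use): stop adding trees once the validation score
# trails the training score by more than 5% of the training score:
#   max_rel_score_delta_train_valid = 0.05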

# Whether to search for the optimal lambda for a given alpha for XGBoost GLM.
# If 'auto', disabled if training data has more rows * cols than final_pipeline_data_size or for multiclass experiments.
# Always disabled for ensemble_level = 0.
# Not always a good approach; can be slow for little payoff compared to grid search.
#
#glm_lambda_search = "auto"

# If XGBoost GLM lambda search is enabled, whether to do the search by the eval metric (True)
# or using the actual DAI scorer (False).
#glm_lambda_search_by_eval_metric = false

#gbm_early_stopping_rounds_min = 1

#gbm_early_stopping_rounds_max = 10000000000

# Whether to enable an early stopping threshold for LightGBM, varying by accuracy.
# Stops training once the validation score changes by less than the threshold.
# This leads to fewer trees, usually avoiding wasteful trees, but may lower accuracy.
# However, it may also improve generalization by avoiding fine-tuning to the validation set.
# 0 means a value of 0 is used, i.e. disabled
# > 0 means non-automatic mode using that *relative* value, scaled by first tree results of the metric for any metric.
# -1 means always enable, but the threshold itself is automatic (the lower the accuracy, the larger the threshold).
# -2 means fully automatic mode, i.e. disabled unless reduce_mojo_size is true.  If true, the lower the accuracy, the larger the threshold.
# NOTE: The automatic threshold is set so the relative value of the metric's min_delta in LightGBM's callback for early stopping is:
# if accuracy <= 1:
# early_stopping_threshold = 1e-1
# elif accuracy <= 4:
# early_stopping_threshold = 1e-2
# elif accuracy <= 7:
# early_stopping_threshold = 1e-3
# elif accuracy <= 9:
# early_stopping_threshold = 1e-4
# else:
# early_stopping_threshold = 0
#
#enable_early_stopping_threshold = -2.0
3101
#glm_optimal_refit = true

# Whether to force enable co-occurrence text preprocessing; only applicable to TextTransformer, default is False.
# Note: This setting will override the choice made from the Gene. Currently MOJO does not support the co-occurrence matrix operation.
#force_enable_text_comatrix_preprocess = false

# Window size of the neighboring vocabulary being counted during fitting of Co-Occurrence based text
# transformers (not CNN/BERT). If multiple values are provided, will use the first one for initial models, and use remaining
# values during parameter tuning and feature evolution. Values smaller than 5 are recommended for speed and memory;
# defaults are 3, 2, 4.
#text_gene_comatrix_window_size_choices = "[3, 2, 4]"

# Max. number of top variable importances to save per iteration (GUI can only display a max. of 14)
#max_varimp_to_save = 100

# Max. number of top variable importances to show in logs during feature evolution
#max_num_varimp_to_log = 10

# Max. number of top variable importance shifts to show in logs and GUI after final model is built
#max_num_varimp_shift_to_log = 10

# Skipping just avoids the failed transformer.
# Sometimes Python multiprocessing swallows exceptions,
# so skipping and logging exceptions is also a more reliable way to handle them.
# A recipe can raise h2oaicore.systemutils.IgnoreError to ignore an error and avoid logging it.
# Features that fail are pruned from the individual.
# If that leaves no features in the individual, then backend tuning, feature/model tuning, final model building, etc.
# will still fail, since DAI should not continue if all features are from a failed state.
#
#skip_transformer_failures = true

# Skipping just avoids the failed model.  Failures are logged depending upon detailed_skip_failure_messages_level.
# A recipe can raise h2oaicore.systemutils.IgnoreError to ignore an error and avoid logging it.
#
#skip_model_failures = true

# Skipping just avoids the failed scorer if among many scorers.  Failures are logged depending upon detailed_skip_failure_messages_level.
# A recipe can raise h2oaicore.systemutils.IgnoreError to ignore an error and avoid logging it.
# Default is True to avoid failing in, e.g., final model building due to a single scorer.
#
#skip_scorer_failures = true

# Skipping avoids the failed recipe.  Failures are logged depending upon detailed_skip_failure_messages_level.
# Default is False because runtime data recipes run once at the start of an experiment and are expected to work by default.
#
#skip_data_recipe_failures = false

# Whether final model transformer failures can be skipped for layers beyond the first in a multi-layer pipeline.
#can_skip_final_upper_layer_failures = true

# Verbosity level for logging failure messages of failed and then skipped transformers or models.
# Full failures always go to disk as *.stack files,
# which upon completion of the experiment go into the details folder within the experiment log zip file.
#
#detailed_skip_failure_messages_level = 1

# Whether to not just log errors of recipes (models and transformers) but also show a high-level notification in the GUI.
#
#notify_failures = true

# Instructions for 'Add to config.toml via toml string' in GUI expert page
# Self-referential toml parameter, for setting any other toml parameters as a string of tomls separated by newlines (spaces around
# them are ok).
# Useful when a toml parameter is not in expert mode but per-experiment control is wanted.
# Setting this will override all other choices.
# In the expert page, each time expert options are saved, the new state is set without memory of any prior settings.
# The entered item is a fully compliant toml string that would be processed directly by toml.load().
# One should include 2 double quotes around the entire setting, or double quotes need to be escaped.
# One enters into the expert page text as follows:
# e.g. ``enable_glm="off"
# enable_xgboost_gbm="off"
# enable_lightgbm="on"``
# e.g. ``""enable_glm="off"
# enable_xgboost_gbm="off"
# enable_lightgbm="off"""``
# e.g. ``fixed_num_individuals=4``
# e.g. ``params_lightgbm="{'objective':'poisson'}"``
# e.g. ``""params_lightgbm="{'objective':'poisson'}"""``
# e.g. ``max_cores=10
# data_precision="float32"
# max_rows_feature_evolution=50000000000
# ensemble_accuracy_switch=11
# feature_engineering_effort=1
# target_transformer="identity"
# tournament_feature_style_accuracy_switch=5``
# e.g. ``""max_cores=10
# data_precision="float32"
# max_rows_feature_evolution=50000000000
# ensemble_accuracy_switch=11
# feature_engineering_effort=1
# target_transformer="identity"
# tournament_feature_style_accuracy_switch=5""``
# If you see "toml.TomlDecodeError", then ensure the toml is set correctly.
# When set in the expert page of an experiment, these changes only affect experiments and not the server.
# Usually this should be kept as an empty string in this toml file.
#
#config_overrides = ""

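# For instance, a server-level override could be written directly in this file as a
# single toml string (illustrative sketch only; parameter names taken from the
# examples above, inner double quotes escaped because the value is itself a string):
#   config_overrides = "enable_glm=\"off\"\nenable_xgboost_gbm=\"off\"\nenable_lightgbm=\"on\""
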
# Whether to dump every scored individual's variable importance to csv/tabulated/json files. Produces files like:
# individual_scored_id%d.iter%d.<hash>.features.txt for transformed features.
# individual_scored_id%d.iter%d.<hash>.features_orig.txt for original features.
# individual_scored_id%d.iter%d.<hash>.coefs.txt for absolute importance of transformed features.
# There are txt, tab.txt, and json formats for some files, and the "best_" prefix means it is the best individual for that iteration.
# The hash in the name matches the hash in the files produced by dump_modelparams_every_scored_indiv=true that can be used to track mutation history.
#dump_varimp_every_scored_indiv = false

# Whether to dump every scored individual's model parameters to csv/tabulated/json file.
# Produces files like: individual_scored.params.[txt, csv, json].
# Each individual has a hash that matches the hash in the filenames produced if dump_varimp_every_scored_indiv=true,
# and the "unchanging hash" is the first parent hash (None if that individual is the first parent itself).
# These hashes can be used to track the history of the mutations.
#
#dump_modelparams_every_scored_indiv = true

# Number of features to show in the model dump of every scored individual
#dump_modelparams_every_scored_indiv_feature_count = 3

# Number of past mutations to show in the model dump of every scored individual
#dump_modelparams_every_scored_indiv_mutation_count = 3

# Whether to append to a single file (false) or write separate files like individual_scored_id%d.iter%d*params* (true) for model params of every scored individual
#dump_modelparams_separate_files = false

# Whether to dump every scored fold's timing and feature info to a *timings*.txt file
#
#dump_trans_timings = false

# Whether to delete preview timings if transformer timings were written
#delete_preview_trans_timings = true

# Attempt to create at most this many exemplars (actual rows behaving like cluster centroids) for the Aggregator
# algorithm in unsupervised experiment mode.
#
#unsupervised_aggregator_n_exemplars = 100

# Attempt to create at least this many clusters for the clustering algorithm in unsupervised experiment mode.
#
#unsupervised_clustering_min_clusters = 2

# Attempt to create no more than this many clusters for the clustering algorithm in unsupervised experiment mode.
#
#unsupervised_clustering_max_clusters = 10

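# Illustrative sketch (hypothetical values): bounding the cluster count for an
# unsupervised clustering experiment using the two settings above:
#   unsupervised_clustering_min_clusters = 3
#   unsupervised_clustering_max_clusters = 8
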
#use_random_text_file = false

#runtime_estimation_train_frame = ""

#enable_bad_scorer = false

#debug_col_dict_prefix = ""

#return_early_debug_col_dict_prefix = false

#return_early_debug_preview = false

#wizard_random_attack = false

#wizard_enable_back_button = true

#wizard_deployment = ""

#wizard_repro_level = -1

#wizard_sample_size = 100000

#wizard_model = "rf"

# Maximum number of columns to start an experiment. This threshold exists to constrain the complexity and the length of Driverless AI's processes.
#wizard_max_cols = 100000

# How many seconds to allow preview to take for Wizard.
#wizard_timeout_preview = 30

# How many seconds to allow leakage detection to take for Wizard.
#wizard_timeout_leakage = 60

# How many seconds to allow duplicate row detection to take for Wizard.
#wizard_timeout_dups = 30

# How many seconds to allow variable importance calculation to take for Wizard.
#wizard_timeout_varimp = 30

# How many seconds to allow dataframe schema calculation to take for Wizard.
#wizard_timeout_schema = 60

#max_reorder_experiments = 100

# Default upper bound on the number of experiments owned per user. A negative value means infinite quota.
#default_experiments_quota_per_user = -1

# Dictionary of key:value experiment quota values for users; overrides the above default for the specified set of users.
# e.g.: ``override_experiments_quota_for_users="{'user1':10,'user2':20,'user3':30}"`` to set user1 with a 10-experiment quota,
# user2 with a 20-experiment quota and user3 with a 30-experiment quota.
#
#override_experiments_quota_for_users = "{}"

# authentication_method
# unvalidated : Accepts user id and password. Does not validate password.
# none: Does not ask for user id or password. Authenticated as admin.
# openid: Uses OpenID Connect provider for authentication. See additional OpenID settings below.
# oidc: Renewed OpenID Connect authentication using authorization code flow. See additional OpenID settings below.
# pam: Accepts user id and password. Validates user with operating system.
# ldap: Accepts user id and password. Validates against an LDAP server. Look
# for additional settings under LDAP settings.
# local: Accepts a user id and password. Validated against an htpasswd file provided in local_htpasswd_file.
# ibm_spectrum_conductor: Authenticate with IBM conductor auth api.
# tls_certificate: Authenticate with Driverless by providing a TLS certificate.
# jwt: Authenticate by JWT obtained from the request metadata.
#
#authentication_method = "unvalidated"

# Additional authentication methods that will be enabled for the clients. Login forms for each method will be available on the ``/login/<authentication_method>`` path. Comma-separated list.
#additional_authentication_methods = "[]"

# The default amount of time in hours before a user is signed out and must log in again. This setting is used when a default timeout value is not provided by ``authentication_method``.
#authentication_default_timeout_hours = 72.0

# When enabled, the user's session is automatically prolonged, even when they are not interacting directly with the application.
#authentication_gui_polling_prolongs_session = false

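# Illustrative sketch (hypothetical path): switching the server to validate logins
# against a local htpasswd file combines authentication_method with the
# local_htpasswd_file setting documented further below:
#   authentication_method = "local"
#   local_htpasswd_file = "/etc/dai/htpasswd"
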
# OpenID Connect Settings:
# Refer to the OpenID Connect Basic Client Implementation Guide for details on how the OpenID authentication flow works
# https://openid.net/specs/openid-connect-basic-1_0.html
# base server URI to the OpenID Provider server (ex: https://oidp.ourdomain.com)
#auth_openid_provider_base_uri = ""

# URI to pull OpenID config data from (you can extract most of the required OpenID config from this url)
# usually located at: /auth/realms/master/.well-known/openid-configuration
#auth_openid_configuration_uri = ""

# URI to start authentication flow
#auth_openid_auth_uri = ""

# URI to make the request for a token after the callback from the OpenID server was received
#auth_openid_token_uri = ""

# URI to get user information once access_token has been acquired (ex: list of groups user belongs to will be provided here)
#auth_openid_userinfo_uri = ""

# URI to log out user
#auth_openid_logout_uri = ""

# callback URI that the OpenID provider will use to send the 'authentication_code'
# This is the OpenID callback endpoint in Driverless AI. Most OpenID providers need this to be HTTPS.
# (ex. https://driverless.ourdomain.com/openid/callback)
#auth_openid_redirect_uri = ""

# OAuth2 grant type (usually authorization_code for OpenID, can be access_token also)
#auth_openid_grant_type = ""

# OAuth2 response type (usually code)
#auth_openid_response_type = ""

# Client ID registered with OpenID provider
#auth_openid_client_id = ""

# Client secret provided by OpenID provider when registering Client ID
#auth_openid_client_secret = ""

# Scope of info (usually openid). Can be a list of more than one, space delimited; possible
# values listed at https://openid.net/specs/openid-connect-basic-1_0.html#Scopes
#auth_openid_scope = ""

# What key in user_info JSON should we check to authorize user
#auth_openid_userinfo_auth_key = ""

# What value should the key have in user_info JSON in order to authorize user
#auth_openid_userinfo_auth_value = ""

# Key that specifies username in user_info JSON (we will use the value of this key as username in Driverless AI)
#auth_openid_userinfo_username_key = ""

# Quote method from urllib.parse used to encode payload dict in Authentication Request
#auth_openid_urlencode_quote_via = "quote"

# Key in Token Response JSON that holds the value for access token expiry
#auth_openid_access_token_expiry_key = "expires_in"

# Key in Token Response JSON that holds the value for refresh token expiry
#auth_openid_refresh_token_expiry_key = "refresh_expires_in"

# Expiration time in seconds for access token
#auth_openid_token_expiration_secs = 3600

# Enables advanced matching for OpenID Connect authentication.
# When enabled, an ObjectPath (<http://objectpath.org/>) expression is used to
# evaluate the user identity.
#
#auth_openid_use_objectpath_match = false

# ObjectPath (<http://objectpath.org/>) expression that will be used
# to evaluate whether a user is allowed to log in into Driverless.
# Any expression that evaluates to True means the user is allowed to log in.
# Examples:
# Simple claim equality: `$.our_claim is "our_value"`
# List of claims contains required value: `"expected_role" in @.roles`
#
#auth_openid_use_objectpath_expression = ""

# Sets the token introspection URL for OpenID Connect authentication. (needs to be an absolute URL) Needs to be set when API token introspection is enabled. It is used to get the token TTL when set and the IDP does not provide an expires_in field in the token endpoint response.
#auth_openid_token_introspection_url = ""

# Sets a URL where the user is redirected after being logged out, when set. (needs to be an absolute URL)
#auth_openid_end_session_endpoint_url = ""

# If set, the server will use these scopes when it asks for the token on login. (space separated list)
#auth_openid_default_scopes = ""

# Specifies the source from which user identity and username are retrieved.
# Currently supported sources are:
# user_info: Retrieves username from UserInfo endpoint response
# id_token: Retrieves username from ID Token using
#     `auth_openid_id_token_username_key` claim
#
#auth_oidc_identity_source = "userinfo"

# Claim of preferred username in the message holding the user identity, which will be used as the username in the application. The user identity source is specified by `auth_oidc_identity_source`, and can be e.g. the UserInfo endpoint response or the ID Token.
#auth_oidc_username_claim = ""

# OpenID-Connect Issuer URL, which is used for automatic provider info discovery. E.g. https://login.microsoftonline.com/<client-id>/v2.0
#auth_oidc_issuer_url = ""

# OpenID-Connect Token endpoint URL. Setting this is optional and if it's empty, it'll be automatically set by provider info discovery.
#auth_oidc_token_endpoint_url = ""

# OpenID-Connect Token introspection endpoint URL. Setting this is optional and if it's empty, it'll be automatically set by provider info discovery.
#auth_oidc_introspection_endpoint_url = ""

# Absolute URL to which the user is redirected after they log out from the application, in case OIDC authentication is used. Usually this is the absolute URL of the DriverlessAI Login page, e.g. https://1.2.3.4:12345/login
#auth_oidc_post_logout_url = ""

# Key-value mapping of extra HTTP query parameters in an OIDC authorization request.
#auth_oidc_authorization_query_params = "{}"

# When set to True, will skip cert verification.
#auth_oidc_skip_cert_verification = false

# When set, will use this value as the location for the CA cert; this takes precedence over auth_oidc_skip_cert_verification.
#auth_oidc_ca_cert_location = ""

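# Illustrative sketch (hypothetical values): OIDC authentication relying on
# automatic provider info discovery; only the issuer, client credentials, and
# username claim are set explicitly:
#   authentication_method = "oidc"
#   auth_oidc_issuer_url = "https://login.example.com/realms/master"
#   auth_openid_client_id = "driverless-ai"
#   auth_openid_client_secret = "change-me"
#   auth_oidc_username_claim = "preferred_username"
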
# Enables option to use Bearer token for authentication with the RPC endpoint.
#api_token_introspection_enabled = false

# Sets the method that is used to introspect the bearer token.
# OAUTH2_TOKEN_INTROSPECTION: Uses the OAuth 2.0 Token Introspection (RFC 7662)
# endpoint to introspect the bearer token.
# This is useful when 'openid' is used as the authentication method.
# Uses 'auth_openid_client_id' and 'auth_openid_client_secret' to
# authenticate with the authorization server and
# `auth_openid_token_introspection_url` to perform the introspection.
#
#api_token_introspection_method = "OAUTH2_TOKEN_INTROSPECTION"

# Sets the minimum set of scopes that the access token needs to have
# in order to pass the introspection. Space separated.
# This is passed to the introspection endpoint and also verified after the response
# for servers that don't enforce scopes.
# Keeping this empty turns the verification off.
#
#api_token_oauth2_scopes = ""

# Which field of the response returned by the token introspection endpoint should be used as a username.
#api_token_oauth2_username_field_name = "username"

# Enables the option to initiate a PKCE flow from the UI in order to obtain tokens usable with Driverless clients
#oauth2_client_tokens_enabled = false

# Sets up the client id that will be used in the OAuth 2.0 Authorization Code Flow to obtain the tokens. The client needs to be public and be able to use PKCE with the S256 code challenge.
#oauth2_client_tokens_client_id = ""

# Sets up the absolute url to the authorize endpoint.
#oauth2_client_tokens_authorize_url = ""

# Sets up the absolute url to the token endpoint.
#oauth2_client_tokens_token_url = ""

# Sets up the absolute url to the token introspection endpoint. It's displayed in the UI so that clients can inspect the token expiration.
#oauth2_client_tokens_introspection_url = ""

# Sets up the absolute redirect url where Driverless handles the redirect part of the Authorization Code Flow. This is <Driverless base url>/oauth2/client_token
#oauth2_client_tokens_redirect_url = ""

# Sets up the scope for the requested tokens. Space separated list.
#oauth2_client_tokens_scope = "openid profile ai.h2o.storage"

# LDAP server domain or IP
#ldap_server = ""

# LDAP server port
#ldap_port = ""

# Complete DN of the LDAP bind user
#ldap_bind_dn = ""

# Password for the LDAP bind
#ldap_bind_password = ""

# Provide cert file location
#ldap_tls_file = ""

# Set to true to use SSL, false otherwise
#ldap_use_ssl = false

# The location in the DIT where the search will start
#ldap_search_base = ""

# A string that describes what you are searching for. You can use Python substitution to have this constructed dynamically. (only {{DAI_USERNAME}} is supported)
#ldap_search_filter = ""

# LDAP attributes to return from search
#ldap_search_attributes = ""

# Specify key to find user name
#ldap_user_name_attribute = ""

# When using this recipe, needs to be set to "1"
#ldap_recipe = "0"

# Deprecated, do not use
#ldap_user_prefix = ""

# Deprecated, use ldap_bind_dn
#ldap_search_user_id = ""

# Deprecated, use ldap_bind_password
#ldap_search_password = ""

# Deprecated, use ldap_search_base instead
#ldap_ou_dn = ""

# Deprecated, use ldap_base_dn
#ldap_dc = ""

# Deprecated, use ldap_search_base
#ldap_base_dn = ""

# Deprecated, use ldap_search_filter
#ldap_base_filter = ""

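# Illustrative sketch (hypothetical values) combining the non-deprecated LDAP
# settings above into a working-style configuration:
#   authentication_method = "ldap"
#   ldap_server = "ldap.example.com"
#   ldap_port = "389"
#   ldap_bind_dn = "cn=admin,dc=example,dc=com"
#   ldap_bind_password = "change-me"
#   ldap_search_base = "ou=people,dc=example,dc=com"
#   ldap_search_filter = "(&(objectClass=person)(uid={{DAI_USERNAME}}))"
#   ldap_user_name_attribute = "uid"
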
# Path to the CRL file that will be used to verify the client certificate.
#auth_tls_crl_file = ""

# What field of the subject will be used as the source for the username or other values used for further validation.
#auth_tls_subject_field = "CN"

# Regular expression that will be used to parse the subject field to obtain the username or other values used for further validation.
#auth_tls_field_parse_regexp = "(?P<username>.*)"

# Sets up the way the user identity is obtained
# REGEXP_ONLY: Will use 'auth_tls_subject_field' and 'auth_tls_field_parse_regexp'
#     to extract the username from the client certificate.
# LDAP_LOOKUP: Will use an LDAP server to look up the username.
#     'auth_tls_ldap_server', 'auth_tls_ldap_port',
#     'auth_tls_ldap_use_ssl', 'auth_tls_ldap_tls_file',
#     'auth_tls_ldap_bind_dn', 'auth_tls_ldap_bind_password'
#     options are used to establish the connection with the LDAP server.
#     'auth_tls_subject_field' and 'auth_tls_field_parse_regexp'
#     options are used to parse the certificate.
#     'auth_tls_ldap_search_base', 'auth_tls_ldap_search_filter', and
#     'auth_tls_ldap_username_attribute' options are used to do the
#     lookup.
#
#auth_tls_user_lookup = "REGEXP_ONLY"

# Hostname or IP address of the LDAP server used with LDAP_LOOKUP with the 'tls_certificate' authentication method.
#auth_tls_ldap_server = ""

# Port of the LDAP server used with LDAP_LOOKUP with the 'tls_certificate' authentication method.
#auth_tls_ldap_port = ""

# Whether to use SSL when connecting to the LDAP server used with LDAP_LOOKUP with the 'tls_certificate' authentication method.
#auth_tls_ldap_use_ssl = false

# Path to the SSL certificate used with LDAP_LOOKUP with the 'tls_certificate' authentication method.
#auth_tls_ldap_tls_file = ""

# Complete DN of the LDAP bind user used with LDAP_LOOKUP with the 'tls_certificate' authentication method.
#auth_tls_ldap_bind_dn = ""

# Password for the LDAP bind used with LDAP_LOOKUP with the 'tls_certificate' authentication method.
#auth_tls_ldap_bind_password = ""

# Location in the DIT where the search will start, used with LDAP_LOOKUP with the 'tls_certificate' authentication method.
#auth_tls_ldap_search_base = ""

# LDAP filter that will be used to look up the user
# with LDAP_LOOKUP with the 'tls_certificate' authentication method.
# Can be built dynamically using the named capturing groups from
# 'auth_tls_field_parse_regexp' for substitution.
# Example:
# ``auth_tls_field_parse_regexp="\w+ (?P<id>\d+)"``
# ``auth_tls_ldap_search_filter="(&(objectClass=person)(id={{id}}))"``
#
#auth_tls_ldap_search_filter = ""

# Specifies which LDAP record attribute will be used as the username with LDAP_LOOKUP with the 'tls_certificate' authentication method.
#auth_tls_ldap_username_attribute = ""

# Sets an optional additional lookup filter that is performed after the
# user is found. This can be used, for example, to check whether the user is a member of
# a particular group.
# The filter can be built dynamically from the attributes returned by the lookup.
# Authorization fails when the search does not return any entry. If one or more
# entries are returned, authorization succeeds.
# Example:
# ``auth_tls_field_parse_regexp="\w+ (?P<id>\d+)"``
# ``ldap_search_filter="(&(objectClass=person)(id={{id}}))"``
# ``auth_tls_ldap_authorization_lookup_filter="(&(objectClass=group)(member=uid={{uid}},dc=example,dc=com))"``
# If this option is empty, no additional lookup is done and just a successful user
# lookup is enough to authorize the user.
#
#auth_tls_ldap_authorization_lookup_filter = ""

# Base DN where to start the Authorization lookup. Used when 'auth_tls_ldap_authorization_lookup_filter' is set.
#auth_tls_ldap_authorization_search_base = ""

# Sets up the way the token will be picked from the request
# COOKIE: Will use 'auth_jwt_cookie_name' cookie content parsed with
#     'auth_jwt_source_parse_regexp' to obtain the token content.
# HEADER: Will use 'auth_jwt_header_name' header value parsed with
#     'auth_jwt_source_parse_regexp' to obtain the token content.
#
#auth_jwt_token_source = "HEADER"

# Specifies the name of the cookie that will be used to obtain the JWT.
#auth_jwt_cookie_name = ""

# Specifies the name of the HTTP header that will be used to obtain the JWT
#auth_jwt_header_name = ""

# Regular expression that will be used to parse the JWT source. The expression is in Python syntax and must contain a named group 'token' capturing the token value.
#auth_jwt_source_parse_regexp = "(?P<token>.*)"

# Which JWT claim will be used as the username for Driverless.
#auth_jwt_username_claim_name = "sub"

# Whether to verify the signature of the JWT.
#auth_jwt_verify = true

# Signature algorithm that will be used to verify the signature according to RFC 7518.
#auth_jwt_algorithm = "HS256"

# Specifies the secret content for HMAC or the public key for RSA and DSA signature algorithms.
#auth_jwt_secret = ""

# Number of seconds during which a JWT can still be accepted after it has expired
#auth_jwt_exp_leeway_seconds = 0

# List of accepted 'aud' claims for the JWTs. When empty, any audience is accepted
#auth_jwt_required_audience = "[]"

# Value of the 'iss' claim that JWTs need to have in order to be accepted.
#auth_jwt_required_issuer = ""

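# Illustrative sketch (hypothetical values): accepting JWTs passed in an
# "Authorization: Bearer <token>" header and verified with an RSA public key
# supplied via auth_jwt_secret:
#   authentication_method = "jwt"
#   auth_jwt_token_source = "HEADER"
#   auth_jwt_header_name = "Authorization"
#   auth_jwt_source_parse_regexp = "Bearer (?P<token>.*)"
#   auth_jwt_algorithm = "RS256"
#   auth_jwt_required_issuer = "https://issuer.example.com"
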
# Local password file
# Generating a htpasswd file: see syntax below
# ``htpasswd -B '<location_to_place_htpasswd_file>' '<username>'``
# note: -B forces use of bcrypt, a secure hashing method
#local_htpasswd_file = ""

# Specify the name of the report.
#autodoc_report_name = "report"

# AutoDoc template path. Provide the full path to your custom AutoDoc template or leave as 'default' to generate the standard AutoDoc.
#autodoc_template = ""

# Location of the additional AutoDoc templates
#autodoc_additional_template_folder = ""

# Specify the AutoDoc output type.
#autodoc_output_type = "docx"

# Specify the type of sub-templates to use.
# Options are 'auto', 'docx' or 'md'.
#autodoc_subtemplate_type = "auto"

# Specify the maximum number of classes in the confusion
# matrix.
#autodoc_max_cm_size = 10

# Specify the number of top features to display in
# the document. Setting it to -1 disables this restriction.
#autodoc_num_features = 50

# Specify the minimum relative importance in order
# for a feature to be displayed. autodoc_min_relative_importance
# must be a float >= 0 and <= 1.
#autodoc_min_relative_importance = 0.003

# Whether to compute permutation-based feature
# importance.
#autodoc_include_permutation_feature_importance = false

# Number of permutations to make per feature when computing
# feature importance.
#autodoc_feature_importance_num_perm = 1

# Name of the scorer to be used to calculate feature
# importance. Leave blank to use the experiment's default scorer.
#autodoc_feature_importance_scorer = ""

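# Illustrative sketch (hypothetical values): a richer AutoDoc that includes
# permutation-based feature importance, using the settings above:
#   autodoc_report_name = "experiment_report"
#   autodoc_num_features = 20
#   autodoc_include_permutation_feature_importance = true
#   autodoc_feature_importance_num_perm = 5
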
# The autodoc_pd_max_rows configuration controls the
# number of rows shown for the partial dependence plots (PDP) and Shapley
# values summary plot in the AutoDoc. Random sampling is used for
# datasets with more than the autodoc_pd_max_rows limit.
#autodoc_pd_max_rows = 10000

# Maximum number of seconds Partial Dependency computation
# can take when generating the report. Set to -1 for no time limit.
#autodoc_pd_max_runtime = 45

# Whether to enable fast approximation for predictions that are needed for the
# generation of partial dependence plots. Can help when creating many PDP
# plots in a short time. The amount of approximation is controlled by the fast_approx_num_trees,
# fast_approx_do_one_fold, fast_approx_do_one_model experiment expert settings.
#
#autodoc_pd_fast_approx = true

# Max number of unique values for integer/real columns to be treated as categoricals (test applies to first statistical_threshold_data_size_small rows only)
# Similar to max_int_as_cat_uniques used for the experiment, but here used to control PDP making.
#autodoc_pd_max_int_as_cat_uniques = 50

# Number of standard deviations outside of the range of
# a column to include in partial dependence plots. This shows how the
# model will react to data it has not seen before.
#autodoc_out_of_range = 3

# Specify the number of rows to include in PDP and ICE plots
# if individual rows are not specified.
#autodoc_num_rows = 0

# Whether to include the population stability index if the
# experiment is binary classification/regression.
#autodoc_population_stability_index = false

# Number of quantiles to use for the population stability index.
#autodoc_population_stability_index_n_quantiles = 10

# Whether to include the population stability index for features (not just predictions) if the
# experiment is binary classification/regression. This calculates PSI across
# train/validation/test for each feature to detect data drift.
#autodoc_feature_population_stability_index = false

# Whether to include prediction statistics information if the
# experiment is binary classification/regression.
#autodoc_prediction_stats = false

# Number of quantiles to use for prediction statistics.
#autodoc_prediction_stats_n_quantiles = 20

# Whether to include response rates information if the
# experiment is binary classification.
#autodoc_response_rate = false

# Number of quantiles to use for response rates information.
#autodoc_response_rate_n_quantiles = 10

# Whether to show the Gini Plot.
#autodoc_gini_plot = false

# Show Shapley values results in the AutoDoc.
#autodoc_enable_shapley_values = true

# The number of features in a KLIME global GLM coefficients
# table. Must be an integer greater than 0 or -1. To
# show all features, set to -1.
#autodoc_global_klime_num_features = 10

# Set the number of KLIME global GLM coefficients tables. Set
# to 1 to show one table with coefficients sorted by absolute
# value. Set to 2 to show two tables: one with the top positive
# coefficients and one with the top negative coefficients.
#autodoc_global_klime_num_tables = 1

# Number of features to be shown in the data summary. Value
# must be an integer. Values lower than 1 (e.g. 0 or -1) indicate that
# all columns should be shown.
#autodoc_data_summary_col_num = -1

# List of percentile values to include in the numeric data summary table.
# Available percentiles are: 1, 25, 50, 75, 99. Default is [].
# Example: [1, 25, 50, 75, 99] to show all computed percentiles.
#autodoc_data_numeric_percentiles = "[1, 99]"
3787
# Whether to show all config settings. If False, only
# the changed settings (config overrides) are listed; otherwise all
# settings are listed.
#autodoc_list_all_config_settings = false

# Line length of the Keras model architecture summary. Must
# be an integer greater than 0 or -1. To use the default line length,
# set the value to -1.
#autodoc_keras_summary_line_length = -1

# Maximum number of lines shown for advanced transformer
# architecture in the Feature section. Note that the full architecture
# can be found in the Appendix.
#autodoc_transformer_architecture_max_lines = 30

# Show full NLP/Image transformer architecture in
# the Appendix.
#autodoc_full_architecture_in_appendix = false

# Specify whether to show the full GLM coefficient
# table(s) in the appendix. coef_table_appendix_results_table must be
# a boolean: True to show tables in the appendix, False to not show them.
#autodoc_coef_table_appendix_results_table = false

# Set the number of models for which a GLM coefficients
# table is shown in the AutoDoc. coef_table_num_models must
# be -1 or an integer >= 1 (-1 shows all models).
#autodoc_coef_table_num_models = 1

# Set the number of folds per model for which a GLM
# coefficients table is shown in the AutoDoc.
# coef_table_num_folds must be -1 or an integer >= 1
# (-1 shows all folds per model).
#autodoc_coef_table_num_folds = -1

# Set the number of coefficients to show within a GLM
# coefficients table in the AutoDoc. coef_table_num_coef controls
# the number of rows shown in a GLM table and must be -1 or
# an integer >= 1 (-1 shows all coefficients).
#autodoc_coef_table_num_coef = 50

# Set the number of classes to show within a GLM
# coefficients table in the AutoDoc. coef_table_num_classes controls
# the number of class-columns shown in a GLM table and must be -1 or
# an integer >= 4 (-1 shows all classes).
#autodoc_coef_table_num_classes = 9
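The coefficient-table settings above work together; a minimal config.toml override might look like this (the values are illustrative, not recommendations):

```toml
# Show GLM coefficient tables for all models and folds,
# limited to the top 25 coefficients per table, with full
# tables also placed in the appendix.
autodoc_coef_table_num_models = -1
autodoc_coef_table_num_folds = -1
autodoc_coef_table_num_coef = 25
autodoc_coef_table_appendix_results_table = true
```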
3835
# When histogram plots are available: the number of
# top (default 10) features for which to show histograms.
#autodoc_num_histogram_plots = 10
#pdp_max_threads = -1

# If True, forces AutoDoc to run only on the main server, not on remote workers in a multi-node setup.
#autodoc_force_singlenode = false

# Whether to include images of sub-pipelines for ensemble models.
#autodoc_include_ensemble_sub_pipelines = false

# Whether to include the tree structure of sub-pipelines for ensemble models.
#autodoc_include_ensemble_trees = false

# IP address of the autoviz process.
#vis_server_ip = "127.0.0.1"

# Port of the autoviz process.
#vis_server_port = 12346

# Maximum number of columns Autoviz will work with.
# If a dataset has more columns than this number,
# Autoviz will pick columns randomly, prioritizing numerical columns.
#autoviz_max_num_columns = 50

#autoviz_max_aggregated_rows = 500

# When enabled, the experiment will try to use feature transformations recommended by Autoviz.
#autoviz_enable_recommendations = true

# Key-value pairs of column names and the transformations that Autoviz recommended.
#autoviz_recommended_transformation = "{}"

#autoviz_enable_transformer_acceptance_tests = false
3872
# Enable custom recipes.
#enable_custom_recipes = true

# Enable uploading of custom recipes from the local file system.
#enable_custom_recipes_upload = true

# Enable downloading of custom recipes from an external URL.
#enable_custom_recipes_from_url = true

# Enable uploaded recipe files to be zip archives containing custom recipe(s) in the root folder,
# while any other code or auxiliary files must be in a sub-folder.
#enable_custom_recipes_from_zip = true

#must_have_custom_transformers = false

#must_have_custom_transformers_2 = false

#must_have_custom_transformers_3 = false

#must_have_custom_models = false

#must_have_custom_scorers = false

# When set to true, enables downloading of custom recipes' third-party packages from the web; otherwise the Python environment is transferred from the main worker.
#enable_recreate_custom_recipes_env = true

#extra_migration_custom_recipes_missing_modules = false

# Include custom recipes in default inclusion lists (warning: enables all custom recipes).
#include_custom_recipes_by_default = false

#force_include_custom_recipes_by_default = false

# Whether to enable use of the H2O recipe server. In some cases, the recipe server (started at DAI startup) may enter an unstable state, and this might affect other experiments. One can then avoid triggering use of the recipe server by setting this to false.
#enable_h2o_recipes = true

# URL of H2O instance for use by transformers, models, or scorers.
#h2o_recipes_url = "None"

# IP of H2O instance for use by transformers, models, or scorers.
#h2o_recipes_ip = "None"

# Port of H2O instance for use by transformers, models, or scorers. No other instances may be on that port or on the next port.
#h2o_recipes_port = 50361

# Name of H2O instance for use by transformers, models, or scorers.
#h2o_recipes_name = "None"

# Number of threads for H2O instance for use by transformers, models, or scorers. -1 for all.
#h2o_recipes_nthreads = 8

# Log level of H2O instance for use by transformers, models, or scorers.
#h2o_recipes_log_level = "None"

# Maximum memory size of H2O instance for use by transformers, models, or scorers.
#h2o_recipes_max_mem_size = "None"

# Minimum memory size of H2O instance for use by transformers, models, or scorers.
#h2o_recipes_min_mem_size = "None"

# General user overrides of the kwargs dict to pass to h2o.init() for the recipe server.
#h2o_recipes_kwargs = "{}"

# Number of trials to give the h2o-3 recipe server to start.
#h2o_recipes_start_trials = 5

# Number of seconds to sleep before starting the h2o-3 recipe server.
#h2o_recipes_start_sleep0 = 1

# Number of seconds to sleep between trials of starting the h2o-3 recipe server.
#h2o_recipes_start_sleep = 5
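As a sketch, an administrator who wants the recipe server on a known port with more resources might override these settings together (the values here are illustrative, not recommendations):

```toml
# Pin the H2O recipe server port and give it more threads/memory,
# with extra startup attempts on slow hosts.
h2o_recipes_port = 50361
h2o_recipes_nthreads = 16
h2o_recipes_max_mem_size = "4G"
h2o_recipes_start_trials = 10
```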
3945
# Lock the source for recipes to a specific GitHub repo.
# If True, then all custom recipes must come from the repo specified in the setting custom_recipes_git_repo.
#custom_recipes_lock_to_git_repo = false

# If custom_recipes_lock_to_git_repo is set to True, only this repo can be used to pull recipes from.
#custom_recipes_git_repo = "https://github.com/h2oai/driverlessai-recipes"

# Branch constraint for the recipe source repo. Any branch is allowed if unset or None.
#custom_recipes_git_branch = "None"

#custom_recipes_excluded_filenames_from_repo_download = "[]"

#allow_old_recipes_use_datadir_as_data_directory = true

# Internal helper to remember whether the recipe changed.
#last_recipe = ""

# Dictionary to control recipes for each experiment and particular custom recipes.
# E.g. if inserting into the GUI as a toml string, can use:
# """recipe_dict="{'key1': 2, 'key2': 'value2'}"""
# E.g. if putting into config.toml as a dict, can use:
# recipe_dict="{'key1': 2, 'key2': 'value2'}"
#recipe_dict = "{}"

# Dictionary to control some mutation parameters.
# E.g. if inserting into the GUI as a toml string, can use:
# """mutation_dict="{'key1': 2, 'key2': 'value2'}"""
# E.g. if putting into config.toml as a dict, can use:
# mutation_dict="{'key1': 2, 'key2': 'value2'}"
#mutation_dict = "{}"
3978
#enable_custom_transformers = true

#enable_custom_pretransformers = true

#enable_custom_models = true

#enable_custom_scorers = true

#enable_custom_datas = true

#enable_custom_explainers = true

#enable_custom_individuals = true

#enable_connectors_recipes = true

# Whether to validate recipe names provided in inclusion lists, like included_models,
# or (if False) whether to just log a warning to server logs and ignore any invalid recipe names.
#raise_on_invalid_included_list = false

#contrib_relative_directory = "contrib"

# Location of installed custom recipe packages (relative to data_directory).
# We will try to install packages dynamically, but one can also do so manually (before or after the server is started),
# inside the running Docker instance if using Docker, or as the user the server runs as (e.g. the dai user) for deb/tar native installations:
# PYTHONPATH=<full tmp dir>/<contrib_env_relative_directory>/lib/python3.6/site-packages/ <path to dai>dai-env.sh python -m pip install --prefix=<full tmp dir>/<contrib_env_relative_directory> <packagename> --upgrade --upgrade-strategy only-if-needed --log-file pip_log_file.log
# where <path to dai> is /opt/h2oai/dai/ for native rpm/deb installations.
# Note that one can also install wheel files if <packagename> is the name of a wheel file or archive.
#contrib_env_relative_directory = "contrib/env"

# List of package versions to ignore. Useful when a version change is small but the recipe is likely to still function with the old package version.
#ignore_package_version = "[]"
4014
# List of package versions to remove if a conflict is encountered. Useful when you want a new version of a package and old recipes are likely to still function.
#clobber_package_version = "['catboost', 'h2o_featurestore']"

# Dictionary of package versions to swap if a conflict is encountered.
# Useful when you want a new version of a package and old recipes are likely to still function.
# Also useful when you do not need to use old versions of recipes even if they would no longer function.
#swap_package_version = "{'catboost==0.26.1': 'catboost==1.2.5', 'catboost==0.25.1': 'catboost==1.2.5', 'catboost==0.24.1': 'catboost==1.2.5', 'catboost==1.0.4': 'catboost==1.2.5', 'catboost==1.0.5': 'catboost==1.2.5', 'catboost==1.0.6': 'catboost==1.2.5', 'catboost': 'catboost==1.2.5'}"

# If a user uploads a recipe with changes to package versions,
# allow upgrade of package versions.
# If DAI-protected packages are attempted to be changed, one can try using the pip_install_options toml with ['--no-deps'].
# Or, to entirely ignore DAI versions of packages, one can try using the pip_install_options toml with ['--ignore-installed'].
# Any other experiments relying on recipes with such packages will be affected; use with caution.
#allow_version_change_user_packages = false

# Retry count for the overall call to pip. Sometimes it is necessary to try twice.
#pip_install_overall_retries = 2

# pip install verbosity level (number of -v's given to pip, up to 3).
#pip_install_verbosity = 2

# pip install timeout in seconds. Sometimes internet issues mean it is better to fail faster.
#pip_install_timeout = 15

# pip install retry count.
#pip_install_retries = 5

# Whether to use the DAI constraint file to help pip handle versions. pip can make mistakes and try to install updated packages for no reason.
#pip_install_use_constraint = true

# pip install options: string of a list of other options, e.g. ['--proxy', 'http://user:password@proxyserver:port']
#pip_install_options = "[]"
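For example, to route recipe package installs through a proxy and fail faster on network issues (the proxy URL is the placeholder from the comment above; timeouts are illustrative):

```toml
pip_install_options = "['--proxy', 'http://user:password@proxyserver:port']"
pip_install_timeout = 30
pip_install_retries = 3
```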
4049
# Whether to enable basic acceptance testing. Tests whether the state can be pickled, etc.
#enable_basic_acceptance_tests = true

# Whether acceptance tests should run for custom genes / models / scorers / etc.
#enable_acceptance_tests = true

#acceptance_tests_use_weather_data = false

#acceptance_tests_mojo_benchmark = false

# Whether to skip disabled recipes (True) or fail and show a GUI message (False).
#skip_disabled_recipes = false

# Minutes to wait until a recipe's acceptance testing is aborted. A recipe is rejected if acceptance
# testing is enabled and times out.
# One may also set the timeout for a specific recipe by having the class's staticmethod
# acceptance_test_timeout return the number of minutes to wait before acceptance testing times out.
# This timeout does not include the time to install required packages.
#acceptance_test_timeout = 20.0

# Whether to re-check recipes during server startup (if per_user_directories == false)
# or during user login (if per_user_directories == true).
# If any inconsistency develops, the bad recipe will be removed while re-doing acceptance testing. This process
# can make start-up take a lot longer for many recipes, but in LTS releases the risk of recipes becoming out of date
# is low. If set to false, acceptance re-testing during server start is disabled, but note that previews or experiments may fail if those inconsistent recipes are used.
# Such inconsistencies can occur when the API for recipes changes or more aggressive acceptance tests are performed.
#contrib_reload_and_recheck_server_start = true

# Whether to at least install packages required for recipes during server startup (if per_user_directories == false)
# or during user login (if per_user_directories == true).
# Important to keep True so any later use of recipes (that have global packages installed) will work.
#contrib_install_packages_server_start = true

# Whether to re-check recipes after they are uploaded from the main server to a worker in multinode.
# Expensive to do for every task that has recipes.
#contrib_reload_and_recheck_worker_tasks = false

#data_recipe_isolate = true

# Space-separated string list of URLs for recipes that are loaded at user login time.
#server_recipe_url = ""

#num_rows_acceptance_test_custom_transformer = 200

#num_rows_acceptance_test_custom_model = 100

# List of recipes (in a dict keyed by type) that are applicable for a given experiment. This is especially relevant
# for situations such as a new `experiment with same params`, where the user should be able to
# use the same recipe versions as the parent experiment if they wish to.
#recipe_activation = "{'transformers': [], 'models': [], 'scorers': [], 'data': [], 'individuals': []}"
4104
# File System Support
# upload : standard upload feature
# file : local file system/server file system
# hdfs : Hadoop file system, remember to configure the HDFS config folder path and keytab below
# dtap : Blue Data Tap file system, remember to configure the DTap section below
# s3 : Amazon S3, optionally configure secret and access key below
# gcs : Google Cloud Storage, remember to configure gcs_path_to_service_account_json below
# gbq : Google Big Query, remember to configure gcs_path_to_service_account_json below
# minio : Minio Cloud Storage, remember to configure secret and access key below
# snow : Snowflake Data Warehouse, remember to configure Snowflake credentials below (account name, username, password)
# kdb : KDB+ Time Series Database, remember to configure KDB credentials below (hostname and port, optionally: username, password, classpath, and jvm_args)
# azrbs : Azure Blob Storage, remember to configure Azure credentials below (account name, account key)
# jdbc : JDBC Connector, remember to configure JDBC below (jdbc_app_configs)
# hive : Hive Connector, remember to configure Hive below (hive_app_configs)
# recipe_file : Custom recipe file upload
# recipe_url : Custom recipe upload via URL
# h2o_drive : H2O Drive, remember to configure h2o_drive_endpoint_url below
# feature_store : Feature Store, remember to configure feature_store_endpoint_url below
# databricks : Databricks connector
# delta_table : Delta Table connector
#enabled_file_systems = "['upload', 'file', 'hdfs', 's3', 'recipe_file', 'recipe_url']"
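For instance, a locked-down deployment that only needs local files and S3 could restrict the list to exactly those connectors (a sketch; adjust to the connectors actually in use):

```toml
enabled_file_systems = "['upload', 'file', 's3', 'recipe_file', 'recipe_url']"
```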
4127
#max_files_listed = 100

# This option disables access to the DAI data_directory from the file browser.
#file_hide_data_directory = true

# Enable usage of path filters.
#file_path_filtering_enabled = false

# List of absolute path prefixes to restrict access to in the file system browser.
# First add the following environment variable to your command line to enable this feature:
# file_path_filtering_enabled=true
# This feature can be used in the following ways (using a specific path or the logged-in user's directory):
# file_path_filter_include="['/data/stage']"
# file_path_filter_include="['/data/stage','/data/prod']"
# file_path_filter_include=/home/{{DAI_USERNAME}}/
# file_path_filter_include="['/home/{{DAI_USERNAME}}/','/data/stage','/data/prod']"
#file_path_filter_include = "[]"
4146
# (Required) HDFS connector
# Specify the HDFS Auth Type; allowed options are:
# noauth : (default) No authentication needed
# principal : Authenticate with HDFS with a principal user (DEPRECATED - use the `keytab` auth type)
# keytab : Authenticate with a keytab (recommended). If running
# DAI as a service, then the Kerberos keytab needs to
# be owned by the DAI user.
# keytabimpersonation : Login with impersonation using a keytab
#hdfs_auth_type = "noauth"

# Kerberos app principal user. Required when hdfs_auth_type='keytab'; recommended otherwise.
#hdfs_app_principal_user = ""

# Deprecated - Do Not Use; the login user is taken from the user name from login.
#hdfs_app_login_user = ""

# JVM args for HDFS distributions; provide args separated by spaces:
# -Djava.security.krb5.conf=<path>/krb5.conf
# -Dsun.security.krb5.debug=True
# -Dlog4j.configuration=file:///<path>log4j.properties
#hdfs_app_jvm_args = ""

# HDFS class path
#hdfs_app_classpath = ""

# List of supported DFS schemas, e.g. "['hdfs://', 'maprfs://', 'swift://']"
# The supported schemas list is used as an initial check to ensure valid input to the connector.
#hdfs_app_supported_schemes = "['hdfs://', 'maprfs://', 'swift://']"

# Maximum number of files viewable in the connector UI. Set to a larger number to view more files.
#hdfs_max_files_listed = 100

# Starting HDFS path displayed in the UI HDFS browser
#hdfs_init_path = "hdfs://"

# Starting HDFS path for artifact upload operations
#hdfs_upload_init_path = "hdfs://"
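A keytab-based HDFS setup might combine these settings as follows (the principal, paths, and namenode address are hypothetical placeholders):

```toml
hdfs_auth_type = "keytab"
hdfs_app_principal_user = "dai@EXAMPLE.COM"        # hypothetical principal
hdfs_app_jvm_args = "-Djava.security.krb5.conf=/etc/krb5.conf"
hdfs_app_classpath = "/opt/hadoop/conf"            # hypothetical config dir
hdfs_init_path = "hdfs://namenode:8020/user/dai"   # hypothetical namenode
```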
4185
# Enables the multi-user mode for MapR integration, which allows having a MapR ticket per user.
#enable_mapr_multi_user_mode = false
4188
# Blue Data DTap connector settings are similar to HDFS connector settings.
# Specify the DTap Auth Type; allowed options are:
# noauth : No authentication needed
# principal : Authenticate with DTap with a principal user
# keytab : Authenticate with a keytab (recommended). If running
# DAI as a service, then the Kerberos keytab needs to
# be owned by the DAI user.
# keytabimpersonation : Login with impersonation using a keytab
# NOTE: "hdfs_app_classpath" and "core_site_xml_path" are both required to be set for the DTap connector.
#dtap_auth_type = "noauth"

# DTap (HDFS) config folder path; can contain multiple config files.
#dtap_config_path = ""

# Path of the principal keytab file; dtap_key_tab_path is deprecated. Please use dtap_keytab_path.
#dtap_key_tab_path = ""

# Path of the principal keytab file
#dtap_keytab_path = ""

# Kerberos app principal user (recommended)
#dtap_app_principal_user = ""

# Specify the user id of the current user here as user@realm
#dtap_app_login_user = ""

# JVM args for DTap distributions; provide args separated by spaces.
#dtap_app_jvm_args = ""

# DTap (HDFS) class path. NOTE: set 'hdfs_app_classpath' as well.
#dtap_app_classpath = ""

# Starting DTap path displayed in the UI DTap browser
#dtap_init_path = "dtap://"
4223
# S3 Connector credentials
#aws_access_key_id = ""

# S3 Connector credentials
#aws_secret_access_key = ""

# S3 Connector credentials
#aws_role_arn = ""

# What region to use when none is specified in the S3 URL.
# Ignored when aws_s3_endpoint_url is set.
#aws_default_region = ""

# Sets the endpoint URL that will be used to access S3.
#aws_s3_endpoint_url = ""

# If set to true, the S3 Connector will try to obtain credentials associated with
# the role attached to the EC2 instance.
#aws_use_ec2_role_credentials = false

# Starting S3 path displayed in the UI S3 browser
#s3_init_path = "s3://"

# The S3 Connector will skip cert verification if this is set to true (mostly used for S3-like connectors, e.g. Ceph).
#s3_skip_cert_verification = false

# path/to/cert/bundle.pem - A filename of the CA cert bundle to use for the S3 connector
#s3_connector_cert_location = ""
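An S3-compatible object store (e.g. Ceph) behind a custom endpoint could be configured like this (the endpoint, bucket, and keys are placeholders):

```toml
aws_access_key_id = "<access-key-id>"
aws_secret_access_key = "<secret-access-key>"
aws_s3_endpoint_url = "https://s3.internal.example.com"
s3_skip_cert_verification = true        # only for self-signed certs
s3_init_path = "s3://my-bucket/datasets/"
```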
4253
# GCS Connector credentials
# example (suggested) -- '/licenses/my_service_account_json.json'
#gcs_path_to_service_account_json = ""

# GCS Connector service account credentials in JSON; this configuration takes precedence over gcs_path_to_service_account_json.
#gcs_service_account_json = "{}"

# GBQ Connector impersonated account
#gbq_access_impersonated_account = ""

# Starting GCS path displayed in the UI GCS browser
#gcs_init_path = "gs://"

# Space-separated list of OAuth2 scopes for the access token used to authenticate in Google Cloud Storage
#gcs_access_token_scopes = ""

# When ``google_cloud_use_oauth`` is enabled, the Google Cloud client cannot automatically infer the default project, so it must be explicitly specified.
#gcs_default_project_id = ""

# Space-separated list of OAuth2 scopes for the access token used to authenticate in Google BigQuery
#gbq_access_token_scopes = ""

# By default, the Driverless AI Google Cloud Storage and BigQuery connectors use a service account file to retrieve authentication credentials. When enabled, the Storage and BigQuery connectors will use OAuth2 user access tokens to authenticate in Google Cloud instead.
#google_cloud_use_oauth = false

# Minio Connector credentials
#minio_endpoint_url = ""

# Minio Connector credentials
#minio_access_key_id = ""

# Minio Connector credentials
#minio_secret_access_key = ""

# The Minio Connector will skip cert verification if this is set to true.
#minio_skip_cert_verification = false

# path/to/cert/bundle.pem - A filename of the CA cert bundle to use for the Minio connector
#minio_connector_cert_location = ""

# Starting Minio path displayed in the UI Minio browser
#minio_init_path = "/"
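A minimal Minio setup might look like this (the endpoint and keys are placeholders):

```toml
minio_endpoint_url = "https://minio.example.com:9000"
minio_access_key_id = "<access-key-id>"
minio_secret_access_key = "<secret-access-key>"
minio_skip_cert_verification = false
```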
4296
# H2O Drive server endpoint URL
#h2o_drive_endpoint_url = ""

# Space-separated list of OpenID scopes for the access token used by the H2O Drive connector
#h2o_drive_access_token_scopes = ""

# Maximum duration (in seconds) for a session with the H2O Drive
#h2o_drive_session_duration = 10800
4305
# Snowflake Connector credentials
# Recommended: provide url, user, password.
# Optionally: provide account, user, password.
# Example URL: https://<snowflake_account>.<region>.snowflakecomputing.com
#snowflake_url = ""

# Snowflake Connector credentials
#snowflake_user = ""

# Snowflake Connector credentials
#snowflake_password = ""

# Snowflake Connector credentials
#snowflake_account = ""

# Snowflake Connector authenticator; can be used when Snowflake is using native SSO with Okta.
# E.g.: snowflake_authenticator = "https://<okta_account_name>.okta.com"
#snowflake_authenticator = ""

# Keycloak endpoint for retrieving external IdP tokens for Snowflake. (https://www.keycloak.org/docs/latest/server_admin/#retrieving-external-idp-tokens)
#snowflake_keycloak_broker_token_endpoint = ""

# Token type that should be used from the response from the Keycloak endpoint for retrieving external IdP tokens for Snowflake. See `snowflake_keycloak_broker_token_endpoint`.
#snowflake_keycloak_broker_token_type = "access_token"

# ID of the OAuth client configured in H2O Secure Store for authentication with Snowflake.
#snowflake_h2o_secure_store_oauth_client_id = ""

# Snowflake hostname to connect to when running Driverless AI in Snowpark Container Services.
#snowflake_host = ""

# Snowflake port to connect to when running Driverless AI in Snowpark Container Services.
#snowflake_port = ""

# Snowflake filepath that stores the token of the session when running
# Driverless AI in Snowpark Container Services.
# E.g.: snowflake_session_token_filepath = "/snowflake/session/token"
#snowflake_session_token_filepath = ""

# Setting to allow or disallow the Snowflake connector from using Snowflake stages during queries.
# True - permits the connector to use stages and generally improves performance. However,
# if the Snowflake user does not have permission to create/use stages, queries will end in errors.
# False - prevents the connector from using stages, so Snowflake users without permission
# to create/use stages will have successful queries; however, this may significantly degrade
# query performance.
#snowflake_allow_stages = true

# Sets the file format to be used when Snowflake stages are enabled for
# query execution.
#snowflake_stages_file_format = "CSV"

# Sets the upper size limit (in bytes) of each file to be generated when
# Snowflake stages are enabled for query execution.
#snowflake_stages_max_file_size = 16777216

# Optional schema name where temporary Snowflake stages should be created.
# If set, the Snowflake connector creates all temporary stages in this schema instead of the table's schema.
# Requirements:
# - The Snowflake user/role must have permission to create and use stages
# in the specified schema.
# - If unset, the Snowflake connector creates stages in the table's schema
# (default Snowflake behavior).
# Applies only when 'snowflake_allow_stages' is True.
#snowflake_staging_schema = ""

# Sets the number of rows to be fetched by the Snowflake cursor at one time. This is only used if
# `snowflake_allow_stages` is set to False; it may help with performance depending on the type and size
# of data being queried.
#snowflake_batch_size = 10000
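Putting the Snowflake settings together, a password-based setup for a user without stage privileges might look like this (the URL and credentials are placeholders; the batch size is illustrative):

```toml
snowflake_url = "https://<snowflake_account>.<region>.snowflakecomputing.com"
snowflake_user = "<user>"
snowflake_password = "<password>"
snowflake_allow_stages = false   # user lacks permission to create/use stages
snowflake_batch_size = 50000     # larger cursor fetches when stages are off
```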
4382
# KDB Connector credentials
#kdb_user = ""

# KDB Connector credentials
#kdb_password = ""

# KDB Connector credentials
#kdb_hostname = ""

# KDB Connector credentials
#kdb_port = ""

# KDB Connector credentials
#kdb_app_classpath = ""

# KDB Connector credentials
#kdb_app_jvm_args = ""
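Per the file-system list above, a KDB+ setup needs at least a hostname and port; username and password are optional (all values here are placeholders):

```toml
kdb_hostname = "kdb.example.com"
kdb_port = "5001"
kdb_user = "<user>"          # optional
kdb_password = "<password>"  # optional
```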
4400
# Account name for the Azure Blob Store Connector
#azure_blob_account_name = ""

# Account key for the Azure Blob Store Connector
#azure_blob_account_key = ""

# Connection string for the Azure Blob Store Connector
#azure_connection_string = ""

# SAS token for the Azure Blob Store Connector
#azure_sas_token = ""

# Starting Azure Blob store path displayed in the UI Azure Blob store browser
#azure_blob_init_path = "https://"

# When enabled, the Azure Blob Store Connector will use an access token derived from the credentials received on login with OpenID Connect.
#azure_blob_use_access_token = false

# Configures the scopes for the access token used by the Azure Blob Store Connector when azure_blob_use_access_token is enabled (space-separated list).
#azure_blob_use_access_token_scopes = "https://storage.azure.com/.default"

# Sets the source of the access token for accessing the Azure Blob store:
# KEYCLOAK: Will exchange the session access token for the federated
# refresh token with Keycloak and use it to obtain the access token
# directly with Azure AD.
# SESSION: Will use the access token derived from the credentials
# received on login with OpenID Connect.
#azure_blob_use_access_token_source = "SESSION"

# Application (client) ID registered on Azure AD when the KEYCLOAK source is enabled.
#azure_blob_keycloak_aad_client_id = ""

# Application (client) secret when the KEYCLOAK source is enabled.
#azure_blob_keycloak_aad_client_secret = ""

# A URL that identifies a token authority. It should be of the format https://login.microsoftonline.com/your_tenant
#azure_blob_keycloak_aad_auth_uri = ""

# Keycloak endpoint for retrieving external IdP tokens (https://www.keycloak.org/docs/latest/server_admin/#retrieving-external-idp-tokens)
#azure_blob_keycloak_broker_token_endpoint = ""

# (DEPRECATED, use azure_blob_use_access_token and
# azure_blob_use_access_token_source="KEYCLOAK" instead.
# When enabled, only the DEPRECATED options azure_ad_client_id,
# azure_ad_client_secret, azure_ad_auth_uri and
# azure_keycloak_idp_token_endpoint will be effective.
# This is equivalent to setting
# azure_blob_use_access_token_source = "KEYCLOAK"
# and setting the azure_blob_keycloak_aad_client_id,
# azure_blob_keycloak_aad_client_secret,
# azure_blob_keycloak_aad_auth_uri and
# azure_blob_keycloak_broker_token_endpoint
# options.)
# If true, enables the Azure Blob Storage Connector to use Azure AD tokens
# obtained from Keycloak for auth.
#azure_enable_token_auth_aad = false

# (DEPRECATED, use azure_blob_keycloak_aad_client_id instead.) Application (client) ID registered on Azure AD
#azure_ad_client_id = ""

# (DEPRECATED, use azure_blob_keycloak_aad_client_secret instead.) Application client secret
#azure_ad_client_secret = ""

# (DEPRECATED, use azure_blob_keycloak_aad_auth_uri instead.) A URL that identifies a token authority. It should be of the format https://login.microsoftonline.com/your_tenant
#azure_ad_auth_uri = ""

# (DEPRECATED, use azure_blob_use_access_token_scopes instead.) Scopes requested to access a protected API (a resource).
#azure_ad_scopes = "[]"

# (DEPRECATED, use azure_blob_keycloak_broker_token_endpoint instead.) Keycloak endpoint for retrieving external IdP tokens (https://www.keycloak.org/docs/latest/server_admin/#retrieving-external-idp-tokens)
#azure_keycloak_idp_token_endpoint = ""
4475
4476# ID of the application's Microsoft Entra tenant, also called its 'directory' ID.
4477# This is used for Azure Workload Identity.
4478# 
4479#azure_workload_identity_tenant_id = ""
4480
4481# The client ID of a Microsoft Entra app registration.
4482# This is used for Azure Workload Identity.
4483# 
4484#azure_workload_identity_client_id = ""
4485
4486# The path to a file containing a Kubernetes service account token that authenticates the identity.
4487# This is used for Azure Workload Identity.
4488# 
4489#azure_workload_identity_token_file_path = ""
4490
4491# Desired scopes for the access token when the Databricks connector is using
4492# Azure Workflow Identity authentication. At least one scope should be specified.
4493# For more information about scopes, see https://learn.microsoft.com/entra/identity-platform/scopes-oidc.
4494# 
4495#databricks_azure_workload_identity_scopes = ""
4496
4497# Desired scopes for the access token when the Azure Blob connector is using
4498# Azure Workload Identity authentication. At least one scope should be specified.
4499# For more information about scopes, see https://learn.microsoft.com/entra/identity-platform/scopes-oidc.
4500# 
4501#azure_blob_workload_identity_scopes = ""
4502
4503# Name of the Databricks workspace instance. Refer to
4504# https://learn.microsoft.com/en-us/azure/databricks/workspace/workspace-details
4505# for information on how to obtain the name of your Databricks workspace instance.
4506# 
4507#databricks_workspace_instance_name = ""
4508
4509# Sets the number of rows to be fetched by the Databricks cursor at one time.
4510#databricks_batch_size = 100000
4511
4512# Configuration for JDBC Connector.
4513# JSON/Dictionary String with multiple keys.
4514# Format as a single line without using carriage returns (the following example is formatted for readability).
4515# Use triple quotations to ensure that the text is read as a single string.
4516# Example:
4517# '{
4518# "postgres": {
4519# "url": "jdbc:postgresql://ip address:port/postgres",
4520# "jarpath": "/path/to/postgres_driver.jar",
4521# "classpath": "org.postgresql.Driver"
4522# },
4523# "mysql": {
4524# "url":"mysql connection string",
4525# "jarpath": "/path/to/mysql_driver.jar",
4526# "classpath": "my.sql.classpath.Driver"
4527# }
4528# }'
4529# 
4530#jdbc_app_configs = "{}"
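# Illustrative single-line example (the host, port, and jar path below are
# placeholders; adjust them to your environment):
#jdbc_app_configs = """{"postgres": {"url": "jdbc:postgresql://db.internal:5432/postgres", "jarpath": "/opt/jdbc/postgresql.jar", "classpath": "org.postgresql.Driver"}}"""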
4531
4532# Extra JVM args for the JDBC connector.
4533#jdbc_app_jvm_args = "-Xmx4g"
4534
4535# Alternative classpath for the JDBC connector.
4536#jdbc_app_classpath = ""
4537
4538# Configuration for Hive Connector.
4539# Note that inputs are similar to configuring HDFS connectivity.
4540# important keys:
4541# * hive_conf_path - path to the hive configuration directory, which may contain multiple files (typically hive-site.xml, hdfs-site.xml, etc.)
4542# * auth_type - one of `noauth`, `keytab`, `keytabimpersonation` for kerberos authentication
4543# * keytab_path - path to the kerberos keytab to use for authentication, can be "" if using `noauth` auth_type
4544# * principal_user - Kerberos app principal user. Required when using auth_type `keytab` or `keytabimpersonation`
4545# JSON/Dictionary String with multiple keys. Example:
4546# '{
4547# "hive_connection_1": {
4548# "hive_conf_path": "/path/to/hive/conf",
4549# "auth_type": "one of ['noauth', 'keytab', 'keytabimpersonation']",
4550# "keytab_path": "/path/to/<filename>.keytab",
4551# "principal_user": "hive/localhost@EXAMPLE.COM"
4552# },
4553# "hive_connection_2": {
4554# "hive_conf_path": "/path/to/hive/conf_2",
4555# "auth_type": "one of ['noauth', 'keytab', 'keytabimpersonation']",
4556# "keytab_path": "/path/to/<filename_2>.keytab",
4557# "principal_user": "my_user/localhost@EXAMPLE.COM"
4558# }
4559# }'
4560# 
4561#hive_app_configs = "{}"
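# Illustrative single-line example using keytab authentication (the paths and
# principal below are placeholders):
#hive_app_configs = """{"hive_1": {"hive_conf_path": "/opt/hive/conf", "auth_type": "keytab", "keytab_path": "/opt/keytabs/hive.keytab", "principal_user": "hive/localhost@EXAMPLE.COM"}}"""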
4562
4563# Extra JVM args for the Hive connector.
4564#hive_app_jvm_args = "-Xmx4g"
4565
4566# Alternative classpath for the Hive connector. Can be used to add additional jar files to the classpath.
4567#hive_app_classpath = ""
4568
4569# Extra JVM args for the Delta Table connector.
4570#delta_table_app_jvm_args = "-Xmx4g"
4571
4572# Alternative Java classpath for the Delta Table connector
4573#delta_table_app_classpath = ""
4574
4575# Replace all downloads on the experiment page with exports, and allow users to push artifacts to the store configured with artifacts_store.
4576#enable_artifacts_upload = false
4577
4578# Artifacts store.
4579# file_system: stores artifacts in a file system directory denoted by artifacts_file_system_directory.
4580# s3: stores artifacts in an S3 bucket.
4581# bitbucket: stores data in a Bitbucket repository.
4582# azure: stores data in Azure Blob Store.
4583# hdfs: stores data in a Hadoop distributed file system location.
4584# 
4585#artifacts_store = "file_system"
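# Illustrative example: export experiment artifacts to a local directory
# (the directory below is a placeholder; see enable_artifacts_upload and
# artifacts_file_system_directory):
#enable_artifacts_upload = true
#artifacts_store = "file_system"
#artifacts_file_system_directory = "dai_artifacts"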
4586
4587# Decide whether to skip cert verification for Bitbucket when using a repo with HTTPS
4588#bitbucket_skip_cert_verification = false
4589
4590# Local temporary directory to clone artifacts to, relative to data_directory
4591#bitbucket_tmp_relative_dir = "local_git_tmp"
4592
4593# File system location where artifacts will be copied in case artifacts_store is set to file_system
4594#artifacts_file_system_directory = "tmp"
4595
4596# AWS S3 bucket used for experiment artifact export.
4597#artifacts_s3_bucket = ""
4598
4599# Azure Blob Store credentials used for experiment artifact export
4600#artifacts_azure_blob_account_name = ""
4601
4602# Azure Blob Store credentials used for experiment artifact export
4603#artifacts_azure_blob_account_key = 
4604
4605# Azure Blob Store connection string used for experiment artifact export
4606#artifacts_azure_connection_string = 
4607
4608# Azure Blob Store SAS token used for experiment artifact export
4609#artifacts_azure_sas_token = 
4610
4611# Git auth user
4612#artifacts_git_user = "git"
4613
4614# Git auth password
4615#artifacts_git_password = ""
4616
4617# Git repo to which artifacts are pushed upon upload
4618#artifacts_git_repo = ""
4619
4620# Git branch on the remote repo where artifacts are pushed
4621#artifacts_git_branch = "dev"
4622
4623# File location for the ssh private key used for git authentication
4624#artifacts_git_ssh_private_key_file_location = ""
4625
4626# Feature Store server endpoint URL
4627#feature_store_endpoint_url = ""
4628
4629# Enable TLS communication between DAI and the Feature Store server
4630#feature_store_enable_tls = false
4631
4632# Path to the client certificate to authenticate with the Feature Store server. This is only effective when feature_store_enable_tls=True.
4633#feature_store_tls_cert_path = ""
4634
4635# A list of access token scopes used by the Feature Store connector to authenticate (space-separated list).
4636#feature_store_access_token_scopes = ""
4637
4638# When defined, will be used as an alternative recipe implementation for the FeatureStore connector.
4639#feature_store_custom_recipe_location = ""
4640
4641# If enabled, GPT functionalities such as summarization become available. If the `openai_api_secret_key` config is provided, the OpenAI API is used. Make sure this does not break your internal policy.
4642#enable_gpt = false
4643
4644# OpenAI API secret key. Beware that if this config is set and `enable_gpt` is `true`, we will send some metadata about datasets and experiments to OpenAI (during dataset and experiment summarization). Make sure that passing such data to OpenAI does not break your internal policy.
4645#openai_api_secret_key = 
4646
4647# OpenAI model to use.
4648#openai_api_model = "gpt-4"
4649
4650# h2oGPT URL endpoint that will be used for GPT-related purposes (e.g. summarization). If both `h2ogpt_url` and `openai_api_secret_key` are provided, only the h2oGPT URL is used.
4651#h2ogpt_url = ""
4652
4653# The h2oGPT Key required for specific h2oGPT URLs, enabling authorized access for GPT-related tasks like summarization.
4654#h2ogpt_key = 
4655
4656# Name of the h2oGPT model that should be used. If not specified the default model in the h2oGPT will be used.
4657#h2ogpt_model_name = ""
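# Illustrative example: enable summarization against an internal h2oGPT
# endpoint (the URL and key below are placeholders):
#enable_gpt = true
#h2ogpt_url = "https://h2ogpt.internal.example.com"
#h2ogpt_key = "example-h2ogpt-key"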
4658
4659# Default AWS credentials to be used for scorer deployments.
4660#deployment_aws_access_key_id = ""
4661
4662# Default AWS credentials to be used for scorer deployments.
4663#deployment_aws_secret_access_key = ""
4664
4665# AWS S3 bucket to be used for scorer deployments.
4666#deployment_aws_bucket_name = ""
4667
4668# Approximate upper limit of time for Triton to take to compute latency and throughput performance numbers when performing 'Benchmark' operations for a deployment. Higher values result in more accurate performance numbers.
4669#triton_benchmark_runtime = 5
4670
4671# Approximate upper limit of time for Triton to take to compute latency and throughput performance numbers after loading up the deployment, per model. Higher values result in more accurate performance numbers.
4672#triton_quick_test_runtime = 2
4673
4674# Number of Triton deployments to show per page of the Deploy Wizard
4675#deploy_wizard_num_per_page = 10
4676
4677# Whether to allow user to change non-server toml parameters per experiment in expert page.
4678#allow_config_overrides_in_expert_page = true
4679
4680# Maximum number of columns in each head and tail to log when ingesting data or running experiment on data.
4681#max_cols_log_headtail = 1000
4682
4683# Maximum number of columns in each head and tail to show in GUI, useful when head or tail has all necessary columns, but too many for UI or web server to handle.
4684# -1 means no limit.
4685# A reasonable value is 500, after which web server or browser can become overloaded and use too much memory.
4686# Some values of column counts in UI may not show up correctly, and some dataset details functions may not work.
4687# To select (from GUI or client) any columns as being target, weight column, fold column, time column, time column groups, or dropped columns, the dataset should have those columns within the selected head or tail set of columns.
4688#max_cols_gui_headtail = 1000
4689
4690# Supported file formats (file name endings must match for files to show up in file browser)
4691#supported_file_types = "['csv', 'tsv', 'txt', 'dat', 'tgz', 'gz', 'bz2', 'zip', 'xz', 'xls', 'xlsx', 'jay', 'feather', 'bin', 'arff', 'parquet', 'pkl', 'orc', 'avro']"
4692
4693# Supported file formats of data recipe files (file name endings must match for files to show up in file browser)
4694#recipe_supported_file_types = "['py', 'pyc', 'zip']"
4695
4696# By default, only supported file types (based on the file extensions listed above) are listed for import into DAI.
4697# Some data pipelines generate parquet files without any extension. Enabling the option below causes files
4698# without an extension to be listed in the file import dialog.
4699# DAI imports files without extensions as parquet files; if a file cannot be imported, an error is generated.
4700# 
4701#list_files_without_extensions = false
4702
4703# Allow using browser localstorage, to improve UX.
4704#allow_localstorage = true
4705
4706# Allow original dataset columns to be present in downloaded predictions CSV
4707#allow_orig_cols_in_predictions = true
4708
4709# Allow the browser to store e.g. login credentials in login form (set to false for higher security)
4710#allow_form_autocomplete = true
4711
4712# Enable Projects workspace (alpha version, for evaluation)
4713#enable_projects = true
4714
4715# Default application language - options are 'en', 'ja', 'cn', 'ko'
4716#app_language = "en"
4717
4718# If true, Logout button is not visible in the GUI.
4719#disablelogout = false
4720
4721# Local path to the location of the Driverless AI Python Client. If empty, will download from s3
4722#python_client_path = ""
4723
4724# If disabled, the server won't verify whether the WHL package specified in `python_client_path` is a valid DAI Python client. Default true.
4725#python_client_verify_integrity = true
4726
4727# When enabled, new experiments require an experiment name to be specified.
4728#gui_require_experiment_name = false
4729
4730# When disabled, the Deploy option is not available on the finished experiment page.
4731#gui_enable_deploy_button = true
4732
4733# Display experiment tour
4734#enable_gui_product_tour = true
4735
4736# Whether user can download dataset as csv file
4737#enable_dataset_downloading = true
4738
4739# If enabled, user can export experiment as a Zip file
4740#enable_experiment_export = true
4741
4742# If enabled, user can import experiments exported as Zip files from Driverless AI.
4743#enable_experiment_import = true
4744
4745# (EXPERIMENTAL) If enabled, user can launch experiment via new `Predict Wizard` options, which navigates to the new Nitro wizard.
4746#enable_experiment_wizard = true
4747
4748# (EXPERIMENTAL) If enabled, user can do joins via new `Join Wizard` options, which navigates to the new Nitro wizard.
4749#enable_join_wizard = true
4750
4751# URL address of the H2O AI link
4752#hac_link_url = "https://www.h2o.ai/freetrial/?utm_source=dai&ref=dai"
4753
4754#show_all_filesystems = false
4755
4756# Switches Driverless AI to use H2O.ai License Management Server to manage licenses/permission to use software
4757#enable_license_manager = false
4758
4759# Address at which to communicate with H2O.ai License Management Server.
4760# Requires `enable_license_manager` above to be set to true.
4761# Format: {http/https}://{ip address}:{port number}
4762# 
4763#license_manager_address = "http://127.0.0.1:9999"
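# Illustrative example (the hostname and port below are placeholders):
#enable_license_manager = true
#license_manager_address = "https://license.internal.example.com:9999"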
4764
4765# Name of license manager project that Driverless AI will attempt to retrieve leases from.
4766# NOTE: requires an active license within the License Manager Server to function properly
4767# 
4768#license_manager_project_name = "default"
4769
4770# Number of milliseconds a lease for users will be expected to last,
4771# if using the H2O.ai License Manager server, before the lease REQUIRES renewal.
4772# Default: 3600000 (1 hour) = 1 hour * 60 min / hour * 60 sec / min * 1000 milliseconds / sec
4773# 
4774#license_manager_lease_duration = 3600000
4775
4776# Number of milliseconds a lease for Driverless AI worker nodes will be expected to last,
4777# if using the H2O.ai License Manager server, before the lease REQUIRES renewal.
4778# Default: 21600000 (6 hours) = 6 hours * 60 min / hour * 60 sec / min * 1000 milliseconds / sec
4779# 
4780#license_manager_worker_lease_duration = 21600000
4781
4782# To be used only if License Manager server is started with HTTPS
4783# Accepts a boolean: true/false, or a path to a file/directory. Denotes whether or not to attempt
4784# SSL certificate verification when making a request to the License Manager server.
4785# True: attempt SSL certificate verification; will fail if certificates are self-signed.
4786# False: skip SSL certificate verification.
4787# /path/to/cert/directory: load certificates <cert.pem> in the directory and use those for certificate verification.
4788# Behaves in the same manner as python requests package:
4789# https://requests.readthedocs.io/en/latest/user/advanced/#ssl-cert-verification
4790# 
4791#license_manager_ssl_certs = "true"
4792
4793# Amount of time that Driverless AI workers keep retrying to start up and obtain a lease from
4794# the license manager before timing out. Timing out causes worker startup to fail.
4795# 
4796#license_manager_worker_startup_timeout = 3600000
4797
4798# Emergency setting that allows Driverless AI to run even if there are issues communicating with,
4799# or obtaining leases from, the License Manager server.
4800# This is an encoded string that can be obtained from either the license manager ui or the logs of the license
4801# manager server.
4802# 
4803#license_manager_dry_run_token = ""
4804
4805# Number of days before license expiry when the UI warning notification should appear.
4806# When the remaining days are less than or equal to this value, a notification bar
4807# will be displayed in the interface.
4808# 
4809#license_expiry_warning_days = 7
4810
4811# Choose LIME method to be used for creation of surrogate models.
4812#mli_lime_method = "k-LIME"
4813
4814# Choose whether surrogate models should be built for original or transformed features.
4815#mli_use_raw_features = true
4816
4817# Choose whether time series based surrogate models should be built for original features.
4818#mli_ts_use_raw_features = false
4819
4820# Choose whether to run all explainers on the sampled dataset.
4821#mli_sample = true
4822
4823# Set maximum number of features for which to build Surrogate Partial Dependence Plot. Use -1 to calculate Surrogate Partial Dependence Plot for all features.
4824#mli_vars_to_pdp = 10
4825
4826# Set the number of cross-validation folds for surrogate models.
4827#mli_nfolds = 3
4828
4829# Set the number of columns to bin in case of quantile binning.
4830#mli_qbin_count = 0
4831
4832# Number of threads for H2O instance for use by MLI.
4833#h2o_mli_nthreads = 8
4834
4835# Use this option to disable MOJO scoring pipeline. Scoring pipeline is chosen automatically (from MOJO and Python pipelines) by default. In case of certain models MOJO vs. Python choice can impact pipeline performance and robustness.
4836#mli_enable_mojo_scorer = true
4837
4838# When the number of rows is above this limit, sample for MLI scoring of UI data.
4839#mli_sample_above_for_scoring = 1000000
4840
4841# When the number of rows is above this limit, sample for MLI training of surrogate models.
4842#mli_sample_above_for_training = 100000
4843
4844# The sample size (number of rows) used for MLI surrogate models.
4845#mli_sample_size = 100000
4846
4847# Number of bins for quantile binning.
4848#mli_num_quantiles = 10
4849
4850# Number of trees for Random Forest surrogate model.
4851#mli_drf_num_trees = 100
4852
4853# Speed up predictions with a fast approximation (can reduce the number of trees or cross-validation folds).
4854#mli_fast_approx = true
4855
4856# Maximum number of interpreters status cache entries.
4857#mli_interpreter_status_cache_size = 1000
4858
4859# Max depth for Random Forest surrogate model.
4860#mli_drf_max_depth = 20
4861
4862# Not only sample training, but also sample scoring.
4863#mli_sample_training = true
4864
4865# Regularization strength for k-LIME GLM's.
4866#klime_lambda = "[1e-06, 1e-08]"
4867
4868# Regularization distribution between L1 and L2 for k-LIME GLM's.
4869#klime_alpha = 0.0
4870
4871# Max cardinality for numeric variables in surrogate models to be considered categorical.
4872#mli_max_numeric_enum_cardinality = 25
4873
4874# Maximum number of features allowed for k-LIME k-means clustering.
4875#mli_max_number_cluster_vars = 6
4876
4877# Use all columns for k-LIME k-means clustering (this will override `mli_max_number_cluster_vars` if set to `True`).
4878#use_all_columns_klime_kmeans = false
4879
4880# Strict version check for MLI
4881#mli_strict_version_check = true
4882
4883# MLI cloud name
4884#mli_cloud_name = ""
4885
4886# Compute original model ICE using per feature's bin predictions (true) or use "one frame" strategy (false).
4887#mli_ice_per_bin_strategy = false
4888
4889# By default DIA will run for categorical columns with cardinality <= mli_dia_default_max_cardinality.
4890#mli_dia_default_max_cardinality = 10
4891
4892# By default DIA will run for categorical columns with cardinality >= mli_dia_default_min_cardinality.
4893#mli_dia_default_min_cardinality = 2
4894
4895# When the number of rows is above this limit, sample for the MLI transformed Shapley calculation.
4896#mli_shapley_sample_size = 100000
4897
4898# Enable MLI keeper which ensures efficient use of filesystem/memory/DB by MLI.
4899#enable_mli_keeper = true
4900
4901# Enable MLI Sensitivity Analysis
4902#enable_mli_sa = true
4903
4904# Enable priority-queue-based explainer execution. Priority queues restrict available system resources and prevent system over-utilization. Interpretation execution time might be (significantly) slower.
4905#enable_mli_priority_queues = true
4906
4907# Explainers are run sequentially by default. This option can be used to run all explainers in parallel, which can, depending on hardware strength and the number of explainers, decrease interpretation duration. Consider explainer dependencies, random explainer order, and hardware over-utilization.
4908#mli_sequential_task_execution = true
4909
4910# When the number of rows is above this limit, sample for Disparate Impact Analysis.
4911#mli_dia_sample_size = 100000
4912
4913# When the number of rows is above this limit, sample for the Partial Dependence Plot.
4914#mli_pd_sample_size = 25000
4915
4916# Use dynamic switching between Partial Dependence Plot numeric and categorical binning and UI chart selection in case of features which were used both as numeric and categorical by experiment.
4917#mli_pd_numcat_num_chart = true
4918
4919# If 'mli_pd_numcat_num_chart' is enabled, then use numeric binning and chart if feature unique values count is bigger than threshold, else use categorical binning and chart.
4920#mli_pd_numcat_threshold = 11
4921
4922# In New Interpretation screen show only datasets which can be used to explain a selected model. This can slow down the server significantly.
4923#new_mli_list_only_explainable_datasets = false
4924
4925# Enable async/await-based non-blocking MLI API
4926#enable_mli_async_api = true
4927
4928# Enable main chart aggregator in Sensitivity Analysis
4929#enable_mli_sa_main_chart_aggregator = true
4930
4931# When to sample for Sensitivity Analysis (number of rows after sampling).
4932#mli_sa_sampling_limit = 500000
4933
4934# Run main chart aggregator in Sensitivity Analysis when the number of dataset instances is bigger than given limit.
4935#mli_sa_main_chart_aggregator_limit = 1000
4936
4937# Use predict_safe() (true) or predict_base() (false) in MLI (PD, ICE, SA, ...).
4938#mli_predict_safe = false
4939
4940# Maximum number of retries should the surrogate model fail to build.
4941#mli_max_surrogate_retries = 5
4942
4943# Allow use of symlinks (instead of file copy) by MLI explainer procedures.
4944#enable_mli_symlinks = true
4945
4946# Fraction of memory to allocate for h2o MLI jar
4947#h2o_mli_fraction_memory = 0.45
4948
4949# Add TOML string to Driverless AI server config.toml configuration file.
4950#mli_custom = ""
4951
4952# To exclude e.g. Sensitivity Analysis explainer use: excluded_mli_explainers=['h2oaicore.mli.byor.recipes.sa_explainer.SaExplainer'].
4953#excluded_mli_explainers = "[]"
4954
4955# Enable RPC API performance monitor.
4956#enable_ws_perfmon = false
4957
4958# Number of parallel workers when scoring using MOJO in Kernel Explainer.
4959#mli_kernel_explainer_workers = 4
4960
4961# Use Kernel Explainer to obtain Shapley values for original features.
4962#mli_run_kernel_explainer = false
4963
4964# Sample input dataset for Kernel Explainer.
4965#mli_kernel_explainer_sample = true
4966
4967# Sample size for input dataset passed to Kernel Explainer.
4968#mli_kernel_explainer_sample_size = 1000
4969
4970# 'auto' or int. Number of times to re-evaluate the model when explaining each prediction. More samples lead to lower variance estimates of the SHAP values. The 'auto' setting uses nsamples = 2 * X.shape[1] + 2048. This setting is disabled by default and DAI determines the right number internally.
4971#mli_kernel_explainer_nsamples = "auto"
4972
4973# 'num_features(int)', 'auto' (default for now, but deprecated), 'aic', 'bic', or float. The l1 regularization to use for feature selection (the estimation procedure is based on a debiased lasso). The 'auto' option currently uses aic when less than 20% of the possible sample space is enumerated, otherwise it uses no regularization. THE BEHAVIOR OF 'auto' WILL CHANGE in a future version to be based on 'num_features' instead of AIC. The aic and bic options use the AIC and BIC rules for regularization. Using 'num_features(int)' selects a fixed number of top features. Passing a float directly sets the alpha parameter of the sklearn.linear_model.Lasso model used for feature selection.
4974#mli_kernel_explainer_l1_reg = "aic"
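# Illustrative example: select a fixed number of top features using the
# 'num_features(int)' form (the value 20 is arbitrary):
#mli_kernel_explainer_l1_reg = "num_features(20)"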
4975
4976# Max runtime for Kernel Explainer in seconds. Default is 900, which equates to 15 minutes. Setting this parameter to -1 means to honor the Kernel Shapley sample size provided regardless of max runtime.
4977#mli_kernel_explainer_max_runtime = 900
4978
4979# Tokenizer used to extract tokens from text columns for MLI.
4980#mli_nlp_tokenizer = "tfidf"
4981
4982# Number of tokens used for MLI NLP explanations. -1 means all.
4983#mli_nlp_top_n = 20
4984
4985# Maximum number of records used by MLI NLP explainers.
4986#mli_nlp_sample_limit = 10000
4987
4988# Minimum number of documents in which a token has to appear. An integer means absolute count; a float means percentage.
4989#mli_nlp_min_df = 3
4990
4991# Maximum number of documents in which a token has to appear. An integer means absolute count; a float means percentage.
4992#mli_nlp_max_df = 0.9
4993
4994# The minimum value in the ngram range. The tokenizer will generate all possible tokens in the (mli_nlp_min_ngram, mli_nlp_max_ngram) range.
4995#mli_nlp_min_ngram = 1
4996
4997# The maximum value in the ngram range. The tokenizer will generate all possible tokens in the (mli_nlp_min_ngram, mli_nlp_max_ngram) range.
4998#mli_nlp_max_ngram = 1
4999
5000# Mode used to choose N tokens for MLI NLP.
5001# "top" chooses N top tokens.
5002# "bottom" chooses N bottom tokens.
5003# "top-bottom" chooses math.floor(N/2) top and math.ceil(N/2) bottom tokens.
5004# "linspace" chooses N evenly spaced out tokens.
5005#mli_nlp_min_token_mode = "top"
5006
5007# The number of top tokens to be used as features when building token based feature importance.
5008#mli_nlp_tokenizer_max_features = -1
5009
5010# The number of top tokens to be used as features when computing text LOCO.
5011#mli_nlp_loco_max_features = -1
5012
5013# The tokenizer method to use when tokenizing a dataset for surrogate models. Can either choose 'TF-IDF' or 'Linear Model + TF-IDF', which first runs TF-IDF to get tokens and then fits a linear model between the tokens and the target to get importances of tokens, which are based on coefficients of the linear model. Default is 'Linear Model + TF-IDF'. Only applies to NLP models.
5014#mli_nlp_surrogate_tokenizer = "Linear Model + TF-IDF"
5015
5016# The number of top tokens to be used as features when building surrogate models. Only applies to NLP models.
5017#mli_nlp_surrogate_tokens = 100
5018
5019# Ignore stop words for MLI NLP.
5020#mli_nlp_use_stop_words = true
5021
5022# List of words to filter out before generation of text tokens, which are passed to MLI NLP LOCO and surrogate models (if enabled). Default is 'english'. Pass in custom stop-words as a list, e.g., ['great', 'good'].
5023#mli_nlp_stop_words = "english"
5024
5025# Append passed in list of custom stop words to default 'english' stop words.
5026#mli_nlp_append_to_english_stop_words = false
5027
5028# Enable MLI for image experiments.
5029#mli_image_enable = true
5030
5031# The maximum number of rows allowed when getting the local explanation result. Increasing the value may jeopardize overall performance; change it only if necessary.
5032#mli_max_explain_rows = 500
5033
5034# The maximum number of rows allowed when getting the NLP token importance result. Increasing the value may consume too much memory and negatively impact performance; change it only if necessary.
5035#mli_nlp_max_tokens_rows = 50
5036
5037# The minimum number of rows to enable parallel execution for NLP local explanations calculation.
5038#mli_nlp_min_parallel_rows = 10
5039
5040# Run legacy defaults in addition to current default explainers in MLI.
5041#mli_run_legacy_defaults = false
5042
5043# Run explainers sequentially for one given MLI job.
5044#mli_run_explainers_sequentially = false
5045
5046# Set dask CUDA/RAPIDS cluster settings for single node workers.
5047# Additional environment variables can be set, see: https://dask-cuda.readthedocs.io/en/latest/ucx.html#dask-scheduler
5048# e.g. for ucx, use a dict version of: dict(n_workers=None, threads_per_worker=1, processes=True, memory_limit='auto', device_memory_limit=None, CUDA_VISIBLE_DEVICES=None, data=None, local_directory=None, protocol='ucx', enable_tcp_over_ucx=True, enable_infiniband=False, enable_nvlink=False, enable_rdmacm=False, ucx_net_devices='auto', rmm_pool_size='1GB')
5049# WARNING: Do not add arguments like {'n_workers': 1, 'processes': True, 'threads_per_worker': 1} this will lead to hangs, cuda cluster handles this itself.
5050# 
5051#dask_cuda_cluster_kwargs = "{'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}"
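# Illustrative ucx example based on the dict described above (the RMM pool
# size is a placeholder; tune it to your hardware):
#dask_cuda_cluster_kwargs = "{'protocol': 'ucx', 'enable_tcp_over_ucx': True, 'rmm_pool_size': '1GB'}"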
5052
5053# Set dask cluster settings for single node workers.
5054# 
5055#dask_cluster_kwargs = "{'n_workers': 1, 'processes': True, 'threads_per_worker': 1, 'scheduler_port': 0, 'dashboard_address': ':0', 'protocol': 'tcp'}"
5056
5057# Whether to start dask workers on this multinode worker.
5058# 
5059#start_dask_worker = true
5060
5061# Set dask scheduler env.
5062# See https://docs.dask.org/en/latest/setup/cli.html
5063# 
5064#dask_scheduler_env = "{}"
5065
5066# Set dask scheduler env.
5067# See https://docs.dask.org/en/latest/setup/cli.html
5068# 
5069#dask_cuda_scheduler_env = "{}"
5070
5071# Set dask scheduler options.
5072# See https://docs.dask.org/en/latest/setup/cli.html
5073# 
5074#dask_scheduler_options = ""
5075
5076# Set dask cuda scheduler options.
5077# See https://docs.dask.org/en/latest/setup/cli.html
5078# 
5079#dask_cuda_scheduler_options = ""
5080
5081# Set dask worker env.
5082# See https://docs.dask.org/en/latest/setup/cli.html
5083# 
5084#dask_worker_env = "{'NCCL_P2P_DISABLE': '1', 'NCCL_DEBUG': 'WARN'}"
5085
5086# Set dask worker options.
5087# See https://docs.dask.org/en/latest/setup/cli.html
5088# 
5089#dask_worker_options = "--memory-limit 0.95"
5090
5091# Set dask cuda worker options.
5092# Similar options as dask_cuda_cluster_kwargs.
5093# See https://dask-cuda.readthedocs.io/en/latest/ucx.html#launching-scheduler-workers-and-clients-separately
5094# "--rmm-pool-size 1GB" can be set to give 1GB to RMM for more efficient rapids
5095# 
5096#dask_cuda_worker_options = "--memory-limit 0.95"
5097
5098# Set dask cuda worker env.
5099# See: https://dask-cuda.readthedocs.io/en/latest/ucx.html#launching-scheduler-workers-and-clients-separately
5100# https://ucx-py.readthedocs.io/en/latest/dask.html
5101# 
5102#dask_cuda_worker_env = "{}"
5103
5104# See https://docs.dask.org/en/latest/setup/cli.html
5105# e.g. ucx is optimal, while tcp is most reliable
5106# 
5107#dask_protocol = "tcp"
5108
5109# See https://docs.dask.org/en/latest/setup/cli.html
5110# 
5111#dask_server_port = 8786
5112
5113# See https://docs.dask.org/en/latest/setup/cli.html
5114# 
5115#dask_dashboard_port = 8787
5116
5117# See https://docs.dask.org/en/latest/setup/cli.html
5118# e.g. ucx is optimal, while tcp is most reliable
5119# 
5120#dask_cuda_protocol = "tcp"
5121
5122# See https://docs.dask.org/en/latest/setup/cli.html
5123# port + 1 is used for dask dashboard
5124# 
5125#dask_cuda_server_port = 8790
5126
5127# See https://docs.dask.org/en/latest/setup/cli.html
5128# 
5129#dask_cuda_dashboard_port = 8791
5130
5131# If empty string, auto-detect IP capable of reaching network.
5132# Required to be set if using worker_mode=multinode.
5133# 
5134#dask_server_ip = ""
5135
5136# Number of processes per dask (not cuda-GPU) worker.
5137# If -1, uses dask default of cpu count + 1 + nprocs.
5138# If -2, uses DAI default of total number of physical cores.  Recommended for heavy feature engineering.
5139# If 1, assumes tasks are mostly multi-threaded and can use entire node per task.  Recommended for heavy multinode model training.
5140# Only applicable to dask (not dask_cuda) workers
5141# 
5142#dask_worker_nprocs = 1
5143
5144# Number of threads per process for dask workers
5145#dask_worker_nthreads = 1
5146
5147# Number of threads per process for dask_cuda workers
5148# If -2, uses DAI default of physical cores per GPU,
5149# since must have 1 worker/GPU only.
5150# 
5151#dask_cuda_worker_nthreads = -2
5152
5153# See https://github.com/dask/dask-lightgbm
5154# 
5155#lightgbm_listen_port = 12400
5156
5157# Whether to enable jupyter server
5158#enable_jupyter_server = false
5159
5160# Port for jupyter server
5161#jupyter_server_port = 8889
5162
5163# Whether to enable jupyter server browser
5164#enable_jupyter_server_browser = false
5165
5166# Whether to allow root access to the jupyter server browser
5167#enable_jupyter_server_browser_root = false
5168
5169# Hostname (or IP address) of remote Triton inference service (outside of DAI), to be used when auto_deploy_triton_scoring_pipeline
5170# and make_triton_scoring_pipeline are not disabled. If set, check triton_model_repository_dir_remote and triton_server_params_remote as well.
5171# 
5172#triton_host_remote = ""
5173
5174# Path to the model repository directory for a remote Triton inference server outside of Driverless AI. All Triton deployments for all users are stored in this directory. Requires write access to this directory from Driverless AI (shared file system). This setting is optional. If not provided, each model deployment is uploaded over the gRPC protocol.
5175#triton_model_repository_dir_remote = ""
5176
5177# Parameters to connect to remote Triton server, only used if triton_host_remote and
5178# triton_model_repository_dir_remote are set.
5179# Note: 'model-control-mode' need to be set to 'explicit' in order to allow DAI upload model to remote
5180# triton server.
5181# .
5182#triton_server_params_remote = "{'http-port': 8000, 'grpc-port': 8001, 'metrics-port': 8002, 'model-control-mode': 'explicit'}"
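
# Example (hypothetical host and shared path): pointing DAI at an external
# Triton server that runs with model-control-mode set to 'explicit' and
# shares a model repository over a common file system:
#
#   triton_host_remote = "triton.internal.example.com"
#   triton_model_repository_dir_remote = "/mnt/shared/triton-models"
#   triton_server_params_remote = "{'http-port': 8000, 'grpc-port': 8001, 'metrics-port': 8002, 'model-control-mode': 'explicit'}"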

#triton_log_level = 0

#triton_model_reload_on_startup_count = 0

#triton_clean_up_temp_python_env_on_startup = true

# When set to true, CPU executors will strictly run just CPU tasks.
#multinode_enable_strict_queue_policy = false

# Controls whether CPU tasks can run on GPU machines.
#multinode_enable_cpu_tasks_on_gpu_machines = true

# Storage medium to be used to exchange data between main server and remote worker nodes.
#multinode_storage_medium = "minio"

# How long-running tasks are scheduled.
# multiprocessing: forks the current process immediately.
# singlenode:      shares the task through redis and needs a worker running.
# multinode:       same as singlenode and also shares the data through minio
# and allows workers to run on different machines.
#
#worker_mode = "singlenode"

# Redis settings
#redis_ip = "127.0.0.1"

# Redis settings
#redis_port = 6379

# Redis database. Each DAI instance running on the redis server should have a unique integer.
#redis_db = 0

# Redis password. Will be randomly generated at main server startup, and by default it will show up uncommented in the config file. If you are running more than one Driverless AI instance per system, make sure each instance is connected to its own redis queue.
#main_server_redis_password = "PlWUjvEJSiWu9j0aopOyL5KwqnrKtyWVoZHunqxr"

# If set to true, the config will get encrypted before it gets saved into the Redis database.
#redis_encrypt_config = false

# The port that Minio will listen on. This only takes effect if the current system is a multinode main server.
#local_minio_port = 9001

# Location of main server's minio server.
#main_server_minio_address = "127.0.0.1:9001"

# Access key of main server's minio server.
#main_server_minio_access_key_id = "GMCSE2K2T3RV6YEHJUYW"

# Secret access key of main server's minio server.
#main_server_minio_secret_access_key = "JFxmXvE/W1AaqwgyPxAUFsJZRnDWUaeQciZJUe9H"

# Name of minio bucket used for file synchronization.
#main_server_minio_bucket = "h2oai"

# S3 global access key.
#main_server_s3_access_key_id = "access_key"

# S3 global secret access key
#main_server_s3_secret_access_key = "secret_access_key"

# S3 bucket.
#main_server_s3_bucket = "h2oai-multinode-tests"
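
# Example (hypothetical addresses): a minimal multinode main server setup,
# with redis and minio on the main server and reachable from all workers:
#
#   worker_mode = "multinode"
#   redis_ip = "10.10.0.5"
#   redis_port = 6379
#   main_server_minio_address = "10.10.0.5:9001"
#   main_server_minio_bucket = "h2oai"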

# Maximum number of local tasks processed at once, limited to no more than total number of physical (not virtual) cores divided by two (minimum of 1).
#worker_local_processors = 32

# A concurrency limit for the 3 priority queues, only enabled when worker_remote_processors is greater than 0.
#worker_priority_queues_processors = 4

# A timeout before which a scheduled task is bumped up in priority
#worker_priority_queues_time_check = 30

# Maximum number of remote tasks processed at once, if value is set to -1 the system will automatically pick a reasonable limit depending on the number of available virtual CPU cores.
#worker_remote_processors = -1

# If worker_remote_processors >= 3, factor by which each task reduces threads, used by various packages like datatable, lightgbm, xgboost, etc.
#worker_remote_processors_max_threads_reduction_factor = 0.7

# Temporary file system location for multinode data transfer. This has to be an absolute path with equivalent configuration on both the main server and remote workers.
#multinode_tmpfs = ""

# When set to true, will use the 'multinode_tmpfs' as datasets store.
#multinode_store_datasets_in_tmpfs = false

# How often the server should extract results from redis queue in milliseconds.
#redis_result_queue_polling_interval = 100

# Sleep time for worker loop.
#worker_sleep = 0.1

# For how many seconds the worker should wait for the main server minio bucket before it fails
#main_server_minio_bucket_ping_timeout = 180

# A JSON list of up to two objects, where each object defines a worker node profile with name, num_cpus, num_gpus, memory_gb, gpu_is_mig. Currently, the profiles must be named CPU and GPU. The GPU profile must have num_gpus greater than 0. An example worker_spec: [{"name": "CPU", "num_cpus": 8, "num_gpus": 2, "memory_gb": 32, "gpu_is_mig": true}].
#worker_node_spec = ""

# How long the worker should wait on redis db initialization in seconds.
#worker_start_timeout = 30

#worker_no_main_server_wait_time = 1800

#worker_no_main_server_wait_time_with_hard_assert = 30

# For how many seconds the worker can fail to respond before being marked unhealthy.
#worker_healthy_response_period = 300

# Whether to enable priority queue for worker nodes to schedule experiments.
#
#enable_experiments_priority_queue = false

# Exposes the Driverless AI base version when enabled.
#expose_server_version = true

# https settings
# You can make a self-signed certificate for testing with the following commands:
# sudo openssl req -x509 -newkey rsa:4096 -keyout private_key.pem -out cert.pem -days 3650 -nodes -subj '/O=Driverless AI'
# sudo chown dai:dai cert.pem private_key.pem
# sudo chmod 600 cert.pem private_key.pem
# sudo mv cert.pem private_key.pem /etc/dai
#enable_https = false

# https settings
# You can make a self-signed certificate for testing with the following commands:
# sudo openssl req -x509 -newkey rsa:4096 -keyout private_key.pem -out cert.pem -days 3650 -nodes -subj '/O=Driverless AI'
# sudo chown dai:dai cert.pem private_key.pem
# sudo chmod 600 cert.pem private_key.pem
# sudo mv cert.pem private_key.pem /etc/dai
#ssl_key_file = "/etc/dai/private_key.pem"

# https settings
# You can make a self-signed certificate for testing with the following commands:
# sudo openssl req -x509 -newkey rsa:4096 -keyout private_key.pem -out cert.pem -days 3650 -nodes -subj '/O=Driverless AI'
# sudo chown dai:dai cert.pem private_key.pem
# sudo chmod 600 cert.pem private_key.pem
# sudo mv cert.pem private_key.pem /etc/dai
#ssl_crt_file = "/etc/dai/cert.pem"

# https settings
# Passphrase for the ssl_key_file,
# either use this setting or ssl_key_passphrase_file,
# or neither if no passphrase is used.
#ssl_key_passphrase = ""

# https settings
# Passphrase file for the ssl_key_file,
# either use this setting or ssl_key_passphrase,
# or neither if no passphrase is used.
#ssl_key_passphrase_file = ""
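
# Example: after generating a self-signed certificate for testing (as in the
# commands above) and placing the files in /etc/dai, HTTPS can be enabled with:
#
#   enable_https = true
#   ssl_key_file = "/etc/dai/private_key.pem"
#   ssl_crt_file = "/etc/dai/cert.pem"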

# SSL TLS
#ssl_no_sslv2 = true

# SSL TLS
#ssl_no_sslv3 = true

# SSL TLS
#ssl_no_tlsv1 = true

# SSL TLS
#ssl_no_tlsv1_1 = true

# SSL TLS
#ssl_no_tlsv1_2 = false

# SSL TLS
#ssl_no_tlsv1_3 = false

# https settings
# Sets the client verification mode.
# CERT_NONE: Client does not need to provide a certificate, and if it does, any
# verification errors are ignored.
# CERT_OPTIONAL: Client does not need to provide a certificate, and if it does,
# the certificate is verified against the configured CA chains.
# CERT_REQUIRED: Client needs to provide a certificate, and the certificate is
# verified.
# You'll need to set 'ssl_client_key_file' and 'ssl_client_crt_file'
# when this mode is selected, so that Driverless AI is able to verify
# its own callback requests.
#
#ssl_client_verify_mode = "CERT_NONE"

# https settings
# Path to the Certification Authority certificate file. This certificate will be
# used to verify the client certificate when client authentication is turned on.
# If this is not set, clients are verified using default system certificates.
#
#ssl_ca_file = ""

# https settings
# Path to the private key that Driverless AI will use to authenticate itself when
# CERT_REQUIRED mode is set.
#
#ssl_client_key_file = ""

# https settings
# Path to the client certificate that Driverless AI will use to authenticate itself
# when CERT_REQUIRED mode is set.
#
#ssl_client_crt_file = ""

# If enabled, webserver will serve xsrf cookies and verify their validity upon every POST request
#enable_xsrf_protection = true

# Sets the `SameSite` attribute for the `_xsrf` cookie; options are "Lax", "Strict", or "".
#xsrf_cookie_samesite = ""

#enable_secure_cookies = false

# When enabled, each authenticated access is verified by comparing the IP address that initiated the session with the IP address of the current request
#verify_session_ip = false

# Enables automatic detection of forbidden/dangerous constructs in custom recipes
#custom_recipe_security_analysis_enabled = false

# List of modules that can be imported in custom recipes. Default empty list means all modules are allowed except for banlisted ones
#custom_recipe_import_allowlist = "[]"

# List of modules that cannot be imported in custom recipes
#custom_recipe_import_banlist = "['shlex', 'plumbum', 'pexpect', 'envoy', 'commands', 'fabric', 'subprocess', 'os.system', 'system']"

# Regex pattern list of calls which are allowed in custom recipes.
# Empty list means everything (except for banlist) is allowed.
# E.g. if only `os.path.*` is in allowlist, custom recipe can only call methods
# from `os.path` module and the built-in ones
#
#custom_recipe_method_call_allowlist = "[]"

# Regex pattern list of calls which need to be rejected in custom recipes.
# E.g. if `os.system` in banlist, custom recipe cannot call `os.system()`.
# If `socket.*` in banlist, recipe cannot call any method of socket module such as
# `socket.socket()` or any `socket.a.b.c()`
#
#custom_recipe_method_call_banlist = "['os\\.system', 'socket\\..*', 'subprocess.*', 'os.spawn.*']"

# List of regex patterns representing dangerous sequences/constructs
# which could be harmful to whole system and should be banned from code
#
#custom_recipe_dangerous_patterns = "['rm -rf', 'rm -fr']"
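
# Example (hypothetical policy): enable the security analysis and restrict
# custom recipes to importing only numpy, pandas, and sklearn:
#
#   custom_recipe_security_analysis_enabled = true
#   custom_recipe_import_allowlist = "['numpy', 'pandas', 'sklearn']"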

# If enabled, user can log in from 2 browsers (scripts) at the same time
#allow_concurrent_sessions = true

# Extra HTTP headers.
#extra_http_headers = "{}"

# If enabled, the webserver will add a Content-Security-Policy header to all responses. This header helps to prevent cross-site scripting (XSS) attacks by specifying which sources of content are allowed to be loaded by the browser.
#add_csp_header = true

# By default Driverless AI issues cookies with HTTPOnly and Secure attributes (morsels) enabled. In addition to that, the SameSite attribute is set to 'Lax', as it's a default in modern browsers. The config overrides the default key/value (morsels).
#http_cookie_attributes = "{'samesite': 'Lax'}"

# Enable column imputation
#enable_imputation = false

# Adds advanced settings panel to experiment setup, which allows creating
# custom features and more.
#
#enable_advanced_features_experiment = false

# Specifies whether Driverless AI uses H2O Storage or H2O Entity Server for
# a shared entities backend.
# h2o-storage: Uses legacy H2O Storage.
# entity-server: Uses the new HAIC Entity Server.
#
#h2o_storage_mode = "h2o-storage"

# Address of the H2O Storage endpoint. Keep empty to use the local storage only.
#h2o_storage_address = ""

# Whether to enable multi-project support in H2O Storage.
#enable_multi_projects = false

# Whether to use remote projects stored in H2O Storage instead of local projects.
#h2o_storage_projects_enabled = false

# Whether the channel to the storage should be encrypted.
#h2o_storage_tls_enabled = true

# Path to the certification authority certificate that H2O Storage server identity will be checked against.
#h2o_storage_tls_ca_path = ""

# Path to the client certificate to authenticate with H2O Storage server
#h2o_storage_tls_cert_path = ""

# Path to the client key to authenticate with H2O Storage server
#h2o_storage_tls_key_path = ""

# UUID of a Storage project to use instead of the remote HOME folder.
#h2o_storage_internal_default_project_id = ""

# Deadline for RPC calls with H2O Storage in seconds. Sets maximum number of seconds that Driverless AI waits for an RPC call to complete before it cancels it.
#h2o_storage_rpc_deadline_seconds = 60

# Deadline for RPC bytestream calls with H2O Storage in seconds. Sets maximum number of seconds that Driverless AI waits for an RPC call to complete before it cancels it. This value is used for uploading and downloading artifacts.
#h2o_storage_rpc_bytestream_deadline_seconds = 7200

# The Storage client manages its own access tokens, derived from the refresh token received on user login. When this option is set, an access token with the scopes defined here is requested. (space-separated list)
#h2o_storage_oauth2_scopes = ""

# Maximum message size of an RPC request in bytes. Requests larger than this limit will fail.
#h2o_storage_message_size_limit = 1048576000

# Maximum message size of an RPC request in bytes. Requests larger than this limit will fail.
#h2o_authz_message_size_limit = 1048576000

# If the `h2o_mlops_ui_url` is provided alongside the `enable_storage`, DAI is able to redirect the user to the MLOps app upon clicking the Deploy button.
#h2o_mlops_ui_url = ""

# If the `feature_store_ui_url` is provided alongside the `enable_file_systems`, DAI is able to redirect the user to the Feature Store app upon clicking the Feature Store button.
#feature_store_ui_url = ""

# H2O Secure Store server endpoint URL
#h2o_secure_store_endpoint_url = ""

# Enable TLS communication between DAI and the H2O Secure Store server
#h2o_secure_store_enable_tls = true

# Path to the client certificate to authenticate with the H2O Secure Store server. This is only effective when h2o_secure_store_enable_tls=True.
#h2o_secure_store_tls_cert_path = ""

# Whether to enable or disable linking datasets into projects.
#h2o_storage_dataset_linking_enabled = true

# Whether to enable or disable linking experiments into projects.
#h2o_storage_experiment_linking_enabled = true
# Keystore file that contains secure config.toml items like passwords, secret keys etc. Keystore is managed by h2oai.keystore tool.
#keystore_file = ""

# Verbosity of logging
# 0: quiet   (CRITICAL, ERROR, WARNING)
# 1: default (CRITICAL, ERROR, WARNING, INFO, DATA)
# 2: verbose (CRITICAL, ERROR, WARNING, INFO, DATA, DEBUG)
# Affects server and all experiments
#log_level = 1

# Whether to collect relevant server logs (h2oai_server.log, dai.log from systemctl or docker, and h2o log)
# Useful for when sending logs to H2O.ai
#collect_server_logs_in_experiment_logs = false

# When set, will migrate all user entities to the defined user upon startup, this is mostly useful during
# instance migration via H2O's AIEM/Steam.
#migrate_all_entities_to_user = ""

# Whether to have all user content isolated into a directory for each user.
# If set to False, all user content goes into a single common directory,
# recipes are shared, and the brain folder for restart/refit is shared.
# If set to True, each user has a separate folder for all user tasks,
# recipes are isolated to each user, and the brain folder for restart/refit is
# only for the specific user.
# Migration from False to True or back to False is allowed for
# all experiment content accessible by GUI or python client,
# all recipes, and starting experiment with same settings, restart, or refit.
# However, after switching to per-user mode, the common brain folder is no longer used.
#
#per_user_directories = true

# List of file names to ignore during dataset import. Any files with names listed here will be skipped when
# DAI creates a dataset. Example: a directory contains 3 files: [data_1.csv, data_2.csv, _SUCCESS]
# DAI will only attempt to create a dataset using files data_1.csv and data_2.csv, and the _SUCCESS file will be ignored.
# Default is to ignore _SUCCESS files, which are commonly created when exporting data from Hadoop
#
#data_import_ignore_file_names = "['_SUCCESS']"

# For data import from a directory (multiple files), allow column types to differ and perform upcast during import.
#data_import_upcast_multi_file = false

# If set to true, will explode columns with list data type when importing parquet files.
#data_import_explode_list_type_columns_in_parquet = false

# List of file types that Driverless AI should attempt to import data as IF no file extension exists in the file name
# If no file extension is provided, Driverless AI will attempt to import the data starting with the first type
# in the defined list. Default ["parquet", "orc"]
# Example: 'test.csv' (file extension exists) vs 'test' (file extension DOES NOT exist)
# NOTE: see supported_file_types configuration option for more details on supported file types
#
#files_without_extensions_expected_types = "['parquet', 'orc']"
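
# Example (hypothetical, assuming 'avro' is among your supported_file_types):
# if extensionless files in your environment are usually Avro, try that
# format first by changing the detection order:
#
#   files_without_extensions_expected_types = "['avro', 'parquet', 'orc']"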

# do_not_log_list : add configurations that you do not wish to be recorded in logs here. They will still be stored in experiment information so child experiments can behave consistently.
#do_not_log_list = "['cols_to_drop', 'cols_to_drop_sanitized', 'cols_to_group_by', 'cols_to_group_by_sanitized', 'cols_to_force_in', 'cols_to_force_in_sanitized', 'do_not_log_list', 'do_not_store_list', 'pytorch_nlp_pretrained_s3_access_key_id', 'pytorch_nlp_pretrained_s3_secret_access_key', 'auth_openid_end_session_endpoint_url']"

# do_not_store_list : add configurations that you do not wish to be stored at all here. Will not be remembered across experiments, so not applicable to data science related items that could be controlled by a user. These items are automatically not logged.
#do_not_store_list = "['h2o_authz_action_prefix', 'h2o_authz_user_prefix', 'h2o_authz_result_cache_ttl_sec', 'pip_install_options', 'local_default_project_key']"

# Memory limit in bytes for datatable to use during parsing of CSV files. -1 for unlimited. 0 for automatic. >0 for constraint.
#datatable_parse_max_memory_bytes = -1

# Delimiter/Separator to use when parsing tabular text files like CSV. Automatic if empty. Must be provided at system start.
#datatable_separator = ""

# Whether to enable ping of system status during DAI data ingestion.
#ping_load_data_file = false

# Period between checking DAI status.  Should be small enough to avoid slowing the parent that stops the ping process.
#ping_sleep_period = 0.5

# Precision of how data is stored
# 'datatable' keeps original datatable storage types (i.e. bool, int, float32, float64) (experimental)
# 'float32' best for speed, 'float64' best for accuracy or very large input values, "datatable" best for memory
# 'float32' allows numbers up to about +-3E38 with relative error of about 1E-7
# 'float64' allows numbers up to about +-1E308 with relative error of about 1E-16
# Some calculations, like the GLM standardization, can only handle up to sqrt() of these maximums for data values,
# so GLM with 32-bit precision can only handle up to about a value of 1E19 before standardization generates inf values.
# If you see "Best individual has invalid score" you may require higher precision.
#data_precision = "float32"

# Precision of most data transformers (same options and notes as data_precision).
# Useful for higher precision in transformers with numerous operations that can accumulate error.
# Also useful if want faster performance for transformers but otherwise want data stored in high precision.
#transformer_precision = "float32"
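
# Example: if input values exceed roughly 1E19 in magnitude (where GLM
# standardization in 32-bit can overflow to inf), raise both precisions:
#
#   data_precision = "float64"
#   transformer_precision = "float64"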

# Whether to change ulimit soft limits up to hard limits (for DAI server app, which is not a generic user app).
# Prevents resource limit problems in some cases.
# Restricted to no more than limit_nofile and limit_nproc for those resources.
#ulimit_up_to_hard_limit = true

#disable_core_files = false

# Limit on number of open files
# Below should be consistent with start-dai.sh
#limit_nofile = 131071

# Limit on number of threads
# Below should be consistent with start-dai.sh
#limit_nproc = 16384

# Whether to compute training, validation, and test correlation matrix (table and heatmap pdf) and save to disk
# alpha: WARNING: currently single threaded and quadratically slow for many columns
#compute_correlation = false

# Whether to dump to disk a correlation heatmap
#produce_correlation_heatmap = false

# Value to report high correlation between original features
#high_correlation_value_to_report = 0.95

# If True, experiments aborted by server restart will automatically restart and continue upon user login
#restart_experiments_after_shutdown = false

# When an environment variable is set for a toml value, consider that an override of that toml value.  Experiments remember toml values for scoring, and this treats any environment variable set as equivalent to putting OVERRIDE_ in front of the environment key.
#any_env_overrides = false

# Include byte order mark (BOM) when writing CSV files. Required to support UTF-8 encoding in Excel.
#datatable_bom_csv = false

# Whether to enable debug prints (to console/stdout/stderr), e.g. showing up in dai*.log or dai*.txt type files.
#debug_print = false

# Level (0-4) for debug prints (to console/stdout/stderr), e.g. showing up in dai*.log or dai*.txt type files.  1-2 is normal, 4 would lead to highly excessive debug and is not recommended in production.
#debug_print_level = 0

#return_quickly_autodl_testing = false

#return_quickly_autodl_testing2 = false

#return_before_final_model = false

# Whether to check if config.toml keys are valid and fail if not valid
#check_invalid_config_toml_keys = true

#predict_safe_trials = 2

#fit_safe_trials = 2

#allow_no_pid_host = true

#enable_autodl_system_insights = true

#enable_deleting_autodl_system_insights_finished_experiments = true

#main_logger_with_experiment_ids = true

# Reduce memory usage during final ensemble feature engineering (1 uses most memory, larger values use less memory)
#final_munging_memory_reduction_factor = 2

# How much more memory a typical transformer needs than the input data.
# Can be increased if, e.g., final model munging uses too much memory due to parallel operations.
#munging_memory_overhead_factor = 5

#per_transformer_segfault_protection_ga = false

#per_transformer_segfault_protection_final = false

# How often to check resources (disk, memory, cpu) to see if submission needs to be stalled.
#submit_resource_wait_period = 10

# Stall submission of subprocesses if system CPU usage is higher than this threshold in percent (set to 100 to disable). A reasonable number is 90.0 if activated
#stall_subprocess_submission_cpu_threshold_pct = 100

# Restrict/Stall submission of subprocesses if DAI fork count (across all experiments) per unit ulimit nproc soft limit is higher than this threshold in percent (set to -1 to disable, 0 for minimal forking). A reasonable number is 90.0 if activated
#stall_subprocess_submission_dai_fork_threshold_pct = -1.0

# Restrict/Stall submission of subprocesses if experiment fork count (across all experiments) per unit ulimit nproc soft limit is higher than this threshold in percent (set to -1 to disable, 0 for minimal forking). A reasonable number is 90.0 if activated. For small data leads to overhead of about 0.1s per task submitted due to checks, so for scoring can slow things down for tests.
#stall_subprocess_submission_experiment_fork_threshold_pct = -1.0

# Whether to restrict pool workers even if not used, by reducing number of pool workers available. Good if really huge number of experiments, but otherwise, best to have all pool workers ready and only stall submission of tasks so can be dynamic to multi-experiment environment
#restrict_initpool_by_memory = true

# Whether to terminate experiments if the system memory available falls below memory_limit_gb_terminate
#terminate_experiment_if_memory_low = false

# Memory in GB below which an experiment will be terminated if terminate_experiment_if_memory_low=true.
#memory_limit_gb_terminate = 5

# A fraction with valid values between 0.1 and 1.0 that determines the disk usage quota for a user. This quota is checked during dataset imports and experiment runs.
#users_disk_usage_quota = 1.0

# Scoring directory path, relative to the run path
#scoring_data_directory = "tmp"

#num_models_for_resume_graph = 1000

# Internal helper to remember whether exclusive mode was changed
#last_exclusive_mode = ""

#mojo_acceptance_test_errors_fatal = true

#mojo_acceptance_test_errors_shap_fatal = true

#mojo_acceptance_test_orig_shap = true

# Which MOJO runtimes should be tested as part of the mini acceptance tests
#mojo_acceptance_test_mojo_types = "['C++', 'Java']"

# Create MOJO for feature engineering pipeline only (no predictions)
#make_mojo_scoring_pipeline_for_features_only = false

# Replaces target encoding features by their input columns. Instead of CVTE_Age:Income:Zip, this will create Age:Income:Zip. Only when make_mojo_scoring_pipeline_for_features_only is enabled.
#mojo_replace_target_encoding_with_grouped_input_cols = false

# Use pipeline to generate transformed features, when making predictions, bypassing the model that usually converts transformed features into predictions.
#predictions_as_transform_only = false

# If set to true, will make sure only current instance can access its database
#enable_single_instance_db_access = true

# DCGM daemon address, DCGM has to be in standalone mode in remote/local host.
#dcgm_daemon_address = "127.0.0.1"

# Deprecated - maps to enable_pytorch_nlp_transformer and enable_pytorch_nlp_model in 1.10.2+
#enable_pytorch_nlp = "auto"

# How long to wait per GPU for tensorflow/torch to run during system checks.
#check_timeout_per_gpu = 20

# Whether to fail start-up if cannot successfully run GPU checks
#gpu_exit_if_fails = true

# Cache TTL in seconds for authorization checks in the GUI. Cached permission results are reused within this time period to reduce backend load. Default is 300 seconds (5 minutes).
#h2o_ui_authz_result_cache_ttl_sec = 300

#how_started = ""

#wizard_state = ""

# Whether to enable pushing telemetry events to a configured telemetry receiver in 'telemetry_plugins_dir'.
#enable_telemetry = false

# Directory to scan for telemetry recipes.
#telemetry_plugins_dir = "./telemetry_plugins"

# Whether to enable TLS to communicate to H2O.ai Telemetry Service.
#h2o_telemetry_tls_enabled = false

# Timeout value when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_rpc_deadline_seconds = 60

# H2O.ai Telemetry Service address in H2O.ai Cloud.
#h2o_telemetry_address = ""

# H2O.ai Telemetry Service access token file location.
#h2o_telemetry_service_token_location = ""

# TLS CA path when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_tls_ca_path = ""

# TLS certificate path when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_tls_cert_path = ""

# TLS key path when communicating to H2O.ai Telemetry Service.
#h2o_telemetry_tls_key_path = ""

# Whether to enable pushing audit events to a configured Audit Trail receiver in 'audit_trail_plugins_dir'.
#enable_audit_trail = false

# Whether to return all stack trace error log to audit trail API
#enable_debug_error_audit_trail = false

# Timeout value when communicating to H2O.ai Audit Trail Service.
#h2o_audit_trail_rpc_deadline_seconds = 60

# H2O.ai Audit Trail Service address in H2O.ai Cloud.
#h2o_audit_trail_address = ""

# Path to the Kubernetes service account token for Audit Trail and AuthZ.
#h2o_k8s_service_token_location = "/var/run/secrets/kubernetes.io/serviceaccount/token"

# Enable H2O.ai AuthZ.
#enable_h2o_authz = false

# The endpoint (host:port) of the H2O.ai AuthZ Policy Server in H2O.ai Cloud.
#h2o_authz_policy_server_endpoint = ""

# The endpoint (host:port) of the H2O.ai Workspace server in H2O.ai Cloud.
#h2o_workspace_server_endpoint = ""

# H2O.ai HAIC engine name for driverless instance that contains the
# workspace ID. Example:
# //engine-manager/workspaces/<workspace name>/daiEngines/<engine name>
#
#haic_engine_name = ""

# Whether to disable downloading logs via both API and UI. Note: this setting does not apply to the admin user.
#disable_download_logs = false
5798
5799# Enable time series lag-based recipe with lag transformers. If disabled, the same train-test gap and periods are used, but no lag transformers are enabled. If disabled, the set of feature transformations is quite limited without lag transformers, so consider setting enable_time_unaware_transformers to true in order to treat the problem as more like an IID type problem.
5800#time_series_recipe = true
5801
5802# Whether causal splits are used when time_series_recipe is false orwhether to use same train-gap-test splits when lag transformers are disabled (default behavior).For train-test gap, period, etc. to be used when lag-based recipe is disabled, this must be false.
5803#time_series_causal_split_recipe = false

# Whether to use lag transformers when using causal-split for validation
# (as occurs when not using time-based lag recipe).
# If no time groups columns, lag transformers will still use time-column as sole time group column.
# 
#use_lags_if_causal_recipe = false

# 'diverse': explore a diverse set of models built using various expert settings. Note that it's possible to rerun another such diverse leaderboard on top of the best-performing model(s), which will effectively help you compose these expert settings.
# 'sliding_window': If the forecast horizon is N periods, create a separate model for each of the (gap, horizon) pairs of (0,n), (n,n), (2*n,n), ..., (2*N-1, n) in units of time periods.
# The number of periods to predict per model n is controlled by the expert setting 'time_series_leaderboard_periods_per_model', which defaults to 1.
#time_series_leaderboard_mode = "diverse"

# Fine-control to limit the number of models built in the 'sliding_window' mode. Larger values lead to fewer models.
#time_series_leaderboard_periods_per_model = 1

# Whether to create larger validation splits that are not bound to the length of the forecast horizon.
#time_series_merge_splits = true

# Maximum ratio of training data samples used for validation across splits when larger validation splits are created.
#merge_splits_max_valid_ratio = -1.0

# Whether to keep a fixed-size train timespan across time-based splits.
# That leads to roughly the same number of train samples in every split.
# 
#fixed_size_train_timespan = false

# Provide date or datetime timestamps (in same format as the time column) for custom training and validation splits like this: "tr_start1, tr_end1, va_start1, va_end1, ..., tr_startN, tr_endN, va_startN, va_endN"
#time_series_validation_fold_split_datetime_boundaries = ""
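# Example (hypothetical dates): for a daily time column in '%Y-%m-%d' format,
# two custom train/validation splits could be specified as:
# time_series_validation_fold_split_datetime_boundaries = "2019-01-01, 2019-06-30, 2019-07-01, 2019-09-30, 2019-04-01, 2019-09-30, 2019-10-01, 2019-12-31"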

# Set fixed number of time-based splits for internal model validation (actual number of splits allowed can be less and is determined at experiment run-time).
#time_series_validation_splits = -1

# Maximum overlap between two time-based splits. Higher values increase the number of possible splits.
#time_series_splits_max_overlap = 0.5

# Earliest allowed datetime (in %Y%m%d format) for which to allow automatic conversion of integers to a time column during parsing. For example, 2010 or 201004 or 20100402 or 201004022312 can be converted to a valid date/datetime, but 1000 or 100004 or 10000402 or 10004022313 can not, and neither can 201000 or 20100500 etc.
#min_ymd_timestamp = 19000101

# Latest allowed datetime (in %Y%m%d format) for which to allow automatic conversion of integers to a time column during parsing. For example, 2010 or 201004 or 20100402 can be converted to a valid date/datetime, but 3000 or 300004 or 30000402 or 30004022313 can not, and neither can 201000 or 20100500 etc.
#max_ymd_timestamp = 21000101

# Maximum number of data samples (randomly selected rows) for date/datetime format detection
#max_rows_datetime_format_detection = 100000

# Manually disables certain datetime formats during data ingest and experiments.
# For example, ['%y'] will avoid parsing columns that contain '00', '01', '02' string values as a date column.
# 
#disallowed_datetime_formats = "['%y']"

# Whether to use datetime cache
#use_datetime_cache = true

# Minimum number of rows required to utilize datetime cache
#datetime_cache_min_rows = 10000

# Automatically generate is-holiday features from date columns
#holiday_features = true

#holiday_country = ""

# List of countries for which to look up holiday calendar and to generate is-Holiday features for
#holiday_countries = "['UnitedStates', 'UnitedKingdom', 'EuropeanCentralBank', 'Germany', 'Mexico', 'Japan']"
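# Example (hypothetical): to generate is-Holiday features only for the US and
# German calendars, the default list could be narrowed to:
# holiday_countries = "['UnitedStates', 'Germany']"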

# Max. sample size for automatic determination of time series train/valid split properties, only if time column is selected
#max_time_series_properties_sample_size = 250000

# Maximum number of lag sizes to use for lag-based time-series experiments. These are sampled from if sample_lag_sizes==True, else all are taken (-1 == automatic).
#max_lag_sizes = 30

# Minimum required autocorrelation threshold for a lag to be considered for feature engineering
#min_lag_autocorrelation = 0.1

# How many samples of lag sizes to use for a single time group (single time series signal)
#max_signal_lag_sizes = 100

# If enabled, sample from a set of possible lag sizes (e.g., lags=[1, 4, 8]) for each lag-based transformer, to no more than max_sampled_lag_sizes lags. Can help reduce overall model complexity and size, especially when many columns are unavailable for prediction.
#sample_lag_sizes = false

# If sample_lag_sizes is enabled, sample from a set of possible lag sizes (e.g., lags=[1, 4, 8]) for each lag-based transformer, to no more than max_sampled_lag_sizes lags. Can help reduce overall model complexity and size. Defaults to -1 (auto), in which case it's the same as the feature interaction depth controlled by max_feature_interaction_depth.
#max_sampled_lag_sizes = -1

# Override lags to be used
# e.g. [7, 14, 21] # this exact list
# e.g. 21 # produce from 1 to 21
# e.g. 21:3 produce from 1 to 21 in step of 3
# e.g. 5-21 produce from 5 to 21
# e.g. 5-21:3 produce from 5 to 21 in step of 3
# 
#override_lag_sizes = "[]"
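# Examples (hypothetical values) of the range syntax described above:
# override_lag_sizes = "[7, 14, 21]"  # exactly these three lags
# override_lag_sizes = "5-21:3"       # lags from 5 to 21 in steps of 3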

# Override lags to be used for features that are not known ahead of time
# e.g. [7, 14, 21] # this exact list
# e.g. 21 # produce from 1 to 21
# e.g. 21:3 produce from 1 to 21 in step of 3
# e.g. 5-21 produce from 5 to 21
# e.g. 5-21:3 produce from 5 to 21 in step of 3
# 
#override_ufapt_lag_sizes = "[]"

# Override lags to be used for features that are known ahead of time
# e.g. [7, 14, 21] # this exact list
# e.g. 21 # produce from 1 to 21
# e.g. 21:3 produce from 1 to 21 in step of 3
# e.g. 5-21 produce from 5 to 21
# e.g. 5-21:3 produce from 5 to 21 in step of 3
# 
#override_non_ufapt_lag_sizes = "[]"

# Smallest considered lag size
#min_lag_size = -1

# Whether to enable feature engineering based on selected time column, e.g. Date~weekday.
#allow_time_column_as_feature = true

# Whether to enable integer time column to be used as a numeric feature.
# If using time series recipe, using time column (numeric time stamps) as input features can lead to a model that
# memorizes the actual time stamps instead of features that generalize to the future.
# 
#allow_time_column_as_numeric_feature = false

# Allowed date or date-time transformations.
# Date transformers include: year, quarter, month, week, weekday, day, dayofyear, num.
# Date transformers also include: hour, minute, second.
# Features in DAI will show up as get_ + transformation name.
# E.g. num is a direct numeric value representing the floating point value of time,
# which can lead to over-fitting if used on IID problems. So this is turned off by default.
#datetime_funcs = "['year', 'quarter', 'month', 'week', 'weekday', 'day', 'dayofyear', 'hour', 'minute', 'second']"
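# Example (hypothetical): to keep only coarse calendar features and avoid
# sub-daily transformations, the list could be restricted to:
# datetime_funcs = "['year', 'quarter', 'month', 'week', 'weekday', 'day']"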

# Whether to filter out date and date-time transformations that lead to unseen values in the future.
# 
#filter_datetime_funcs = true

# Whether to consider time groups columns (tgc) as standalone features.
# Note that 'time_column' is treated separately via 'Allow to engineer features from time column'.
# Note that tgc_allow_target_encoding independently controls if time column groups are target encoded.
# Use allowed_coltypes_for_tgc_as_features for control per feature type.
# 
#allow_tgc_as_features = true

# Which time groups columns (tgc) feature types to consider as standalone features,
# if the corresponding flag "Consider time groups columns as standalone features" is set to true.
# E.g. all column types would be ["numeric", "categorical", "ohe_categorical", "datetime", "date", "text"]
# Note that 'time_column' is treated separately via 'Allow to engineer features from time column'.
# Note that if lag-based time series recipe is disabled, then all tgc are allowed features.
# 
#allowed_coltypes_for_tgc_as_features = "['numeric', 'categorical', 'ohe_categorical', 'datetime', 'date', 'text']"

# Whether various transformers (clustering, truncated SVD) are enabled,
# that otherwise would be disabled for time series due to
# potential to overfit by leaking across time within the fit of each fold.
# 
#enable_time_unaware_transformers = "auto"

# Whether to group by all time groups columns for creating lag features, instead of sampling from them
#tgc_only_use_all_groups = true

# Whether to allow target encoding of time groups. This can be useful if there are many groups.
# Note that allow_tgc_as_features independently controls if tgc are treated as normal features.
# 'auto': Choose CV by default.
# 'CV': Enable out-of-fold and CV-in-CV (if enabled) encoding
# 'simple': Simple memorized targets per group.
# 'off': Disable.
# Only relevant for time series experiments that have at least one time column group apart from the time column.
#tgc_allow_target_encoding = "auto"

# If allow_tgc_as_features is true or tgc_allow_target_encoding is true, whether to try both possibilities to see which does better during tuning. Safer than forcing one way or the other.
#tgc_allow_features_and_target_encoding_auto_tune = true

# Enable creation of holdout predictions on training data
# using moving windows (useful for MLI, but can be slow)
#time_series_holdout_preds = true

# Max number of splits used for creating final time-series model's holdout/backtesting predictions. With the default value '-1' the same number of splits as during model validation will be used. Use 'time_series_validation_splits' to control the number of time-based splits used for model validation.
#time_series_max_holdout_splits = -1

#single_model_vs_cv_score_reldiff = 0.05

#single_model_vs_cv_score_reldiff2 = 0.0

# Whether to blend ensembles in link space, so that the inverse link function can be applied to get predictions after blending.
# This allows Shapley values to sum up to the final predictions, after applying the inverse link function:
# preds = inverse_link(blend(base learner predictions in link space))
#       = inverse_link(sum(blend(base learner Shapley values in link space)))
#       = inverse_link(sum(ensemble Shapley values in link space))
# For binary classification, this is only supported if inverse_link = logistic = 1/(1+exp(-x)).
# For multiclass classification, this is only supported if inverse_link = softmax = exp(x)/sum(exp(x)).
# For regression, this behavior happens naturally if all base learners use the identity link function; otherwise it is not possible.
#blend_in_link_space = true

# Whether to speed up time-series holdout predictions for back-testing on training data (used for MLI and metrics calculation). Can be slightly less accurate.
#mli_ts_fast_approx = false

# Whether to speed up Shapley values for time-series holdout predictions for back-testing on training data (used for MLI). Can be slightly less accurate.
#mli_ts_fast_approx_contribs = true

# Enable creation of Shapley values for holdout predictions on training data
# using moving windows (useful for MLI, but can be slow), at the time of the experiment. If disabled, MLI will
# generate Shapley values on demand.
#mli_ts_holdout_contribs = true

# Values of 5 or more can improve generalization by more aggressive dropping of least important features. Set to 1 to disable.
#time_series_min_interpretability = 5

# Dropout mode for lag features in order to achieve an equal n.a.-ratio between train and validation/test. The independent mode performs a simple feature-wise dropout, whereas the dependent one takes lag-size dependencies per sample/row into account.
#lags_dropout = "dependent"

# Normalized probability of choosing to lag non-targets relative to targets (-1.0 = auto)
#prob_lag_non_targets = -1.0

# Method to create rolling test set predictions, if the forecast horizon is shorter than the time span of the test set. One can choose between test time augmentation (TTA) and a successive refitting of the final pipeline.
#rolling_test_method = "tta"

#rolling_test_method_max_splits = 1000

# Apply TTA in one pass instead of using rolling windows for internal validation split predictions. Note: Setting this to 'False' leads to significantly longer runtimes.
#fast_tta_internal = true

# Apply TTA in one pass instead of using rolling windows for test set predictions. This only applies if the forecast horizon is shorter than the time span of the test set. Note: Setting this to 'False' leads to significantly longer runtimes.
#fast_tta_test = true

# Probability for new Lags/EWMA gene to use default lags (determined by frequency/gap/horizon, independent of data) (-1.0 = auto)
#prob_default_lags = -1.0

# Unnormalized probability of choosing other lag time-series transformers based on interactions (-1.0 = auto)
#prob_lagsinteraction = -1.0

# Unnormalized probability of choosing other lag time-series transformers based on aggregations (-1.0 = auto)
#prob_lagsaggregates = -1.0

# Time series centering or detrending transformation. The free parameter(s) of the trend model are fitted and the trend is removed from the target signal, and the pipeline is fitted on the residuals. Predictions are made by adding back the trend. Note: Can be cascaded with 'Time series lag-based target transformation', but is mutually exclusive with regular target transformations. The robust centering or linear detrending variants use RANSAC to achieve a higher tolerance w.r.t. outliers. The Epidemic target transformer uses the SEIR model: https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SEIR_model
#ts_target_trafo = "none"

# Dictionary to control Epidemic SEIRD model for de-trending of target per time series group.
# Note: The target column must correspond to I(t), the infected cases as a function of time.
# For each training split and time series group, the SEIRD model is fitted to the target signal (by optimizing
# the free parameters shown below for each time series group).
# Then, the SEIRD model's value is subtracted from the training response, and the residuals are passed to
# the feature engineering and modeling pipeline. For predictions, the SEIRD model's value is added to the residual
# predictions from the pipeline, for each time series group.
# Note: Careful selection of the bounds for the free parameters N, beta, gamma, delta, alpha, rho, lockdown,
# beta_decay, beta_decay_rate is extremely important for good results.
# - S(t) : susceptible/healthy/not immune
# - E(t) : exposed/not yet infectious
# - I(t) : infectious/active <= target column
# - R(t) : recovered/immune
# - D(t) : deceased
# ### Free parameters:
# - N : total population, N=S+E+I+R+D
# - beta : rate of exposure (S -> E)
# - gamma : rate of recovering (I -> R)
# - delta : incubation period
# - alpha : fatality rate
# - rho : rate at which people die
# - lockdown : day of lockdown (-1 => no lockdown)
# - beta_decay : beta decay due to lockdown
# - beta_decay_rate : speed of beta decay
# ### Dynamics:
# if lockdown >= 0:
# beta_min = beta * (1 - beta_decay)
# beta = (beta - beta_min) / (1 + np.exp(-beta_decay_rate * (-t + lockdown))) + beta_min
# dSdt = -beta * S * I / N
# dEdt = beta * S * I / N - delta * E
# dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
# dRdt = (1 - alpha) * gamma * I
# dDdt = alpha * rho * I
# Provide lower/upper bounds for each parameter you want to control the bounds for. Valid parameters are:
# N_min, N_max, beta_min, beta_max, gamma_min, gamma_max, delta_min, delta_max, alpha_min, alpha_max,
# rho_min, rho_max, lockdown_min, lockdown_max, beta_decay_min, beta_decay_max,
# beta_decay_rate_min, beta_decay_rate_max. You can change any subset of parameters, e.g.,
# ts_target_trafo_epidemic_params_dict="{'N_min': 1000, 'beta_max': 0.2}"
# To get SEIR model (in cases where death rates are very low, can speed up calculations significantly):
# set alpha_min=alpha_max=rho_min=rho_max=beta_decay_rate_min=beta_decay_rate_max=0, lockdown_min=lockdown_max=-1.
# 
#ts_target_trafo_epidemic_params_dict = "{}"
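# Example (hypothetical bounds): the SEIR simplification described above, written
# out as the full dictionary value:
# ts_target_trafo_epidemic_params_dict = "{'alpha_min': 0, 'alpha_max': 0, 'rho_min': 0, 'rho_max': 0, 'beta_decay_rate_min': 0, 'beta_decay_rate_max': 0, 'lockdown_min': -1, 'lockdown_max': -1}"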

#ts_target_trafo_epidemic_target = "I"

# Time series lag-based target transformation. One can choose between difference and ratio of the current and a lagged target. The corresponding lag size can be set via 'Target transformation lag size'. Note: Can be cascaded with 'Time series target transformation', but is mutually exclusive with regular target transformations.
#ts_lag_target_trafo = "none"

# Lag size used for time series target transformation. See setting 'Time series lag-based target transformation'. -1 => smallest valid value = prediction periods + gap (automatically adjusted by DAI if too small).
#ts_target_trafo_lag_size = -1

# Maximum number of columns sent from the UI to the backend in order to auto-detect TGC
#tgc_via_ui_max_ncols = 10

# Maximum frequency of duplicated timestamps for TGC detection
#tgc_dup_tolerance = 0.01

# Timeout in seconds for time-series properties detection in UI.
#timeseries_split_suggestion_timeout = 30.0

# Weight TS model scores by split number to this power.
# E.g. use 1.0 to weight the split closest to the horizon by a factor
# that is the number of splits larger than the oldest split.
# Applies to tuning models and final back-testing models.
# If 0.0 (default) is used, the median function is used, else the mean is used.
# 
#timeseries_recency_weight_power = 0.0

# Whether to force date column format conversion during prediction. The date format
# is inferred during training, and prediction data is assumed to have the same format.
# Enabling this setting forces DAI to do the format conversion silently.
# For instance, if the expected format is '%m/%d/%Y' but a prediction comes in as '2000-01-01',
# the conversion is done by converting the date representation into 'yyyy-mm-dd' in an ad hoc fashion.
# Note: Even with forced conversion, this normally won't affect the embedding information of the date column.
# 
#force_on_convert_incorrect_date_format = false

# Every *.toml file in this directory is read and processed the same way as the main config file.
#user_config_directory = ""

# IP address for the procsy process.
#procsy_ip = "127.0.0.1"

# Port for the procsy process.
#procsy_port = 12347

# Request timeout (in seconds) for the procsy process.
#procsy_timeout = 3600

# IP address for use by MLI.
#h2o_ip = "127.0.0.1"

# Port of H2O instance for use by MLI. Each H2O node has an internal port (web port+1, so by default port 12349) for internal node-to-node communication
#h2o_port = 12348

# IP address for the Driverless AI HTTP server.
#ip = "127.0.0.1"

# Port for the Driverless AI HTTP server.
#port = 12345

# A list of two integers indicating the port range to search over, and dynamically find an open port to bind to (e.g., [11111,20000]).
#port_range = "[]"
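# Example: let DAI search the range given above and bind to any open port
# between 11111 and 20000, instead of fixing 'port':
# port_range = "[11111, 20000]"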

# Strict version check for DAI
#strict_version_check = true

# File upload limit (default 100GB)
#max_file_upload_size = 104857600000

# Data directory. All application data and files related to datasets and
# experiments are stored in this directory.
#data_directory = "./tmp"

# Sets a custom path for the master.db. Use this to store the database outside the data directory,
# which can improve performance if the data directory is on a slow drive.
#db_path = ""

# Datasets directory. If set, it denotes the location from which all
# datasets are read and into which they are written. Typically this location should be
# configured to be on an external file system to allow more granular control over just the
# datasets volume. If empty, defaults to data_directory.
#datasets_directory = ""
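# Example (hypothetical paths): keep application state on local disk but read and
# write datasets on an external mount:
# data_directory = "./tmp"
# datasets_directory = "/mnt/dai-datasets"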

# Path to the directory where the logs of HDFS, Hive, JDBC, and KDB+ data connectors will be saved.
#data_connectors_logs_directory = "./tmp"

# Subdirectory within data_directory to store server logs.
#server_logs_sub_directory = "server_logs"

# Subdirectory within data_directory to store pid files for controlling kill/stop of DAI servers.
#pid_sub_directory = "pids"

# Path to the directory which will be used to save MapR tickets when MapR multi-user mode is enabled.
# This is applicable only when enable_mapr_multi_user_mode is set to true.
# 
#mapr_tickets_directory = "./tmp/mapr-tickets"

# MapR tickets duration in minutes. If set to -1, the default value is used
# (not specified in the maprlogin command); otherwise the specified configuration
# value is used, but no less than one day.
# 
#mapr_tickets_duration_minutes = -1

# Whether to delete, at server start, all temporary uploaded files left over from failed uploads.
# 
#remove_uploads_temp_files_server_start = true

# Whether to run through the entire data directory and remove all temporary files.
# Can lead to slow start-up times if there is a large number (much greater than 100) of experiments.
# 
#remove_temp_files_server_start = false

# Whether to delete temporary files after experiment is aborted/cancelled.
# 
#remove_temp_files_aborted_experiments = true

# Whether to opt in to usage statistics and bug reporting
#usage_stats_opt_in = true

# Configurations for a HDFS data source
# Path of hdfs coresite.xml
# core_site_xml_path is deprecated, please use hdfs_config_path
#core_site_xml_path = ""

# (Required) HDFS config folder path. Can contain multiple config files.
#hdfs_config_path = ""

# Path of the principal key tab file. Required when hdfs_auth_type='principal'.
# key_tab_path is deprecated, please use hdfs_keytab_path
# 
#key_tab_path = ""

# Path of the principal key tab file. Required when hdfs_auth_type='principal'.
# 
#hdfs_keytab_path = ""

# Whether to delete preview cache on server exit
#preview_cache_upon_server_exit = true

# When this setting is enabled, any user can see all tasks running in the system, including their owner and an identification key. If this setting is turned off, users can see only their own tasks.
#all_tasks_visible_to_users = true

# When enabled, server exposes Health API at /apis/health/v1, which provides system overview and utilization statistics
#enable_health_api = true

#notification_url = "https://s3.amazonaws.com/ai.h2o.notifications/dai_notifications_prod.json"

# When enabled, the notification scripts will inherit
# the parent's process (DriverlessAI) environment variables.
# 
#listeners_inherit_env_variables = false

# Notification scripts
# - the variable points to the location of a script which is executed at a given event in the experiment lifecycle
# - the script should have the executable flag enabled
# - use of an absolute path is suggested
# The on experiment start notification script location
#listeners_experiment_start = ""

# The on experiment finished notification script location
#listeners_experiment_done = ""

# The on experiment import notification script location
#listeners_experiment_import_done = ""

# Notification script triggered when building of MOJO pipeline for experiment is
# finished. The value should be an absolute path to executable script.
# 
#listeners_mojo_done = ""

# Notification script triggered when rendering of AutoDoc for experiment is
# finished. The value should be an absolute path to executable script.
# 
#listeners_autodoc_done = ""

# Notification script triggered when building of python scoring pipeline
# for experiment is finished.
# The value should be an absolute path to executable script.
# 
#listeners_scoring_pipeline_done = ""

# Notification script triggered when experiment and all its artifacts selected
# at the beginning of experiment are finished building.
# The value should be an absolute path to executable script.
# 
#listeners_experiment_artifacts_done = ""
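# Example (hypothetical paths): absolute paths to executable notification scripts,
# as suggested above:
# listeners_experiment_start = "/opt/dai/hooks/on_experiment_start.sh"
# listeners_experiment_done = "/opt/dai/hooks/on_experiment_done.sh"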

# Whether to run quick performance benchmark at start of application
#enable_quick_benchmark = true

# Whether to run extended performance benchmark at start of application
#enable_extended_benchmark = false

# Scaling factor for number of rows for extended performance benchmark. For rigorous performance benchmarking,
# values of 1 or larger are recommended.
#extended_benchmark_scale_num_rows = 0.1

# Number of columns for extended performance benchmark.
#extended_benchmark_num_cols = 20

# Seconds to allow for testing memory bandwidth by generating numpy frames
#benchmark_memory_timeout = 2

# Maximum portion of vm total to use for numpy memory benchmark
#benchmark_memory_vm_fraction = 0.25

# Maximum number of columns to use for numpy memory benchmark
#benchmark_memory_max_cols = 1500

# Whether to run quick startup checks at start of application
#enable_startup_checks = true

# Application ID override, which should uniquely identify the instance
#application_id = ""

# After how many seconds to abort MLI recipe execution plan or recipe compatibility checks.
# This blocks the main server from all activities, so a long timeout is not desired, especially in case of hanging processes,
# while a short timeout can too often lead to aborts on a busy system.
# 
#main_server_fork_timeout = 10.0

# After how many days the audit log records are removed.
# Set equal to 0 to disable removal of old records.
# 
#audit_log_retention_period = 5

# Time to wait after performing a cleanup of temporary files for in-browser dataset upload.
# 
#dataset_tmp_upload_file_retention_time_min = 5
