Experiment settings: Text sequence to sequence
The settings for a text sequence to sequence experiment are listed and described below.
General settings
Dataset
This setting defines the dataset for the experiment.
Problem category
This setting defines the general problem type category of the experiment, for example, image.
- The selected problem category (for example, image) determines the options in the Problem type setting.
- The From experiment option enables you to utilize the settings of an experiment (another experiment).
Experiment
This setting defines the experiment H2O Hydrogen Torch references to initialize the experiment settings. H2O Hydrogen Torch initializes the experiment settings with the values from the selected (built) experiment.
This setting is available only if From experiment is selected in the Problem category setting.
Problem type
This setting defines the problem type of the experiment, which also defines the settings H2O Hydrogen Torch displays for the experiment.
- The selected problem category (in the Problem category setting) determines the available problem types.
- The selected problem type and experience level determine the settings H2O Hydrogen Torch displays for the experiment.
Import config from YAML
This setting specifies the YAML file that defines the experiment settings.
- H2O Hydrogen Torch supports importing and exporting experiment settings as YAML files. You can download the config settings of finished experiments, make changes, and re-upload them when starting a new experiment in any instance of H2O Hydrogen Torch (a minimal editing sketch follows below).
- To learn how to download the YAML file (configuration file) of a completed experiment, see Download an experiment's logs/config file.
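The sketch below shows one way to edit a downloaded config with Python before re-uploading it. The key names (experiment_name, the training/epochs path) are illustrative assumptions; check the keys present in your own exported file.

```python
# A minimal sketch of editing a downloaded experiment config before re-upload.
# The key names below are illustrative; inspect your own exported YAML first.
import yaml

with open("experiment_config.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["experiment_name"] = "seq2seq-rerun"   # hypothetical key
cfg["training"]["epochs"] = 5              # hypothetical key path

with open("experiment_config_edited.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```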
Use previous experiment weights
This setting determines whether to initialize the model weights with the weights from the experiment specified in the Experiment setting.
Model weights are only available from an experiment (model) with the same problem type and backbone.
This setting might be useful in case you want to continue training from a built experiment.
The Use previous experiment weights setting is available only if From experiment is selected in the Problem category setting.
Experiment name
This setting defines the name of the experiment.
Dataset settings
Train dataframe
This setting specifies the path to a file that contains a dataframe comprising training records utilized by H2O Hydrogen Torch for model training within the experiment. Here, the term 'file' denotes a specific file adhering to a dataset format tailored for the problem type addressed in the experiment. To learn more, see Dataset formats.
- The records are combined into mini-batches when training the model.
- If a validation dataframe is provided, a fold column is not needed in the train dataframe.
- To import datasets for inference only, when defining the settings for an experiment, set the Train dataframe setting to None while setting the Test dataframe setting to the relevant dataframe (as a result, H2O Hydrogen Torch utilizes the relevant dataset for predictions and not for training).
Validation strategy
This setting specifies the validation strategy H2O Hydrogen Torch uses for the experiment.
To properly assess the performance of your trained models, it is common practice to evaluate them on separate holdout data that the model has not seen during training.
Options
- K-fold cross validation
- This option splits the data using the provided optional fold column in the train data or performs an automatic 5-fold cross-validation in the absence of a fold column.
- Grouped k-fold cross-validation
- This option allows you to specify a group column based on which the data is split into folds.
- Custom holdout validation
- This option specifies a separate holdout dataframe.
- Automatic holdout validation
- This option allows you to specify a holdout validation sample size that is automatically generated.
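The sketch below illustrates how these strategies partition a dataframe, using scikit-learn as a stand-in; it is not H2O Hydrogen Torch's internal code, and the column names are assumptions.

```python
# Illustration of the validation strategies with scikit-learn (not H2O
# Hydrogen Torch internals). Column names are hypothetical.
import pandas as pd
from sklearn.model_selection import KFold, GroupKFold, train_test_split

df = pd.DataFrame({
    "input_text": [f"sample {i}" for i in range(10)],
    "target_text": [f"target {i}" for i in range(10)],
    "group": [i // 2 for i in range(10)],   # hypothetical group column
})

# K-fold cross-validation: 5 folds when no fold column is provided.
for fold, (train_idx, valid_idx) in enumerate(
    KFold(n_splits=5, shuffle=True, random_state=0).split(df)
):
    print(f"fold {fold}: {len(train_idx)} train / {len(valid_idx)} validation rows")

# Grouped k-fold: rows sharing a group value never span both splits.
for train_idx, valid_idx in GroupKFold(n_splits=5).split(df, groups=df["group"]):
    assert set(df.loc[train_idx, "group"]).isdisjoint(set(df.loc[valid_idx, "group"]))

# Automatic holdout: a single random validation sample of a chosen size.
train_df, valid_df = train_test_split(df, test_size=0.2, random_state=0)
```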
Validation dataframe
This setting defines a file containing a dataframe with validation records that H2O Hydrogen Torch uses to evaluate the model during training.
- Setting a Validation dataframe requires the Validation strategy setting to be set to Custom holdout validation. When a validation dataframe is provided, H2O Hydrogen Torch fully respects the choice of a separate validation dataframe and does not perform any internal cross-validation. In other words, the model is trained on the full provided train dataframe, and model performance is evaluated on the provided validation dataframe.
- The validation dataframe should have the same format as the train dataframe but does not require a fold column.
The Validation dataframe setting is available only when Custom holdout validation is selected in the Validation strategy setting.
Selected folds
This setting defines the selected validation fold(s) in case of cross-validation; a separate model is trained for each value selected. Each model utilizes the corresponding part of the data as a holdout sample to assess performance while the model is fitted to the rest of the records from the training dataframe. As a result, folds estimate how the model performs in general when used to make predictions on data not used during model training.
H2O Hydrogen Torch allows running experiments on a single selected fold for faster experimenting and multiple selected folds to gain more trust in the model's generalization and performance capabilities.
This setting is available only when the Validation strategy setting is not set to Custom holdout validation or Automatic holdout validation.
Test dataframe
This setting defines a file containing a dataframe with test records that H2O Hydrogen Torch uses to test the model.
- The test dataframe should have the same format as the train dataframe but does not require a label column.
- To import datasets for inference only, when defining the setting for an experiment, set the Train dataframe setting to None while setting the Test dataframe setting to the relevant dataframe (as a result, H2O Hydrogen Torch utilizes the relevant dataset for predictions and not for training).
Label columns
This setting defines the name(s) of the dataframe column(s) that refer to the target value(s) an H2O Hydrogen Torch experiment can aim to predict.
Text column
Defines the dataset column(s) containing the input text H2O Hydrogen Torch uses during model training.
H2O Hydrogen Torch concatenates multiple text columns with a specific separator token.
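A minimal sketch of such a concatenation is shown below; whether H2O Hydrogen Torch uses exactly the backbone tokenizer's separator token is an assumption here.

```python
# Sketch: joining two text columns with the backbone tokenizer's separator token.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
sep = tokenizer.sep_token or tokenizer.eos_token  # fall back if no explicit SEP token

row = {"title": "Hydrogen Torch", "body": "Train deep learning models without code."}
combined = f" {sep} ".join([row["title"], row["body"]])
print(combined)
```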
Data sample
This setting defines the percentage of the data to use for the experiment. The default percentage is 100%.
Lowering the default value can significantly increase the training speed, but it might lead to a substantially lower accuracy value. Using 100% of the data for final models is highly recommended.
Data sample choice
This setting specifies the data H2O Hydrogen Torch samples according to the percentage set in the Data sample setting. H2O Hydrogen Torch does not sample the unselected data.
The Data sample choice setting is only available if the value in the Data sample setting is less than 1.0 (100%).
Tokenizer settings
Lowercase
Grid search hyperparameter
Determines whether to lowercase the text that H2O Hydrogen Torch observes during the experiment. This setting is turned Off by default.
When turned On, the observed text is always lowercased before training and prediction. Tuning this setting can potentially lead to a higher accuracy value for certain types of datasets.
Max length
Grid search hyperparameter
Specify the maximum length of the token input sequence that is used for model training. The following example describes how you can use this setting to truncate a given token input sequence.
Consider the following text:
I'd like to read the H2O Hydrogen Torch documentation today.
The preceding text is tokenized by bert-base as follows:
['I', "'", 'd', 'like', 'to', 'read', 'the', 'H', '##2', '##O', 'Hydrogen', 'Torch', 'document', '##ation', 'today', '.']
A [CLS] (classification) token is subsequently added to the input sequence at position 0. (The manner in which this token is represented as a string depends on the model.)
['[CLS]', 'I', "'", 'd', 'like', 'to', 'read', 'the', 'H', '##2', '##O', 'Hydrogen', 'Torch', 'document', '##ation', 'today', '.']
If the maximum length is set to 8, the preceding input sequence is truncated after 8 tokens. Therefore, the model is provided with the following input sequence:
['[CLS]', 'I', "'", 'd', 'like', 'to', 'read', 'the']
A higher token count leads to higher memory usage that slows down training while increasing the probability of obtaining a higher accuracy value.
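The truncation shown above can be reproduced with a Hugging Face tokenizer; the sketch below assumes the bert-base-cased tokenizer and is only an illustration of the behavior.

```python
# Sketch: reproducing the max-length truncation from the example above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # assumed backbone
text = "I'd like to read the H2O Hydrogen Torch documentation today."

tokens = ["[CLS]"] + tokenizer.tokenize(text)  # classification token at position 0
print(tokens[:8])                              # truncated to a maximum length of 8
```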
Label max length
Defines the maximum length of the target text H2O Hydrogen Torch uses during model training.
Augmentation settings
Token mask probability
Defines the probability of input text tokens being randomly masked during training.
- Increasing this setting can be helpful to avoid overfitting and apply regularization
- Each token is randomly replaced by a masking token based on the specified probability
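A minimal sketch of such random masking is shown below; it is a simplified illustration (it does not, for example, protect special tokens), not H2O Hydrogen Torch's internal implementation.

```python
# Sketch: replace each token with the mask token independently at a fixed probability.
import torch

def mask_tokens(input_ids: torch.Tensor, mask_token_id: int, probability: float) -> torch.Tensor:
    mask = torch.rand(input_ids.shape) < probability
    return torch.where(mask, torch.full_like(input_ids, mask_token_id), input_ids)

ids = torch.tensor([101, 2009, 2003, 1037, 3231, 102])
print(mask_tokens(ids, mask_token_id=103, probability=0.15))
```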
Architecture settings
Pretrained
Grid search hyperparameter
Defines whether the neural network should start with pre-trained weights. When this setting is On, the training of the neural network starts with a pre-trained model on a generic task. When turned Off, the initial weights of the neural network to train become random.
Backbone
Grid search hyperparameter
Defines the backbone neural network architecture to train the model.
- Image regression | Image classification | Image metric learning | Audio regression | Audio classification
- H2O Hydrogen Torch accepts backbone neural network architectures from the timm library (select or enter the architecture name)
- Image object detection
- H2O Hydrogen Torch provides several state-of-the-art backbone neural network architectures for model training. When you select Faster R-CNN or FCOS as the model type for the experiment, you can input any architecture name from the timm library. When you select EfficientDet as the model type for the experiment, you can input any architecture name from the efficientdet-pytorch library.
- Image semantic segmentation | Image instance segmentation
- H2O Hydrogen Torch accepts backbone neural network architectures from the segmentation-models-pytorch library (select or enter the architecture name).
- 3D image regression | 3D image classification
- H2O Hydrogen Torch accepts backbone (encoder) neural network architectures from a subset (resnet and efficientnet) of the timm library (select or enter the architecture name).
- Text regression | Text classification | Text token classification | Text span prediction | Text sequence to sequence | Text metric learning
- H2O Hydrogen Torch accepts backbone neural network architectures from the Hugging Face library (select or enter the architecture name)
- Speech recognition
- HuggingFace Wav2Vec2 CTC models are supported
- All problem types
- Usually, it is good to use simpler architectures for quicker experiments and larger models when aiming for the highest accuracy
- Speech recognition
- If possible, leverage backbones pre-trained closely to your use case (for example, noisy audio, casual speech, etc.)
Gradient checkpointing
Determines whether H2O Hydrogen Torch activates gradient checkpointing (GC) when training the model. Enabling GC reduces the video random access memory (VRAM) footprint at the cost of a longer runtime (an additional forward pass).
Gradient checkpointing is an experimental setting that is not compatible with all backbones. If a backbone is not supported, the experiment fails, and H2O Hydrogen Torch informs you through the logs that the selected backbone is not compatible with gradient checkpointing. To learn about the backbone setting, see Backbone.
Activating GC comes at the cost of a longer training time; for that reason, try training without GC first and only activate when experiencing GPU out-of-memory (OOM) errors.
Intermediate dropout
Grid search hyperparameter
Defines the custom dropout rate H2O Hydrogen Torch uses for intermediate layers in the transformer model.
Trust Remote Code
This setting determines whether the transformers library should allow the use of custom models that are defined on the Hugging Face Hub with their own custom modeling files. By enabling this setting, you permit the execution of code from these custom models, which might include new or modified layers, architectures, or other functionalities that are not part of the standard transformers library. This setting should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hugging Face Hub on your instance.
Lora
This setting turns the use of Low-Rank Adaptation (LoRA) on or off in H2O Hydrogen Torch during model training.
LoRA (Low-Rank Adaptation) is a technique used to compress the weight matrices of large pre-trained language models, making them more memory-efficient and faster to train. In NLP machine learning models, LoRA can significantly improve the performance of the model while reducing the computational cost.
Enabling this setting can lead to faster training times and lower memory usage, making it particularly useful when working with large-scale NLP tasks. However, it may result in a slight decrease in model accuracy compared to using full-rank matrices. Turning off this setting will ensure that full-rank matrices are used during training but at the cost of longer training times and higher memory requirements.
For most NLP tasks, we recommend enabling LoRA during training unless you require the highest possible level of accuracy and have sufficient computational resources available. In such cases, turning off LoRA may improve model performance at the expense of increased training time and memory usage.
Lora target modules
This setting allows you to specify the linear layers in your model that will undergo the Low Rank Approximation (LoRA) technique during training. By default, all linear layers in the model are selected for LoRA approximation. However, you can customize this behavior by specifying a list of module names or indices corresponding to the linear layers to which you want to apply LoRA.
Customizing the LoRA target modules can be beneficial when you specifically want to apply LoRA to certain layers in your model. This approach can further enhance the model's performance while minimizing the computational cost.
By specifying the target modules, you can precisely control which linear layers will benefit from the memory efficiency and faster training times offered by LoRA. This fine-grained control allows you to optimize the trade-off between computational resources and model performance.
To leverage the advantages of LoRA, experiment with different combinations of module names or indices to identify the optimal set of target modules. This customization empowers you to tailor the LoRA approximation to your specific model architecture and requirements.
This setting is available if you turn On the Lora setting.
Lora dropout
This setting determines the probability of applying dropout to the Low Rank Approximation (LoRA) weights during training. By default, this setting is set to 0.05, meaning dropout is applied to the LoRA weights with a probability of 0.05 (5%) during training.
During the training process, dropout is a regularization technique that randomly sets a fraction of the weights to zero. This technique helps prevent overfitting and improves the model's generalization performance. By specifically applying dropout to the LoRA weights, you can further enhance the regularization effect and have better control over the model's performance.
Adjusting the LoRA dropout probability allows you to fine-tune the regularization strength, balance model accuracy, and overfitting prevention. Increasing the probability value amplifies the dropout effect, resulting in stronger regularization. Conversely, decreasing the probability value reduces the dropout effect.
This setting is available if you turn On the Lora setting.
Lora r
This setting determines the dimension of the matrix decomposition used in the Low Rank Approximation (LoRA) technique. By default, the rank is set to 4, which means that the weight matrices in the LoRA layers are decomposed into a product of smaller matrices with a rank of 4.
LoRA utilizes matrix decomposition to compress weight matrices in large pre-trained language models, improving memory efficiency and faster training times. The matrix decomposition dimension specifies the size of the decomposed matrices used in the approximation process.
By setting the LoRA matrix decomposition dimension, you can control the level of compression applied to the weight matrices. A higher dimension value allows for more detailed representation but may require additional memory and computational resources. Conversely, a lower dimension value reduces memory usage but may result in a loss of fine-grained information.
The default value of 4 provides a balanced trade-off between memory efficiency and model performance. However, you can customize this setting based on your specific requirements and constraints.
This setting is available if you turn On the Lora setting.
Lora alpha
This setting specifies the scaling factor for the Low Rank Approximation (LoRA) weights. The default value for this setting is 16.
This setting determines the magnitude of the LoRA weights, which can affect the model's performance and computational cost.
A higher value for "LoRA alpha" results in larger LoRA weights, which can improve the model's accuracy but also increase the computational cost. Conversely, a lower value for "LoRA alpha" results in smaller LoRA weights, which can reduce the computational cost but may also result in lower model accuracy. You can adjust this setting to find the optimal trade-off between model accuracy and computational efficiency.
This setting is available if you turn On the Lora setting.
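Conceptually, the Lora settings above map onto the fields of a peft LoraConfig; the sketch below uses the peft library and a t5-small backbone as assumptions, not necessarily what H2O Hydrogen Torch runs internally.

```python
# Sketch: the Lora settings expressed as a peft LoraConfig (illustrative only).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

lora_config = LoraConfig(
    task_type="SEQ_2_SEQ_LM",
    r=4,                        # Lora r: rank of the matrix decomposition
    lora_alpha=16,              # Lora alpha: scaling factor
    lora_dropout=0.05,          # Lora dropout
    target_modules=["q", "v"],  # Lora target modules (names depend on the backbone)
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```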
Training settings
Loss function
Grid search hyperparameter
This setting defines the loss function H2O Hydrogen Torch utilizes during model training. The loss function is a differentiable function measuring the prediction error. The model utilizes gradients of the loss function to update the model weights during training.
Options

The available loss functions depend on the problem type:
- Image regression | 3D image regression | Text regression | Audio regression
- Image classification | 3D image classification | Text classification | Audio classification
- Image semantic segmentation | 3D image semantic segmentation | Image instance segmentation
- Image metric learning | Text metric learning
- Text token classification | Text span prediction | Text sequence to sequence
- Speech recognition
Optimizer
Grid search hyperparameter
This setting defines the algorithm or method (optimizer) to use for model training. The selected algorithm or method defines how the model should change the attributes of the neural network, such as weights and learning rate. Optimizers solve optimization problems and make more accurate updates to attributes to reduce learning losses.
Options
- Adadelta
- To learn about Adadelta, see ADADELTA: An Adaptive Learning Rate Method.
- Adam
- To learn about Adam, see Adam: A Method for Stochastic Optimization.
- AdamW
- To learn about AdamW, see Decoupled Weight Decay Regularization.
- RMSprop
- To learn about RMSprop, see Neural Networks for Machine Learning.
- SGD
- H2O Hydrogen Torch uses a stochastic gradient descent optimizer.
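The listed options correspond to standard PyTorch optimizers; the sketch below shows how they are typically constructed, with a placeholder model and illustrative learning rates.

```python
# Sketch: the listed optimizers map to standard torch.optim implementations.
import torch

model = torch.nn.Linear(10, 2)  # placeholder model

optimizers = {
    "Adadelta": torch.optim.Adadelta(model.parameters(), lr=1.0),
    "Adam": torch.optim.Adam(model.parameters(), lr=1e-3),
    "AdamW": torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2),
    "RMSprop": torch.optim.RMSprop(model.parameters(), lr=1e-3),
    "SGD": torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9),
}
```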
Learning rate
Grid search hyperparameter
This setting defines the learning rate H2O Hydrogen Torch uses when training the model, specifically when updating the neural network's weights. The learning rate is the speed at which the model updates its weights after processing each mini-batch of data.
- The learning rate is an important setting to tune as it balances under and overfitting.
- The number of epochs highly impacts the optimal value of the learning rate.
Differential learning rate layers
Defines the learning rate to apply to certain layers of a model. H2O Hydrogen Torch applies the regular learning rate to layers without a specified learning rate.
Options

The available layers depend on the problem type. For image object detection experiments, the options also differ based on the selected Model type setting (EfficientDet, Faster R-CNN, or FCOS). Problem types with their own sets of layer options include:
- Image regression | Image classification | Text regression | Text classification | Text token classification | Audio regression | Audio classification
- Image object detection
- Image semantic segmentation
- 3D image semantic segmentation | Text sequence to sequence
- Image instance segmentation
- Image metric learning | Text metric learning
- Text span prediction
A common strategy is to apply a lower learning rate to the backbone of a model for better convergence and training stability.
Different layers are available for different problem types.
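A lower backbone learning rate is typically expressed through optimizer parameter groups, as in the sketch below; the module names (backbone, head) are illustrative, not H2O Hydrogen Torch internals.

```python
# Sketch: a lower learning rate for the backbone via optimizer parameter groups.
import torch

class ToyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Linear(128, 128)  # pretrained part (hypothetical)
        self.head = torch.nn.Linear(128, 32)       # task head (hypothetical)

model = ToyModel()
optimizer = torch.optim.AdamW([
    {"params": model.backbone.parameters(), "lr": 1e-5},  # differential (lower) rate
    {"params": model.head.parameters(), "lr": 1e-4},      # regular learning rate
])
```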
Batch size
Grid search hyperparameter
This setting defines the number of training examples a mini-batch uses during an iteration of the training model to estimate the error gradient before updating the model weights. In other words, this setting defines the batch size used per GPU.
During model training, the training data is packed into mini-batches of a fixed size.
Automatically adjust batch size
If this setting is turned On, H2O Hydrogen Torch checks whether the Batch size specified fits into the GPU memory. If a GPU out-of-memory (OOM) error occurs, H2O Hydrogen Torch automatically decreases the Batch size by a factor of 2 until it fits into the GPU memory or the Batch size equals 1.
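The halving-on-OOM behavior can be pictured as in the sketch below; this is an illustration of the described logic, not the actual implementation.

```python
# Sketch of the described behavior: halve the batch size on CUDA OOM errors
# until a step fits into GPU memory or the batch size reaches 1.
import torch

def find_batch_size(try_one_step, batch_size: int) -> int:
    while True:
        try:
            try_one_step(batch_size)   # run a single forward/backward pass
            return batch_size
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()
            if batch_size == 1:
                raise                  # even a batch size of 1 does not fit
            batch_size //= 2
```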
Drop last batch
This setting drops the last incomplete batch during model training when turned On.
H2O Hydrogen Torch groups the train data into mini-batches of equal size during the training process, but the last batch can have fewer records than the others. Not dropping the last batch can lead to a less robust gradient estimation while causing a more volatile training step.
Epochs
Grid search hyperparameter
This setting defines the number of epochs to train the model. In other words, it specifies the number of times the learning algorithm goes through the entire training dataset.
- The Epochs setting is an important setting to tune because it balances under- and overfitting.
- The learning rate highly impacts the optimal value of the epochs.
- For the following supported problem types, H2O Hydrogen Torch enables you to utilize or deploy a pre-trained model trained for zero epochs (H2O Hydrogen Torch does not train the model, and the pretrained model (experiment) can be deployed as-is):
- Speech recognition
- Text sequence to sequence
- Text span prediction
Schedule
Grid search hyperparameter
This setting defines the learning rate schedule H2O Hydrogen Torch utilizes during model training. Specifying a learning rate schedule prevents the learning rate from staying the same. Instead, a learning rate schedule causes the learning rate to change over iterations, typically decreasing the learning rate to achieve a better model performance and training convergence.
Options
- Constant
- H2O Hydrogen Torch applies a constant learning rate during the training process.
- Cosine
- H2O Hydrogen Torch applies a cosine learning rate that follows the values of the cosine function.
- Linear
- H2O Hydrogen Torch applies a linear learning rate that decreases the learning rate linearly.
Warmup epochs
Grid search hyperparameter
This setting determines the number of warmup epochs during which the learning rate is gradually increased from 0 to the specified value. The learning rate increases linearly during the warmup period, allowing the model to adapt to the learning process gradually.
You can set the value of this setting as a ratio of an epoch. For instance, setting it to 0.1 means warmup is performed for only 10% of the first full epoch.
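The sketch below shows a cosine schedule with warmup built from the transformers helpers, including how a fractional number of warmup epochs translates into optimizer steps; the step counts are illustrative.

```python
# Sketch: cosine learning rate schedule with linear warmup (illustrative numbers).
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

steps_per_epoch = 500          # hypothetical
epochs = 3
warmup_epochs = 0.1            # warmup for 10% of the first epoch

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(steps_per_epoch * warmup_epochs),
    num_training_steps=steps_per_epoch * epochs,
)
```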
Weight decay
Grid search hyperparameter
This setting defines the weight decay that H2O Hydrogen Torch uses for the optimizer during model training.
Weight decay is a regularization technique that adds an L2 penalty on all model weights to the loss function, which tends to improve model generalization.
Gradient clip
Grid search hyperparameter
This setting defines the maximum norm of the gradients H2O Hydrogen Torch allows during model training. The default is 0, which means no clipping. When a value greater than 0 is specified, H2O Hydrogen Torch clips the gradients during model training, using the specified value as an upper limit for the norm of the gradients, calculated with the Euclidean norm over all gradients per batch.
This setting can help model convergence when extreme gradient values cause high volatility of weight updates.
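In PyTorch terms, this corresponds to clipping the global gradient norm before the optimizer step, as in the sketch below.

```python
# Sketch: clip the global (Euclidean) gradient norm before the optimizer step.
import torch

model = torch.nn.Linear(10, 2)
loss = model(torch.randn(4, 10)).sum()
loss.backward()

torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # Gradient clip > 0
```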
Grad accumulation
Grid search hyperparameter
This setting defines the number of gradient accumulations before H2O Hydrogen Torch updates the neural network weights during model training.
- Grad accumulation can be beneficial if only small batch sizes can be used for training. With gradient accumulation, the loss and gradients are calculated after each batch, but the model weights are updated only after the selected number of accumulations. You can control the batch size through the Batch size setting.
- Changing the default value of Grad Accumulation might require adjusting the learning rate and batch size.
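The sketch below illustrates the accumulation loop: gradients from several mini-batches are summed before a single weight update.

```python
# Sketch of gradient accumulation with a toy model and toy batches.
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
grad_accumulation = 4

for step, batch in enumerate(torch.randn(32, 4, 10)):    # 32 toy mini-batches
    loss = model(batch).sum() / grad_accumulation         # scale the loss
    loss.backward()                                       # gradients accumulate
    if (step + 1) % grad_accumulation == 0:
        optimizer.step()                                  # update weights
        optimizer.zero_grad()
```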
Save best checkpoint
This setting determines if H2O Hydrogen Torch should save the model weights of the epoch exhibiting the best validation metric. When turned On, H2O Hydrogen Torch saves the model weights for the epoch exhibiting the best validation metric. When turned Off, H2O Hydrogen Torch saves the model weights after the last epoch is executed.
- This setting should be turned On with care as it has the potential to lead to overfitting of the validation data.
- The default goal should be to attempt to tune models so that the last or very last epoch is the best epoch.
- If an evident decline in the metric is observed for later epochs in the logs, it is usually better to adjust hyperparameters, such as reducing the number of epochs or increasing regularization, instead of turning this setting On.
Evaluation epochs
This setting defines the number of epochs H2O Hydrogen Torch uses before each validation loop for model training. In other words, it determines the frequency (in a number of epochs) to run the model evaluation on the validation data.
- Increasing the number of Evaluation Epochs can speed up an experiment.
The Evaluation epochs setting is available only if the following setting is turned Off: Save Best Checkpoint.
Evaluate before training
Determines whether to perform a validation run before training. This setting is potentially helpful for assessing the performance of zero-shot pretrained backbones and checking the modeling pipeline.
The following supported problem types support externally pretrained zero-shot models (while problem types that do not contain this support fit a new head on top of a backbone):
- Text span prediction
- Text sequence to sequence
- Speech recognition
Calculate train metric
This setting determines whether the model metric should also be calculated for the training data at the end of the training. When On, the model metric is calculated for the training data. The resulting values do not indicate the true model performance because they are calculated on the same data records used for model training, but they can give insights into over- and underfitting.
Train validation data
This setting defines whether the model should use the entire train and validation dataset during model training. When turned On, H2O Hydrogen Torch uses the whole train dataset and validation data to train the model.
- H2O Hydrogen Torch still evaluates the model on the provided validation fold; validation is always performed only on the provided validation fold.
- H2O Hydrogen Torch uses both datasets for model training if you provide a train and validation dataset.
- To define a training dataset, use the Train dataframe setting. For more information, see Train dataframe.
- To define a validation dataset, use the Validation dataframe setting. For more information, see Validation dataframe.
- Turning On the Train validation data setting should produce a model that you can expect to perform better because H2O Hydrogen Torch trained the model on more data. Note, though, that using the entire train dataset and out-of-fold validation dataset generally causes the model's accuracy to be overstated, as information from the validation data is incorporated into the model during the training process.
If you have five folds and set fold 0 as validation, H2O Hydrogen Torch usually trains on folds 1-4 and reports on fold 0. With Train validation data turned On, fold 0 is added to the training data, but H2O Hydrogen Torch still reports its accuracy. As a result, the reported accuracy is overstated for fold 0, but the model should perform better on any unseen (test) data or in production scenarios. For that reason, you usually want to consider this setting after running your experiments and deciding on models.
This setting is only available if you turned the Save best checkpoint setting Off.
Build scoring pipelines
Determines whether the experiment (model) automatically generates an H2O MLOps pipeline and Python scoring pipeline at the end of the experiment. If turned Off, you can still create scoring pipelines on demand when the experiment is complete (e.g., when you click Download scoring or Download MLOps).
Prediction settings
Metric
This setting defines the metric to evaluate the model's performance.
Options

The available metrics depend on the problem type:
- Image regression | 3D image regression | Text regression | Audio regression
- Image classification | 3D image classification | Text classification | Audio classification
- Image object detection
- Image semantic segmentation | 3D image semantic segmentation
- Image instance segmentation
- Image metric learning | Text metric learning
- Text token classification
- Text span prediction
- Text sequence to sequence
- Speech recognition
Batch size inference
This setting defines the batch size of examples to utilize for inference.
Selecting 0 sets the Batch size inference to the same value used for the Batch size setting.
Max length inference
Defines the max length value H2O Hydrogen Torch uses for the generated text.
- Similar to the Max Length setting in the tokenizer settings section, this setting specifies the maximum number of tokens to predict for a given prediction sample.
- This setting impacts predictions and the evaluation metrics and should depend on the dataset and average output sequence length that is expected to be predicted.
Do sample
Determines whether to sample from the next token distribution instead of choosing the token with the highest probability. If turned On, the next token in a predicted sequence is sampled based on the probabilities. If turned Off, the highest probability is always chosen.
Num beams
Defines the number of beams to use for beam search. The default value is 1 (a single beam), which means no beam search is performed.
A higher Num Beams value can increase prediction runtime while potentially improving accuracy.
Temperature
Defines the temperature to use for sampling from the next token distribution during validation and inference. In other words, the defined temperature controls the randomness of predictions by scaling the logits before applying softmax. A higher temperature makes the distribution more random.
- The temperature value only takes effect when the Do sample setting is turned On.
- To learn more about this setting, refer to the following article: How to generate text: using different decoding methods for language generation with Transformers.
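These prediction settings map naturally onto the arguments of the Hugging Face generate() method, as in the sketch below; whether H2O Hydrogen Torch calls generate() exactly like this, and the t5-small backbone, are assumptions.

```python
# Sketch: Max length inference, Do sample, Num beams, and Temperature as
# Hugging Face generate() arguments (illustrative only).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("summarize: Hydrogen Torch trains deep learning models.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=64,    # Max length inference
    do_sample=True,   # Do sample
    num_beams=1,      # Num beams (1 = no beam search)
    temperature=0.7,  # Temperature (only relevant when sampling)
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```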
Environment settings
Gpus
This setting determines the list of GPUs H2O Hydrogen Torch can use for the experiment. GPUs are listed by name, referring to their system ID (starting from 1). If no GPUs are selected, H2O Hydrogen Torch utilizes the CPU for model training.
Number of seeds per run
This setting defines the number of seeds to use for a single run. If more than one seed is selected, each experiment runs multiple times.
- Deep learning models can sometimes exhibit a certain randomness in individual runs. Running an experiment multiple times with multiple seeds can give insights into the stability of the results.
- In cases of high randomness, comparing the average results across seeds allows a better judgment of a model's performance with certain hyperparameter settings, for example, in a grid search scenario.
Number of GPUs per run
This setting defines the number of GPUs to use for a single run when training the model. A single run might represent a single fold, a single seed run or a single grid search run.
If 5 GPUs are available, it is possible to run a 5-fold cross-validation in parallel using a single GPU per fold.
- The available GPUs are the ones that can be enabled using the GPUs setting.
- If the number of GPUs is less than or equal to 1, this setting (Number of GPUs per run) is not available.
Mixed precision training
Determines whether to use mixed-precision during model training. When turned Off, H2O Hydrogen Torch does not use mixed-precision for training.
Mixed-precision is a technique that helps decrease memory consumption and increases training speed.
Mixed precision inference
Determines whether to use mixed-precision during model inference.
Mixed-precision is a technique that helps decrease memory consumption and increases inference speed.
Sync batch normalization
Determines whether to synchronize batch normalization across GPUs in distributed data-parallel (DDP) mode. When turned On, multi-GPU training synchronizes the batch normalization layers of the model across GPUs. In a nutshell, H2O Hydrogen Torch with multiple GPUs splits each batch across the GPUs, so a normalization layer only has access to the part of the batch stored on its device. This works out of the box, but results can improve if the statistics from all GPUs are gathered to normalize the entire batch.
When turned On, training speed drops slightly, while the model's accuracy may improve. In practice, however, an accuracy improvement is rare and only occurs for specific problem types and batch sizes.
Number of workers
This setting defines the number of workers H2O Hydrogen Torch uses for the DataLoader. In other words, it defines the number of CPU processes to use when reading and loading data to GPUs during model training.
Seed
This setting defines the random seed value that H2O Hydrogen Torch uses during model training. It defaults to -1, which means an arbitrary seed is used. When the value is changed (not -1), the fixed random seed makes results reproducible: defining a seed aids in obtaining predictable and repeatable results every time. Leaving the default seed value (-1) leads to different random numbers at every invocation.
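A fixed seed is usually propagated to all relevant libraries; the sketch below shows a common pattern, not H2O Hydrogen Torch's internal seeding code.

```python
# Sketch: seeding the usual random number generators for reproducible runs.
import random
import numpy as np
import torch

def seed_everything(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

seed_everything(42)
```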
Logging settings
Logger
This setting defines the logger type that H2O Hydrogen Torch uses for model training.
Options
- None
- This option does not use any logger.
- Neptune
- This option utilizes Neptune as a logger to track the experiment. To use Neptune, you must define the following settings: Neptune API token and Neptune project.
Neptune API token
This setting defines the Neptune API token to validate all subsequent Neptune API calls.
This setting is available if you select Neptune in the Logger setting.
Neptune project
This setting defines the Neptune project.
This setting is available if you select Neptune in the Logger setting.
Log grad norm
This setting determines whether to log the total grad norm before and after clipping.
This setting adds a small overhead during the experiment runtime but can help determine if the gradients are exploding or unstable.
Turn this setting on if you suspect unstable gradients; as a result, you may then choose a value for the gradient clip to prevent exploding gradients.
Number of texts
This setting defines the number of texts to show in the experiment Insights tab.