Release notes

v1.5.0 | January 27, 2024

Overview

H2O Hydrogen Torch v1.5.0 brings significant enhancements, including support for a new problem type, furthering our mission to provide a no-code, accessible platform for training state-of-the-art deep neural networks (models) across diverse problem types. Whether you're a seasoned data scientist or have no coding experience, H2O Hydrogen Torch v1.5.0 makes building cutting-edge models easier and more efficient. The major points of this release are as follows:

  • New problem type: Multi-modal causal language modeling
    • H2O Hydrogen Torch v1.5.0 now supports multi-modal causal language modeling as a problem type. These models generate textual responses based on both textual and visual inputs, such as queries paired with images (a conceptual sketch follows this list).
    • Why? This feature empowers users to tackle complex tasks requiring cross-modal data understanding.
    Note: To learn more, see the multi-modal causal language modeling documentation.

  • Automatic deep learning (AutoDL)
    • H2O Hydrogen Torch v1.5.0 introduces AutoDL. This feature lets you train a set of models from a single input: a time budget. The budget determines the number of experiments that H2O Hydrogen Torch generates, each with different values for specific hyperparameters referred to as grid search hyperparameters (the second sketch after this list illustrates the idea).
    • Why? This feature automates the model-building process, making it especially useful for non-data scientists who need high-performing models quickly. It also lowers the barrier to entry for deep learning, enabling anyone to build "good enough" models without extensive expertise.
    Note: To learn more, see the AutoDL documentation.

  • Quantization Dtype (data type)
    • A new setting named Quantization Dtype is available for text-only problem types during QLoRA training.
    • Why? This setting allows users to control the level of quantization applied to the model, balancing memory efficiency against precision (the third sketch after this list shows the trade-off in code).
    Note: To learn more, see the Quantization Dtype documentation.
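
For readers curious what multi-modal causal language modeling looks like under the hood, the following is a minimal sketch using the open-source Hugging Face transformers library. It is not Hydrogen Torch code (the platform itself is no-code), and the model ID, image URL, and prompt format are illustrative assumptions.

```python
# Sketch: multi-modal causal language modeling with an open vision-language
# model. The model receives an image plus a text query and generates text.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # example open vision-language model
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# Visual input: any RGB image (the URL here is a placeholder).
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)

# Textual input: a query about the image, in the model's chat format.
prompt = "USER: <image>\nWhat animal is in this picture? ASSISTANT:"

# The generated response is conditioned on both the image and the text.
inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```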
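
The idea behind AutoDL can be pictured as a grid search bounded by a time budget. The plain-Python sketch below is hypothetical (the grid values and the train_and_score function are placeholders, not Hydrogen Torch internals); the platform performs the equivalent for you without any code.

```python
# Sketch: a single time budget drives a series of experiments over a grid
# of hyperparameter values; no new experiments start once the budget is spent.
import itertools
import random
import time

# Grid search hyperparameters and candidate values (illustrative).
grid = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "backbone": ["resnet34", "resnet50"],
}

def train_and_score(params: dict) -> float:
    # Placeholder: a real run would train a model with `params` and
    # return its validation metric.
    return random.random()

time_budget_seconds = 3600  # the single user input: a time budget
deadline = time.monotonic() + time_budget_seconds

results = []
for values in itertools.product(*grid.values()):
    if time.monotonic() >= deadline:
        break  # stop launching new experiments once the budget is spent
    params = dict(zip(grid.keys(), values))
    results.append((params, train_and_score(params)))

best_params, best_score = max(results, key=lambda r: r[1])
print("Best hyperparameters:", best_params, "score:", best_score)
```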
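
To illustrate what a quantization dtype controls during QLoRA-style training, here is a sketch using the open-source transformers and bitsandbytes APIs. The mapping of the Hydrogen Torch UI setting to these exact parameters is an assumption for illustration, and the backbone is an arbitrary small example.

```python
# Sketch: choosing a 4-bit quantization dtype for QLoRA-style training.
# Lower-precision quantization reduces GPU memory at some cost in precision,
# which is the trade-off the Quantization Dtype setting exposes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # quantization dtype: "nf4" or "fp4"
    bnb_4bit_compute_dtype=torch.bfloat16,  # precision used during computation
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants
)

# Requires a CUDA GPU and the bitsandbytes package. LoRA adapters would then
# be attached on top of this quantized backbone for QLoRA fine-tuning.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # small example backbone
    quantization_config=bnb_config,
    device_map="auto",
)
```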

