Release notes
Version 0.19.1 (March 13th, 2026)
New Features
- [Projects] The platform now validates that experiment configurations match the selected project type, preventing misconfigured runs.
Version 0.19.0 (March 12th, 2026)
This release includes assistant improvements along with experiment and training fixes.
New Features
- [Assistant] General usability and reliability improvements to the built-in assistant.
Fixes
- [Experiments] Fixed an issue where the experiment table displayed stale metric columns that no longer applied to the current run.
- [Training] Fixed a data conversion error that could cause failures in certain post-processing steps.
Version 0.18.4 (March 6th, 2026)
New Features
- [Platform] Support for private Certificate Authority (CA) certificates in environments that use custom TLS infrastructure.
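Clients connecting to a deployment fronted by a private CA generally need that CA's certificate in their trust store. A minimal sketch of the idea using Python's standard library (generic stdlib code, not the platform's own configuration mechanism; the bundle path is a placeholder you would supply):

```python
import ssl

def make_private_ca_context(ca_bundle_path=None):
    """Build a TLS context that trusts an additional private CA.

    Starts from the system trust store, then (optionally) adds the
    private CA bundle so certificates it signed are also trusted.
    """
    ctx = ssl.create_default_context()  # system defaults: verify certs, check hostname
    if ca_bundle_path:
        ctx.load_verify_locations(cafile=ca_bundle_path)
    return ctx
```

The resulting context can be passed to any stdlib HTTPS client (e.g. `urllib.request.urlopen(url, context=ctx)`).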
Version 0.18.0 (March 5th, 2026)
This release introduces object detection as a new problem type, reworks chat templates, and aligns binary classification output with multiclass classification.
New Features
- [Experiments] New problem type: Object Detection, expanding beyond text-based workloads.
- [Experiments] Chat templates redesigned for improved flexibility and consistency across model types.
- [Experiments] Binary classification output is now consistent with multiclass classification, simplifying downstream usage.
- [Experiments] Classification tasks now accept non-integer class labels, giving more flexibility in dataset preparation.
- [Datasets] New built-in demo dataset: mini_textvqa_v2 for visual question answering tasks.
- [Platform] Token-based access control for restricting access to platform resources.
- [Platform] Default evaluation model upgraded from GPT-4o to GPT-5. Azure OpenAI is no longer supported as an evaluation backend.
- [Training] Training logs are now batched, reducing noise and improving readability during long-running experiments.
- [UI] The UI now clarifies when an evaluation uses perplexity-only metrics and skips text generation.
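To illustrate the non-integer class label support above: string (or other hashable) labels can be used as-is in a dataset, because a classification pipeline typically maps them to contiguous integer ids internally. A generic sketch of that mapping, not the product's actual implementation:

```python
def encode_labels(labels):
    """Map arbitrary hashable class labels (strings, floats, ...) to
    contiguous integer ids. Returns (ids, index_to_label) so predictions
    can be mapped back to the original labels."""
    classes = sorted(set(labels), key=str)  # deterministic class ordering
    to_id = {c: i for i, c in enumerate(classes)}
    return [to_id[label] for label in labels], classes

ids, classes = encode_labels(["cat", "dog", "cat", "bird"])
# ids -> [1, 2, 1, 0], classes -> ["bird", "cat", "dog"]
```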
Fixes
- [Training] Fixed timezone-aware fallback handling in datetime comparisons, which could cause incorrect scheduling or display of timestamps.
- [Training] Fixed an issue where LoRA weights were incorrectly applied during inference instead of only during training.
- [Data Generation] Fixed a crash that occurred when a single generation failed during synthetic data generation. Individual generation errors are now handled gracefully.
- [Experiments] Fixed the validation metrics step type for assistant tool models to use the correct numeric format.
- [Platform] Fixed an issue that could prevent fresh installations from starting correctly.
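On the timezone fix above: comparing a naive `datetime` with a timezone-aware one raises `TypeError` in Python, so mixed timestamps need a fallback convention before comparison. A minimal sketch of that pattern, assuming naive timestamps are treated as UTC (an illustrative assumption, not necessarily the platform's rule):

```python
from datetime import datetime, timezone

def is_not_later(a, b):
    """Return True if a <= b, tolerating a mix of naive and
    timezone-aware datetimes by assuming naive values are UTC."""
    def ensure_aware(dt):
        # Naive datetimes have tzinfo=None; attach UTC as the fallback.
        return dt if dt.tzinfo is not None else dt.replace(tzinfo=timezone.utc)
    return ensure_aware(a) <= ensure_aware(b)
```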
Feedback
- Send feedback about H2O Enterprise LLM Studio to cloud-feedback@h2o.ai