
FAQs

How is H2O Enterprise LLM Studio different from the open source H2O LLM Studio?

H2O Enterprise LLM Studio is the licensed, closed-source version of H2O's open source LLM Studio. It is built for production-grade deployments and supports multiple users, centralized management, secure enterprise environments, and advanced automation.

While it uses the same core training code, it differs from the OSS version in several important ways:

Key differences:

  • Full REST API for automation and integration
  • Centralized model and dataset storage, shared across users
  • Data generation and curation tools
  • Integrated automation, including an agent that discovers effective fine-tuning strategies and hyperparameters (AutoML for LLMs)
  • Enterprise-grade support for upgrades and versioning

Is the Enterprise version open source?

No. H2O Enterprise LLM Studio is a licensed and closed-source product. While it shares some of the same training internals as the open source H2O LLM Studio, the overall system is not open source.

Is there a Python SDK?

Not at this time. The platform provides a full REST API instead, which can be accessed from any programming environment, including Python.
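As a minimal sketch of what calling the REST API from Python might look like: the base URL, endpoint path, and bearer-token header scheme below are illustrative assumptions, not documented values; substitute the details from your deployment's in-environment API documentation.

```python
# Sketch: building an authenticated request to the Enterprise LLM Studio
# REST API using only the Python standard library. All URLs, paths, and
# the auth header scheme are hypothetical placeholders.
import json
import urllib.request

BASE_URL = "https://llm-studio.example.com/api"  # hypothetical base URL
API_KEY = "your-api-key"  # generated from the profile menu in the UI

def api_request(path, payload=None):
    """Build an authenticated request object; the Bearer scheme is an assumption."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(BASE_URL + path, data=data)
    req.add_header("Authorization", f"Bearer {API_KEY}")
    req.add_header("Content-Type", "application/json")
    return req

# Build (but do not send) a request; the endpoint name is illustrative only.
req = api_request("/v1/datasets")
```

Sending the request with `urllib.request.urlopen(req)` (or an equivalent call in `requests` or any other HTTP client) is all that remains; this is why a dedicated SDK is not strictly required.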

Can I migrate experiments or models from OSS to Enterprise?

There is no direct migration path; the internal representations (metadata, storage format, and classification in particular) have diverged. If needed, raw model weights and training configs can be ported manually.

Is multi-user support included?

Yes. The Enterprise version supports multiple users with centralized access control and shared experiment/model visibility via projects.

How do I access the REST API?

You can generate an API key from your profile menu in the UI. Once authenticated, you can use the in-environment API documentation to explore and execute API calls directly.

Does Enterprise support fine-tuning open source models?

Yes. You can fine-tune any HuggingFace-compatible model by uploading it to the system or selecting a publicly available model from the UI or API.

Can I deploy models from the Enterprise version?

Yes. The platform supports deploying models via the UI or API. These deployments can then be integrated into other applications via REST endpoints.
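As a sketch of what such an integration could look like: the deployment URL, request payload shape, and auth header below are hypothetical assumptions; use the endpoint details shown for your specific deployment.

```python
# Sketch: building a JSON POST request to a deployed model's REST endpoint.
# The URL, payload fields, and header scheme are illustrative placeholders.
import json
import urllib.request

DEPLOYMENT_URL = "https://llm-studio.example.com/deployments/my-model/generate"  # hypothetical
API_KEY = "your-api-key"

def build_generate_request(prompt):
    """Build (but do not send) an authenticated POST request carrying a prompt."""
    body = json.dumps({"prompt": prompt}).encode()
    req = urllib.request.Request(DEPLOYMENT_URL, data=body, method="POST")
    req.add_header("Authorization", f"Bearer {API_KEY}")
    req.add_header("Content-Type", "application/json")
    return req

req = build_generate_request("Summarize this document.")
# urllib.request.urlopen(req) would send it to the live endpoint.
```

Because the deployment is a plain REST endpoint, the same pattern works from any language or HTTP client your application already uses.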

Is data stored locally or remotely?

All models, datasets, and experiment metadata are stored in centralized storage, either in a cloud object store or a mounted enterprise file system, depending on your environment setup.

Can I run Enterprise LLM Studio offline?

Yes. Enterprise LLM Studio can be deployed in air-gapped or restricted environments, provided the appropriate licensing and hardware requirements are met.

Note: In air-gapped or offline deployments, features that require access to external LLMs—such as "distilling" large models (i.e., generating synthetic data or labels from an external LLM)—are not available. All data generation, augmentation, or distillation must be performed using only the resources and models available locally within the air-gapped environment.
