What's new in h2oGPTe v1.6.32
We are excited to announce the release of h2oGPTe 1.6.32! This release brings important improvements, bug fixes, and new features to enhance your experience with h2oGPTe.
Document and collection management
- Added a delete option to the documents grid view, allowing for easier file management.
- Improved space management and visual layout on the Collections page to enhance the user experience.
- Introduced a Lite ingest mode for faster, more streamlined document processing.
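For orientation, the sketch below shows a minimal document ingestion flow with the h2ogpte Python client; the server address, API key, and file name are placeholders, and the reference to a Lite-mode option is an assumption — consult the client reference for the exact way to select the new Lite ingest mode.

```python
# Minimal ingestion sketch with the h2ogpte Python client.
# NOTE: how Lite ingest mode is selected is an assumption here; check the
# client reference for the exact option exposed in this release.
from h2ogpte import H2OGPTE

client = H2OGPTE(
    address="https://h2ogpte.example.com",  # placeholder server address
    api_key="sk-XXXX",                      # placeholder API key
)

collection_id = client.create_collection(
    name="Quarterly reports",
    description="PDF reports ingested with the faster Lite mode",
)

with open("report.pdf", "rb") as f:
    upload_id = client.upload("report.pdf", f)

# Ingest the uploaded file into the collection
# (select the Lite ingest option here once confirmed in the client reference).
client.ingest_uploads(collection_id, [upload_id])
```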
User interface and experience
- Enhanced the filter bar on the Documents page for improved usability on mobile devices.
- Improved the cosmetic appearance of the UI in private mode.
- Moved the model failure notification to an alert in the sidebar to reduce interruptions to the user experience.
- Improved the auto-logout logic to prevent premature session terminations.
- Added a prompt template for British English to better support international users.
System administration and configuration
- Added support for accepting older H2O.ai public keys during license checks to improve flexibility.
- Exposed WebSocket ping timeouts to be configurable, allowing administrators to fine-tune network settings for chat session connections.
- Exposed S3 connection limits to allow for tuning during high-load periods in document ingestion workflows.
- Made the web crawl functionality optional, giving administrators control over whether external website crawling is available as a RAG ingestion method.
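The snippet below is a purely illustrative sketch of how an administrator might override these newly exposed settings; the variable names are hypothetical stand-ins, not the documented keys, which live in your deployment (environment or Helm) configuration.

```python
# Illustrative overrides for the newly exposed knobs.
# NOTE: the variable names below are hypothetical stand-ins; check your
# deployment (environment / Helm) reference for the actual keys.
import os

os.environ["H2OGPTE_WEBSOCKET_PING_TIMEOUT"] = "60"  # seconds; hypothetical key
os.environ["H2OGPTE_S3_MAX_CONNECTIONS"] = "32"      # hypothetical key
os.environ["H2OGPTE_ENABLE_WEB_CRAWL"] = "false"     # hypothetical key
```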
Performance and scalability
- Resolved slow ingestion speeds for certain PDF documents, ensuring timely processing.
- Optimized chat queries for faster and more efficient performance.
- Ensured unique processing paths are used for per-page PDFs to improve ingestion reliability.
- Increased the memory allocated to the crawler in Kubernetes environments to handle larger workloads.
- Updated the system to use the latest version of Chromium for improved performance and security.
Agents and AI
- Enhanced the prompt query to ensure the LLM recognizes when it is performing a RAG task.
- Deduplicated document metadata sent to the LLM to improve efficiency and response quality.
- Updated the PII (Personally Identifiable Information) model and its detection threshold.
System stability and robustness
- Implemented a mechanism to identify and mark stale jobs that do not have an active worker.
- Enabled sub-services to refresh and register their state independently for better system-wide awareness.
- Forced a chat pod to restart if it is unable to consume new user tasks, ensuring service availability.
- Disallowed multiple self-tests for the same LLM from running simultaneously to prevent conflicts.
AI developers and API enhancements
- Optimized the automatic chat naming feature by checking whether it is enabled before pulling chat history.
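For API users, here is a minimal chat-session sketch with the h2ogpte Python client to show where this optimization applies; the server address, API key, and collection ID are placeholders.

```python
# Minimal chat-session sketch with the h2ogpte Python client.
# The address, API key, and collection ID below are placeholders.
from h2ogpte import H2OGPTE

client = H2OGPTE(address="https://h2ogpte.example.com", api_key="sk-XXXX")

# Create a chat session backed by an existing collection.
chat_session_id = client.create_chat_session(collection_id="my-collection-id")

with client.connect(chat_session_id) as session:
    reply = session.query("Summarize the key findings in this collection.", timeout=120)
    print(reply.content)
```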
Application and UI bugs
- Corrected a bug where sending a new agent chat message in the same session would fail after stopping a previous message.
- Fixed an issue where agent chat with a collection would fail to include the RAG context.
- Fixed an issue where collection configuration settings were not applied to a new chat until the page was refreshed.
- Addressed a security vulnerability related to Gemini that affected its accuracy on the RAG benchmark.
- The final response for non-streamed REST API calls is now always returned as expected.
- Addressed a potential security vulnerability related to unsafe quoting in code.
- Fixed an issue that prevented non-owners from sharing prompt templates.
- Fixed an issue to ensure the most up-to-date document name is used in references.
- Fixed an issue where the page would not resynchronize after redaction.
- Fixed an issue with the back button to ensure it consistently aligns with the page heading across all relevant pages.
- The thumbnail picker container now has a consistent appearance with the thumbnail card container.
- The correct parsing algorithm is now used for highlighting evaluated chat messages.
- Fixed the visibility condition for the user pairing link.
Backend and API bugs
- Corrected the Go package name to resolve build issues.
- Reverted a type change to restore correct file type detection with libmagic.
Testing and CI/CD
- Addressed several flaky tests to improve the reliability of the test suite.
- Tests will no longer fail due to repeated LLM timeouts.
- Agent key-related tests are now run serially to prevent conflicts.
- Fixed an issue with Vex self-registration in the CI pipeline.
Support
For technical support and questions about this release, please refer to our documentation or contact our support team at support@h2o.ai.
Next steps
We recommend upgrading to v1.6.32 to take advantage of these improvements. The upgrade process is straightforward and maintains all existing data and configurations.