Access a Collection's settings

Overview

The settings of a Collection let users modify various aspects of the Collection, including its name, description, prompt settings, and access permissions. By updating the Collection's name and description, users can provide a more accurate and detailed overview of its content. Additionally, by adjusting the prompt settings, users can customize the prompts the Collection uses.

To invite other authenticated users to access the Collection, users can select their email addresses from the list of available users. Once selected, these users will be granted access to the Collection and can view and interact with its content. Only authenticated users can be invited to access a Collection, and unauthenticated users cannot access the Collection even if they have the link.

Instructions

To update the settings of a Collection, follow these steps:

  1. In the Enterprise h2oGPTe navigation menu, click Collections.
  2. In the Collections list or grid, select the Collection whose settings you want to edit.
  3. Click Settings.
  4. In the Settings list, select Settings.
  5. Configure the settings you want to change.
    note

    For more detailed information about each setting, see Collection settings.

  6. Click Update.
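
If you manage Collections programmatically, the same name and description update can typically be made with the h2ogpte Python client. The sketch below is illustrative: it assumes a client method along the lines of update_collection, the server address, API key, and Collection ID are placeholders, and the exact signature should be verified against the client reference for your h2oGPTe version.

```python
# Hedged sketch: assumes an `update_collection` method on the h2ogpte Python client;
# verify the exact signature against the client reference for your h2oGPTe version.
from h2ogpte import H2OGPTE

client = H2OGPTE(
    address="https://<your-h2ogpte-server>",  # placeholder server address
    api_key="<your-api-key>",                 # placeholder API key
)

collection_id = "<your-collection-id>"        # placeholder Collection ID

# Programmatic equivalent of editing the name and description and clicking Update.
client.update_collection(
    collection_id,
    name="Quarterly reports",
    description="Finance PDFs used for quarterly analysis",
)
```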


Collection settings

The Collection settings section includes the following settings:

Collection name

This setting defines the name of the Collection.

Description

This setting defines the description of the Collection.

Share with

This setting defines the users that can access the Collection. You can choose the email addresses of other authenticated users with whom you wish to share the Collection.
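
Sharing can usually also be scripted with the h2ogpte Python client. The sketch below is an assumption about that API (a share_collection call taking a SharePermission with the invitee's username); confirm the exact call and types in the client reference for your h2oGPTe version.

```python
# Assumed API sketch: `share_collection` and `SharePermission` are assumptions and
# should be confirmed against the h2ogpte Python client reference for your version.
from h2ogpte import H2OGPTE
from h2ogpte.types import SharePermission

client = H2OGPTE(
    address="https://<your-h2ogpte-server>",  # placeholder server address
    api_key="<your-api-key>",                 # placeholder API key
)

# Grant another authenticated user access to the Collection.
client.share_collection(
    collection_id="<your-collection-id>",                        # placeholder Collection ID
    permission=SharePermission(username="teammate@example.com"),
)
```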

Prompt settings

The Prompt settings section lets you customize prompts for the Collection.

Reset

The Reset setting restores the default prompt settings. To restore them, follow these steps:

  1. Click Reset.
  2. Click Update.

Personality (System Prompt)

This setting defines the personality of the Collection's large language model (LLM). Defining a personality helps shape the behavior of the responses the Collection generates in Chat.

Example: I am h2oGPTe, an intelligent retrieval-augmented GenAI system developed by H2O.ai.
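
Conceptually, the personality behaves like a system prompt that is sent to the LLM ahead of every chat turn. The minimal sketch below uses a generic chat-message structure to show where it sits relative to the user's question; it is not h2oGPTe's exact internal format.

```python
# Generic chat-message structure (not h2oGPTe's exact internal format): the
# Collection personality is passed as the system message for every chat turn.
personality = (
    "I am h2oGPTe, an intelligent retrieval-augmented GenAI system "
    "developed by H2O.ai."
)

messages = [
    {"role": "system", "content": personality},  # shapes tone and behavior of answers
    {"role": "user", "content": "Summarize the key risks in the uploaded contracts."},
]
```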

Generation approach (RAG type to use)

This setting defines the generation approach for responses. Enterprise h2oGPTe offers the following methods for generating responses to users' queries (Chats); a conceptual sketch of how the approaches differ follows the list:

  • LLM Only (no RAG)

    This option generates a response to answer the user's query solely based on the Large Language Model (LLM) without considering supporting document contexts from the Collection.

  • RAG (Retrieval Augmented Generation)

    This option utilizes a neural/lexical hybrid search approach to find relevant contexts from the Collection based on the user's query for generating a response.

  • RAG+ (RAG without LLM context limit)

    This option utilizes RAG (Retrieval Augmented Generation) with neural/lexical hybrid search using the user's query to find relevant contexts from the Collection for generating a response. It uses the recursive summarization technique to overcome the LLM's context limitations. The process requires multiple LLM calls.

  • HyDE RAG (Hypothetical Document Embedding)

    This option extends RAG with neural/lexical hybrid search by utilizing the user's query and the LLM response to find relevant contexts from the Collection to generate a response. It requires two LLM calls.

  • HyDE RAG+ (Combined HyDE+RAG)

    This option utilizes RAG with neural/lexical hybrid search by using both the user's query and the HyDE RAG response to find relevant contexts from the Collection to generate a response. It requires three LLM calls.
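
The approaches above mainly differ in what is used as the retrieval query and in how many LLM calls they make. The minimal sketch below illustrates those differences for RAG, HyDE RAG, and HyDE RAG+; the search and llm helpers are hypothetical stand-ins for h2oGPTe's internal hybrid search and LLM call, not actual APIs.

```python
from typing import Callable, List

# Hypothetical stand-ins (not actual h2oGPTe APIs): `search` represents the
# neural/lexical hybrid search over the Collection, `llm` a single LLM call.
Search = Callable[[str], List[str]]
LLM = Callable[[str], str]


def build_prompt(contexts: List[str], query: str) -> str:
    # Simplified prompt assembly; the configurable pieces are described under
    # "RAG prompt before context" and "RAG prompt after context" below.
    return "\n\n".join(contexts) + "\n\n" + query


def rag(query: str, search: Search, llm: LLM) -> str:
    contexts = search(query)                        # retrieve with the user's query
    return llm(build_prompt(contexts, query))       # 1 LLM call


def hyde_rag(query: str, search: Search, llm: LLM) -> str:
    hypothetical = llm(query)                       # 1st LLM call: draft answer, no documents
    contexts = search(query + "\n" + hypothetical)  # retrieve with the query and the draft answer
    return llm(build_prompt(contexts, query))       # 2nd LLM call: grounded answer


def hyde_rag_plus(query: str, search: Search, llm: LLM) -> str:
    draft = hyde_rag(query, search, llm)            # 2 LLM calls (HyDE RAG above)
    contexts = search(query + "\n" + draft)         # retrieve with the query and the HyDE RAG response
    return llm(build_prompt(contexts, query))       # 3rd LLM call
```

LLM Only corresponds to calling llm(query) directly with no retrieval, and RAG+ additionally applies recursive summarization over the retrieved contexts to work around the LLM's context limit; both are omitted from the sketch for brevity.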

caution

Depending on the selected generation approach, configure the following parameters accordingly:

RAG prompt before context

This setting defines the prompt that goes before the Document contexts in the Collection. It is used to construct the prompt sent to the Large Language Model (LLM), that is, the question you send to the LLM to generate a desired response. You can customize the prompt according to your requirements.

Example: Pay attention and remember the information below, which will help to answer the question or imperative after the context ends.

RAG prompt after context

This setting defines the prompt that goes after the Document contexts in the Collection. It is used to construct the prompt sent to the Large Language Model (LLM), that is, the question you send to the LLM to generate a desired response. You can customize the prompt according to your requirements.

Example: According to only the information in the document sources provided within the context above,
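
Taken together, the prompt before context, the retrieved Document contexts, the prompt after context, and the user's question are assembled into the prompt sent to the LLM. The sketch below is a simplified illustration of that assembly using the two example prompts above; h2oGPTe's actual internal template may differ.

```python
# Simplified illustration of how the configurable prompt pieces frame the
# retrieved Document contexts; h2oGPTe's actual internal template may differ.
prompt_before_context = (
    "Pay attention and remember the information below, which will help to "
    "answer the question or imperative after the context ends."
)
prompt_after_context = (
    "According to only the information in the document sources provided "
    "within the context above,"
)


def build_llm_prompt(contexts, user_question):
    context_block = "\n\n".join(contexts)  # Document contexts retrieved from the Collection
    return (
        f"{prompt_before_context}\n\n"
        f"{context_block}\n\n"
        f"{prompt_after_context} {user_question}"
    )


print(build_llm_prompt(
    ["<document chunk 1>", "<document chunk 2>"],
    "what were the key findings in the report?",
))
```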

