
Chat settings

Overview

The Chat settings tab lets you manage and customize a Chat session. You can control a session by adjusting the system prompt, selecting the LLM (Large Language Model) used to generate responses, and choosing a suitable response generation approach.

Instructions

  1. On the h2oGPTe navigation menu, click Chat.
  2. From the chats table, select the Chat session you want to customize.
  3. Click the Settings icon.
  4. Customize your Chat session according to your requirements. For more detailed information about each setting, see Chat settings.
  5. Click Update to apply the changes.
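
The same settings can also be applied programmatically. The sketch below is a minimal example assuming the h2ogpte Python client; the address, API key, collection ID, and parameter names are placeholders and assumptions to verify against your deployment's API reference.

```python
from h2ogpte import H2OGPTE

# Connect to your deployment (address and API key are placeholders).
client = H2OGPTE(address="https://h2ogpte.example.com", api_key="sk-...")

# Create a chat session backed by a collection, then query it with the
# same settings the UI exposes (parameter names assumed, verify locally).
chat_session_id = client.create_chat_session(collection_id="...")

with client.connect(chat_session_id) as session:
    reply = session.query(
        "What does the quarterly report say about revenue?",
        system_prompt="I am h2oGPTe, an intelligent retrieval-augmented "
                      "GenAI system developed by H2O.ai.",
        rag_config={"rag_type": "rag"},  # generation approach
        timeout=60,
    )
    print(reply.content)
```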

Chat settings

The Chat settings tab includes the following settings:

Prompt settings

Customize the prompts used for your chat. Click Reset to restore the default prompt settings.

Personality (System Prompt)

Customize the personality of the LLM for your Chat session. The system prompt shapes the behavior of the generated responses.
Example: I am h2oGPTe, an intelligent retrieval-augmented GenAI system developed by H2O.ai.
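
As a rough illustration, continuing with the `session` opened in the sketch above, the personality would typically be passed with each query; the `system_prompt` parameter name is an assumption taken from the h2ogpte Python client.

```python
# Sketch only: pass the personality with each query
# (system_prompt is an assumed parameter name).
reply = session.query(
    "Summarize the onboarding guide.",
    system_prompt=(
        "I am h2oGPTe, an intelligent retrieval-augmented GenAI system "
        "developed by H2O.ai."
    ),
)
```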

LLM to use

Choose the Large Language Model (LLM) to use for generating responses.
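
A hedged sketch of model selection with the same client: `get_llms` and the `llm` parameter are assumptions, and the model name shown is a placeholder for whatever your deployment hosts.

```python
# Sketch only: discover the models available on this deployment,
# then name one explicitly for a query.
print(client.get_llms())  # the list varies per deployment

reply = session.query(
    "Compare the two proposals.",
    llm="gpt-4o",  # placeholder model name
)
```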

Generation approach (RAG type to use)

Select the generation approach for responses. h2oGPTe offers the following methods for generating responses to user queries:

  • LLM Only (no RAG)
    Selecting the LLM Only (no RAG) option generates a response to the user's query from the Large Language Model (LLM) alone, without any supporting Document contexts from the collection.

  • RAG (Retrieval Augmented Generation)
    Selecting the RAG (Retrieval Augmented Generation) option uses a neural/lexical hybrid search to find contexts relevant to the user's query in the collection and generates the response from them.

  • RAG+ (RAG without LLM context limit)
    Selecting the RAG+ (RAG without LLM context limit) option uses RAG with neural/lexical hybrid search over the user's query, then applies recursive summarization to overcome the LLM's context limit. The process requires multiple LLM calls.

  • HyDE RAG (Hypothetical Document Embedding)
    Selecting the HyDE RAG (Hypothetical Document Embedding) option extends RAG with a neural/lexical hybrid search that uses both the user's query and an initial LLM response to find relevant contexts in the collection. It requires two LLM calls.

  • HyDE RAG+ (Combined HyDE+RAG)
    Selecting the HyDE RAG+ (Combined HyDE+RAG) option uses RAG with neural/lexical hybrid search over both the user's query and the HyDE RAG response to find relevant contexts in the collection. It requires three LLM calls.

Depending on the selected generation approach, configure the parameters listed below.
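
In client terms, the generation approach usually maps to a single setting. The sketch below assumes a `rag_config` dictionary with a `rag_type` key, as in the h2ogpte Python client; the exact value strings are assumptions to check against your API reference.

```python
# Sketch only: one rag_type value per generation approach; the strings
# below are assumptions, not a confirmed enum.
for rag_type in ("llm_only", "rag", "rag+", "hyde1", "hyde2"):
    reply = session.query(
        "What risks does the contract mention?",
        rag_config={"rag_type": rag_type},
    )
    print(rag_type, "->", reply.content[:80])
```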

RAG prompt before context

Set a prompt that precedes the Document contexts from the collection. Together with the contexts and the prompt after the context, it forms the LLM prompt, the question sent to the LLM (Large Language Model) to generate a response. You can customize the prompt according to your requirements.
Example: Pay attention and remember the information below, which will help to answer the question or imperative after the context ends.

RAG prompt after context

Set a prompt that follows the Document contexts from the collection. It completes the LLM prompt sent to the LLM (Large Language Model). You can customize the prompt according to your requirements.
Example: According to only the information in the document sources provided within the context above,
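
Continuing the sketch, both prompts would be supplied alongside the query; the `pre_prompt_query` and `prompt_query` parameter names are assumptions borrowed from the h2ogpte Python client.

```python
# Sketch only: the prompt before and after the retrieved context
# (parameter names assumed).
reply = session.query(
    "When does the warranty expire?",
    pre_prompt_query=(
        "Pay attention and remember the information below, which will "
        "help to answer the question or imperative after the context ends."
    ),
    prompt_query=(
        "According to only the information in the document sources "
        "provided within the context above,"
    ),
)
```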

Note: Open the LLM prompt section of a chat to see the full prompt (question).

HyDE No-RAG LLM prompt extension

Customize and extend the prompt used for the initial No-RAG LLM call in the HyDE (Hypothetical Document Embedding) approaches.
Example: Keep the answer brief, and list the 5 most relevant key words at the end.
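
As a sketch, the extension would ride along with a HyDE generation approach; the `hyde_no_rag_llm_prompt_extension` key (and the `hyde1` value) are assumptions to verify against your deployment.

```python
# Sketch only: extend the prompt of the initial No-RAG HyDE call
# (key and value names assumed).
reply = session.query(
    "What are the key findings?",
    rag_config={
        "rag_type": "hyde1",
        "hyde_no_rag_llm_prompt_extension": (
            "Keep the answer brief, and list the 5 most relevant "
            "key words at the end."
        ),
    },
)
```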

Include self-reflection for every response

Toggle this option to include a self-reflection pass in each response the LLM generates. Self-reflection has the model evaluate its own answer, which can yield a more complete response.
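
Programmatically, self-reflection would be configured per query; the `self_reflection_config` parameter and the keys below are assumptions, not a confirmed schema.

```python
# Sketch only: ask a (possibly different) LLM to critique each answer;
# the configuration keys here are assumed.
reply = session.query(
    "Summarize the audit findings.",
    self_reflection_config={
        "llm_reflection": "gpt-4o",  # placeholder model name
        "prompt_reflection": (
            "Critique the answer above for completeness and "
            "faithfulness to the provided context."
        ),
    },
)
```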

