Version: Next

A Chat session


A Chat session is a focused interaction between you and Enterprise h2oGPTe, consisting of a series of prompts and answers that are based on a specific Collection.


The default language for chatting with Enterprise h2oGPTe is English. However, you can chat in a different language by specifying it in the Personality (System Prompt) setting on the Chat's settings page. For more information about supported languages, see FAQs.
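For example, a Personality (System Prompt) along the following lines instructs the model to answer in another language. The exact wording is illustrative, not a required format:

```
You are a helpful assistant. Always respond in French,
regardless of the language of the user's question.
```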

Components of a Chat session

(Image: Components of a Chat session)

  1. Copy response

    This button enables you to copy the LLM response to the clipboard.

  2. Upvote/Downvote response

    These two buttons allow you to provide feedback on the usefulness of a response. This feedback is valuable for developers in improving the model. Your feedback is stored on the Feedback page. To learn more, see Feedback.

  3. Self Reflection

    This button displays the self-reflection score of the LLM response.

  4. LLM prompt

    This button allows you to view the full LLM prompt, which is constructed from the RAG prompt before context, the Document context, and the RAG prompt after context. The LLM prompt is the complete input sent to the LLM to generate the response.
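As a rough illustration, the parts described above fit together as follows. This is a hypothetical sketch with made-up names and templates, not h2oGPTe's actual prompt-construction code:

```python
# Hypothetical sketch of how the pieces of the LLM prompt fit together.
# The function name, parameters, and template are illustrative only;
# h2oGPTe's actual prompt construction is internal and may differ.

def build_llm_prompt(rag_prompt_before: str,
                     document_context: str,
                     rag_prompt_after: str,
                     question: str) -> str:
    """Concatenate the RAG prompt parts around the retrieved context."""
    return "\n\n".join([
        rag_prompt_before,   # instructions placed before the context
        document_context,    # chunks retrieved from the Collection
        rag_prompt_after,    # instructions placed after the context
        question,            # the user's query
    ])

prompt = build_llm_prompt(
    "Use only the context below to answer.",
    "[retrieved Document chunks]",
    "Answer concisely and cite the context.",
    "What is a Chat session?",
)
```

Viewing the LLM prompt in the UI shows the final assembled text, which is what the model actually receives in place of the bare user question.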

  5. Usage stats

    This button opens the Usage stats section, which provides detailed information about performance and resource utilization during a Chat session. These stats track the efficiency and cost of the session.

    • Response time: The time the LLM (Large Language Model) took to generate a response to the user's query.
    • Cost: The cost of processing the user's query and generating the response, measured in US dollars.
    • Usage: Additional details about resource utilization during the Chat session.
      • LLM: The Large Language Model (LLM) used to generate the response.
      • Input tokens: The number of tokens in the user's query.
      • Output tokens: The number of tokens in the generated response.
      • Origin: The generation approach (RAG type) used to produce the response.
      • Cost: The cost associated with the Chat session, repeated from above.
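A per-session cost like the one shown in Usage stats is typically derived from the token counts. The sketch below shows the arithmetic; the per-token prices are made-up placeholders, not h2oGPTe's actual rates:

```python
# Illustrative cost calculation from token counts. The prices below are
# hypothetical placeholders, not h2oGPTe's actual billing rates.

PRICE_PER_1K_INPUT = 0.0005    # USD per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.0015   # USD per 1,000 output tokens (assumed)

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in US dollars for one prompt/response pair."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)

cost = session_cost(input_tokens=1200, output_tokens=400)
print(f"${cost:.4f}")  # → $0.0012
```

Output tokens are usually priced higher than input tokens, which is why the two counts are reported separately in the Usage section.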
  6. Delete response

    This button allows you to delete the LLM response.

  7. References

    This section highlights the parts of the Document from which the context used to generate the response was derived.