Generate a Document summary
Overview
In the Documents section, you can generate a Document summary.
Instructions
To generate a Document summary, follow these steps (a scripted alternative using the Python client is sketched after the steps):
- In the Enterprise h2oGPTe navigation menu, click Documents.
- In the Documents grid or list, select the name of the Document you want to summarize.
- Click Create a new summary/New summary.
- In the Summarization settings panel, customize the summarization settings according to your requirements.
note
To learn more about each summarization setting, see Summarization settings.
- Click Summarize.
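If you prefer to script this flow instead of using the UI, the h2ogpte Python client can generate a Document summary programmatically. The sketch below is a minimal, hedged example: the summarize_document method name and its parameters (document_id, llm, max_num_chunks, system_prompt, pre_prompt_summary, prompt_summary) are assumptions that mirror the settings described on this page; verify them against the client reference for your release.

```python
# Illustrative sketch only: the method name and parameters below are assumptions
# mapped to the UI settings on this page; check the h2ogpte Python client
# reference for your installed version before relying on them.
from h2ogpte import H2OGPTE

client = H2OGPTE(
    address="https://h2ogpte.example.com",  # placeholder: your Enterprise h2oGPTe URL
    api_key="sk-XXXXXXXX",                  # placeholder: an API key from your account
)

document_id = "<document-id>"  # the Document selected in the Documents grid or list

summary = client.summarize_document(
    document_id=document_id,
    llm="gpt-4o",              # "LLM to use" (placeholder model name)
    max_num_chunks=100,        # "Max. number of chunks"
    system_prompt="You are h2oGPTe, an expert document AI system created by H2O.ai.",
    pre_prompt_summary="In order to write a concise summary, pay attention to the following text:",
    prompt_summary="Using only the text above, write a condensed and concise summary:",
)
print(summary)  # the returned summary object; its exact shape depends on the client version
```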
Summarization settings
LLM to use
This setting enables you to select the Large Language Model (LLM) to generate the Document summary.
Max. number of chunks and approximate cost range
This setting lets you set how many chunks extracted from the Document are used to generate the summary. Adjust the slider to select the desired number of chunks.
For example, setting the slider to 100 directs the Large Language Model (LLM) to use 100 chunks from the Document to generate the Document summary.
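Conceptually, the chunk limit caps how much of the Document's extracted text is packed into the summarization context, which is also what drives the approximate cost estimate. The snippet below is a simplified, generic illustration of that trade-off; the function and variable names are hypothetical and do not reflect h2oGPTe's internal implementation.

```python
def build_summary_context(chunks: list[str], max_num_chunks: int) -> str:
    """Join at most `max_num_chunks` extracted chunks into a single context block.

    A higher limit gives the LLM more of the Document to work from, but it also
    increases the number of tokens sent to the model (and therefore the cost).
    """
    selected = chunks[:max_num_chunks]  # e.g. a slider value of 100 keeps 100 chunks
    return "\n\n".join(selected)


# Hypothetical usage: in h2oGPTe the chunks come from the ingestion pipeline.
document_chunks = ["First chunk of text ...", "Second chunk ...", "Third chunk ..."]
context = build_summary_context(document_chunks, max_num_chunks=100)
```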
Prompt Template
This setting allows you to select a prompt template from the drop-down menu to customize the prompts used for the Collection. You can create your own prompt template on the Prompts page and use it for your Collection.
Personality (System Prompt)
This setting allows you to customize the personality of the LLM for the Document summary. It shapes the tone and behavior of the generated summary.
Example: You are h2oGPTe, an expert question-answering document AI system created by H2O.ai that performs like GPT-4 by OpenAI. I will give you a $200 tip if you answer the question correctly. However, you must only use the given document context to answer the question. I will lose my job if your answer is inaccurate or uses data outside of the provided context.
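In chat-style LLM APIs, a personality like this is typically passed as the system message that precedes the document context and summarization instructions. The snippet below is a generic illustration of that structure, not the exact payload Enterprise h2oGPTe sends.

```python
# Generic chat-message layout; the actual request format is handled by h2oGPTe.
system_prompt = (
    "You are h2oGPTe, an expert question-answering document AI system created by "
    "H2O.ai. You must only use the given document context to answer."
)

messages = [
    {"role": "system", "content": system_prompt},  # shapes the tone and behavior of the summary
    {"role": "user", "content": "<pre-context prompt, document chunks, and post-context prompt>"},
]
```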
Summary prompts
This setting defines the pre- and post-context prompts that Enterprise h2oGPTe uses to generate a Document summary (see the sketch after this list).
- Pre-context: The first text box lets you specify a prompt that is sent to the LLM before the contextual information. It provides the instructions or query that steer the summary the LLM generates, keeping it aligned with the intended context or query.
- Post-context: The second text box lets you specify a prompt that is sent to the LLM after the contextual information. It directs the summary the LLM generates, ensuring it stays aligned with the provided context or query.
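Taken together, the final summarization request is roughly "pre-context prompt, then the selected document chunks, then the post-context prompt". The sketch below shows that assembly conceptually; the function and the example prompt strings are illustrative, not the literal template h2oGPTe uses.

```python
def assemble_summary_prompt(pre_prompt: str, context: str, post_prompt: str) -> str:
    """Wrap the document context with the pre- and post-context summary prompts."""
    return f"{pre_prompt}\n\n{context}\n\n{post_prompt}"


prompt = assemble_summary_prompt(
    pre_prompt="In order to write a concise summary, pay attention to the following text:",
    context="<selected document chunks>",
    post_prompt="Using only the text above, write a condensed and concise summary of key results:",
)
```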