Workflow

The following is a typical workflow for using H2O Eval Studio.

  1. Add a connection to the host of the LLMs you want to evaluate (h2oGPTe RAG, h2oGPTe LLM, or OpenAI Assistant). For more information, see Add a connection.

  2. Create or import a test suite with prompts, expected answers, and (for RAG evaluation) a corpus of documents. For more information, see Tests.

  3. Create a leaderboard by running an evaluation.

  4. View a visualization of the leaderboard, obtain an HTML report, and download a ZIP archive containing the evaluation results. A programmatic sketch of these steps follows the list.
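
The same four steps can also be driven programmatically. The following is a minimal sketch assuming a Python client: the package name `eval_studio_client`, the `Client` class, and every method and argument shown are illustrative assumptions for this outline, not the actual H2O Eval Studio API; consult the product's client documentation for the real calls.

```python
# Hypothetical outline of the workflow above via a Python-style client.
# All names below (package, classes, methods, arguments) are assumptions.

from eval_studio_client import Client  # assumed package and class

client = Client(
    url="https://eval-studio.example.com",  # assumed server URL
    api_key="...",                          # assumed credential
)

# 1. Add a connection to the host of the LLMs to evaluate.
connection = client.connections.create(
    kind="h2oGPTe RAG",                     # or "h2oGPTe LLM", "OpenAI Assistant"
    url="https://h2ogpte.example.com",
    api_key="...",
)

# 2. Create a test suite with prompts, expected answers, and (for RAG
#    evaluation) a corpus of documents.
test = client.tests.create(name="smoke-test")
test.add_prompt(
    prompt="What is the capital of France?",
    expected_answer="Paris",
)
test.add_documents(["corpus/handbook.pdf"])  # RAG corpus

# 3. Run an evaluation to create a leaderboard.
leaderboard = client.leaderboards.create(
    connection=connection,
    tests=[test],
)
leaderboard.wait_to_finish()

# 4. Retrieve the results: the HTML report and the ZIP archive.
leaderboard.download_report("report.html")
leaderboard.download_results("results.zip")
```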

