Version: v0.4.0

Tutorials

Learn how to generate labeled datasets in H2O Label Genie

Learning path

note

To learn how H2O Label Genie can help annotate data to build and deploy a model with H2O Hydrogen Torch and H2O MLOps, refer to the following blog: In the H2O AI Cloud, build, deploy, and score a state-of-the-art image classification model, starting with unlabeled data.

Text annotation tasks

  • Tutorial 1A: Text classification annotation task

    This tutorial describes the process of creating a text classification annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate a dataset containing user reviews (in text format) and ratings (from 0 to 5) of Amazon products.

  • Tutorial 2A: Text regression annotation task

    This tutorial describes the process of creating a text regression annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate a dataset containing user reviews (in text format) and ratings (from 0 to 5) of Amazon products.

  • Tutorial 3A: Text-entity recognition annotation task

    This tutorial describes the process of creating a text-entity recognition annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate a dataset containing user reviews (in text format) and ratings (from 0 to 5) of Amazon products.

  • Tutorial 4A: Text summarization annotation task

    This tutorial describes the process of creating a text summarization annotation task. To highlight the process, we will annotate a dataset containing human-generated abstractive summaries of news stories published on the Cable News Network (CNN) and Daily Mail websites.

  • Tutorial 5A: Text-generative AI annotation task

    This tutorial describes the process of creating a text-generative AI annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate the Amazon reviews dataset, which contains user reviews (in text format) and ratings (from 0 to 5) of Amazon products. In particular, we will use an H2O.ai zero-shot learning model (a large language model (LLM)) to summarize the product reviews.

Image annotation tasks

  • Tutorial 1B: Image classification annotation task

    This tutorial describes the process of creating an image classification annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate a dataset containing images of cars and coffee.

  • Tutorial 2B: Image regression annotation task

    This tutorial describes the process of creating an image regression annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate a dataset containing images of healthy and diseased apple leaves for plant pathology recognition.

  • Tutorial 3B: Object detection annotation task

    This tutorial describes the process of creating an object detection annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate a dataset containing images of cars and coffee.

  • Tutorial 4B: Image instance segmentation annotation task

    This tutorial describes the process of creating an image instance segmentation annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate a dataset containing images of cars and coffee.

Audio annotation tasks

  • Tutorial 1C: Audio classification annotation task

    This tutorial describes the process of creating an audio classification annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate a dataset containing 5-second-long recordings of environmental sounds organized into ten classes (with 40 examples per class).

  • Tutorial 2C: Audio regression annotation task

    This tutorial describes the process of creating an audio regression annotation task, including specifying an annotation task rubric for it. To highlight the process, we will annotate a dataset containing 600 audio samples of spoken digits (0-9) from 60 different speakers.

