Dataset format: Image instance segmentation
- Formats
- Examples
- Helper functions
- Format conversions
H2O Hydrogen Torch supports several dataset (data) formats for an image instance segmentation experiment. Supported formats are as follows:
- Hydrogen Torch format
- COCO format
The data following the Hydrogen Torch format for an image instance segmentation experiment is structured as follows: A zip file (1) containing a Parquet file (2) and an image folder (3):
folder_name.zip (1)
│ └───pq_name.pq (2)
│ │
│ └───image_folder_name (3)
│ └───name_of_image.image_extension
│ └───name_of_image.image_extension
│ └───name_of_image.image_extension
│ ...
You can have multiple Parquet files in the zip file that you can use as train, validation, and test dataframes:
- A train Parquet file needs to follow the format described above
- A validation Parquet file needs to follow the same format as a train Parquet file
- A test Parquet file needs to follow the same format as a train Parquet file, but does not require a class_id and rle_mask column
- The available dataset connectors require the data for an image instance segmentation experiment to be in a zip file.
Note: To learn how to upload your zip file as your dataset in H2O Hydrogen Torch, see Dataset connectors.
- A Parquet file containing the following columns:
- An image column containing the names of the images for the experiment, where each image has an image extension specified.
Note:
- Images can contain a mix of supported image extensions. To learn about supported image extensions, see Supported image extensions for image processing.
- The names of the image files do not specify the data directory (location of the images in the zip file). You can specify the data directory (data folder) when uploading the dataset or before the dataset is used for an experiment. For more information, see Import dataset settings.
- A class_id column containing the class names of each instance mask. Each row of the dataset should contain a list of class names, where each element in the list refers to a single mask instance.
- A rle_mask column containing run-length-encoded (RLE) masks for each instance from the class_id column. Each row of the dataset should contain a list of RLE-encoded masks, where each element in the list refers to a single instance.
Note: The class_id and rle_mask lists of a row must have the same length, equal to the number of instances in the respective image. If an image contains no instances, both lists need to be empty (see the example after this list).
- An optional fold column containing cross-validation fold indexes.
Note: The fold column can include integers (0, 1, 2, …, N-1 values or 1, 2, 3, …, N values) or categorical values.
- An image folder that contains all the images specified in the image column; H2O Hydrogen Torch uses the images in this folder to run the image instance segmentation experiment.
Note: All image file names need to specify an image extension. Images can contain a mix of supported image extensions. To learn about supported image extensions, see Supported image extensions for image processing.
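For illustration, the following is a minimal sketch of how such a Parquet file can be built with pandas and pyarrow; the image names and RLE strings are hypothetical placeholders:

import pandas as pd

# One row per image; class_id and rle_mask hold per-instance lists of equal length.
df = pd.DataFrame(
    {
        "image_id": ["image_1.jpg", "image_2.jpg", "image_3.jpg"],
        "class_id": [["car", "person"], ["car"], []],  # class name per instance
        "rle_mask": [["1 5 10 3", "20 4"], ["7 2"], []],  # RLE string per instance
    }
)
# image_3.jpg has no instances, so both of its lists are empty.
df.to_parquet("train.pq", engine="pyarrow", index=False)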
The data following the COCO format for an image instance segmentation experiment is structured as follows: A zip file (1) containing a JSON file (2) and an image folder (3):
folder_name.zip (1)
│ └───json_name.json (2)
│ │
│ └───image_folder_name (3)
│ └───name_of_image.image_extension
│ └───name_of_image.image_extension
│ └───name_of_image.image_extension
│ ...
You can have multiple JSON files in the zip file that you can use as train, validation, and test datasets:
- A train JSON file needs to follow the format described above
- A validation JSON file needs to follow the same format as a train JSON file
- A test JSON file needs to follow the same format as a train JSON file, but does not require labels
- The available dataset connectors require the data for an image instance segmentation experiment to be in a zip file.
Note: To learn how to upload your zip file as your dataset in H2O Hydrogen Torch, see Data connectors.
- A JSON file that contains labels in the COCO format (see the sketch after this list).
- A folder containing all the images specified in the JSON file; H2O Hydrogen Torch uses the images in this folder to run an image instance segmentation experiment.
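As a rough illustration (a minimal sketch with hypothetical values, not the complete COCO specification), the JSON file holds images, categories, and annotations entries along these lines:

import json

coco = {
    "images": [{"id": 1, "file_name": "name_of_image.jpg", "height": 480, "width": 640}],
    "categories": [{"id": 1, "name": "car"}],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,  # refers to an entry in "images"
            "category_id": 1,  # refers to an entry in "categories"
            "segmentation": [[10.0, 10.0, 60.0, 10.0, 60.0, 40.0, 10.0, 40.0]],  # polygon(s)
            "area": 1500.0,
            "bbox": [10.0, 10.0, 50.0, 30.0],
            "iscrowd": 0,
        }
    ],
}

with open("json_name.json", "w") as fp:
    json.dump(coco, fp)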
- Hydrogen Torch format
The coco_image_instance_segmentation.zip file is a preprocessed dataset in H2O Hydrogen Torch and was formatted following the Hydrogen Torch format to solve an image instance segmentation problem. The structure of the zip file is as follows:
coco_image_instance_segmentation.zip
│ └───train.pq
│ │
│ └───images
│ └───000000151231.jpg
│ └───000000433826.jpg
│ └───000000061159.jpg
│ ...
The following are three random rows from the Parquet file:
image_id | class_id | rle_mask |
---|---|---|
000000151231.jpg | ['car' 'car'] | ['91949 7 92375 14 92801... |
000000433826.jpg | ['car' 'car'] | ['224473 3 224952 4 22... |
000000061159.jpg | ['car' 'car'] | ['161665 9 162291 25... |
- In this example, the data directory in the image column is not specified. Therefore, it needs to be specified when uploading the dataset, and the images folder needs to be selected as the value for the Data folder setting. For more information, see Import dataset settings.
- To learn how to access one of the preprocessed datasets in H2O Hydrogen Torch, see Demo (preprocessed) datasets.
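If you download and extract the zip file locally, the Parquet file can be inspected with pandas (a minimal sketch; the path is hypothetical, and the rle_mask strings can be decoded with the rle2mask helper shown in the next section):

import pandas as pd

df = pd.read_parquet("coco_image_instance_segmentation/train.pq")
print(df.head())

# Each row must hold class_id and rle_mask lists of equal length.
assert (df["class_id"].map(len) == df["rle_mask"].map(len)).all()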
RLE encoding and decoding functions
from typing import Tuple

import numpy as np


def mask2rle(x: np.ndarray) -> str:
    """
    Converts input masks into RLE-encoded strings.

    Args:
        x: numpy array of shape (height, width), 1 - mask, 0 - background

    Returns:
        RLE string
    """
    pixels = x.T.flatten()
    pixels = np.concatenate([[0], pixels, [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]
    return " ".join(str(x) for x in runs)


def rle2mask(mask_rle: str, shape: Tuple[int, int]) -> np.ndarray:
    """
    Converts RLE-encoded string into the binary mask.

    Args:
        mask_rle: RLE-encoded string
        shape: (height, width) of array to return

    Returns:
        binary mask: 1 - mask, 0 - background
    """
    s = mask_rle.split()
    starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
    starts -= 1
    ends = starts + lengths
    img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for lo, hi in zip(starts, ends):
        img[lo:hi] = 1
    return img.reshape(shape, order="F")  # Needed to align to RLE direction
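For example, a quick round-trip check of these helpers (a minimal sketch with a hypothetical 4x4 mask):

import numpy as np

mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1  # a small square instance

rle = mask2rle(mask)
print(rle)  # "6 2 10 2": column-major run starts and lengths

restored = rle2mask(rle, shape=mask.shape)
assert np.array_equal(mask, restored)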
CSV file with masks to Hydrogen Torch format
import pandas as pd

# The source CSV is expected to hold one row per instance with
# image_id, class_id, and rle_mask columns.
df = pd.read_csv("/data/train.csv")

# Prepare the processed dataset: collapse the per-instance rows into
# one row per image with list-valued class_id and rle_mask columns.
df = df.groupby(["image_id"]).agg(lambda x: x.to_list()).reset_index()

df[["image_id", "class_id", "rle_mask"]].to_parquet(
    "/data/train.pq", engine="pyarrow", index=False
)
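Note that the groupby step only collapses per-instance rows into the list-valued class_id and rle_mask columns that the Hydrogen Torch format expects; images that have no instances would need their empty-list rows added separately, since a plain per-instance CSV typically has no rows for them, and the resulting Parquet file still needs to be zipped together with the image folder.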
COCO to H2O Hydrogen Torch format
import json

import pandas as pd
from pycocotools.coco import COCO


def get_instance_segmentation(df, coco_path):
    # df is the COCO JSON loaded as a dict; coco_path points to the same file
    # and is used by pycocotools to decode the instance masks.
    coco = COCO(coco_path)
    images = pd.DataFrame(df["images"])
    categories = pd.DataFrame(df["categories"])
    annotations = pd.DataFrame(df["annotations"])

    # Convert each COCO annotation into an RLE string with the
    # mask2rle helper defined above.
    rles = []
    for annotation in df["annotations"]:
        rles.append(mask2rle(coco.annToMask(annotation)))
    annotations["rle_mask"] = rles
    annotations.loc[annotations.rle_mask == "", "rle_mask"] = float("nan")
    annotations = annotations[["image_id", "category_id", "rle_mask"]]
    annotations["category_id"] = annotations["category_id"].astype(int)

    # Attach the class names and image file names.
    annotations = annotations.merge(
        categories[["id", "name"]].drop_duplicates(),
        left_on="category_id",
        right_on="id",
        how="left",
    )
    annotations = annotations.merge(
        images[["id", "file_name"]].drop_duplicates(),
        left_on="image_id",
        right_on="id",
        how="right",
    )
    annotations.drop(["id_x", "id_y", "image_id"], axis=1, inplace=True)
    return annotations


# Read data
train_path = "/data/COCO_train_annos.json"
with open(train_path, "r") as fp:
    train = json.load(fp)

# Parse COCO format
train_ann = get_instance_segmentation(df=train, coco_path=train_path)

# Prepare the processed dataset: one row per image with list-valued columns;
# images without annotations get empty lists.
train_ann = (
    train_ann.groupby(["file_name"])
    .agg(lambda x: [] if pd.isnull(x).all() else x.to_list())
    .reset_index()
)

# class_id and rle_mask are the column names the Hydrogen Torch format expects.
train_ann = train_ann.rename(columns={"name": "class_id"})
train_ann[["file_name", "class_id", "rle_mask"]].to_parquet(
    "/data/train.pq", engine="pyarrow", index=False
)
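Before uploading, remember to zip the resulting train.pq together with an image folder that contains every file referenced in the file_name column, matching the Hydrogen Torch format structure described above.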