Updates to Google’s Open Images could lead to improved app interface design

Today, Google’s Open Images corpus for computer vision tasks got a boost with new visual relationships, human action annotations, and image-level labels, as well as a new form of multimodal annotations called localized narratives. Google says this last addition could create “potential avenues of research” for studying how people describe images, which could lead to interface design insights (and subsequently improvements) across web, desktop, and mobile apps.

In 2016, Google introduced Open Images, a dataset of millions of labeled images spanning thousands of object categories. Major updates arrived in 2018 and 2019, bringing with them 15.4 million bounding boxes for 600 object categories and segmentation masks (which mark the outlines of objects) for 2.8 million object instances in 350 categories.

“Along with the dataset itself, the associated Open Images [c]hallenges have spurred the latest advances in object detection, instance segmentation, and visual relationship detection,” wrote Jordi Pont-Tuset, a research scientist at Google Research. “Open Images is the largest annotated image dataset in many regards, for use in training the latest deep convolutional neural networks for computer vision tasks.”

As Pont-Tuset explains, one of the motivations behind localized narratives is to leverage the connection between vision and language, which is typically done via image captioning (i.e., images paired with written descriptions of their content). But image captioning lacks visual “grounding.” To mitigate this, some researchers have drawn bounding boxes for the nouns in captions after the fact — in contrast to localized narratives, where every word in the description is grounded.

The localized narratives in Open Images were generated by annotators who gave spoken descriptions of images while hovering a computer mouse over the regions they were describing. The annotators also manually transcribed their descriptions, after which Google researchers aligned the transcriptions with automatic speech recognition output, ensuring that the speech, text, and mouse trace were correct and synchronized.
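Because every word is time-stamped and the mouse trace is time-stamped too, grounding a phrase amounts to matching time windows. The sketch below illustrates that idea on a made-up record; the field names (`timed_caption`, `traces`, etc.) and values are illustrative assumptions, not the official Open Images schema.

```python
import json

# A hypothetical localized-narrative record: a caption split into timed
# utterances, plus mouse-trace points with normalized coordinates and
# timestamps. All field names and values here are invented for illustration.
record_json = '''
{
  "image_id": "img_0001",
  "caption": "A dog running on the beach",
  "timed_caption": [
    {"utterance": "A dog", "start_time": 0.0, "end_time": 0.8},
    {"utterance": "running on the beach", "start_time": 0.8, "end_time": 2.1}
  ],
  "traces": [
    [{"x": 0.41, "y": 0.55, "t": 0.1}, {"x": 0.44, "y": 0.57, "t": 0.7}],
    [{"x": 0.20, "y": 0.80, "t": 1.0}, {"x": 0.75, "y": 0.82, "t": 2.0}]
  ]
}
'''

def ground_utterances(record):
    """Pair each timed utterance with the mouse-trace points whose
    timestamps fall inside that utterance's time window."""
    points = [p for trace in record["traces"] for p in trace]
    grounded = []
    for seg in record["timed_caption"]:
        seg_points = [(p["x"], p["y"]) for p in points
                      if seg["start_time"] <= p["t"] <= seg["end_time"]]
        grounded.append((seg["utterance"], seg_points))
    return grounded

record = json.loads(record_json)
for utterance, pts in ground_utterances(record):
    print(f"{utterance!r} -> {len(pts)} trace point(s)")
```

In this toy record, the trace points spoken during "A dog" land near the dog, so every word inherits a location, which is the "grounding" that plain image captions lack.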

“Speaking and pointing simultaneously are very intuitive, which allowed us to give the annotators very vague instructions about the task,” explained Pont-Tuset. “[This latest version of] Open Images is a significant qualitative and quantitative step towards improving the unified annotations for image classification, object detection, visual relationship detection, and instance segmentation … [w]e hope that [it] will further stimulate progress towards genuine scene understanding.”

Open Images is freely available. According to Google, Open Images’ totals now stand at 9 million images annotated with 36 million image-level labels, 15.8 million bounding boxes, 2.8 million instance segmentations, 391,000 visual relationships, and 59.9 million human-verified image-level labels in nearly 20,000 categories.
