Automatic semantics-supported video content analysis for new media services

SeVIA

The fashion articles shown here are detected using a neural network and categorized on the basis of hierarchically structured classifiers.

This project, which is being carried out in cooperation with FutureTV and the University of Rostock, is investigating how analytical approaches from computer vision research can be applied to develop and provide innovative, internationally marketable services. The study focuses on the classification and detection of objects in video sequences and on the semantic sequencing of scenes. The goal is to make it possible to choose appropriate material for a commercial or to display additional information about the depicted articles. Deep learning based on convolutional neural networks (CNNs) is the core technology; within this project it is used for both the object detectors and the hierarchical classifiers.
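
A minimal sketch of what such a CNN-based hierarchical classifier could look like, assuming a PyTorch setup; the backbone, the two-level class hierarchy, and all names below are illustrative assumptions and are not taken from the project itself.

```python
# Hypothetical sketch: a shared CNN backbone feeding a coarse classifier and
# per-category fine-grained classifiers (hierarchical classification).
import torch
import torch.nn as nn
import torchvision.models as models

COARSE = ["top", "bottom", "footwear"]                # assumed level-1 classes
FINE = {"top": ["shirt", "jacket", "dress"],          # assumed level-2 classes
        "bottom": ["jeans", "skirt", "shorts"],
        "footwear": ["sneaker", "boot", "sandal"]}

class HierarchicalClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # shared CNN feature extractor
        self.backbone.fc = nn.Identity()               # expose the 512-d feature vector
        self.coarse_head = nn.Linear(512, len(COARSE))
        self.fine_heads = nn.ModuleDict(
            {c: nn.Linear(512, len(FINE[c])) for c in COARSE})

    def forward(self, images):
        feats = self.backbone(images)
        coarse_idx = self.coarse_head(feats).argmax(dim=1)
        # Route each sample to the fine-grained head of its predicted coarse class.
        results = []
        for i, idx in enumerate(coarse_idx.tolist()):
            c = COARSE[idx]
            fine_idx = self.fine_heads[c](feats[i:i + 1]).argmax(dim=1).item()
            results.append((c, FINE[c][fine_idx]))
        return results

model = HierarchicalClassifier().eval()
with torch.no_grad():
    print(model(torch.randn(1, 3, 224, 224)))          # e.g. [('top', 'jacket')]
```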

For practical evaluation, a software prototype is being developed and tested on fashion articles. Video sequences are analyzed automatically: first the displayed scene is semantically classified, then the depicted fashion articles are detected by a neural network and categorized using a hierarchically structured set of classes. The scene's semantic classification is taken into account when automatically choosing the appropriate classifiers.
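
The described analysis flow could be sketched as follows; the function names, scene labels, and category paths are hypothetical placeholders standing in for trained models and are not part of the project's actual interface.

```python
# Hypothetical sketch of the described flow: classify the scene, use the scene
# class to pick the matching classifiers, then detect and hierarchically
# categorize the fashion articles in each frame.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    box: tuple            # (x, y, w, h) in pixels
    category_path: list   # coarse-to-fine path, e.g. ["top", "jacket"]

# Placeholder models; a real system would load trained CNNs here.
def classify_scene(frame) -> str:
    return "fashion_show"                               # assumed scene label

def detect_articles(frame) -> list:
    return [(10, 20, 80, 160)]                          # assumed bounding boxes

def categorize(frame, box, scene) -> list:
    return ["top", "jacket"]                            # assumed hierarchy path

# Scene-dependent choice of classifiers, as described above (assumed mapping).
CLASSIFIERS_BY_SCENE: dict[str, Callable] = {
    "fashion_show": categorize,
    "street": categorize,
}

def analyze(frames) -> list[Detection]:
    results = []
    for frame in frames:
        scene = classify_scene(frame)                   # step 1: scene semantics
        classify = CLASSIFIERS_BY_SCENE.get(scene, categorize)
        for box in detect_articles(frame):              # step 2: object detection
            results.append(Detection(box, classify(frame, box, scene)))  # step 3
    return results

print(analyze(frames=[object()]))
```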

Funded by