Modeling Salient Object-Object Interactions to Generate Textual Descriptions for Natural Images
Date
2012
Authors
Adeli, Hossein
Publisher
East Carolina University
Abstract
In this thesis we consider the problem of automatically generating textual descriptions of images, a capability useful in many applications. For example, searching and retrieving visual data among the overwhelming number of images and videos available on the Internet requires a deeper understanding of multimedia content than user-annotated tags and metadata can provide. While this task remains very challenging for machines, humans can easily generate concise descriptions of images: they avoid what is unnecessary or unrelated to the main point of an image and instead describe the objects, their actions and attributes, their interactions with each other, and the context in which all of this takes place. Our method consists of two main steps to automatically generate an image description. First, using saliency maps and object detectors, it determines the objects that are of interest to the observer and hence should appear in the description of the image. Then the pose (body-part configuration) of those objects/entities is used to recognize single actions and the interactions between them. To generate sentences, we use a syntactic model that first orders the nouns (objects) and then builds sub-trees around the detected objects using the predicted actions. The model combines those sub-trees using the recognized interactions, and finally the context of the interactions, detected with a separate algorithm, is added to produce a full sentence for the image. The results show the improved accuracy of the descriptions generated using our method.
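The following is a minimal, hypothetical sketch of the pipeline the abstract outlines: salient objects are selected from detector output, and a sentence is composed from the predicted action or interaction plus a context phrase. All names, thresholds, and the toy detections are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch of the two-step pipeline from the abstract:
# (1) keep only salient detected objects, (2) compose a sentence from the
# predicted action/interaction and context. Placeholder code, not the thesis code.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    label: str          # object class from a detector (e.g. "person", "horse")
    saliency: float     # mean saliency-map value inside the bounding box
    action: str = ""    # single action predicted from the object's pose

def select_salient(detections: List[Detection], threshold: float = 0.5) -> List[Detection]:
    """Keep only objects salient enough to appear in the description."""
    keep = [d for d in detections if d.saliency >= threshold]
    return sorted(keep, key=lambda d: d.saliency, reverse=True)

def noun_phrase(obj: Detection) -> str:
    return f"a {obj.label}"

def generate_sentence(objs: List[Detection],
                      interaction: Optional[str],
                      context: Optional[str]) -> str:
    """Order the nouns, link the two most salient ones with the interaction,
    fall back to a single action, and append the detected context."""
    if not objs:
        return "An image."
    if interaction and len(objs) >= 2:
        core = f"{noun_phrase(objs[0])} is {interaction} {noun_phrase(objs[1])}"
    else:
        core = f"{noun_phrase(objs[0])} is {objs[0].action or 'present'}"
    if context:
        core += f" {context}"
    return core.capitalize() + "."

if __name__ == "__main__":
    # Toy example: two salient objects, one interaction, one context phrase.
    detections = [
        Detection("person", saliency=0.9, action="riding"),
        Detection("horse", saliency=0.8),
        Detection("fence", saliency=0.2),   # below threshold, dropped
    ]
    salient = select_salient(detections)
    print(generate_sentence(salient, interaction="riding", context="in a field"))
```

Running this toy example prints "A person is riding a horse in a field.", illustrating how ordered nouns, a recognized interaction, and a context phrase combine into one description; the actual thesis builds syntactic sub-trees rather than this flat template.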