Sherlock: Scalable fact learning in images

Mohamed Elhoseiny, Scott Cohen, Walter Chang, Brian Price, Ahmed Elgammal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

The human visual system can learn an unbounded number of facts from images, including not only objects but also their attributes, actions, and interactions. Such a uniform understanding of visual facts has not received enough attention. Existing visual recognition systems are typically modeled differently for each fact type, such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously, with the capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects, (2) attributes, (3) actions, and (4) interactions. Each fact has a language view (e.g., <boy, playing>) and a visual view (an image). We show that learning visual facts in a structured way enables not only uniform but also generalizable visual understanding. We propose and investigate recent, strong approaches from the multiview learning literature and also introduce a structured embedding model. We applied the investigated methods to several datasets that we augmented with structured facts, as well as a large-scale dataset of more than 202,000 facts and 814,000 images. Our results show the advantage of relating facts by structure in the proposed model, compared to the baselines.
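The structured-fact representation described in the abstract can be sketched as follows. This is an illustrative Python sketch, not the authors' code; the class and field names are assumptions, and the concrete example facts (beyond <boy, playing>, which appears in the abstract) are hypothetical. The idea it shows is that objects, attributes, actions, and interactions all fit one tuple-like schema <S, P, O>, with later slots optional.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Fact:
    """Language view of a structured fact <S, P, O>; hypothetical names."""
    subject: str                      # S: an object, e.g. "boy"
    predicate: Optional[str] = None   # P: an attribute or action, e.g. "playing"
    obj: Optional[str] = None         # O: the object of an interaction, e.g. "ball"

    def order(self) -> int:
        # 1 = object <S>, 2 = attribute/action <S,P>, 3 = interaction <S,P,O>
        return 1 + (self.predicate is not None) + (self.obj is not None)

# All four fact types share the same representation (example values are illustrative):
facts = [
    Fact("boy"),                     # object
    Fact("boy", "smiling"),          # attribute
    Fact("boy", "playing"),          # action (example from the abstract)
    Fact("boy", "playing", "ball"),  # interaction
]
assert [f.order() for f in facts] == [1, 2, 2, 3]
```

A visual view would pair each such tuple with an image; in a multiview embedding, both views are mapped into a shared space so that structurally related facts land near each other.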
Original language: English (US)
Title of host publication: 31st AAAI Conference on Artificial Intelligence, AAAI 2017
Publisher: AAAI Press
State: Published - Jan 1 2017
Externally published: Yes
