On the Relationship between Visual Attributes and Convolutional Networks

Victor Castillo, Bernard Ghanem, Juan Carlos Niebles

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

50 Scopus citations

Abstract

One of the cornerstone principles of deep models is their abstraction capacity, i.e. their ability to learn abstract concepts from ‘simpler’ ones. Through extensive experiments, we characterize the nature of the relationship between abstract concepts (specifically objects in images) learned by popular and high-performing convolutional networks (conv-nets) and established mid-level representations used in computer vision (specifically semantic visual attributes). We focus on attributes due to their impact on several applications, such as object description, retrieval and mining, and active (and zero-shot) learning. Among the findings we uncover, we show empirical evidence of the existence of Attribute Centric Nodes (ACNs) within a conv-net that is trained to recognize objects (not attributes) in images. These special conv-net nodes (1) collectively encode information pertinent to visual attribute representation and discrimination, (2) are unevenly and sparsely distributed across all layers of the conv-net, and (3) play an important role in conv-net based object recognition.
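The kind of analysis the abstract describes, testing whether a small subset of conv-net nodes carries attribute information, can be illustrated with a linear probe. The sketch below is not the paper's method: it uses synthetic activations in place of real conv-net features, a hypothetical binary attribute ("striped"), and hand-picked indices to mimic sparsely distributed attribute-centric nodes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical conv-net activations: 2000 images x 256 nodes from one layer.
# In the paper's setting these would come from a network trained on objects.
feats = rng.normal(size=(2000, 256))

# Synthetic attribute labels driven by a small subset of nodes, mimicking
# the idea that a few "attribute-centric" nodes encode the attribute.
acn_idx = [3, 41, 77]  # hypothetical attribute-centric node indices
labels = (feats[:, acn_idx].sum(axis=1) > 0).astype(int)

# Fit a linear probe: if the attribute is decodable, the probe succeeds,
# and its largest-magnitude weights point at the responsible nodes.
probe = LogisticRegression(max_iter=1000).fit(feats, labels)
top = np.argsort(-np.abs(probe.coef_[0]))[:3]
print(sorted(top.tolist()), probe.score(feats, labels))
```

Ranking nodes by probe weight magnitude is one simple way to localize which units contribute to attribute discrimination; the paper's experiments are more extensive, but the probing intuition is the same.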
Original language: English (US)
Title of host publication: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 1256-1264
Number of pages: 9
ISBN (Print): 9781467369640
DOIs
State: Published - Oct 15 2015
