Digging deep into the layers of CNNs: In search of how CNNs achieve view invariance

Amr Bakry, Mohamed Elhoseiny, Tarek El-Gaaly, Ahmed Elgammal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Scopus citations

Abstract

This paper studies the view-manifold structure in the feature spaces implied by the different layers of Convolutional Neural Networks (CNNs). It aims to answer several questions: Does the learned CNN representation achieve viewpoint invariance? How does it achieve it: by collapsing the view manifolds, or by separating them while preserving them? At which layer is view invariance achieved? How can the structure of the view manifold at each layer of a deep convolutional neural network be quantified experimentally? How does fine-tuning a pre-trained CNN on a multi-view dataset affect the representation at each layer of the network? To answer these questions, we propose a methodology to quantify the deformation and degeneracy of view manifolds in CNN layers. We apply this methodology and report results that answer the questions above.
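To make the "collapsing vs. preserving" distinction concrete, here is a minimal, hypothetical sketch of the kind of analysis the abstract describes. It is not the paper's actual methodology: it uses synthetic per-layer features (rather than real CNN activations) and two simple proxies, the total variance of features across views (a collapsed, view-invariant manifold has little spread) and the effective PCA dimensionality (a preserved view manifold of a rotating object stays low-dimensional but non-degenerate).

```python
import numpy as np

def effective_dim(features, var_threshold=0.95):
    """Number of principal components needed to explain var_threshold of
    the variance -- a simple proxy for the intrinsic dimensionality of a
    view manifold in a given feature space (illustrative only)."""
    X = features - features.mean(axis=0, keepdims=True)
    # Singular values of the centered data give the PCA variances.
    s = np.linalg.svd(X, compute_uv=False)
    var = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(var), var_threshold) + 1)

def spread(features):
    """Total variance across views -- near zero if the view manifold
    has been collapsed to a (view-invariant) point."""
    return float(features.var(axis=0).sum())

# Toy stand-ins for features of 36 views (10-degree steps) of one object.
rng = np.random.default_rng(0)
angles = np.deg2rad(np.arange(0, 360, 10))

# "Early layer": a circular view manifold embedded in 64-D, plus noise.
basis = rng.standard_normal((2, 64))
early_layer = np.stack([np.cos(angles), np.sin(angles)], axis=1) @ basis
early_layer += 0.01 * rng.standard_normal(early_layer.shape)

# "Late layer": a nearly view-invariant (collapsed) representation.
late_layer = np.tile(rng.standard_normal(64), (36, 1))
late_layer += 0.01 * rng.standard_normal(late_layer.shape)

print("early: dim =", effective_dim(early_layer), "spread =", spread(early_layer))
print("late:  dim =", effective_dim(late_layer), "spread =", spread(late_layer))
```

In this toy setup the early-layer features trace a low-dimensional but well-spread manifold (the view structure is preserved), while the late-layer features have negligible spread (the manifold is collapsed). Applying such per-layer measurements to real CNN activations is the spirit of the quantification the abstract proposes.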
Original language: English (US)
Title of host publication: 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings
Publisher: International Conference on Learning Representations, ICLR
State: Published - Jan 1 2016
Externally published: Yes
