Write a classifier: Zero-shot learning using purely textual descriptions

Mohamed Elhoseiny, Babak Saleh, Ahmed Elgammal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

151 Scopus citations

Abstract

The main question we address in this paper is how to use a purely textual description of categories, with no training images, to learn visual classifiers for these categories. We propose an approach for zero-shot learning of object categories where the description of unseen categories comes in the form of typical text such as an encyclopedia entry, without the need for explicitly defined attributes. We propose and investigate two baseline formulations, based on regression and domain adaptation. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the classifier parameters for new classes. We applied the proposed approach to two fine-grained categorization datasets, and the results indicate successful classifier prediction. © 2013 IEEE.
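The regression baseline mentioned in the abstract can be illustrated with a minimal sketch: learn a linear map from the text features of seen classes to the parameters of visual classifiers trained for those classes, then predict classifier parameters for an unseen class from its text description alone. All names, dimensions, and the closed-form ridge solver below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seen, d_text, d_vis = 20, 50, 30

# Stand-in data: text features of seen classes and the parameters of
# visual classifiers already trained for those classes.
T = rng.normal(size=(n_seen, d_text))   # one text feature vector per seen class
C = rng.normal(size=(n_seen, d_vis))    # one classifier parameter vector per seen class

# Ridge regression: min_W ||T W - C||^2 + lam ||W||^2, solved in closed form.
lam = 1.0
W = np.linalg.solve(T.T @ T + lam * np.eye(d_text), T.T @ C)

# Predict classifier parameters for an unseen class from its text alone.
t_unseen = rng.normal(size=(d_text,))
w_unseen = t_unseen @ W

# Score a visual feature vector against the predicted classifier.
x = rng.normal(size=(d_vis,))
score = x @ w_unseen
```

The paper's full formulation adds a knowledge transfer function and extra constraints on top of such a regressor; this sketch only shows the shape of the classifier-prediction idea.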
Original language: English (US)
Title of host publication: Proceedings of the IEEE International Conference on Computer Vision
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Print): 9781479928392
State: Published - Jan 1 2013
Externally published: Yes
