NASH: Toward end-to-end neural architecture for generative semantic hashing

Dinghan Shen, Qinliang Su, Paidamoyo Chapfuwa, Wenlin Wang, Guoyin Wang, Lawrence Carin, Ricardo Henao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Scopus citations

Abstract

Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled in an ad hoc manner. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between the proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models in both unsupervised and supervised scenarios.
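The abstract's central training idea, backpropagating directly through discrete Bernoulli hash codes, is commonly realized with a straight-through estimator. Below is a minimal, hypothetical PyTorch sketch of such an encoder; the class name, layer sizes, and bag-of-words input dimension are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class BernoulliHashEncoder(nn.Module):
    """Maps a document vector to binary hash codes, using a
    straight-through estimator so gradients flow through the
    discrete Bernoulli sample (a common way to realize the
    end-to-end training described in the abstract)."""

    def __init__(self, input_dim: int, code_dim: int):
        super().__init__()
        self.logits = nn.Sequential(
            nn.Linear(input_dim, 512),
            nn.ReLU(),
            nn.Linear(512, code_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(self.logits(x))  # q(z = 1 | x)
        hard = torch.bernoulli(probs)          # discrete codes in {0, 1}
        # Straight-through: the forward pass uses the hard codes,
        # while the backward pass takes gradients through `probs`.
        return hard + probs - probs.detach()

# Usage sketch: 16-bit codes for bag-of-words document vectors
# (dimensions chosen arbitrarily for illustration).
encoder = BernoulliHashEncoder(input_dim=10000, code_dim=16)
docs = torch.rand(4, 10000)
codes = encoder(docs)  # shape (4, 16), entries in {0, 1}
```

In a full generative-hashing model along these lines, the codes would feed a decoder that reconstructs the document, and the reconstruction loss plus a KL term on the Bernoulli posterior would be optimized jointly, avoiding the two-stage training the abstract criticizes.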
Original language: English (US)
Title of host publication: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)
Publisher: Association for Computational Linguistics (ACL)
Pages: 2041-2050
Number of pages: 10
ISBN (Print): 9781948087322
DOIs
State: Published - Jan 1 2018
Externally published: Yes
