χ2 generative adversarial network

Chenyang Tao, Liqun Chen, Ricardo Henao, Jianfeng Feng, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citation

Abstract

To assess the difference between real and synthetic data, Generative Adversarial Networks (GANs) are trained using a distribution discrepancy measure. Three widely employed measures are information-theoretic divergences, integral probability metrics, and Hilbert space discrepancy metrics. We elucidate the theoretical connections between these three popular GAN training criteria and propose a novel procedure, called χ2-GAN, that is conceptually simple, stable in training, and resistant to mode collapse. Our procedure naturally generalizes to address the problem of simultaneous matching of multiple distributions. Further, we propose a resampling strategy that significantly improves sample quality by repurposing the trained critic function via an importance weighting mechanism. Experiments show that the proposed procedure improves stability and convergence, and yields state-of-the-art results on a wide range of generative modeling tasks.
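The resampling strategy mentioned in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's exact formulation: it assumes the trained critic assigns a scalar score to each generated sample, and treats those scores as unnormalized log importance weights when resampling.

```python
import numpy as np

def critic_resample(samples, critic_scores, n_draws, rng=None):
    """Resample generator outputs using importance weights derived from
    critic scores. Hypothetical sketch: the paper's actual weighting
    depends on its chi-square critic parameterization."""
    rng = np.random.default_rng(rng)
    # Subtract the max score before exponentiating for numerical stability,
    # then normalize to a probability distribution (softmax over scores).
    logw = critic_scores - critic_scores.max()
    w = np.exp(logw)
    w /= w.sum()
    # Draw with replacement, favoring samples the critic rates as more real.
    idx = rng.choice(len(samples), size=n_draws, replace=True, p=w)
    return samples[idx]
```

The key design choice in any such scheme is mapping critic outputs to nonnegative weights; a softmax over scores is one common, numerically stable option.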
Original language: English (US)
Title of host publication: 35th International Conference on Machine Learning, ICML 2018
Publisher: International Machine Learning Society (IMLS)
Pages: 7787-7796
Number of pages: 10
ISBN (Print): 9781510867963
State: Published - Jan 1 2018
Externally published: Yes
