EMPIRICAL ANALYSIS OF OVERFITTING AND MODE DROP IN GAN TRAINING

Title:
EMPIRICAL ANALYSIS OF OVERFITTING AND MODE DROP IN GAN TRAINING
Conference Title:
IEEE International Conference on Image Processing (ICIP)
Publication Date:
26 October 2020
Abstract:
We examine two key questions in GAN training, namely overfitting and mode drop, from an empirical perspective. We show that when stochasticity is removed from the training procedure, GANs can overfit and exhibit almost no mode drop. Our results shed light on important characteristics of the GAN training procedure. They also provide evidence against prevailing intuitions that GANs do not memorize the training set, and that mode dropping is mainly due to properties of the GAN objective rather than how it is optimized during training.
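The central manipulation in the abstract, removing stochasticity from the training procedure, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: it reads "removing stochasticity" as fixing the latent codes and the minibatch order so that every epoch presents the generator and discriminator with identical inputs. All names (`G`, `D`, `fixed_latents`, the toy dataset) are hypothetical, and the architectures are placeholders.

```python
# Hypothetical sketch: a GAN training loop with stochasticity removed.
# "Deterministic" here means fixed latent codes and a fixed minibatch
# order, so every pass over the data is identical across epochs.
import torch
import torch.nn as nn

torch.manual_seed(0)  # pin the remaining sources of randomness

latent_dim, data_dim, batch_size, n_batches = 16, 32, 64, 100

# Toy "dataset": a fixed tensor, traversed in the same order each epoch.
data = torch.randn(n_batches * batch_size, data_dim)

# Latent codes are sampled once and reused instead of being redrawn
# every step -- one reading of "removing stochasticity" from training.
fixed_latents = torch.randn(n_batches, batch_size, latent_dim)

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for i in range(n_batches):  # fixed order: no shuffling between epochs
        real = data[i * batch_size:(i + 1) * batch_size]
        z = fixed_latents[i]

        # Discriminator step: standard non-saturating GAN loss on
        # the fixed real batch and the matching fixed latent batch.
        opt_d.zero_grad()
        d_loss = (bce(D(real), torch.ones(batch_size, 1))
                  + bce(D(G(z).detach()), torch.zeros(batch_size, 1)))
        d_loss.backward()
        opt_d.step()

        # Generator step on the same fixed latent batch.
        opt_g.zero_grad()
        g_loss = bce(D(G(z)), torch.ones(batch_size, 1))
        g_loss.backward()
        opt_g.step()
```

Under this setup the generator sees the same finite set of (latent, data) pairings forever, so it can in principle drive its outputs toward the training examples themselves, which is the memorization/overfitting regime the abstract reports.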
License type:
http://creativecommons.org/licenses/by-nc-nd/4.0/
Funding Info:
The computational work for this article was performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg). This work received no specific funding.
Description:
“© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”