Intelligent Systems

On the Frequency Bias of Generative Models

2021

Conference Paper



The key objective of Generative Adversarial Networks (GANs) is to generate new data with the same statistics as the provided training data. However, multiple recent works show that state-of-the-art architectures still struggle to achieve this goal. In particular, they report an elevated amount of high frequencies in the spectral statistics, which makes it straightforward to distinguish real and generated images. Explanations for this phenomenon are controversial: while most works attribute the artifacts to the generator, other works point to the discriminator. We take a sober look at those explanations and provide insights on what makes proposed measures against high-frequency artifacts effective. To achieve this, we first independently assess the architectures of both the generator and discriminator and investigate whether they exhibit a frequency bias that makes learning the distribution of high-frequency content particularly problematic. Based on these experiments, we make the following four observations: 1) Different upsampling operations bias the generator towards different spectral properties. 2) Checkerboard artifacts introduced by upsampling cannot explain the spectral discrepancies alone, as the generator is able to compensate for these artifacts. 3) The discriminator does not struggle with detecting high frequencies per se but rather struggles with frequencies of low magnitude. 4) The downsampling operations in the discriminator can impair the quality of the training signal it provides. In light of these findings, we analyze proposed measures against high-frequency artifacts in state-of-the-art GAN training but find that none of the existing approaches can fully resolve spectral artifacts yet. Our results suggest that there is great potential in improving the discriminator and that this could be key to matching the distribution of the training data more closely.
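The spectral statistics mentioned in the abstract are commonly measured via the reduced (azimuthally averaged) power spectrum of an image. The following is a minimal NumPy sketch of that statistic; the function name and the integer radial binning are our own illustration, not code from the paper:

```python
import numpy as np

def reduced_spectrum(image):
    """Azimuthally averaged power spectrum of a 2D grayscale image.

    Computes |FFT|^2, shifts the DC component to the center, and averages
    the power over concentric radial bins. Comparing this 1D profile for
    real vs. generated images exposes the elevated high-frequency energy
    reported for GAN samples.
    """
    h, w = image.shape
    # Squared magnitude of the centered 2D Fourier transform
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    # Integer radius of each frequency bin from the spectrum center
    yy, xx = np.indices((h, w))
    r = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2).astype(int)
    n_bins = r.max() + 1
    # Mean power per radius (avoid division by zero for empty bins)
    total = np.bincount(r.ravel(), weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(r.ravel(), minlength=n_bins)
    return total / np.maximum(counts, 1)
```

For a constant image, all spectral energy sits at radius zero (the DC bin), so the profile is zero everywhere else; for a generated image with high-frequency artifacts, the tail of this profile is elevated relative to real data.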

Author(s): Katja Schwarz and Yiyi Liao and Andreas Geiger
Book Title: Advances in Neural Information Processing Systems 34
Volume: 22
Pages: 18126--18136
Year: 2021
Month: December
Editors: M. Ranzato and A. Beygelzimer and Y. Dauphin and P. S. Liang and J. Wortman Vaughan
Publisher: Curran Associates, Inc.

Department(s): Autonomous Vision
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference

Event Name: 35th Conference on Neural Information Processing Systems (NeurIPS 2021)
Event Place: Online

Address: Red Hook, NY
ISBN: 978-1-7138-4539-3
State: Published
URL: https://papers.nips.cc/paper_files/paper/2021/hash/96bf57c6ff19504ff145e2a32991ea96-Abstract.html

BibTeX

@inproceedings{Schwarz2021NEURIPS,
  title = {On the Frequency Bias of Generative Models},
  author = {Schwarz, Katja and Liao, Yiyi and Geiger, Andreas},
  booktitle = {Advances in Neural Information Processing Systems 34},
  volume = {22},
  pages = {18126--18136},
  editor = {M. Ranzato and A. Beygelzimer and Y. Dauphin and P. S. Liang and J. Wortman Vaughan},
  publisher = {Curran Associates, Inc.},
  address = {Red Hook, NY},
  month = dec,
  year = {2021},
  url = {https://papers.nips.cc/paper_files/paper/2021/hash/96bf57c6ff19504ff145e2a32991ea96-Abstract.html},
}