While deep learning achieves undeniably impressive results on many aspects of computer vision, the theoretical foundations behind the success of these methods are often not well understood.
We have specifically studied generative adversarial networks (GANs). These are networks that generate data (e.g. images) from a random input vector. The idea is that they learn the manifold of the data they are modeling (e.g. images of faces): the generator network fits a low-dimensional density to the manifold of the training data. To train the generator, we employ an adversary: a discriminator network that tries to distinguish training images from images produced by the generator.
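The generator/discriminator interplay described above can be illustrated on a hypothetical 1-D toy problem (all parameter names and the target distribution N(3, 1) are illustrative choices, not from the original text): a linear generator maps latent noise z to a*z + b, a logistic discriminator tries to tell real from generated samples, and the two are updated in alternation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical toy setup: "real" data are samples from N(3, 1).
def sample_real(n):
    return rng.normal(3.0, 1.0, size=n)

a, b = 1.0, 0.0   # generator parameters: G(z) = a*z + b
w, c = 0.0, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + c)

lr, n = 0.05, 256
for step in range(2000):
    z = rng.normal(size=n)
    fake = a * z + b
    real = sample_real(n)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    gw = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    gc = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * gw
    c += lr * gc

    # Generator: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generated mean b has drifted toward the real mean (3).
```

In practice the generator's offset b moves toward the real mean while the discriminator's advantage shrinks; even in this toy problem the updates oscillate rather than converge monotonically, previewing the stability issues discussed below.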
However, GANs are notoriously difficult to train, and convergence of the adversarial game is hard to guarantee. Our group has studied the convergence properties of such systems using tools from discrete control theory, yielding insights into when and why the training dynamics converge.
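A standard way to see why such an analysis is needed (this example is a common textbook caricature, not necessarily the model studied by the group): near equilibrium, GAN training resembles the bilinear game min over theta, max over psi of theta*psi. Simultaneous gradient descent/ascent on this game is a linear discrete-time system whose update matrix has eigenvalues 1 ± i*eta, with modulus sqrt(1 + eta**2) > 1, so the iterates spiral outward instead of converging.

```python
import numpy as np

# Bilinear toy game: min_theta max_psi  theta * psi.
theta, psi = 1.0, 0.0
eta = 0.1             # step size
norms = []
for _ in range(200):
    g_theta, g_psi = psi, theta                  # partial gradients of theta*psi
    # Simultaneous update: both players step using the current iterate.
    theta, psi = theta - eta * g_theta, psi + eta * g_psi
    norms.append(np.hypot(theta, psi))

# The update matrix [[1, -eta], [eta, 1]] scales every iterate by
# sqrt(1 + eta**2) per step, so the distance to the equilibrium (0, 0) grows.
```

Viewing the updates as a discrete dynamical system and checking the spectrum of the update matrix is exactly the kind of question discrete control theory answers; damped or alternating update schemes change the eigenvalues and can restore convergence.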