Adrian Albert / Karthik Kashinath
Generative Adversarial Networks (GANs) have shown astounding success in learning complex data distributions of different modalities arising across a variety of domains, from media (celebrity pictures, speech) to scientific applications (urban land use, cosmology, fluid dynamics). However, key challenges remain: training is notoriously unstable, GANs do not exploit the significant domain knowledge available in scientific applications, and they require large amounts of training data. In this project, we explore various ways in which domain knowledge can be incorporated into GANs, and the practical benefits this additional knowledge brings (e.g., increased training stability, reduced training time, or reduced data needs). We will investigate several mechanisms for incorporating this knowledge, e.g., constraints on the optimization, architecture design, or conditional inputs. The project builds on preliminary work on a constrained GAN for a benchmark fluid dynamics system.
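To make the "constraints on the optimization" idea concrete, one common approach (and the one closest in spirit to constrained GANs for fluid systems) is to add a soft physics penalty to the generator's loss, e.g., penalizing the divergence of a generated 2D velocity field so that incompressibility is encouraged. The sketch below is illustrative, not the project's actual method: the function names, the finite-difference penalty, and the weight `lam` are all assumptions, written in plain NumPy so it runs without a deep learning framework.

```python
import numpy as np

def divergence_penalty(u, v, dx, dy):
    """Mean-squared divergence of a 2D velocity field, via finite differences.
    Near zero for an incompressible (divergence-free) flow."""
    # div = du/dx + dv/dy; with meshgrid 'xy' indexing, x varies along axis 1
    div = np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)
    return float(np.mean(div ** 2))

def constrained_generator_loss(adv_loss, u, v, dx, dy, lam=10.0):
    """Hypothetical constrained objective: adversarial loss plus a
    physics penalty weighted by lam (a soft constraint)."""
    return adv_loss + lam * divergence_penalty(u, v, dx, dy)

# Example: build an exactly divergence-free field from a stream function psi,
# with u = dpsi/dy and v = -dpsi/dx, and check the penalty is near zero.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n)
y = np.linspace(0.0, 2.0 * np.pi, n)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y)              # X varies along axis 1, Y along axis 0
psi = np.sin(X) * np.sin(Y)
u = np.gradient(psi, dy, axis=0)      # dpsi/dy
v = -np.gradient(psi, dx, axis=1)     # -dpsi/dx
print(divergence_penalty(u, v, dx, dy))   # small (finite-difference error only)
```

In a real training loop the same penalty term would be computed on generator samples with a differentiable operator (e.g., in PyTorch or TensorFlow) so its gradient flows back into the generator; the weight `lam` then trades off adversarial realism against constraint satisfaction.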
– Comfortable with Python programming (object-oriented programming, numpy, matplotlib, pandas etc.)
– Familiarity with modern machine learning frameworks such as PyTorch or TensorFlow
– Knowledge and familiarity with modern machine learning, including deep learning. At a minimum, class work and projects, but previous research work using/developing deep learning methods preferred (ideally having worked with GANs)
– Interest in scientific applications of AI, including one or more of climate, energy, and materials