## AI for Science: generative methods for online adaptive deep learning training

### General information

- Level: Master's level research internship (M2) or equivalent (stage de fin d'études ingénieur)
- Where: UGA campus, Grenoble
- When: 2024-2025, 4 months minimum
- Financial support: slightly more than 500 euros/month
- Employer: INRIA and Univ. Grenoble Alpes
- Team: DataMove
- Advisers: Bruno Raffin (Bruno.Raffin@inria.fr), Sofya Dymchenko (sofya.dymchenko@inria.fr)

The internship will take place in the DataMove team, located in the IMAG building on the Saint-Martin-d'Hères campus (Univ. Grenoble Alpes) near Grenoble. The DataMove team is a friendly and stimulating environment gathering professors, researchers, PhD and Master's students. Grants are available to pursue a PhD after this Master's internship. Grenoble is a student-friendly city surrounded by the Alps, offering a high quality of life and all kinds of mountain-related outdoor activities.

### Context

In supervised learning, successfully training advanced neural networks requires annotated data of sufficient quantity and quality. In the natural sciences (physics, chemistry, weather modeling), observational data remains a limiting factor. One alternative is to create synthetic training data numerically. This offers several advantages: synthetic data can be generated at will, in potentially unlimited amounts; its quality can be degraded in a controlled manner for more robust training; and the coverage of the parameter space can be adapted to focus training where relevant. Today, a large variety of simulation codes capable of producing such data are available, from computer graphics and computer engineering to computational physics, biology, and chemistry. When training data is produced by simulation codes, it can be generated along with the training.
<img src="https://www.researchgate.net/profile/Karthik-Duraisamy/publication/331768849/figure/fig1/AS:736558739099653@1552620696164/The-von-Karman-vortex-street-generated-by-the-Rishiri-island-of-Hokkaido-Japan-top.png" width="500">

This approach has multiple benefits. First, there is no need to store and move a huge pre-created dataset: matrices of float data can occupy terabytes of memory, and reading them from disk at every training iteration might take more time than the iteration itself. Instead, data is kept in working memory and created "on-the-fly": when a new data point is created, it substitutes an old one. This allows the model to see terabytes of data throughout its lifetime while storing only a small part at a time. Second, the training does not repeat the same data as in the epoch-based approach. A continuously updated training set potentially improves the generalization quality of the model. More importantly, *the update of the training set and the creation of new data can be adaptive*, driven by the observed behavior of the neural network during training.

However, this adaptive data generation is a challenging question. Active learning addresses this challenge by adaptively sampling the input parameters of the simulators based on training progress, aiming to generate more relevant data and thus achieve faster, higher-quality training. Current active learning approaches for simulation-based training often follow a phased algorithm:

1. generate an initial training set by uniformly sampling input points;
2. (re)train the model on the training set;
3. use feedback from the model's performance to generate/augment the training set, and return to (2).

Fundamentally, the methods differ in the choice of "feedback" metric (acquisition function) and in the way the next training set is created (acquisition algorithm).
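To make the phased algorithm above concrete, here is a minimal, self-contained sketch in Python (numpy only). Everything in it is a toy stand-in chosen for illustration, not the team's actual method: the "simulator" is `sin(x)`, the "surrogate" is a nearest-neighbour predictor, and the acquisition function is the pointwise residual, used to importance-sample new inputs. A real setup would use an expensive simulation code, a neural network, and an uncertainty estimate instead of extra simulator calls.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):
    # stand-in for an expensive simulation code
    return np.sin(x)

def fit_surrogate(xs, ys):
    # stand-in "training": a nearest-neighbour predictor over the data set
    def predict(x):
        idx = np.abs(xs[:, None] - x[None, :]).argmin(axis=0)
        return ys[idx]
    return predict

def acquire(candidates, residuals, n_new):
    # acquisition algorithm: importance-sample new inputs near the
    # candidates with the largest residuals (acquisition function = loss)
    probs = residuals / residuals.sum()
    centers = rng.choice(candidates, size=n_new, p=probs)
    return centers + rng.normal(scale=0.1, size=n_new)

# 1) generate an initial training set by uniformly sampling input points
xs = rng.uniform(0.0, 2 * np.pi, size=64)
ys = simulator(xs)

for _ in range(5):
    # 2) (re)train the model on the training set
    model = fit_surrogate(xs, ys)
    # 3) feedback: probe where the surrogate's error is largest and
    # generate new inputs there, then return to (2)
    probe = rng.uniform(0.0, 2 * np.pi, size=128)
    residuals = np.abs(model(probe) - simulator(probe)) + 1e-8
    x_new = acquire(probe, residuals, n_new=32)
    y_new = simulator(x_new)
    # on-the-fly replacement: new data points substitute old ones,
    # so the working set stays small while total data seen grows
    xs = np.concatenate([xs[32:], x_new])
    ys = np.concatenate([ys[32:], y_new])

# held-out error of the final surrogate
test_x = rng.uniform(0.0, 2 * np.pi, size=256)
mse = float(np.mean((model(test_x) - simulator(test_x)) ** 2))
```

Note how the training set size stays fixed at 64 points while the model has seen 64 + 5 × 32 simulated points in total, mirroring the on-the-fly replacement described above.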
### Our research

Our team's research focuses on exploring and developing new online active learning methods for the efficient training of surrogates: neural networks meant to substitute for simulation codes. We have developed *Breed*, a method for online adaptive surrogate training of models such as Physics-Informed Neural Networks (PINNs), Neural Operators, and basic dense neural networks, within our [*MelissaDL* framework](https://melissa.gitlabpages.inria.fr/melissa/), which allows the training to be highly distributed and the training data to be created on-the-fly.

#### Our related publications

- MelissaDL x Breed: Towards Data-Efficient On-line Supervised Training of Multi-parametric Surrogates with Active Learning, SC AI4S 2024: https://hal.science/hal-04712480v1
- Training Deep Surrogate Models with Large Scale Online Learning, ICML 2023: https://hal.science/hal-04102400v1
- Loss-driven sampling within hard-to-learn areas for simulation-based neural network training, NeurIPS ML4Phys 2023: https://hal.science/hal-04305233v1
- Melissa: Simulation-Based Parallel Training, NeurIPS AI4S 2022: https://hal.science/hal-03842106v1

### This internship's goal

This internship focuses on investigating the use of generative methods for active learning, e.g., diffusion posterior sampling to generate input points based on the model's uncertainty. Currently, the Breed method relies on an importance sampling technique and loss statistics.

<img src="https://lilianweng.github.io/posts/2021-07-11-diffusion-models/generative-overview.png" width="500">

At the beginning, the objective is to get familiar with the domain and read about existing work: surrogates, neural operators, active learning, online training, Bayesian methods. Then, start working on possible generative methods for active learning (**normalizing flows, diffusion models, generative adversarial networks, energy-based models, etc.**), and develop and evaluate their performance through experiments with use cases such as the heat equation and fluid dynamics equations.
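To illustrate the generative direction of the internship, here is a hypothetical numpy-only sketch. Instead of perturbing high-loss points directly, it fits a simple density model to the current high-loss inputs and samples the next batch of simulation inputs from it; a Gaussian kernel density stands in for the learned generative model (normalizing flow, diffusion model, etc.), and the loss landscape is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_kde(points, bandwidth=0.15):
    # Toy generative model: a Gaussian kernel density over high-loss
    # inputs. In the internship, a normalizing flow or diffusion model
    # would play this role, learning a richer input distribution.
    def sample(n):
        centers = rng.choice(points, size=n)
        return centers + rng.normal(scale=bandwidth, size=n)
    return sample

# hypothetical loss landscape: the surrogate struggles around x = 2.0
xs = rng.uniform(0.0, 4.0, size=200)
losses = np.exp(-((xs - 2.0) ** 2) / 0.1)

# keep the inputs whose loss is in the top quartile, fit the generative
# model on them, and draw the next batch of simulation inputs from it
threshold = np.quantile(losses, 0.75)
generator = fit_kde(xs[losses >= threshold])
x_next = generator(64)
```

The sampled batch concentrates around the hard-to-learn region near x = 2.0, which is exactly the behavior a learned generator should reproduce in higher-dimensional input spaces where simple resampling heuristics break down.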
Currently, our team consists of a PhD student, a research engineer, and a research director (Bruno Raffin); we have regular meetings and daily communication -- you will not be alone! The ideal candidate has basic knowledge of generative deep learning, confident programming skills for developing ML/DL algorithms in Python, the motivation to learn new things quickly, and, most importantly, an interest in applying AI to the physical sciences!

### Related papers

- Population Monte Carlo with Normalizing Flow. https://arxiv.org/abs/2312.03857
- All-in-one simulation-based inference. https://arxiv.org/abs/2404.09636
- Adaptive Generation of Training Data for ML Reduced Model Creation. https://www.osti.gov/biblio/1923172
- A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. https://arxiv.org/abs/2207.10289
- Mitigating Propagation Failures in Physics-informed Neural Networks using Retain-Resample-Release (R3) Sampling. https://arxiv.org/abs/2207.02338
- Deep Active Learning by Leveraging Training Dynamics. https://arxiv.org/abs/2110.08611