Capturing Provenance to Improve the Model Training of PINNs: First Hands-on Experiences with Grid5000
Abstract
The growing popularity of Neural Networks in computational science and engineering raises several challenges in configuring training parameters and validating the resulting models. Machine learning has been used to approximate costly computational problems in computational mechanics, discover equations by coefficient estimation, and build surrogates. These applications fall outside the common usage of neural networks and require a different set of techniques, generally encompassed by Physics-Informed Neural Networks (PINNs), which appear to be a good alternative for solving forward and inverse problems governed by PDEs in a small-data regime, especially when it comes to Uncertainty Quantification. PINNs have been successfully applied to problems in fluid dynamics, inference of hydraulic conductivity, velocity inversion, phase separation, and many others. Nevertheless, their computational aspects, especially their scalability when running on large-scale systems, still need investigation. Several hyperparameter configurations have to be evaluated to reach a trained model, often requiring fine-tuning despite the existence of a few recommended settings. In PINNs, this fine-tuning requires analyzing configurations and how they relate to the evaluation of the loss function. We propose provenance data capture and analysis techniques to improve the model training of PINNs. We also report our first experiences running PINNs on Grid5000 using hybrid CPU-GPU computing.
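To make concrete why hyperparameter configurations in PINNs must be analyzed against the loss function, the sketch below builds a toy physics-informed loss for the ODE u'(x) = u(x) with u(0) = 1, whose exact solution is e^x. The composite loss combines a boundary-condition term and a weighted physics-residual term; the residual weight `lam` and the polynomial degree are exactly the kind of per-run hyperparameters a provenance system would record. This is a minimal illustrative sketch, not the paper's method; the model (a polynomial with closed-form derivative), the function names, and the parameter names are all assumptions introduced here.

```python
import numpy as np

def pinn_loss(coeffs, xs, lam=1.0):
    """Composite PINN-style loss for the toy ODE u'(x) = u(x), u(0) = 1.

    The model is a polynomial u(x) = sum_k c_k x^k, so its derivative
    has the closed form sum_k k * c_k x^(k-1). `lam` weights the physics
    residual against the boundary term -- a hyperparameter that a
    provenance capture system would log for each training run.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    # u(x) evaluated at the collocation points
    u = np.polynomial.polynomial.polyval(xs, coeffs)
    # u'(x): shift coefficients, multiply by the original exponent k
    dcoeffs = coeffs[1:] * np.arange(1, len(coeffs))
    du = np.polynomial.polynomial.polyval(xs, dcoeffs)
    # Boundary-condition term: u(0) = coeffs[0] should equal 1
    boundary = (coeffs[0] - 1.0) ** 2
    # Physics residual of u' - u = 0 at the collocation points
    residual = np.mean((du - u) ** 2)
    return boundary + lam * residual

# Collocation points on [0, 1] (a hyperparameter choice in itself)
xs = np.linspace(0.0, 1.0, 50)

# A "trained" candidate: the degree-5 Taylor coefficients of e^x ...
good = 1.0 / np.array([np.math.factorial(k) for k in range(6)])
# ... versus an untrained candidate that ignores the physics entirely
bad = np.zeros(6)

print(pinn_loss(good, xs))  # small: the physics is nearly satisfied
print(pinn_loss(bad, xs))   # dominated by the violated boundary term
```

A provenance-aware workflow would store `lam`, the degree, the collocation grid, and the resulting loss for every configuration, so that runs can later be queried and compared rather than re-executed.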