Addressing A Posteriori Performance Degradation in Neural Network Subgrid Stress Models

Published: arXiv: 2511.17475v1
Authors

Andy Wu, Sanjiva K. Lele

Abstract

Neural network subgrid stress models often have a priori performance that is far better than the a posteriori performance, leading to neural network models that look very promising a priori completely failing in a posteriori Large Eddy Simulations (LES). This performance gap can be decreased by combining two different methods, training data augmentation and reducing input complexity to the neural network. Augmenting the training data with two different filters before training the neural networks has no performance degradation a priori as compared to a neural network trained with one filter. A posteriori, neural networks trained with two different filters are far more robust across two different LES codes with different numerical schemes. In addition, by ablating away the higher order terms input into the neural network, the a priori versus a posteriori performance changes become less apparent. When combined, neural networks that use both training data augmentation and a less complex set of inputs have a posteriori performance far more reflective of their a priori evaluation.

Paper Summary

Problem
Large Eddy Simulation (LES) is a powerful tool for predicting high-fidelity turbulent flows, but neural network subgrid stress models pose a persistent problem. These models, which use neural networks to represent the effects of unresolved small-scale turbulence, often perform well when evaluated against their filtered training data (a priori) but degrade significantly when deployed inside an actual LES (a posteriori). This "a posteriori performance degradation" is a major challenge for researchers and engineers.
Key Innovation
The researchers propose a solution to this problem: a multi-filter data augmentation strategy that exposes the neural network to several plausible filtered inputs and the corresponding subgrid-scale (SGS) stress distributions. The network is thus trained on a range of possible filtered fields rather than on a single filter type. They also introduce two new filters, BTF and DSCF, designed to mimic the effective filtering behavior of different LES solvers.
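To make the augmentation idea concrete, here is a minimal sketch of generating one training pair per filter. The specific filters (Gaussian and box) and the random velocity fields are illustrative assumptions standing in for the paper's BTF/DSCF filters and filtered DNS data; only the exact SGS stress definition, tau = filt(u*v) - filt(u)*filt(v), is the standard construction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def sgs_stress(u, v, filt):
    """Exact SGS stress component tau_12 = filt(u*v) - filt(u)*filt(v)."""
    return filt(u * v) - filt(u) * filt(v)

# Hypothetical DNS velocity slices (random stand-ins for real turbulence data).
rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64))
v = rng.standard_normal((64, 64))

# Two different filters standing in for the paper's filter pair:
# a Gaussian filter and a box (top-hat) filter, both with periodic wrapping.
filters = {
    "gaussian": lambda f: gaussian_filter(f, sigma=2.0, mode="wrap"),
    "box":      lambda f: uniform_filter(f, size=5, mode="wrap"),
}

# Multi-filter augmentation: one (inputs, target) pair per filter, so the
# network sees several plausible filtered fields and SGS stress distributions.
dataset = []
for name, filt in filters.items():
    inputs = np.stack([filt(u), filt(v)])   # filtered velocities (network inputs)
    target = sgs_stress(u, v, filt)         # corresponding SGS stress (label)
    dataset.append((name, inputs, target))

print([name for name, _, _ in dataset])  # → ['gaussian', 'box']
```

In a real pipeline the pairs from all filters would simply be concatenated into one training set, so the same network architecture sees both filtered distributions without any change to the loss.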
Practical Impact
The practical impact of this research is significant. By improving the robustness of neural network subgrid stress models, researchers and engineers can build more accurate and reliable simulations of complex flows, with applications in fields such as aerospace engineering, wind energy, and chemical processing. The researchers also suggest that their approach may transfer to other areas of machine learning where a gap between training and deployment conditions is a key challenge.
Analogy / Intuitive Explanation
Think of a neural network as a photographer trying to capture a beautiful sunset. If the photographer only takes pictures in one location, with one camera setting, they may get a great shot, but it may not be representative of the entire sunset. By taking pictures from different locations, with different camera settings, the photographer can get a more complete and accurate picture of the sunset. Similarly, the researchers in this paper are trying to expose the neural network to a wide range of possible inputs, so that it can learn to generalize and perform well in a variety of situations.
Paper Information

Categories: physics.flu-dyn, cs.LG
arXiv ID: 2511.17475v1
