Matrix-free Neural Preconditioner for the Dirac Operator in Lattice Gauge Theory

Published: arXiv:2509.10378v1
Authors

Yixuan Sun, Srinivas Eswar, Yin Lin, William Detmold, Phiala Shanahan, Xiaoye Li, Yang Liu, Prasanna Balaprakash

Abstract

Linear systems arise in generating samples and in calculating observables in lattice quantum chromodynamics (QCD). Solving these Hermitian positive definite systems, which are sparse but ill-conditioned, involves iterative methods such as Conjugate Gradient (CG), which are time-consuming and computationally expensive. Preconditioners can effectively accelerate this process, with the state of the art being multigrid preconditioners. However, constructing useful preconditioners can be challenging and adds computational overhead, especially for large linear systems. We propose a framework, leveraging operator learning techniques, to construct linear maps as effective preconditioners. The method in this work does not rely on explicit matrices, either from the original linear systems or for the produced preconditioners, allowing efficient model training and application in the CG solver. In the context of the Schwinger model (U(1) gauge theory in 1+1 spacetime dimensions with two degenerate-mass fermions), this preconditioning scheme effectively decreases the condition number of the linear systems and approximately halves the number of iterations required for convergence in relevant parameter ranges. We further demonstrate that the framework learns a general mapping dependent on the lattice structure, which leads to zero-shot learning ability for Dirac operators constructed from gauge field configurations of different sizes.

Paper Summary

Problem
Researchers in lattice quantum field theory (LQFT) must repeatedly solve large, sparse, ill-conditioned linear systems involving the Dirac operator when simulating quantum systems on a computer. These solves, typically performed with iterative methods such as Conjugate Gradient, are crucial for understanding the behavior of particles and forces at the smallest scales, but they dominate the computational cost of generating samples and calculating observables.
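The cost being targeted can be seen in a plain Conjugate Gradient solver. The sketch below is a generic textbook implementation, not the paper's code; it is written matrix-free (the operator is supplied only as a function `apply_A`, as would be the case for a Dirac operator), and the 64-dimensional test matrix is an illustrative stand-in for an ill-conditioned Hermitian positive definite system.

```python
import numpy as np

def cg(apply_A, b, tol=1e-8, max_iter=10_000):
    """Matrix-free Conjugate Gradient: solve A x = b given only v -> A @ v."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # initial residual
    p = r.copy()
    rr = r @ r
    for k in range(1, max_iter + 1):
        Ap = apply_A(p)
        alpha = rr / (p @ Ap)   # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            return x, k
        p = r + (rr_new / rr) * p   # new A-conjugate search direction
        rr = rr_new
    return x, max_iter

# Ill-conditioned SPD stand-in: eigenvalues spread over three decades.
n = 64
d = np.logspace(0, 3, n)
A = np.diag(d) + 0.01 * (np.eye(n, k=1) + np.eye(n, k=-1))
b = np.ones(n)
x, iters = cg(lambda v: A @ v, b)
```

The iteration count grows with the condition number of `A`, which is exactly the quantity a preconditioner is designed to reduce.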
Key Innovation
The researchers propose a "neural preconditioner" that uses machine learning to accelerate the solution of these linear systems. It is built on an operator learning approach: the model learns to construct a new linear map, applied matrix-free, that reduces the condition number of the original system so that CG converges in far fewer iterations.
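Matrix-free preconditioning plugs into CG as one extra operator application per iteration. The sketch below shows preconditioned CG with the paper's learned neural map replaced by a simple Jacobi (diagonal) preconditioner as a stand-in; the point is the interface, a function `r -> M_inv(r)`, which is also how a learned preconditioner would be applied, with the matrix and preconditioner never formed explicitly.

```python
import numpy as np

def pcg(apply_A, apply_Minv, b, tol=1e-8, max_iter=10_000):
    """Preconditioned CG: operator and preconditioner are both supplied
    as functions, so no explicit matrix is ever formed."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = apply_Minv(r)            # preconditioner application
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Ill-conditioned SPD stand-in (same construction as a textbook test case).
n = 64
d = np.logspace(0, 3, n)
A = np.diag(d) + 0.01 * (np.eye(n, k=1) + np.eye(n, k=-1))
b = np.ones(n)

# Jacobi preconditioner as a placeholder for the learned linear map.
diag_inv = 1.0 / np.diag(A)
x_pcg, it_pcg = pcg(lambda v: A @ v, lambda r: diag_inv * r, b)
x_cg,  it_cg  = pcg(lambda v: A @ v, lambda r: r, b)  # identity = plain CG
```

A good preconditioner makes `M_inv(A @ v)` behave like the identity, collapsing the iteration count; the paper's neural map plays the role of `apply_Minv` for the Dirac operator.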
Practical Impact
The neural preconditioner can substantially accelerate the solution of linear systems in LQFT, approximately halving the CG iterations required in the parameter ranges studied, and it transfers zero-shot to Dirac operators from gauge field configurations of different lattice sizes. This could be particularly important for applications in particle physics and nuclear physics, where accurate simulations of quantum systems are essential. The method could also be applied to other fields where large linear systems need to be solved efficiently.
Analogy / Intuitive Explanation
Imagine you're trying to find your way through a dense forest. The linear system is like a map of the forest, but it's too complex to navigate directly. The neural preconditioner is like a GPS system that uses machine learning to learn the layout of the forest and find a more efficient route to your destination. By doing so, it can greatly reduce the time and effort required to solve the linear system.
Paper Information
Categories: hep-lat cs.LG
Published Date:
arXiv ID: 2509.10378v1