Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education
06 Jun 2025
Deep learning-based hybrid iterative methods (DL-HIM) have emerged as a promising approach for designing fast neural solvers for large-scale sparse linear systems. DL-HIM combine the smoothing effect of simple iterative methods with the spectral bias of neural networks, which allows them to effectively eliminate both high-frequency and low-frequency error components. However, their efficiency may decrease if the simple iterative method cannot provide effective smoothing, making it difficult for the neural network to learn mid-frequency and high-frequency components. This paper first conducts a convergence analysis of general DL-HIM from a spectral viewpoint, concluding that under reasonable assumptions, DL-HIM exhibit a convergence rate independent of the grid size $h$ and the physical parameters $\boldsymbol{\mu}$. To meet these assumptions, we design a neural network from an eigen perspective, focusing on learning the eigenvalues and eigenvectors corresponding to the error components that simple iterative methods struggle to eliminate. Specifically, the eigenvalues are learned by a meta subnet, while the eigenvectors are approximated by Fourier modes with a transition matrix provided by another meta subnet. The resulting DL-HIM, termed the Fourier Neural Solver (FNS), can be trained to achieve a convergence rate independent of PDE parameters and grid size within a local neighborhood of the training scale by designing a loss function that ensures the neural network complements the smoothing effect of the damped Jacobi iteration. We verify the performance of FNS on five types of linear parametric PDEs.
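The hybrid idea above can be illustrated with a minimal sketch: alternate damped Jacobi sweeps (which damp high-frequency error) with a correction that removes the low-frequency modes Jacobi handles poorly. This toy replaces the learned meta subnets with the exact sine eigenpairs of the 1D Laplacian, purely as a stand-in; the test problem, layer of Fourier modes, and all numerical choices are illustrative assumptions, not the paper's FNS.

```python
import numpy as np

def damped_jacobi(A, u, f, omega=2/3, sweeps=3):
    """A few damped Jacobi sweeps: smooths high-frequency error."""
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / D
    return u

def fourier_correction(A, u, f, modes):
    """Stand-in for the learned operator: invert A exactly on the lowest
    `modes` sine modes (the components Jacobi damps slowly). In FNS these
    eigenvalues/eigenvectors would come from meta subnets; here we use the
    known spectrum of the 1D Dirichlet Laplacian as a placeholder."""
    n = A.shape[0]
    h = 1.0 / (n + 1)
    r = f - A @ u
    k = np.arange(1, modes + 1)
    j = np.arange(1, n + 1)
    # orthonormal discrete sine eigenvectors and their eigenvalues
    V = np.sqrt(2 / (n + 1)) * np.sin(np.outer(j, k) * np.pi / (n + 1))
    lam = 4.0 / h**2 * np.sin(k * np.pi * h / 2) ** 2
    return u + V @ ((V.T @ r) / lam)

# 1D Poisson test problem: -u'' = pi^2 sin(pi x), exact solution sin(pi x)
n = 63
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for it in range(10):
    u = damped_jacobi(A, u, f)                 # smooth high frequencies
    u = fourier_correction(A, u, f, modes=8)   # eliminate low frequencies
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

After a handful of hybrid iterations the remaining error is at the level of the discretization error, which is the behavior the DL-HIM framework is after: neither component alone converges this quickly.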
In this work, we design a multi-category inverse design neural network that maps ordered periodic structures to physical parameters. The network consists of two parts: a classifier and Structure-Parameter-Mapping (SPM) subnets. The classifier identifies the structure, and the SPM subnets predict the physical parameters for the desired structure. We also present an extensible reciprocal-space data augmentation method that guarantees the rotation and translation invariance of periodic structures. We apply the proposed network model and data augmentation method to two-dimensional diblock copolymers based on the Landau-Brazovskii model. Results show that the multi-category inverse design neural network achieves high accuracy in predicting physical parameters for desired structures. Moreover, the idea of multi-categorization can also be extended to other inverse design problems.
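The classify-then-map pipeline can be sketched as follows. All weights here are random stand-ins for trained networks, and the feature dimension, layer widths, and category count are invented for illustration; only the routing structure (classifier output selects which per-category subnet predicts the parameters) reflects the design described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP, a stand-in for a trained subnet."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)          # hidden-layer nonlinearity
    return x

n_features, n_classes, n_params = 16, 3, 2   # illustrative sizes
classifier = mlp([n_features, 32, n_classes])
spm_subnets = [mlp([n_features, 32, n_params]) for _ in range(n_classes)]

def inverse_design(structure_vec):
    """Classify the structure, then let the matching SPM subnet
    predict its physical parameters."""
    logits = forward(classifier, structure_vec)
    label = int(np.argmax(logits))
    params = forward(spm_subnets[label], structure_vec)
    return label, params

label, params = inverse_design(rng.standard_normal(n_features))
```

Routing each category to its own parameter-mapping subnet lets every subnet specialize on one branch of the structure-parameter relation instead of forcing a single network to cover all phases.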
Finding index-1 saddle points is crucial for understanding phase transitions. In this work, we propose a simple yet efficient approach, the spring pair method (SPM), to accurately locate saddle points. Without requiring Hessian information, SPM evolves a single pair of spring-coupled particles on the potential energy surface. By designing complementary drifting and climbing dynamics based on a gradient decomposition, the spring pair converges onto the minimum energy path (MEP) and spontaneously aligns its orientation with the MEP tangent, providing a reliable ascent direction for efficient convergence to saddle points. SPM fundamentally differs from traditional surface walking methods, which rely on eigenvectors of the Hessian that may deviate from the MEP tangent, potentially leading to convergence failure or to undesired saddle points. The efficiency of SPM is verified on a wide range of examples, including high-dimensional Lennard-Jones cluster rearrangements and Landau energy functionals involving quasicrystal phase transitions.
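The core gradient-decomposition idea, splitting the force along a pair axis and reversing the parallel component to climb uphill, can be illustrated with a classic dimer-style walker on a toy double-well potential. This sketch is not the SPM dynamics from the paper (SPM's drifting/climbing updates and spring coupling differ); the potential, step sizes, and pair length are all assumptions chosen for a minimal demonstration.

```python
import numpy as np

def grad(p):
    """Gradient of the toy potential V(x, y) = (x^2 - 1)^2 + y^2,
    which has minima at (+-1, 0) and an index-1 saddle at (0, 0)."""
    x, y = p
    return np.array([4 * x * (x**2 - 1), 2 * y])

def find_saddle(x0, n0, dt=0.01, pair_len=1e-3, steps=5000, tol=1e-8):
    """Dimer-style walk: rotate a small pair of images toward the
    lowest-curvature direction, then climb by reversing the force
    component along the pair axis and descending in the rest."""
    x = np.array(x0, float)
    n = np.array(n0, float)
    n /= np.linalg.norm(n)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # rotate: curvature direction via a finite difference of gradients
        dC = (grad(x + pair_len * n) - g) / pair_len   # ~ Hessian @ n
        rot = dC - (dC @ n) * n                        # rotational force
        n -= dt * rot
        n /= np.linalg.norm(n)
        # translate: reverse the parallel force component to ascend
        F = -g
        x += dt * (F - 2 * (F @ n) * n)
    return x

saddle = find_saddle((0.5, 0.3), (1.0, 0.5))
```

Starting from a point inside one well, the walker aligns its axis with the unstable (x) direction and converges to the saddle at the origin; methods of this family need only gradients, never an explicit Hessian.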