Abstract
This paper introduces TensorGalerkin, a novel high-performance framework for solving, learning, and optimizing partial differential equations (PDEs) with variational structure. The framework is built on a Galerkin discretization approach and addresses computational inefficiencies in traditional finite element methods (FEM) and physics-informed machine learning (PIML). TensorGalerkin reformulates the assembly of stiffness matrices and load vectors as a tensorized Map-Reduce operation, enabling efficient GPU utilization and minimizing Python-level overhead. The framework powers three key applications: TensorMesh, a GPU-optimized FEM solver; TensorPils, a physics-informed learning framework for PDE solution operators; and TensorOpt, an end-to-end differentiable pipeline for PDE-constrained optimization. Benchmarks on 2D and 3D elliptic, parabolic, and hyperbolic PDEs demonstrate significant improvements in computational efficiency and accuracy over existing methods, making TensorGalerkin a promising tool for numerical PDE solving, operator learning, and optimization tasks.
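For context, the generic Galerkin setup that such a framework discretizes can be sketched as follows. This is the standard weak-form template, not the paper's exact formulation; the bilinear form a and linear functional ℓ depend on the PDE at hand.

```latex
% Generic Galerkin discretization of a PDE with variational structure:
% find u_h in a finite-dimensional space V_h = span{ \varphi_1, ..., \varphi_n }
% such that the weak form holds for every test function:
\[
  a(u_h, v_h) = \ell(v_h) \qquad \forall\, v_h \in V_h .
\]
% Writing u_h = \sum_j u_j \varphi_j reduces this to the linear system A u = F, with
\[
  A_{ij} = a(\varphi_j, \varphi_i), \qquad F_i = \ell(\varphi_i),
\]
% where A is the stiffness matrix and F the load vector: the objects whose
% element-by-element assembly TensorGalerkin tensorizes.
```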
Methodology
TensorGalerkin reformulates Galerkin discretization as a tensorized Map-Reduce operation. The Map stage computes the local stiffness matrices and load vectors of all elements at once via dense, batched tensor contractions; the Reduce stage then accounts for the domain topology, scattering these local contributions into the global system using precomputed routing indices and sparse matrix multiplication (see the sketch below). Because both stages are expressed as a handful of bulk tensor operations, the approach minimizes Python-level overhead and maximizes GPU utilization. The framework is implemented in PyTorch and supports applications in numerical PDE solving, physics-informed learning, and PDE-constrained optimization.
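As a concrete illustration of the Map-Reduce pattern, here is a minimal PyTorch sketch that assembles the P1 stiffness matrix of the 2D Laplacian on a triangle mesh. The function name, element type, and code structure are illustrative assumptions; this is not the TensorGalerkin API, only the general technique the paper describes.

```python
import torch

def assemble_p1_stiffness(verts: torch.Tensor, tris: torch.Tensor) -> torch.Tensor:
    """Tensorized Map-Reduce assembly of the P1 stiffness matrix of the 2D
    Laplacian on a triangle mesh (illustrative sketch, not the TensorGalerkin API).

    verts: (V, 2) float tensor of vertex coordinates.
    tris:  (T, 3) long tensor of vertex indices per triangle.
    """
    v0, v1, v2 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    e1, e2 = v1 - v0, v2 - v0                            # edge vectors, (T, 2)
    det = e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0]      # signed 2*area, (T,)
    area = 0.5 * det.abs()
    # Constant gradients of the three barycentric basis functions, (T, 3, 2):
    # the rows of the inverse element Jacobian give grad(lambda_1), grad(lambda_2),
    # and grad(lambda_0) = -(grad(lambda_1) + grad(lambda_2)).
    g1 = torch.stack([e2[:, 1], -e2[:, 0]], dim=1) / det[:, None]
    g2 = torch.stack([-e1[:, 1], e1[:, 0]], dim=1) / det[:, None]
    g0 = -(g1 + g2)
    G = torch.stack([g0, g1, g2], dim=1)                 # (T, 3, 2)

    # Map: one dense batched contraction yields every local 3x3 matrix at once,
    # K_local[t, i, j] = area_t * (grad phi_i . grad phi_j).
    K_local = area[:, None, None] * torch.einsum("tid,tjd->tij", G, G)

    # Reduce: precomputed routing indices scatter local entries into the global
    # sparse matrix; coalesce() sums duplicate (row, col) contributions.
    rows = tris[:, :, None].expand(-1, 3, 3).reshape(-1)
    cols = tris[:, None, :].expand(-1, 3, 3).reshape(-1)
    K = torch.sparse_coo_tensor(
        torch.stack([rows, cols]), K_local.reshape(-1),
        size=(verts.shape[0], verts.shape[0]),
    )
    return K.coalesce()

# Tiny usage example: two triangles forming the unit square.
verts = torch.tensor([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tris = torch.tensor([[0, 1, 2], [0, 2, 3]])
K = assemble_p1_stiffness(verts, tris)
print(K.to_dense())
```

The Map stage is a single batched einsum over all elements, and the Reduce stage is a single sparse accumulation driven by index tensors that can be precomputed from the mesh connectivity, so no Python-level loop over elements is needed and the whole assembly runs on the GPU.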
Results
TensorGalerkin delivers significant gains in both computational efficiency and accuracy across multiple benchmarks, including 2D and 3D elliptic, parabolic, and hyperbolic PDEs on unstructured meshes. The TensorMesh solver outperforms legacy CPU-based FEM solvers, while TensorPils and TensorOpt perform robustly on physics-informed learning and optimization tasks, respectively. The framework's GPU-native execution and its avoidance of automatic-differentiation overhead both contribute to this performance.
Implications
TensorGalerkin has broad implications for scientific computing, enabling faster and more accurate solutions to PDEs in fields such as physics, engineering, and computational design. Its applications in PDE-constrained optimization and physics-informed learning could accelerate progress in inverse design, uncertainty quantification, and operator learning, particularly in scenarios with limited training data or complex domain geometries.