### Summary: Scaling Physics-Informed Neural Networks for High-Dimensional PDEs (arxiv.org)

21,648 words - PDF document

### One Line

This text discusses scaling Physics-Informed Neural Networks (PINNs) to high-dimensional PDEs with a training algorithm that randomly samples dimension indices and computes gradients with respect to them.


### Key Points

- The curse of dimensionality poses challenges in solving high-dimensional partial differential equations (PDEs) due to exponentially increasing computational costs.
- Stochastic Dimension Gradient Descent (SDGD) is proposed as a new method for solving high-dimensional PDEs.
- Physics-Informed Neural Networks (PINNs) can be trained on nontrivial nonlinear PDEs in 100,000 dimensions in just 6 hours using a speed-up method utilizing sampling for both forward and backward passes.
- The algorithm for scaling PINNs for high-dimensional PDEs has low memory cost and uses an unbiased gradient.
- Algorithm 2 shows rapid convergence even in extremely high-dimensional cases, but instability may occur due to small batch size and resulting gradient variance.
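
The dimension-sampling idea in the key points above can be illustrated with a minimal sketch in plain Python. This is not the paper's implementation: the second-derivative values are stand-in numbers and the function names are made up for this example. The point is that sampling a batch I of dimensions and rescaling by d/|I| gives an unbiased estimate of the full sum over all d dimensions, which is why the stochastic gradient built from it is unbiased.

```python
import random

def full_laplacian(second_derivs):
    """Exact trace term: sum over all d second derivatives."""
    return sum(second_derivs)

def sampled_laplacian(second_derivs, batch, rng):
    """Unbiased estimate: sample a batch I of dimensions, rescale by d/|I|."""
    d = len(second_derivs)
    idx = rng.sample(range(d), batch)
    return d / batch * sum(second_derivs[i] for i in idx)

rng = random.Random(0)
derivs = [float(i % 7) for i in range(1000)]  # stand-in second derivatives, d = 1000
exact = full_laplacian(derivs)
# Averaging many small-batch estimates recovers the exact sum (unbiasedness).
est = sum(sampled_laplacian(derivs, 10, rng) for _ in range(20000)) / 20000
print(exact, round(est, 1))
```

Each single estimate touches only |I| = 10 of the 1000 dimensions, which is also where the memory saving comes from: only those terms need to be kept in the backward pass.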

### Summaries

### 28 word summary

This text excerpt discusses the scaling of Physics-Informed Neural Networks (PINNs) for high-dimensional PDEs. The algorithm involves randomly selecting indices and computing gradients with respect to those indices.

### 37 word summary

The text excerpt discusses the scaling of Physics-Informed Neural Networks (PINNs) for high-dimensional Partial Differential Equations (PDEs). The algorithm for training the PINN involves randomly selecting indices and computing the gradient with respect to those indices. Two extensions for further speed-up are also described.

### 810 word summary

The curse of dimensionality (CoD) poses challenges in solving high-dimensional partial differential equations (PDEs) due to exponentially increasing computational costs. In this paper, a new method called Stochastic Dimension Gradient Descent (SDGD) is proposed.

With a speed-up method utilizing sampling for both forward and backward passes, Physics-Informed Neural Networks (PINNs) can be trained on nontrivial nonlinear PDEs in 100,000 dimensions in just 6 hours of training time.

Wang et al. propose tensor neural networks for solving high-dimensional Schrödinger equations. Zhang et al. propose using PINNs for solving stochastic differential equations. Zang et al. propose a weak adversarial network for solving PDEs.

Our algorithm for scaling Physics-Informed Neural Networks (PINNs) for high-dimensional PDEs has several advantages. It has low memory cost because the backward pass only backpropagates over terms with i ∈ I. The gradient used is an unbiased estimator of the full-batch gradient.

The document discusses scaling physics-informed neural networks (PINNs) for high-dimensional partial differential equations (PDEs). The algorithm for training the PINN involves randomly selecting indices and computing the gradient with respect to those indices. Two extensions for further speed-up are then described.

The text excerpt discusses a training algorithm for scaling up and speeding up physics-informed neural networks (PINNs) for high-dimensional partial differential equations (PDEs). The algorithm involves sampling in both the forward and backward passes. The tradeoff between speed and gradient variance is analyzed.

The text excerpt discusses the scaling of Physics-Informed Neural Networks (PINNs) for high-dimensional Partial Differential Equations (PDEs). It introduces Theorem 4.2, which states that the variance of the stochastic gradient decreases as batch sizes increase.
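
The variance behavior stated in Theorem 4.2 can be checked numerically with a small self-contained sketch. The setup is assumed for illustration (random stand-in values, made-up function names), not taken from the paper: the empirical variance of the rescaled dimension-sampled sum shrinks roughly in proportion to 1/|I| as the dimension batch size grows.

```python
import random
import statistics

def sampled_sum(vals, batch, rng):
    """Rescaled partial sum over a random batch of dimension indices."""
    d = len(vals)
    idx = rng.sample(range(d), batch)
    return d / batch * sum(vals[i] for i in idx)

rng = random.Random(0)
vals = [rng.uniform(0.0, 1.0) for _ in range(500)]  # stand-in per-dimension terms

def empirical_var(batch, trials=4000):
    ests = [sampled_sum(vals, batch, rng) for _ in range(trials)]
    return statistics.pvariance(ests)

v_small, v_large = empirical_var(5), empirical_var(50)
print(v_small, v_large)  # variance drops as the dimension batch grows
```

With a 10x larger batch, the empirical variance is roughly 10x smaller (slightly more, because sampling is without replacement), matching the 1/|I| scaling.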

The text excerpt discusses the scaling of Physics-Informed Neural Networks (PINNs) for high-dimensional Partial Differential Equations (PDEs). It presents a lemma and a remark regarding the boundedness of higher-order derivatives of the network. The convergence of the SGD trajectory is then analyzed under these assumptions.

We explore the acceleration achieved by our algorithm and whether Physics-Informed Neural Networks (PINNs) can overcome the curse of dimensionality. We use consistent hyperparameter settings and vary PDE dimensions from 10^2 to 10^5.

Our method achieves faster convergence for high-dimensional scenarios, with Algorithm 2 showing rapid convergence even in extremely high-dimensional cases. However, instability was observed in Algorithm 2 for 100,000 dimensions due to small batch size and the resulting gradient variance.

Algorithm 1 and Algorithm 2 are effective in reducing memory cost and accelerating convergence in training Physics-Informed Neural Networks (PINNs). The acceleration provided by Algorithm 2 outweighs the impact of gradient variance in higher-dimensional cases.

The document discusses the use of Physics-Informed Neural Networks (PINNs) to approximate the solutions of high-dimensional partial differential equations (PDEs). Two specific PDEs, the Fisher-KPP equation and the Sine-Gordon equation, are considered as examples.

The excerpt discusses the use of physics-informed neural networks (PINNs) for solving high-dimensional partial differential equations (PDEs) with linear-quadratic-Gaussian (LQG) control. It gives the specific form of these PDEs.

The method presented in the document is explored to determine whether it can converge on asymmetric and anisotropic partial differential equations (PDEs). An anisotropic Fokker-Planck equation is designed as an example, and the convergence results on this example are reported.

The primary computational bottleneck for TNNs in solving high-dimensional PDEs lies in the first-order derivatives in the loss function. The computational cost scales linearly with the dimension, and TNN incurs larger memory consumption due to the non-sharing of network parameters across dimensions.

In Figure 4, the stable convergence of SDGD is demonstrated for the Laplace eigenvalue problem, showing that it can scale up to larger problem sizes and handle high-dimensional problems.

Our method effectively captures low-dimensional solutions within high-dimensional equations, leading to faster convergence. We use small dimension batch sizes and optimize one dimension at a time, triggering a transfer-learning effect on the other dimensions. The transferability of gradients is validated by theory.

The document discusses the scaling of Physics-Informed Neural Networks (PINNs) for high-dimensional Partial Differential Equations (PDEs). It introduces the full-batch gradient and stochastic gradient for PINNs and analyzes their variances. The authors propose a distributed, data-parallel training approach.

The article discusses the scaling of physics-informed neural networks (PINNs) for high-dimensional partial differential equations (PDEs). It presents convergence plots and performance results for different algorithms and hyperparameters. The data-parallel approach is compared to the tensor neural network (TNN) baseline.

The remaining excerpts list the references cited in "Scaling Physics-Informed Neural Networks for High-Dimensional PDEs." The references cover deep learning and neural-network approaches to solving high-dimensional PDEs, including the numerical approximation of semilinear parabolic PDEs and applications of physics-informed neural networks.