We’ll be talking about PDEs (partial differential equations) and how to solve them numerically using NNs (neural networks). If you have no idea what those are, you should probably read up on them first, because we won’t be introducing them in much depth. Check the 3Blue1Brown video for some cool visuals.

To say that PDEs are important in science and engineering would be a ridiculous understatement. Maxwell’s equations (electricity and magnetism)? PDEs. Fluid motion? PDEs. The stock market? Stochastic PDEs. Waves? PDEs. Einstein’s field equations (geometry of spacetime)? Can be written as PDEs. Schrödinger’s equation (evolution of quantum systems, i.e. atoms and molecules)? A PDE. You get the idea.

PDEs are stupidly hard to solve, and most of them don’t even have analytic solutions, so we have to use numerical methods. The whole field of scientific computing is basically about solving these equations efficiently, and there’s an immense literature on the traditional methods. We’ll only touch on those briefly, mainly in comparison to the NN method we’ll see, so if you want to know about them in more depth, DuckDuckGo is your friend!

What we will talk about is how to encode the information contained in the PDE in a NN and use your favourite optimization algorithm to find the solution. This is done by adding three terms to form the loss function: the deviation from the initial conditions, the deviation from the boundary conditions, and the deviation from the “structure of the PDE”. We set our function $u = NN(x)$, and we can calculate $u_x$, $u_t$ and other derivatives using something like `torch.autograd.grad(u, x)` or `tf.gradients(u, x)`. Then an example PDE could be $u_t + u_x = 0$: since we have numerical values for $u_t$ and $u_x$, the structural loss is $(u_t + u_x)^2$. That’s it!
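To make this concrete, here’s a minimal PyTorch sketch of the structural loss for $u_t + u_x = 0$. The network architecture, the number of collocation points, and the sampling ranges are all arbitrary choices for illustration, not part of any particular paper’s recipe:

```python
import torch
import torch.nn as nn

# A tiny network playing the role of u = NN(x, t); the architecture
# here is an arbitrary illustrative choice.
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

# Random collocation points inside a hypothetical domain [0, 1] x [0, 1].
# requires_grad=True lets autograd differentiate u with respect to them.
x = torch.rand(100, 1, requires_grad=True)
t = torch.rand(100, 1, requires_grad=True)

u = net(torch.cat([x, t], dim=1))

# u_x and u_t via autograd; create_graph=True keeps the graph so the
# structural loss itself can be backpropagated through during training.
ones = torch.ones_like(u)
u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]

# Structural loss: mean squared violation of u_t + u_x = 0 over the
# collocation points. The initial/boundary terms would be added on top.
pde_loss = ((u_t + u_x) ** 2).mean()
```

In a full training loop you would add the initial- and boundary-condition penalties to `pde_loss` and hand the total to an optimizer like `torch.optim.Adam`.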

IMHO a very simple and elegant idea :)