Integration by Differentiation


We all know from high school that integration is the inverse operation of differentiation. That's accurate. But there is a lesser-known twist: integration can also be achieved by differentiation. This article gives a numerical insight into how that works. We will introduce a scheme that is more powerful than traditional direct integration methods because it extends seamlessly to higher-order accuracy and places no restrictions on the grid.

Integrating in one dimension

Let's start by defining an integral as follows:

$$ I = \int_a^b f(x)\,dx $$

Keeping the upper integration limit flexible, we introduce:

$$ u(x) = \int_a^x f(x')\,dx' $$

Hence we can state that:

$$ I = u(b) $$

Taking the derivative with respect to x, this simplifies to:

$$ u'(x) = f(x) $$

This is a differential equation for the function u(x). So, calculating the integral is equivalent to solving this differential equation with the boundary condition

$$ u(a) = 0 $$

Let's rewrite the differential equation in a different notation, one which gives us some idea what we could do next:

$$ D u = f $$

Here, D represents the derivative operator.

So far, we haven't introduced any approximation and everything is in continuous form. But when we work numerically, we have to discretize our functions and operators: functions turn into vectors, and operators turn into matrices. With that in mind, discretization leads us to the following equation:

$$ \left( \begin{matrix} D_{11} & D_{12} & \dots & D_{1N} \\ D_{21} & D_{22} & \dots & D_{2N} \\ D_{31} & D_{32} & \dots & D_{3N} \\ \vdots & \vdots & & \vdots \\ D_{N1} & D_{N2} & \dots & D_{NN} \end{matrix} \right) \left( \begin{matrix} u_{1} \\ u_{2} \\ u_{3} \\ \vdots \\ u_{N} \end{matrix} \right) = \left( \begin{matrix} f_{1} \\ f_{2} \\ f_{3} \\ \vdots \\ f_{N} \end{matrix} \right) $$
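
To get a feeling for what such a matrix looks like, here is a minimal hand-rolled sketch of a second-order finite difference derivative matrix on an equidistant grid (just an illustration; the function name derivative_matrix is made up, and below we will let a library build the matrix for us):

import numpy as np

def derivative_matrix(n, dx):
    # Second-order finite differences: central stencil in the interior,
    # one-sided stencils at the two boundary points.
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1] = -0.5 / dx   # (f[i+1] - f[i-1]) / (2*dx)
        D[i, i + 1] = 0.5 / dx
    D[0, :3] = np.array([-1.5, 2.0, -0.5]) / dx   # forward difference
    D[-1, -3:] = np.array([0.5, -2.0, 1.5]) / dx  # backward difference
    return D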

Basically, the idea is this: multiply both sides by the inverse matrix to obtain our solution function \(u\). But how do we implement the boundary condition \(u(a)=0\)? Easy! Viewing this as a system of equations, we have to replace the equation for \(u_1\) by \(u_1 = 0\) and also make sure that all other equations take \(u_1=0\) into account. This leads us to

$$ \left( \begin{matrix} 1 & 0 & \dots & 0 \\ 0 & D_{22} & \dots & D_{2N} \\ 0 & D_{32} & \dots & D_{3N} \\ \vdots & \vdots & & \vdots \\ 0 & D_{N2} & \dots & D_{NN} \end{matrix} \right) \left( \begin{matrix} u_{1} \\ u_{2} \\ u_{3} \\ \vdots \\ u_{N} \end{matrix} \right) = \left( \begin{matrix} 0 \\ f_{2} \\ f_{3} \\ \vdots \\ f_{N} \end{matrix} \right) $$

We have simply wiped out the first row and the first column and put a 1 at the diagonal entry. On the right-hand side, we replaced \(f_1\) by 0. Note that this fulfills both our requirements: it implements the equation \(u_1 = 0\), and all other equations know that \(u_1=0\) (since we wiped out the first column).
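
In code, this boundary condition trick could look like the following sketch, assuming a dense NumPy array D and a right-hand side vector f (the variable names are made up for illustration):

D_bc = D.copy()
D_bc[0, :] = 0.0   # wipe out the first row ...
D_bc[:, 0] = 0.0   # ... and the first column
D_bc[0, 0] = 1.0   # the first equation now reads u_1 = 0
f_bc = f.copy()
f_bc[0] = 0.0      # right-hand side of the boundary equation
u = np.linalg.solve(D_bc, f_bc)   # u[-1] approximates the integral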

But this is actually more than we need. We could simply cut out the submatrix and use only

$$ \left( \begin{matrix} D_{22} & \dots & D_{2N} \\ D_{32} & \dots & D_{3N} \\ \vdots & & \vdots \\ D_{N2} & \dots & D_{NN} \end{matrix} \right) \left( \begin{matrix} u_{2} \\ u_{3} \\ \vdots \\ u_{N} \end{matrix} \right) = \left( \begin{matrix} f_{2} \\ f_{3} \\ \vdots \\ f_{N} \end{matrix} \right) $$

Let's call these smaller entities \(\tilde D\), \(\tilde u\) and \(\tilde f\). Inverting the matrix, we get

$$ \tilde u = \tilde D^{-1} \tilde f $$
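
Since we are only interested in \(I = u(b)\), which is the last component of \(\tilde u\), we don't even need the whole solution vector:

$$ I = u(b) = \tilde u_N = \sum_j \left( \tilde D^{-1} \right)_{Nj} \tilde f_j $$

In other words, the last row of \(\tilde D^{-1}\) acts as a vector of integration weights that we dot-multiply with \(\tilde f\). We will use exactly this shortcut in the implementation below.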

Python Implementation

So far, we have covered the theory behind integrating by differentiation. Now let's roll up our sleeves and implement the idea in Python. As an example, we will integrate the function f(x) = cos x from -1 to 1, which lets us easily compare with the exact value ∫ f(x) dx = sin 1 - sin(-1).

We use an equidistant grid for simplicity:


import numpy as np

x = np.linspace(-1, 1, 100)
f = np.cos(x)

exact = np.sin(1) - np.sin(-1)
exact

1.682941969615793

Now, we need a matrix representation of the derivative operator. We will keep it simple here and use a finite difference approximation, but it basically works with any other approximation, too. Whatever your approximation method, you just need the matrix representation of the derivative operator.

In Python, we can easily get the finite difference approximation using the findiff package. I wrote an introductory article about it some time ago XXX.


from findiff import FinDiff

d_dx = FinDiff(0, x[1] - x[0])  # derivative along axis 0, with the given grid spacing
D = d_dx.matrix(x.shape)        # sparse matrix representation of d/dx

D

<100x100 sparse matrix of type '<class 'numpy.float64'>'
	with 202 stored elements in Compressed Sparse Row format>

As you can see, findiff gives us a sparse matrix representation, which is good because the finite difference matrix consists mostly of zeros.
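
If you are curious, you can inspect the stencils by converting a few rows to a dense array (a quick sanity check; output not shown here):

# Stencil coefficients of the first rows, scaled by the grid
# spacing for readability.
D[:3, :5].toarray() * (x[1] - x[0])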

Now cut out the lower part of our matrix and vectors:


D_ = D[1:, 1:]
f_ = f[1:]

Compute the inverse matrix using scipy's sparse linear algebra module:


from scipy.sparse.linalg import inv

D_inv = inv(D_)
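
A remark on this step: the inverse of a sparse matrix is in general dense, so computing it explicitly only pays off because we will reuse its last row as a vector of integration weights. If you need the integral just once, a sketch using scipy's sparse solver gets the solution directly without forming the inverse:

from scipy.sparse.linalg import spsolve

# Solve the trimmed system directly; the last component is u(b).
u_ = spsolve(D_, f_)
u_[-1]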

Now cut out the last row of the inverse matrix, which, as we saw above, contains the integration weights:


w_ = D_inv[-1, :]
w_

<1x99 sparse matrix of type '<class 'numpy.float64'>'
	with 51 stored elements in Compressed Sparse Row format>

and dot-multiply with our function vector:


w_.dot(f_)

array([1.68305532])

And this is our result! The error is


abs(exact - _[0])   # _ refers to the previous output

0.00011334845408694783
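
As a point of reference: the trapezoidal rule on the same grid is also a second-order method, so its error should be of a comparable order of magnitude:

# Trapezoidal rule on the same 100-point grid, for comparison.
abs(np.trapz(f, x) - exact)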

which is OK, but not great. However, we can easily do better! By default, findiff gives us a second-order accurate approximation for the matrix \(D\). We can request higher-order accuracy by simply passing a parameter:


d_dx = FinDiff(0, x[1] - x[0], acc=4)
D = d_dx.matrix(x.shape)
D_ = D[1:, 1:]
D_inv = inv(D_)
w_ = D_inv[-1, :]
abs(w_.dot(f_)[0] - exact)

1.0009984618974954e-08

Much better! In terms of the more direct integration methods, we have essentially gone from a second-order to a fourth-order method, i.e. from the trapezoidal rule to Simpson's rule. But note that we have avoided a severe limitation of Simpson's rule: it only works for an odd number of grid points (an even number of subintervals), whereas our method works for any grid size! The same holds true for even higher orders of accuracy, e.g.


d_dx = FinDiff(0, x[1] - x[0], acc=6)
D = d_dx.matrix(x.shape)
D_ = D[1:, 1:]
D_inv = inv(D_)
w_ = D_inv[-1, :]
abs(w_.dot(f_)[0] - exact)

5.306866057708248e-14
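
To check the claimed grid flexibility, here is a small sketch that wraps the whole recipe in a (made-up) helper function and runs it for an even and an odd number of grid points, a case distinction that would trip up Simpson's rule:

def integrate(func, a, b, n, acc=6):
    # Integrate func from a to b on an equidistant grid with n points.
    x = np.linspace(a, b, n)
    f = func(x)
    D = FinDiff(0, x[1] - x[0], acc=acc).matrix(x.shape)
    w = inv(D[1:, 1:])[-1, :]   # last row of the inverse: integration weights
    return w.dot(f[1:])[0]

for n in (100, 101):   # even and odd numbers of grid points both work
    print(n, abs(integrate(np.cos, -1, 1, n) - exact))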

Conclusion

Integrating by differentiation boils down to solving the differential equation u' = f with a matrix representation of the derivative operator. Our Python implementation demonstrated the power and flexibility of this method: it generalizes easily to higher-order accuracy and works for any number of grid points. In that respect it is superior to classical integration schemes such as the popular Simpson's rule, which impose restrictions on the grid.