This afternoon I was in charge of another Computational Physics laboratory session. This one went better than last week, when we had a lot of teething problems, and I’m glad to say that the students are already writing bits of Python code and getting output – some of it was even correct!

After this afternoon’s session I came back to my office and noticed this little book on my shelf:

Despite the exorbitant cost, I bought it when I was an undergraduate back in the 1980s, though it was first published in 1966. It’s an interesting little book, notable for the fact that it doesn’t cover any computer programming at all. It focusses instead on the analysis of accuracy and stability of various methods of doing various things.

This is the jacket blurb:

This short book sets out the principles of the methods commonly employed in obtaining numerical solutions to mathematical equations and shows how they are applied in solving particular types of equations. Now that computing facilities are available to most universities, scientific and engineering laboratories and design shops, an introduction to numerical method is an essential part of the training of scientists and engineers. A course on the lines of Professor Wilkes’s book is given to graduate or undergraduate students of mathematics, the physical sciences and engineering at many universities and the number will increase. By concentrating on the essentials of his subject and giving it a modern slant, Professor Wilkes has written a book that is both concise and that covers the needs of a great many users of digital computers; it will serve also as a sound introduction for those who need to consult more detailed works.

Any book that describes itself as having “a modern slant” is almost bound to date very quickly, and so this one did, but its virtue is that it complements current “modern” books, which say much less about the issues Wilkes covers because one is nowadays far less constrained by memory and speed than was the case decades ago (circumstances I remember very well).

The ~~Course~~ Module I’m teaching covers numerical differentiation, numerical integration, root-finding and the solution of ordinary differential equations. All these topics are covered by Wilkes, but I was intrigued to discover when I looked that he does numerical integration before numerical differentiation, whereas I do it the other way round. I put differentiation first because I think it’s easier, and I wanted the students to do some actual coding as quickly as possible; I seem to remember doing e.g. Simpson’s Rule at school, but I don’t recall ever being taught about derivatives as finite differences.

Looking up the start of numerical differentiation in Wilkes I found:

This is a far less satisfactory method than numerical integration, as the following considerations show.

The following considerations indeed concern the effect of rounding errors on calculations of finite differences (e.g. the forward difference [f(x+δ)−f(x)]/δ or the backward difference [f(x)−f(x−δ)]/δ) with relatively large step size δ. Even on a modest modern machine one can use step sizes small enough to make the errors negligible for many purposes. Nevertheless, I think it is important to see how the errors behave in those cases where it might be difficult to choose a very small δ. Indeed, it seemed to surprise the students that the symmetric difference [f(x+δ)−f(x−δ)]/(2δ) is significantly more accurate than either the forward or the backward difference. Do a Taylor series expansion and you’ll understand why!
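For completeness, the Taylor-series argument runs roughly as follows: the even-order terms cancel in the symmetric difference, so its leading error is one power of δ higher than for the one-sided differences:

```latex
f(x \pm \delta) = f(x) \pm \delta f'(x) + \frac{\delta^2}{2} f''(x)
                  \pm \frac{\delta^3}{6} f'''(x) + O(\delta^4)
```

```latex
\frac{f(x+\delta)-f(x)}{\delta} = f'(x) + \frac{\delta}{2} f''(x) + O(\delta^2)
\qquad \text{(forward: error } O(\delta)\text{)}
```

```latex
\frac{f(x+\delta)-f(x-\delta)}{2\delta} = f'(x) + \frac{\delta^2}{6} f'''(x) + O(\delta^4)
\qquad \text{(symmetric: error } O(\delta^2)\text{)}
```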

This example with δ = 0.1 shows how the symmetric difference recovers the correct derivative of *sin(x)* far more accurately than the forward or backward difference:
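The comparison can be sketched in Python as follows (my choice of evaluating at x = 1, where the exact derivative is cos 1, is an arbitrary illustration):

```python
import math

def forward(f, x, d):
    # Forward difference: [f(x+d) - f(x)] / d, error O(d)
    return (f(x + d) - f(x)) / d

def backward(f, x, d):
    # Backward difference: [f(x) - f(x-d)] / d, error O(d)
    return (f(x) - f(x - d)) / d

def symmetric(f, x, d):
    # Symmetric (central) difference: [f(x+d) - f(x-d)] / (2d), error O(d^2)
    return (f(x + d) - f(x - d)) / (2 * d)

x, d = 1.0, 0.1
exact = math.cos(x)  # d/dx sin(x) = cos(x)
for name, approx in [("forward", forward(math.sin, x, d)),
                     ("backward", backward(math.sin, x, d)),
                     ("symmetric", symmetric(math.sin, x, d))]:
    print(f"{name:9s}: {approx:.6f}  error = {abs(approx - exact):.2e}")
```

With δ = 0.1 the forward and backward errors come out at a few times 10⁻², while the symmetric error is below 10⁻³ – roughly fifty times smaller, consistent with the O(δ) versus O(δ²) behaviour the Taylor expansion predicts.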