Archive for Numerical Integration

Scientific Computing Then and Now

Posted in Biographical, mathematics on February 10, 2022 by telescoper

This afternoon I was in charge of another Computational Physics laboratory session. This one went better than last week, when we had a lot of teething problems, and I’m glad to say that the students are already writing bits of Python code and getting output – some of it was even correct!

After this afternoon’s session I came back to my office and noticed this little book on my shelf:

Despite the exorbitant cost, I bought it when I was an undergraduate back in the 1980s, though it was first published in 1966. It’s an interesting little book, notable for the fact that it doesn’t cover any computer programming at all. It focusses instead on the analysis of accuracy and stability of various methods of doing various things.

This is the jacket blurb:

This short book sets out the principles of the methods commonly employed in obtaining numerical solutions to mathematical equations and shows how they are applied in solving particular types of equations. Now that computing facilities are available to most universities, scientific and engineering laboratories and design shops, an introduction to numerical method is an essential part of the training of scientists and engineers. A course on the lines of Professor Wilkes’s book is given to graduate or undergraduate students of mathematics, the physical sciences and engineering at many universities and the number will increase. By concentrating on the essentials of his subject and giving it a modern slant, Professor Wilkes has written a book that is both concise and that covers the needs of a great many users of digital computers; it will serve also as a sound introduction for those who need to consult more detailed works.

Any book that describes itself as having “a modern slant” is almost bound to date very quickly, and so this one did, but its virtue is that it complements today’s “modern” books, which say much less about the issues Wilkes covers because one is nowadays far less constrained by memory and speed than was the case decades ago (circumstances I remember very well).

The Course Module I’m teaching covers numerical differentiation, numerical integration, root-finding and the solution of ordinary differential equations. All these topics are covered by Wilkes, but I was intrigued to discover when I looked that he does numerical integration before numerical differentiation, whereas I do it the other way round. I put differentiation first because I think it’s easier and I wanted the students to get on with some actual coding as quickly as possible, although I do seem to remember doing e.g. Simpson’s Rule at school and don’t recall ever being taught about derivatives as finite differences.

Looking up the start of numerical differentiation in Wilkes I found:

This is a far less satisfactory method than numerical integration, as the following considerations show.

The following considerations indeed concern the effect of rounding errors on finite-difference calculations (e.g. the forward-difference estimate f′(x) ≈ [f(x+δ) − f(x)]/δ or the backward-difference estimate f′(x) ≈ [f(x) − f(x−δ)]/δ) with a relatively large step size δ. Even with a modest modern machine one can use step sizes small enough to make the errors negligible for many purposes. Nevertheless I think it is important to see how the errors behave in those cases where it might be difficult to choose a very small δ. Indeed it seemed to surprise the students that using a symmetric difference, f′(x) ≈ [f(x+δ) − f(x−δ)]/(2δ), is significantly better than a forward or backward difference. Do a Taylor series expansion and you’ll understand why!
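In brief, the Taylor-series argument (my own summary, not Wilkes’s wording) runs as follows:

$$f(x\pm\delta) = f(x) \pm \delta f'(x) + \tfrac{1}{2}\delta^2 f''(x) \pm \tfrac{1}{6}\delta^3 f'''(x) + O(\delta^4),$$

so the forward difference gives

$$\frac{f(x+\delta)-f(x)}{\delta} = f'(x) + \tfrac{1}{2}\delta\, f''(x) + O(\delta^2),$$

an error of order δ, whereas the symmetric difference gives

$$\frac{f(x+\delta)-f(x-\delta)}{2\delta} = f'(x) + \tfrac{1}{6}\delta^2 f'''(x) + O(\delta^4),$$

an error of order δ².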

This example with δ = 0.1 shows how the symmetric difference recovers the correct derivative of sin(x) far more accurately than the forward or backward difference:
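The original example isn’t reproduced in this archive, but a minimal Python sketch of that sort of comparison (evaluating the derivative of sin(x) at x = 1, a point of my own choosing) might look like this:

```python
import numpy as np

# Compare finite-difference estimates of d/dx sin(x) at x = 1
# with step size delta = 0.1; the exact answer is cos(1).
delta = 0.1
x = 1.0
f = np.sin

forward   = (f(x + delta) - f(x)) / delta
backward  = (f(x) - f(x - delta)) / delta
symmetric = (f(x + delta) - f(x - delta)) / (2 * delta)

exact = np.cos(x)
for name, value in [("forward", forward), ("backward", backward), ("symmetric", symmetric)]:
    print(f"{name:9s}: {value:.6f}  (error {value - exact:+.2e})")
```

The forward and backward errors come out at around 4×10⁻², while the symmetric error is roughly a factor of fifty smaller, consistent with the O(δ) versus O(δ²) behaviour above.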

A Problem involving Simpson’s Rule

Posted in Cute Problems, mathematics on March 9, 2018 by telescoper

Since I’m teaching a course on Computational Physics here in Maynooth and have just been doing methods of numerical integration (i.e. quadrature) I thought I’d add this little item to the Cute Problems folder. You might answer it by writing a short bit of code, but it’s easy enough to do with a calculator and a piece of paper if you prefer.

Use the above expression, displayed using my high-tech mathematical visualization software, to obtain an approximate value for π/4 (= 0.78539816339…) by estimating the integral on the left hand side using Simpson’s Rule at ordinates x = 0, 0.25, 0.5, 0.75 and 1.

Comment on the accuracy of your result. Solutions and comments through the box please.

HINT 1: Note that the calculation just involves two applications of the usual three-point Simpson’s Rule with weights (1/3, 4/3, 1/3). Alternatively you could do it in one go using weights (1/3, 4/3, 2/3, 4/3, 1/3).

HINT 2: If you’ve written a bit of code to do this, you could try increasing the number of ordinates and see how the result changes…
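If you do want to code it up: the expression referred to above was shown as an image and isn’t reproduced in this archive, so the integrand in the sketch below is my own assumption, namely 1/(1+x²), a standard choice whose integral from 0 to 1 equals π/4. A minimal composite Simpson’s Rule in Python might look like this:

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's Rule with n (even) sub-intervals on [a, b]."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    # Weights (h/3) * (1, 4, 2, 4, ..., 2, 4, 1)
    w = np.ones(n + 1)
    w[1:-1:2] = 4
    w[2:-1:2] = 2
    return h / 3 * np.sum(w * f(x))

# Assumed integrand (not necessarily the one in the original image):
f = lambda x: 1.0 / (1.0 + x**2)

for n in (4, 8, 16, 32):
    estimate = simpson(f, 0.0, 1.0, n)
    print(f"n = {n:2d}: {estimate:.10f}  (error {estimate - np.pi / 4:+.2e})")
```

With n = 4 this uses exactly the five ordinates and the weights (1/3, 4/3, 2/3, 4/3, 1/3) mentioned in Hint 1; increasing n shows how quickly the estimate settles down.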

P.S. Incidentally I learn that, in Germany, Simpson’s Rule is sometimes called Kepler’s rule, or Keplersche Fassregel, after Johannes Kepler, who used something very similar about a century before Simpson…