I’ve long disliked the physics practice of rounding numbers and relying on reporting of “significant figures” rather than explicitly stating the precision. The “significant figure” convention, which only allows expressing a precision of ±1/2 the least-significant digit, is a rather awkward and (I believe) stupid convention.

Giving the mass of an electron as 9.109 382 91(40)E-31 kg or 9.109 382 91E-31 ± 4E-38 kg is much better than rounding to 9.109383E-31. (The parenthesis convention for giving precisions in scientific notation is a handy one, though taught so rarely that it can lead to confusion.)
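For anyone who hasn’t seen the parenthesis convention, a small sketch can make it concrete. This is a hypothetical, minimal parser of my own that handles only this one number format (digits, a fraction, parenthesized uncertainty digits, and an E-exponent), not a general one:

```python
import re

def parse_concise(s):
    """Parse concise uncertainty notation like '9.10938291(40)E-31'
    into (value, uncertainty).  Illustrative sketch, not a full parser."""
    m = re.fullmatch(r"(\d+)\.(\d+)\((\d+)\)E(-?\d+)", s)
    if not m:
        raise ValueError(f"unrecognized format: {s}")
    intpart, frac, unc, exp = m.groups()
    value = float(f"{intpart}.{frac}e{exp}")
    # The parenthesized digits apply to the last digits of the fraction,
    # so scale them down by the fraction's length.
    uncertainty = int(unc) * 10.0 ** (int(exp) - len(frac))
    return value, uncertainty
```

With the electron-mass example, `parse_concise("9.10938291(40)E-31")` recovers the value 9.10938291E-31 and the uncertainty 4E-38.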

When someone outside the very narrow confines of a high-school physics class provides a number, there is **no** reason to assume that it is carefully rounded to express the significant figures of accuracy. Engineers routinely use a variant of scientific notation which uses powers of 1000 instead of powers of 10 (123.2 millivolts, rather than 1.232E-1 volts). In this notation, 100 millivolts could have 1, 2, or 3 significant figures, so the convention would read it as 100±50, 100±5, or 100±0.5, but the real precision is likely something like 100±2, which can’t be expressed in the “significant figures” notation at all.

Why use engineering notation rather than scientific notation? It is a lot easier to keep track of and compare numbers in the range [1,1000) with a verbal scale than [1,10) with arbitrary exponents. It is easier to specify and find a 47 pF capacitor than a 4.7E-11 F one, especially since the precision is likely to be ±5%, which is not expressible as significant figures.
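The powers-of-1000 convention is easy to mechanize. Here is a rough sketch; the prefix table and formatting choices are my own assumptions, covering only pico through giga:

```python
import math

# SI prefix table (my own assumption: only pico through giga are covered)
_PREFIXES = {-12: "p", -9: "n", -6: "µ", -3: "m", 0: "", 3: "k", 6: "M", 9: "G"}

def to_engineering(x):
    """Format x with an exponent that is a multiple of 3, using an SI
    prefix.  Illustrative sketch; clamps to the table's range."""
    if x == 0:
        return "0"
    # Pick the largest multiple of 3 not exceeding the base-10 exponent.
    exp3 = 3 * math.floor(math.log10(abs(x)) / 3)
    exp3 = max(-12, min(9, exp3))
    mantissa = x / 10.0 ** exp3
    return f"{mantissa:.4g}{_PREFIXES[exp3]}"
```

So `to_engineering(4.7e-11)` gives `"47p"` and `to_engineering(0.1232)` gives `"123.2m"`, keeping the mantissa in the easy-to-compare [1,1000) range.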

As an engineer, I expect manufacturers to give me explicit precisions on dimensions. I don’t want to be told that a part is 10″ or 25 cm long, but that it is 25.4±0.2 cm. I need to know whether a resistor can be ±20%, ±5%, ±1%, or ±0.1%, especially if I have to substitute one part for another. (Higher precision costs more, and I don’t want to go out and buy a specialty part when an off-the-shelf standard part would do as well.)

Rounding in mid-calculation, as is done in examples in so many books, is both unnecessary and dangerous, increasing the error in the final result for no good reason. (Exception: mental calculations and quick back-of-the-envelope order-of-magnitude estimates often benefit from keeping only 1 or 2 significant figures to reduce the load on working memory and mental calculation time.)

I’m annoyed by the Matter and Interactions textbook giving only very rough numbers for physical constants (like the speed of light) on the inside back cover, when some of the problems in the book dealing with nuclear reactions require 6 significant figures to get meaningful results. The table in the back should have the best known values of the constants, with rounded values suitable for quick computation as extras. Having only the rounded values is annoying. I spent some time on the web looking up all the constants and penciling them into the book, irritated at the authors for not having done this for me.

I am quite in favour of improving accuracy in measurements. I think excessive use of numeric calculation has led us down a path to ignoring the accuracy and simply relying on the numbers coming out of the system. We also tend to average and extrapolate far too often. Just like dimensions without precision, stats are most often quoted as means without variance. And while such numbers are not meaningless, they are quite dangerous.

Comment by mortoray — 2012 February 14 @ 10:22 |

Whoa, it’s not physics, it’s chemistry leading the charge here. I’m not saying significant figures don’t come up in physics, but they’re literally taught in general chemistry courses around the country. When I talk with physics educators, we often discuss the Monte Carlo approach of keeping track of and propagating errors/precision. There’s a great discussion of all things errors-related here: http://www.av8n.com/physics/uncertainty.htm

Comment by Andy Rundquist — 2012 February 14 @ 10:40 |

Hopefully someone can enlighten me, but, I’m not sure I understand the dislike of sig figs for what I think they are intended for. To me, sig figs are about process, not presentation, i.e. a tool for solving problems meaningfully — not presenting data in a research paper.

Sig figs allow you to have a rough idea of the precision throughout a multi-part calculation. In other words, if I am looking for X, where X=sin(a+b)/c, and I know that I can only measure ‘a’ to roughly 2 digits of accuracy, then I may choose 2 or 3 sig figs for recording a, b & c. I may be able to measure ‘b’ to 8 sig figs, but it’s a waste of effort, so I will limit myself to the same # of sig figs throughout the calculation.

If I have ‘a’ as something like 10+/-0.55 and ‘b’ as 15+/-0.3 and ‘c’ as 145 +/-2, how do I present the error for ‘X’ in the final result? (BTW, this is an actual question, not a “debate point”).

Also, how do you present the error of the error? For example, do I write 10+/-2 or 10+/-2.000… or do I have to write 10+/-2+/-.001 (ad nauseam)?

Comment by Ron G. — 2012 February 14 @ 19:21 |

Significant figures are a very crude form of error analysis, generally not to be trusted. (As an extreme example, computing the energy of a nuclear reaction from the rest masses before and afterwards generally requires 6 or 7 figures on the input to get one significant figure on the output.)
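The loss of figures in that extreme example is easy to reproduce with made-up numbers. These are not real rest masses, just two hypothetical 7-figure values that nearly cancel:

```python
# Two hypothetical 7-significant-figure inputs (not real rest masses):
before = 1.007276
after = 1.007265
delta = before - after          # about 1.1E-5
# Each input carries 7 significant figures, but after the subtraction
# only about 2 remain.  Sig-fig rules say nothing about this kind of
# catastrophic cancellation; explicit error bounds do.
```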

Propagating errors through a calculation can be done, but is difficult to do manually. A lot depends on whether the ± is interpreted as the full range of possible values, the standard deviation, the 99% confidence interval, or something else. A really careful error analysis involves Bayesian statistical models of the variables, with prior models and posteriors based on the observations.

For sin(a+b)/c, the error is largely dependent on the range of a and b. If they are small with opposite signs, then the error of the result may be enormous.

For the example of a=10±0.55, b=15±0.3, a+b= 25±0.85 (using worst-case analysis; somewhat less with other meanings for the ±), and sin(a+b) = 0.4226±0.013 (assuming we are using degrees, not radians). sin(a+b)/c would be 2.9146E-3 ± 1.34E-4. Because sin has a slope near 1 here, and a and b don’t cancel in the addition, the sig-fig approximate error analysis comes out pretty close to what a worst-case analysis gets. If you had chosen a=10±0.55 and b=-9±0.3, then a+b would have been -1±0.85 and sin(a+b)=-0.017452±0.0148, and there is not even one significant figure in the result.
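The Monte Carlo approach mentioned in an earlier comment can be sketched in a few lines. This treats every ± as one standard deviation of a Gaussian, which is one of the interpretations listed above, not the worst-case one; the function and parameter names are my own:

```python
import math
import random

def mc_propagate(f, params, n=100_000, seed=0):
    """Monte Carlo error propagation: draw each parameter from a normal
    distribution with the given (mean, sigma), push the samples through
    f, and report the mean and standard deviation of the results.
    Sketch only: every ± is treated as one Gaussian sigma."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        sample = [rng.gauss(mu, sigma) for mu, sigma in params]
        results.append(f(*sample))
    mean = sum(results) / n
    var = sum((r - mean) ** 2 for r in results) / (n - 1)
    return mean, math.sqrt(var)

# X = sin(a+b)/c with a and b in degrees, as in the worked example
def x_of(a, b, c):
    return math.sin(math.radians(a + b)) / c

mean, sd = mc_propagate(x_of, [(10, 0.55), (15, 0.3), (145, 2)])
# mean comes out near 2.91E-3; sd comes out somewhat below the
# worst-case 1.34E-4, since the ± are now standard deviations
```

The gap between the Monte Carlo sd and the worst-case bound is exactly the interpretation question: the worst case adds errors linearly, while independent Gaussian errors add in quadrature.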

One doesn’t generally report errors in the errors. If you need that level of sophistication, then you don’t report just a ± (or even separate + and – tolerances), but give the full distribution of the possible values, along with the Bayesian model used for estimating the distribution.

Comment by gasstationwithoutpumps — 2012 February 14 @ 19:56 |

[…] In physics, however, there are rules that are supposed to make sense. But do they? Here is what one physics teacher has to say: I’ve long disliked the physics practice of rounding numbers and relying on reporting […]

Pingback by Significant figures | Learning Strategies — 2012 March 24 @ 07:35 |

One minor point—although I’m home-schooling physics this year, I can’t really be identified as a “physics teacher”. I’m a bioinformatics professor, and I’ve been a computer engineering professor. I could be identified as a computer scientist or as an engineering professor, but “physics teacher” is not really a good description (I’m only one chapter ahead of my students in the book, and when we compare answers to homework problems we’re almost as likely to find mistakes in my solutions as in my son’s).

Comment by gasstationwithoutpumps — 2012 March 24 @ 10:52 |