Humble Pi: When Math Goes Wrong in the Real World by Matt Parker (Riverhead Books, 2020)

My sister-in-law and I don't often read the same books, but she has an uncanny ability to discern books I might like. Her recommendation of Humble Pi was a winner.

Matt Parker is a funny writer, and has in spades the usual mathematician's love of puzzles and wordplay. The pages of his book are numbered from back to front—with a twist. Between Chapter 9 and Chapter 10 is Chapter 9.49. The title of his chapter on random numbers is "Tltloay Rodanm." His index is ... interesting.

Not all of the book's quirks are good ones. Parker insists on using "dice" as both singular and plural, which is quite annoying to the reader (or at least this reader); he explains that his motto is "Never say die." Funny, but still irritating. Less funny and more frustrating is his insistence on using "they" as a singular pronoun, even when there is no ambiguity of sex involved. Still less amusing are the jabs he takes at President Trump: they are irrelevant, unkind, and will date the book before its time.

Okay, I've gotten past what I didn't like about the book. Now I can say I recommend it very highly. Despite all the math, it's easy to read. Parker does a good job of explaining most things. Even if you skip the details of the math—and computer code—you can appreciate the stories. It's possible, however, that you'll become afraid to leave your house—and not too sure about staying there. The potential for catastrophic math and programming errors is both amusing and terrifying. Much worse than failing your high school algebra test. On the other hand, you'll gain a greater appreciation for how often things actually go right in the world, despite all our mistakes.

Even if you're one of those who skips the quotation section, be sure to scroll to the bottom before leaving, because Matt Parker also has a YouTube channel, called Stand-up Maths.

I document the quotations with the book's page numbers as written, so higher numbers indicate earlier text.

What is notable about the following quote is that it is the first time I recall seeing affirmed in a "mainstream" publication (and by a mathematician, no less) the claim I first encountered in Glenn Doman's book, How to Teach Your Baby Math.

We are born with a fantastic range of number and spatial skills; even infants can estimate the number of dots on a page and perform basic arithmetic on them. (p. 307, emphasis mine)

With all that advantage right from the get-go, you'd think we could do better than this:

A UK National Lottery scratch card had to be pulled from the market the same week it was launched. ... The card was called Cool Cash and came with a temperature printed on it. If a player's scratching revealed a temperature lower than the target value, they won. But a lot of players seemed to have an issue with negative numbers:

"On one of my cards it said I had to find temperatures lower than -8. The numbers I uncovered were -6 and -7 so I thought I had won, and so did the woman in the shop. But when she scanned the card, the machine said I hadn't. ... They fobbed me off with some story that -6 is higher, not lower, than -8, but I'm not having it." (pp. 307-306)

I'm sure the lottery players are not, in reality, as stupid as this would imply. Don't they all dress more warmly when their thermometers read -20 degrees than when they read -1 degree? But so many people have a major disconnect between real life and the math they learned (or didn't learn) in school.
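For what it's worth, the arithmetic the card demanded is the kind of one-line check any programming language will confirm:

    # Both uncovered temperatures are higher (warmer) than the -8 target,
    # so the card was not a winner.
    print(-6 > -8)   # True
    print(-7 > -8)   # True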

Did you know this? I didn't. We long ago gave up the Julian calendar for the more accurate Gregorian version,

But astronomy does give Julius Caesar the last laugh. The unit of a light-year, that is, the distance traveled by light in a year (in a vacuum) is specified using the Julian year of 365.25 days. (p. 291)
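That definition makes the light-year an exact number of meters, which is easy to check (the speed of light is exact by definition of the meter):

    # Light-year = speed of light x one Julian year of 365.25 days
    c = 299_792_458                  # meters per second (exact)
    julian_year = 365.25 * 86_400    # seconds in a Julian year
    print(c * julian_year)           # 9.4607304725808e+15 meters, the defined light-year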

At 3:14 a.m. on Tuesday, January 19, 2038, many of our modern microprocessors and computers are going to stop working. And all because of how they store the current date and time.... (p. 291)

It's easy to write this off as a second coming of the Y2K "millennium bug" that wasn't. That was a case of higher level software storing the year as a two-digit number, which would run out after 99. With a massive effort, almost everything was updated. But a disaster averted does not mean it was never a threat in the first place. It's risky to be complacent because Y2K was handled so well. Y2K38 will require updating far more fundamental computer code and, in some cases, the computers themselves. (p. 288, emphasis mine)
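The "3:14 a.m." is not arbitrary: many systems store time as a signed 32-bit count of seconds since January 1, 1970 (the Unix epoch), and that counter tops out at 2,147,483,647. A quick check:

    import datetime

    EPOCH = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
    MAX_INT32 = 2**31 - 1    # largest value a signed 32-bit integer can hold

    # The last second a 32-bit time counter can represent:
    print(EPOCH + datetime.timedelta(seconds=MAX_INT32))   # 2038-01-19 03:14:07+00:00

    # One second later the counter wraps to -2**31, which reads as a date in 1901:
    print(EPOCH + datetime.timedelta(seconds=-2**31))      # 1901-12-13 20:45:52+00:00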

You don't think math is an important subject? Sometimes when you get the math wrong, no one notices. Sometimes the only victim of your mistake is yourself. But sometimes, lots of people die.

The human brain is an amazing calculation device, but it has evolved to make judgment calls and to estimate outcomes. We are approximation machines. Math, however, can get straight to the correct answer. It can tease out the exact point where things flip from being right to being wrong, from being correct to being incorrect, from being safe to being disastrous.

You can get a sense of what I mean by looking at nineteenth and early-twentieth-century structures. They are built from massive stone blocks and gigantic steel beams riddled with rivets. Everything is over-engineered, to the point where a human can look at it and feel instinctively sure that it's safe. ... With modern mathematics, however, we can now skate much closer to the edge of safety. (p. 265)

A rose by any other name ... might get deleted.

In the mid-1990s, a new employee of Sun Microsystems in California kept disappearing from their database. Every time his details were entered, the system seemed to eat him whole; he would disappear without a trace. No one in HR could work out why poor Steve Null was database Kryptonite.

The staff in HR were entering the surname as "Null," but they were blissfully unaware that, in a database, NULL represents a lack of data, so Steve became a non-entry. (p. 259)
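The book doesn't show the actual Sun Microsystems code, but here is a hypothetical sketch of the kind of data-cleaning step that makes a Mr. Null vanish, where the literal string "Null" gets conflated with "no data":

    # Hypothetical illustration only -- not the actual HR system.
    def clean_field(value):
        # Treat the literal string "NULL" (any capitalization) as missing data.
        if value is None or value.strip().upper() == "NULL":
            return None
        return value.strip()

    record = {"first": "Steve", "last": "Null"}
    print({k: clean_field(v) for k, v in record.items()})
    # {'first': 'Steve', 'last': None} -- the surname has been eaten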

Only those who know nothing about computers really trust them. The rest of us know how easy it is for mistakes to happen. I use spreadsheets a lot, and the chapter on how dependent many of our critical systems are on Microsoft Excel leaves me weak in the knees.

Excel is great at doing a lot of calculations at once and crunching some medium-sized data. But when it is used to perform large, complex calculations across a wide range of data, it is simply too opaque in how the calculations are made. Tracking back and error-checking calculations becomes a long, tedious task in a spreadsheet. (p. 240)

The flapping butterfly wing of a tiny mistake can lead to a disaster of Category Six hurricane proportions.

In 2012 JPMorgan Chase lost a bunch of money; it's difficult to get a hard figure, but the agreement seems to be that it was around $6 billion. As is often the case in modern finance, there are a lot of complicated aspects to how the trading was done and structured (none of which I claim to understand). But the chain of mistakes featured some serious spreadsheet abuse, including the calculation of how big the risk was and how losses were being tracked. ...

The traders regularly gave their portfolio positions "marks" to indicate how well or badly they were doing. As they would be biased to underplay anything that was going wrong, the Valuation Control Group ... was there to keep an eye on the marks and compare them to the rest of the market. Except they did this with spreadsheets featuring some serious mathematical and methodological errors. (pp. 240-239)

For example (quoted from a JPMorgan Chase & Co. Management Task Force report),

"This individual immediately made certain adjustments to formulas in the spreadsheets he used. These changes, which were not subject to an appropriate vetting process, inadvertently introduced two calculation errors.... Specifically, after subtracting the old rate from the new rate, the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a factor of two and of lowering the [Value-at-Risk calculation]." (p. 238)

As a result,

Billions of dollars were lost in part because someone added two numbers together instead of averaging them. A spreadsheet has all the outward appearances of making it look as if serious and rigorous calculations have taken place. But they're only as trustworthy as the formulas below the surface.
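The factor of two is easy to see: the average of two rates is their sum divided by two, so dividing by the sum instead of the average always gives exactly half the intended value. A toy example with made-up numbers:

    # Made-up rates, purely to illustrate the reported formula error.
    old_rate, new_rate = 0.0250, 0.0262
    diff = new_rate - old_rate

    intended = diff / ((old_rate + new_rate) / 2)   # divide by the average, as the modeler meant
    erroneous = diff / (old_rate + new_rate)        # divide by the sum, as the spreadsheet did

    print(intended / erroneous)   # 2.0 -- the erroneous figure is half the intended one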

Don't get me started on the reliability of the computer models we use to make momentous, life-and-death decisions.

The following quote is from the chapter on counting, which features a serious argument over how many days there are in a week. But I include this just because the author raises a question for which the answer seems so obvious I wonder what I'm missing.

Some countries count building floors from zero (sometimes represented by a G, for archaic reasons lost to history) and some countries start at one. (pp. 205-204)

Lost to history? Isn't it obvious that the G stands for "Ground"?

It will be clear to any regular blog reader why I've included the next excerpt, and why it is so extensive.

Trains in Switzerland are not allowed to have 256 axles. This may be a great obscure fact, but it is not an example of European regulations gone mad. To keep track of where all the trains are on the Swiss rail network, there are detectors positioned around the rails. They are simple detectors, which are activated when a wheel goes over a rail, and they count how many wheels there are to provide some basic information about the train that has just passed. Unfortunately, they keep track of the number of wheels using an 8-digit binary number, and when that number reaches 11111111 it rolls over to 00000000. Any trains that bring the count back to exactly zero move around undetected, as phantom trains.

I looked up a recent copy of the Swiss train-regulations document, and the rule about 256 axles is in there between regulations about the loads on trains and the ways in which the conductors are able to communicate with drivers.

I guess they had so many inquiries from people wanting to know exactly why they could not add that 256th axle to their train that a justification was put in the manual. This is, apparently, easier than fixing the code. There have been plenty of times when a hardware issue has been covered by a software fix, but only in Switzerland have I seen a bug fixed with a bureaucracy patch. (p. 188)
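The rollover is just the ordinary behavior of an eight-bit counter; here's a minimal sketch (my own illustration, assuming the detector simply keeps the low eight bits of the count):

    # Hypothetical eight-bit axle counter: only the low 8 bits survive,
    # so a count of 256 wraps back to 0 and the train becomes a phantom.
    def detector_count(axles):
        return axles % 256

    print(detector_count(255))   # 255 -- train detected
    print(detector_count(256))   # 0   -- no train here, apparently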

Our modern financial systems are now run on computers, which allow humans to make financial mistakes more efficiently and quickly than ever before. As computers have developed, they have given birth to modern high-speed trading, so a single customer within a financial exchange can put through over a hundred thousand trades per second. No human can be making decisions at that speed, of course; these are the result of high-frequency trading algorithms making automatic purchases and sales according to requirements that traders have fed into them.

Traditionally, financial markets have been a means of blending together the insight and knowledge of thousands of different people all trading simultaneously; the prices are the cumulative result of the hive mind. If any one financial product starts to deviate from its true value, then traders will seek to exploit that slight difference, and this results in a force to drive prices back to their "correct" value. But when the market becomes swarms of high-speed trading algorithms, things start to change. ... Automatic algorithms are written to exploit the smallest of price differences and to respond within milliseconds. But if there are mistakes in those algorithms, things can go wrong on a massive scale. (pp. 145-144)

And they do. When the New York Stock Exchange made a change to its rules, with only a month between regulatory approval and implementation, the trading firm Knight Capital imploded.

Knight Capital rushed to update its existing high-frequency trading algorithms to operate in this slightly different financial environment. But during the update Knight Capital somehow broke its code. As soon as it went live, the Knight Capital software started buying stocks of 154 different companies ... for more than it could sell them for. It was shut down within an hour, but once the dust had settled, Knight Capital had made a one-day loss of $461.1 million, roughly as much as the profit they had made over the previous two years.

Details of exactly what went wrong have never been made public. One theory is that the main trading program accidentally activated some old testing code, which was never intended to make any live trades—and this matches the rumor that went around at the time that the whole mistake was because of "one line of code." ... Knight Capital had to offload the stocks it had accidentally bought ... at discount prices and was then bailed out ... in exchange for 73 percent ownership of the firm. Three-quarters of the company gone because of one line of code. (pp. 144-142)

I've had the following argument before, because schools are now apparently teaching "always round up" instead of the "odd number, round up; even number, round down" rule I learned in elementary school. Rounding up every time has never made sense to me.

When rounding to the nearest whole number, everything below 0.5 rounds down and everything above 0.5 goes up. But 0.5 is exactly between the two possible whole numbers, so neither is an obvious winner in the rounding stakes.

Most of the time, the default is to round 0.5 up. ... But always rounding 0.5 up can inflate the sum of a series of numbers. One solution is always to round to the nearest even number, with the theory that now each 0.5 has a random chance of being rounded up or rounded down. This averages out the upward bias but does now bias the data toward even numbers, which could, hypothetically, cause other problems. (p. 126)

I'll take an even-number bias over an inaccurate sum in any circumstances I can think of at the moment. Hypothetical inaccuracy over near-certain inaccuracy.
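The bias is easy to demonstrate. Incidentally, "round half to even" is the default in IEEE floating-point arithmetic and in Python's built-in round(); here's a small comparison using the decimal module, with made-up numbers:

    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

    values = [Decimal("0.5"), Decimal("1.5"), Decimal("2.5"), Decimal("3.5")]

    half_up   = [v.quantize(Decimal("1"), rounding=ROUND_HALF_UP)   for v in values]
    half_even = [v.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN) for v in values]

    print(sum(values))      # 8.0 -- the true total
    print(sum(half_up))     # 10  -- always rounding halves up inflates the total
    print(sum(half_even))   # 8   -- rounding halves to the nearest even number preserves it here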

It is our nature to want to blame a human when things go wrong. But individual human errors are unavoidable. Simply telling people not to make any mistakes is a naive way to try to avoid accidents and disasters. James Reason is an emeritus professor of psychology at the University of Manchester, whose research is on human error. He put forward the Swiss cheese model of disasters, which looks at the whole system, instead of focusing on individual people.

The Swiss cheese model looks at how "defenses, barriers, and safeguards may be penetrated by an accident trajectory." This accident trajectory imagines accidents as similar to a barrage of stones being thrown at a system: only the ones that make it all the way through result in a disaster. Within the system are multiple layers, each with its own defenses and safeguards to slow mistakes. But each layer has holes. They are like slices of Swiss cheese.

I love this view of accident management, because it acknowledges that people will inevitably make mistakes a certain percentage of the time. The pragmatic approach is to acknowledge this and build a system robust enough to filter mistakes out before they become disasters. When a disaster occurs, it is a system-wide failure, and it may not be fair to find a single human to take the blame.

As an armchair expert, it seems to me that the disciplines of engineering and aviation are pretty good at this. When researching this book, I read a lot of accident reports, and they were generally good at looking at the whole system. It is my uninformed impression that in some industries, such as medicine and finance, which do tend to blame the individual, ignoring the whole system can lead to a culture of not admitting mistakes when they happen. Which, ironically, makes the system less able to deal with them. (pp. 103-102)

My only disappointment with this analogy is that I kept expecting a comment about how "Swiss cheese" is not a Swiss thing. If you get a chance to see the quantity and diversity of cheese available at even a small Swiss grocery store, you will understand why our Swiss grandchildren were eager to discover what Americans think to be "Swiss cheese."
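As a back-of-the-envelope illustration of why the layered model works (my numbers, not the book's): if each independent layer of defense misses a given mistake only one time in ten, the chance of a mistake slipping through every layer shrinks geometrically.

    # Illustrative only: assume each layer independently misses a mistake 10% of the time.
    p_miss = 0.10
    for layers in range(1, 5):
        print(layers, p_miss ** layers)   # 0.1, 0.01, 0.001, 0.0001 -- odds of slipping through all layers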

If humans are going to continue to engineer things beyond what we can perceive, then we need to also use the same intelligence to build systems that allow them to be used and maintained by actual humans. Or, to put it another way, if the bolts are too similar to tell apart, write the product number on them. (p. 99)

If that had been done, the airplane windshield would not have exploded mid-flight, even though several other "Swiss cheese holes" had lined up disastrously.

How do you define "sea level"? Well, it depends on what country you're in.

When a bridge was being built between Laufenburg (Germany) and Laufenburg (Switzerland), each side was constructed separately out over the river until they could be joined up in the middle. This required both sides agreeing exactly how high the bridge was going to be, which they defined relative to sea level. The problem was that each country had a different idea of sea level. ...

The UK uses the average height of the water in the English Channel as measured from the town of Newlyn in Cornwall once an hour between 1915 and 1921. Germany uses the height of water in the North Sea, which forms the German coastline. Switzerland is landlocked but, ultimately, it derives its sea level from the Mediterranean.

The problem arose because the German and Swiss definitions of "sea level" differed by 27 centimeters and, without compensating for the difference, the bridge would not match in the middle. But that was not the math mistake. The engineers realized there would be a sea-level discrepancy, calculated the exact difference of 27 centimeters and then ... subtracted it from the wrong side. When the two halves of the 225-meter bridge met in the middle, the German side was 54 centimeters higher than the Swiss side. (pp. 89-88)
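The arithmetic of the blunder is simple: the correct correction cancels the 27-centimeter datum difference, while correcting the wrong side doubles it. A sketch, with the signs chosen only for illustration:

    # Illustrative arithmetic only; the actual surveying details are more involved.
    datum_gap = 27   # cm difference between the two countries' "sea level"

    mismatch_if_corrected_properly = datum_gap - 27    # 0 cm -- the decks meet
    mismatch_if_wrong_side_adjusted = datum_gap + 27   # 54 cm -- the German side sat 54 cm higher

    print(mismatch_if_corrected_properly, mismatch_if_wrong_side_adjusted)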

Now, here's the video I promised.

Matt Parker's videos share the strengths and weaknesses of his book. You'll notice in the video below his insistence on using plural pronouns for singular people, even when he knows perfectly well that he is talking about a man or a woman. Except for when he apparently relaxes and lapses into referring to the man as "he" and the woman as "she." What a concept. What a relief.

I include this particular video because he tackles a question I've long wondered about. I'm accustomed to making the statement that Switzerland is half the size of South Carolina, usually noting that it would be a lot bigger if they flattened out the Alps. But how much bigger?

Posted by sursumcorda on Friday, February 19, 2021 at 9:05 am
Comments

I thought of several things when I was reading this:
1) A story from my colleagues when I was teaching at Suffolk. They had a student whose BASIC program refused to run properly. It produced no output, but did not produce any error messages. After much agony trying to find an error in the code, this was what they discovered: This being in the "old days," all variables were single letters. For reasons appropriate to the meaning of the program, the student was using S and P. At some point in the program, he had a do-loop reading
FROM S TO P DO ......
Think of how the computer parsed that.



Posted by Kathy Lewis on Friday, February 19, 2021 at 4:23 pm

2) When Gaunce was stuck in the Navy, he was in charge of the computer geeks on the Wainwright, which underwent a major overhaul in Gitmo. Afterwards, they were trying out the new gun-shooting software and realized they could not hit any of the overhead targets. Absolutely none! Gaunce read through the code and realized there was a factor of two error in the targeting software. This was easy to fix and solved the problem. But it would have affected a whole class of ships getting the new software. The difficulty was convincing the higher-ups of the problem. His commanding officer sent a message to the relevant people and then waited for many days for an answer. In the meantime, Gaunce regularly got asked "Lewis, are you sure you're right about this?" One day the CO handed him a piece of paper with a message instructing all ships of this class to correct the code by the factor of 2. No acknowledgement of Gaunce of course, but his CO was happy with him after that.



Posted by Kathy Lewis on Friday, February 19, 2021 at 4:30 pm

3) About the Swiss cheese model - if you read Lovell's book about the Apollo 13 disaster, he makes it clear that the problem was caused by the combination of many errors, no one of which alone would have been a serious problem.



Posted by Kathy Lewis on Friday, February 19, 2021 at 4:32 pm

Thanks, Kathy. Great stories.



Posted by SursumCorda on Friday, February 19, 2021 at 4:41 pm

I received notice about a CD renewal and the offer was for .850%. I had checked online and knew it was not correct, but called. He told me the renewal rate was a "special" of .2% rather than what had been advertised online as .1%. I said that .2% was fine, unless he wanted to give me what their letter had offered me, which was .85%. He said no, this is a special, a much better rate. I repeated what I said, and he still didn't get it. Sigh...



Posted by Laurie on Friday, February 19, 2021 at 6:22 pm

I wonder if you could have held them to the 0.85% offer. Maybe a lawyer could have made it stick. But then again, the lawyer would no doubt have taken more than the difference in fees. :(



Posted by SursumCorda on Friday, February 19, 2021 at 7:21 pm