## Thursday, July 29, 2010

### A coffee table math book

kw: book reviews, nonfiction, mathematicians, mathematics, photographs

Tic-tac-toe is a solved game. The number of possible games is small enough that a complete analysis has been performed, and the optimal strategy has been determined. If both players are experienced and make no mistakes, every game ends in a draw.

In the process of analyzing the game, the following graphic was produced, which summarizes every possible game, even the stupid ones, ignoring symmetry. There are about a quarter million of them.
This illustration is found on page 39 of The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics by Clifford A. Pickover. Before discussing the book further, let's look closer at the illustration. Click on the images for a larger version. The closeup below shows all games that begin with X in the upper-left corner.

This allows us to see a little further into the matrix, but there is quite a lot of detail here. I scanned at full resolution for the following little clip, which shows another level or two, but now we've reached the limitations of the printed page.

Although there are about a quarter million possible games, this can be reduced a lot by symmetry. There are about 27,000 actual games possible, but only just over 1,400 if players at least have the wit to block whenever possible. See the Mathematical Recreations discussion for details.
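The quarter-million figure is easy to check by brute force. Here is a minimal sketch (mine, not from the book) that enumerates every play sequence, ignoring symmetry:

```python
def count_games(board=(" ",) * 9, player="X"):
    """Count every distinct tic-tac-toe play sequence (no symmetry reduction)."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    other = "O" if player == "X" else "X"
    total = 0
    for i in range(9):
        if board[i] != " ":
            continue
        b = board[:i] + (player,) + board[i + 1:]
        if any(b[x] == b[y] == b[z] == player for x, y, z in lines):
            total += 1                  # this move wins: game over
        elif " " not in b:
            total += 1                  # board full: a draw
        else:
            total += count_games(b, other)
    return total

print(count_games())  # 255168
```

The recursion visits roughly half a million board positions, which takes a moment in plain Python but confirms the count of 255,168 complete games.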

Back to the book. Dr. Pickover has traced mathematical skills back to the ability of ants to count: they find their way back to the nest by counting their steps. And we thought it took a mammalian brain to count! But we know little of animal abilities, so three essays suffice to dispose of the nonhuman animals.

I was a little taken aback by the fourth essay, on knots. Tying a knot to keep a bead from slipping hardly seems like a mathematical activity to me. I think the use of knots as a data-storage medium by the Incas some 5,000 years ago better fits the bill.

Though this is a fairly large book, fully half is full-page photographs or graphics; there are 250 short essays and 250 images. A lot of care went into selecting them. In a few cases, such as a pixieish image of Martin Gardner with his books, the mathematician in question is pictured, particularly for contributions that don't lend themselves to meaningful visualization. Or, in the case of Gardner, which of his hundreds of Scientific American columns do we pick an image from?

It would be nice to have a smooth brass loopy ring like this to dip into soap solution and produce this minimal surface. This one (page 193) was computer-generated, and you can be sure the computer took far longer to fine-tune the surface than a soap film would. This leads me to think of the pronouncements of some (such as Max Tegmark, discussed on page 516) that we live in a universe that is mathematics. The universe and the things in it just do what they do. Mathematically modeling what they do is frequently a memory-hogging, time-consuming enterprise.

Like this bubble film: produce any unknotted closed loop you like, and dip it in soap solution (add a little glycerine to strengthen the film). Presto! The surface that results will be a minimal surface, slightly modified by the Earth's gravitational attraction (so, to get a true minimal surface, do the experiment on the ISS, though "dipping" there has complications not found down here). Consider the "shortest path" problem. It is computationally difficult to find the exactly shortest, or quickest, path from here to, say, the Toronto Zoo. But if you tie a bunch of strings together with the exactly scaled knot-to-knot distances (or time durations) for all the likely roads between here and there, you can solve the conundrum by simply picking up the knot that represents where you are: the shortest route will be the chain of strings that hangs taut, without sagging. Then just read off the roads along the taut strings. By the way, Google Maps and your GPS use a near-optimal route finder that is a lot quicker than one that could find the exactly optimal route.
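In software, the string-and-knots trick is usually replaced by Dijkstra's algorithm, which finds exact shortest paths on a road network. A minimal sketch, with a made-up toy network (the place names and distances are mine, purely for illustration):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: the computational analogue of the string trick."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break                        # goal popped: its distance is final
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[goal]

# A toy road network: edge weights are scaled "string lengths".
roads = {
    "Home": [("A", 4), ("B", 2)],
    "A":    [("Zoo", 5)],
    "B":    [("A", 1), ("Zoo", 8)],
}
print(shortest_path(roads, "Home", "Zoo"))  # (['Home', 'B', 'A', 'Zoo'], 8)
```

Picking up the start knot corresponds to the heap always extending the currently-shortest taut strand first.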

This looks like another minimal surface, but in fact it represents an inside-out sphere (page 255). Ordinary spheres have a constant, positive curvature, by definition. This object has a constant negative curvature. The kicker is that it must extend to infinity in two directions, yet does so in a way that encloses a finite volume.

A lot of math is like that, dealing with infinite things that aren't infinite when looked at another way. This may be why many people shy away from it. As a working mathematician, I've had to do some problems that work out like this: Add up everything Outside some curve of interest, then subtract the Universe, to get the sum of what is Inside. The ability of certain mathematical techniques to either add or subtract the Universe is both fascinating and appalling!

Another thing that puts a lot of people off of math is that it takes something like five years of advanced education to understand why the expression e^(iπ) + 1 = 0 is in any way significant. Just consider this: π, pi, is the ratio of a circle's circumference to its diameter, while e, Euler's number, is the base of natural logarithms, which express things like our eye's adaptability: no matter how bright the illumination is, if it is cut in half, our response is the same. In a room dimly lit by a 10-candlepower light, reducing its brightness to 5 candlepower is quite noticeable. Go outside, where the illumination may be 10,000 candlepower. Shade half of it so there is now 5,000 candlepower, and the perceived reduction in light feels the same. This is a logarithmic response. So what do circles have to do with logarithms? Nobody really knows, but I do understand the math that makes that equation work.
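Both claims are easy to check numerically: Euler's identity, e^(iπ) + 1 = 0, holds to floating-point precision, and halving the light gives the same logarithmic step whether you start dim or bright. A quick sketch:

```python
import cmath
import math

# Euler's identity e^(i*pi) + 1 = 0, checked numerically.
z = cmath.exp(1j * math.pi) + 1
print(abs(z))  # ~1.2e-16: zero, up to floating-point rounding

# Logarithmic response: 10 -> 5 candlepower feels like 10000 -> 5000,
# because both are the same step in log space.
step_dim = math.log(10) - math.log(5)
step_bright = math.log(10000) - math.log(5000)
print(math.isclose(step_dim, step_bright))  # True: both equal log(2)
```

The residual of about 1.2×10⁻¹⁶ comes entirely from π being rounded to a 64-bit double.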

The book, as any good history, is arranged in time order. Though the history of mathematics goes back at least 5,000 years, most of the discoveries have come in more recent times. As in any discipline, as new tools are discovered, it becomes possible to discover even more. The halfway essay, #125 (page 266), discusses a discovery from 1875, a mere 135 years ago: the Reuleaux Triangle, a curved, three-pointed shape of constant width that can be rotated inside a square, reaching into each corner in turn and sweeping out almost the entire square (the corners it traces are slightly rounded). It is the basis of drill bits that can cut a nearly square hole. More recently, its ability to rotate in a potato-shaped cavity became the basis of the Wankel engine (which the author does not discuss; I've owned a Mazda car with such an engine, a lovely machine).
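Constant width is the property that lets the shape turn inside a square. It can be verified numerically: build the boundary from three 60° arcs centered at the vertices of an equilateral triangle, then measure the distance between parallel support lines in many directions. A sketch (mine, not from the book):

```python
import math

def reuleaux_points(samples_per_arc=400):
    """Boundary of a width-1 Reuleaux triangle: three 60-degree arcs of
    radius 1, each centered at a vertex of an equilateral triangle of side 1."""
    h = math.sqrt(3) / 2
    # (arc center, start angle in degrees, end angle in degrees)
    arcs = [((0.0, 0.0), 0, 60),
            ((1.0, 0.0), 120, 180),
            ((0.5, h), 240, 300)]
    pts = []
    for (cx, cy), t0, t1 in arcs:
        for i in range(samples_per_arc + 1):
            t = math.radians(t0 + (t1 - t0) * i / samples_per_arc)
            pts.append((cx + math.cos(t), cy + math.sin(t)))
    return pts

def width(pts, theta):
    """Distance between the two support lines perpendicular to direction theta."""
    ux, uy = math.cos(theta), math.sin(theta)
    proj = [x * ux + y * uy for x, y in pts]
    return max(proj) - min(proj)

pts = reuleaux_points()
widths = [width(pts, math.radians(d)) for d in range(0, 180, 5)]
print(min(widths), max(widths))  # both within a hair of 1.0
```

Every direction gives the same width, 1, which is exactly why the shape can spin between the parallel sides of a square.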

Just for fun, I did another half-jump, to the two essays that straddle 187.5. They are dated 1946 and 1947. So one quarter of the mathematical discoveries discussed date from after World War II and the development of electronic computing machinery. The computer is such a great tool for mathematicians, allowing us to see things we could only think about before; not everyone is equally good at mental visualization. The work of John von Neumann, discussed in the 1946 essay, and the Gray code of the 1947 essay (a binary encoding that makes communications less prone to error) are two of many breakthroughs that led to the computer-intensive world we have today. The lovely illustrations above are just two examples of things we can now see that before we had to imagine.

Not only is the computer useful for showing us things that are hard to draw, it excels at running simple, recursive processes that can produce quite complex patterns. Patterns like the ones on this cone snail occur in cellular automata, which have been studied since the 1940s, before there were computers to help out. If you know the algorithm to use, you can draw patterns like this on graph paper, just as Conway's Game of Life can be done by hand on graph paper (yes, I'm just obsessive enough that, before the PC was affordable, I used to play Life on graph paper).
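The graph-paper version of Life takes only a few lines to automate. This sketch represents the board as a set of live cells and applies Conway's birth/survival rules; the "blinker" shows the period-2 oscillation you get by hand:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Tally how many live neighbors each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation with exactly 3 neighbors,
    # or with 2 neighbors if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}          # horizontal bar of three cells
print(sorted(life_step(blinker)))           # [(1, 0), (1, 1), (1, 2)]: vertical bar
print(life_step(life_step(blinker)) == blinker)  # True: period-2 oscillator
```

The same set-of-cells idea works for any cellular automaton; only the neighbor rule changes.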

Today, the computer is used throughout mathematics, and there is a tendency to use it as a crutch. We need to remember that, to a mathematician, the "real" numbers used by a computer are actually rational numbers of a particular type. The computer can represent, in the "double precision" conventions in use now, either 2^64 (1.8×10^19) or 2^96 (7.9×10^28) values. That is "all" that can be expressed. That is a lot, true, but between any two of them there are infinitely many values that cannot be expressed. That means a calculation such as the tangent of ten million radians cannot be carried out with as much precision as we might like. You have to divide the ten million by a value of π that is accurate to either 15 or 24 decimal digits, and discard the integer part of the quotient. That integer is a 7-digit value, leaving only 8 or 17 digits after the decimal with which to do further calculations. Things like this lead to our continual need to improve analytical techniques that do not depend on digital methods. Most of the last quarter of the book presents problems that were solved primarily using digital methods.
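The digit loss can be made visible directly: reduce ten million radians modulo π once with the 15-or-so-digit π a 64-bit double carries, and once in exact rational arithmetic with π written out to 35 decimal places (a standard constant). A sketch of the comparison:

```python
import math
from fractions import Fraction

# pi to 35 decimal places, as an exact rational number.
PI_HI = Fraction(314159265358979323846264338327950288, 10**35)

x = 10**7  # ten million radians

# Naive reduction with double-precision pi: the 7-digit quotient
# eats into the ~16 digits that math.pi carries.
q = math.floor(x / math.pi)
r_naive = x - q * math.pi

# The same reduction carried out exactly with the 35-digit pi.
q_exact = math.floor(Fraction(x) / PI_HI)
r_exact = float(Fraction(x) - q_exact * PI_HI)

print(q)                       # 3183098: the 7-digit integer part
print(abs(r_naive - r_exact))  # the lost leading digits show up as this error
```

The naive remainder is off by far more than the spacing between adjacent doubles near the true remainder, which is exactly the precision loss described above.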

So it is good to see John Horton Conway, as late as 1986, coming up with the "audioactive sequence", which makes sense only when spoken aloud. It begins 1, 11, 21, 1211, 111221; it is pronounced "one", "one one", "two ones", "one two, one one", and so forth. It has a sister sequence in which you don't count the digits in sequence, but simply tally them up: 1, 11, 21, 1211, 3112, 211213, 312213; and a 4 has to show up before the items get longer than six digits. While one can do a lot of things with this sequence using a computer, it is quite a bit of fun to fool with "by ear". You get other sequences by starting with something other than 1. Try it!
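Even though the sequence's meaning is aural, generating it mechanically is a one-liner: group runs of equal digits and speak each run as "count, digit". A sketch:

```python
from itertools import groupby

def look_and_say(s):
    """Speak a term aloud: '1211' -> 'one 1, one 2, two 1s' -> '111221'."""
    return "".join(str(len(list(run))) + digit for digit, run in groupby(s))

term = "1"
for _ in range(5):
    print(term)
    term = look_and_say(term)
# prints 1, 11, 21, 1211, 111221
```

Seeding with any other string (try "22", which is a fixed point) gives the other sequences mentioned above.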

There's lots of room for mathematics to grow. A book like this brings a variety of the familiar and the obscure to a more level playing field, and just may convince a few folks that they aren't so bad at math after all.