kw: book reviews, nonfiction, complexity theory
Claudius Ptolemy was a genius. Observations of planetary motions were problematic. The common-sense view that Earth is fixed at the center of the Universe, coupled with the prejudice, left over from the Pythagoreans, that all motions in space depend on perfect circles, could not account for the occasional reversal of each planet's path. Building on earlier Greek astronomers, Ptolemy systematized the scheme of a major circle, the "deferent", which drove a smaller circle that the planet actually followed. This latter circle he called an "epicycle".
It seems strange that hardly anyone noticed that the epicycles seemed to be coupled somehow; that if two planets (e.g. Mars and Saturn) were in a similar quadrant of the sky, their back-loops occurred over the same period of time. A few who suggested that perhaps Earth was in motion about a center, so far unknown, or perhaps on an epicycle of its own, were ignored or silenced. The prevailing orthodoxy was not to be denied.
Soon this view ran into trouble. Better observations did not support steady motion on a deferent-plus-epicycle, so epicycles were added to the epicycles. By the time Tycho Brahe began to publish his great catalogs of stellar positions and planetary motions, it might take a stack of six or seven epicycles to account for the motion of Mars with an accuracy equal to the observations.
More than half a century before Brahe, Nicolaus Copernicus had proposed that Earth indeed moves, about the Sun, though he clung to perfect circular orbits for Earth and the planets. Thus, his scheme also required epicycles, though in lesser profusion. Based on Brahe's observations, Johannes Kepler determined that the basic form of planetary orbits is the ellipse, and the need for epicycles was almost eliminated.
Almost: If an ideal Earth orbits an ideal Sun in an otherwise empty Universe, the orbit is indeed an ellipse. But Earth's actual orbit is very slightly not an ellipse. Why? Perturbations. This is the modern word for "epicycle", you could say. The other planets, principally Venus, Mars and Jupiter, influence Earth's orbit, so orbital calculations are done by first using the current ellipse that Earth most closely follows, then adding these influences (these perturbations) to predict where Earth (or any body being calculated) will actually go.
However, the ellipse model is accurate enough for many purposes: a pure ellipse predicts Earth's position several years into the future to within less than Earth's own diameter.
The title premise of Simplexity: Why Simple Things Become Complex (and How Complex Things Can Be Made Simple), by Jeffrey Kluger, is not actually answered clearly within the book's pages. So I'll make short work of it: A simple thing that works "pretty well" can be made to work better by adding refinements. Refining beyond a certain point causes problems of its own. When a new principle is discovered, a new device can be produced that is both simpler and operates better than the prior device. Complications are eliminated…until the next time refinements are called for. Kluger demonstrates this with several good examples, so I won't fault him too much for the lack of a more succinct portrayal.
The first one, which hits home with me, is the Quark model of particle physics. In 1969 I quit studying Nuclear Physics and began to study Geology. I was tired of the "particle zoo" that numbered more than 100 by then, and there were fewer elements than that! I decided to follow a subject where I could go out and hit rocks. I've always liked hammers! Later that year, Murray Gell-Mann won the Nobel Prize for Quarks, and about the time I got my degree in Geology, Physics departments everywhere were beginning to teach this new physics: six quarks plus six leptons interact via four (or five) bosons to make all other particles, only two of which are stable. This came too late for me. Anyway, I'd still rather mess around with rocks than with supercolliders.
The author introduces us to Gell-Mann and his quarks in a chapter on the stock market: "Why is the stock market so hard to predict?" The short answer is the sociology of groups, whose aversion to or tolerance of risk is not constant. We are short-term thinkers trying to live in a long-term civilization, and it'll be a long time before we evolve away from mob psychology. As a result, the stock market follows Cauchy statistics with amazing exactness. The Cauchy distribution has no defined mean or variance, which makes it wholly unpredictable: the average of past measurements affords you no (none, zero, zip, nada) information about the future trend.
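That property is easy to demonstrate numerically. Here is a minimal Python sketch (my own illustration, not from the book): it draws standard Cauchy samples by inverse-CDF sampling and shows that the sample median settles near the true center while the running mean never does.

```python
import math
import random

def cauchy_draw(rng):
    # Inverse-CDF sampling: the tangent of a uniform random angle
    # in (-pi/2, pi/2) is a standard Cauchy variate.
    return math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(42)
samples = [cauchy_draw(rng) for _ in range(100_000)]

# The median is well defined for a Cauchy and homes in on the center (0)...
median = sorted(samples)[len(samples) // 2]

# ...but the mean is undefined: one extreme draw can yank the running
# average anywhere, no matter how many samples precede it.
running_means, total = [], 0.0
for i, x in enumerate(samples, 1):
    total += x
    if i % 10_000 == 0:
        running_means.append(total / i)

print(f"median after 100k draws: {median:.4f}")
print("running means:", [round(m, 2) for m in running_means])
```

Run it a few times with different seeds: the medians agree closely; the running means wander and never settle.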
He has a chapter on the lifetimes of mice, men and whales. Every chapter title is accompanied by a "confused by" sideline, and this one is "Confused by Scale". To compare accurately, you have to either compare captive mice and elephants to "civilized" humans, or compare wild mice and elephants to the most "primitive" tribal societies known. The simplifying parameter is heart rate. Wild mammals experience about a billion heartbeats, then die. "Civilized" mammals live about twice as long. Very elderly men and women are found to have lower-than-average heart rates.
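The billion-heartbeat rule invites a quick back-of-envelope check. A Python sketch, using resting heart rates I've assumed for illustration (not figures from the book):

```python
# A fixed lifetime budget of heartbeats, per the rule cited above.
BEATS_BUDGET = 1_000_000_000

def years_from_heart_rate(beats_per_minute):
    # Convert a resting heart rate into the years needed to spend the budget.
    minutes_per_year = 60 * 24 * 365.25
    return BEATS_BUDGET / (beats_per_minute * minutes_per_year)

# Assumed typical resting rates: mouse ~500 bpm, human ~60 bpm, elephant ~30 bpm.
for animal, bpm in [("mouse", 500), ("human", 60), ("elephant", 30)]:
    print(f"{animal}: {years_from_heart_rate(bpm):.1f} years")
```

The wild-mammal figures land roughly where the rule predicts; "civilized" mammals, living about twice as long, overspend the budget.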
This is the point the author has been leading to: the simplifying observation, be it the heart rate, the quark, or the ellipse. Then he begins to apply similar thoughts to cities, but I can't see that he completed the task. He applies Zipf's Law of word frequencies, a version of a power-law distribution, to the sizes of cities and villages. Based on my own analysis (and the subject for a different post some time in the future), Zipf's Law does not hold for the entire distribution; a complete analysis leads me to conclude that word frequencies and city/village sizes follow lognormal statistics. The beginning of a lognormal distribution looks a lot like a power-law distribution, which has misled many. But the point I was looking for, the lifetime of a city, was not addressed.
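The resemblance that has misled so many can be shown in a few lines of Python. On a log-log plot of the survival function, a true power law has a constant slope; a lognormal's slope drifts steadily, but slowly enough that its upper end passes for a straight line. (The sigma = 1.5 below is an arbitrary choice for illustration.)

```python
import math

def lognormal_ccdf(x, mu=0.0, sigma=1.5):
    # P(X > x) for a lognormal variable, via the complementary error function.
    return 0.5 * math.erfc((math.log(x) - mu) / (sigma * math.sqrt(2.0)))

# Local log-log slope of the survival curve over successive decades.
# A power law would give the same slope in every decade; the lognormal's
# slope keeps steepening, just gradually.
decades = [10.0 ** k for k in range(5)]
slopes = [
    (math.log10(lognormal_ccdf(b)) - math.log10(lognormal_ccdf(a)))
    / (math.log10(b) - math.log10(a))
    for a, b in zip(decades, decades[1:])
]
print([round(s, 2) for s in slopes])
```

Over any one decade the curve looks like a power law; only across several decades does the drift give the lognormal away.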
All in all, through ten chapters, the author probes various fields of knowledge and experience, and how each trips us up. Nature, for example, seems to produce very complex structures, but she does so by means of simple operations repeated again and again over a range of scales. Thus, a leafless tree is seen to resemble a small branch taken from that tree, except that the branch doesn't go into the same detail, stopping at the diameter of a twig.
This fractal property of many natural objects was first packaged and presented by Benoit Mandelbrot, who also discovered certain recursive mathematical procedures that can produce literally infinitely detailed structures, provided one has the leisure of letting one's computer crank away for infinite time! Self-similarity leads to many beautiful structures. It also accounts for the efficient way your lungs pack an air-filled structure with huge surface area into about a cubic foot, and, subject to a different space-filling principle, how your vascular system can bring blood to every cell in your body while only occupying three percent of its volume.
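That packing trick can be sketched with a toy model (my own construction, not from the book): a branching of cylindrical tubes in which every generation splits in two, each child scaled down by a fixed factor. Choose the scale factor between 2^(-1/2) ≈ 0.707 and 2^(-1/3) ≈ 0.794 and total surface area grows without bound while total volume stays finite.

```python
import math

def branching_totals(generations, r0=0.01, l0=0.12, scale=0.78):
    # Sum surface area and volume over all generations of a self-similar
    # branching of cylinders: each generation doubles the tube count and
    # shrinks radius and length by `scale`.
    area = volume = 0.0
    count, r, l = 1, r0, l0
    for _ in range(generations):
        area += count * 2.0 * math.pi * r * l    # lateral surface, this generation
        volume += count * math.pi * r * r * l    # volume, this generation
        count *= 2
        r *= scale
        l *= scale
    return area, volume

# Per generation, area multiplies by 2 * scale**2 ≈ 1.22 (diverges),
# while volume multiplies by 2 * scale**3 ≈ 0.95 (converges).
print(branching_totals(20))
```

Deep branching keeps piling on surface area while the total volume approaches a fixed ceiling, which is the geometric heart of how lungs and vasculature do so much in so little space.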
Finally, the eleventh chapter speeds the irresistible force (complexity theory) towards the immovable object (aesthetics). Why does one painting or photo move us deeply, while another, similar one does not, or even repels us? Why do we weep at one symphony and not another? A colleague once loaned me a music tape to listen to. When I returned it, I said I had enjoyed it and found it interesting. She said, "Interesting?! You are supposed to feel ecstasy!" All I could say was that I get my highs in a different genre. Can complexity theorists puzzle this one out? It is probably better that they don't try. Some things are better enjoyed than explained, like a good meal. A young cooking student was told by his chef/instructor, "I can make anything taste better." He asked, "How would you make salt taste better?" The reply: "By sprinkling it over a tender, medium-well steak!"