Sunday, September 27, 2015

Why books can be read in comfort

kw: book reviews, nonfiction, proofreading, copy editing, memoirs

As much as I occasionally lampoon an egregious typographical error, or a book that seems filled with them, I truly appreciate the careful copy editing that goes into the production of nearly everything we see in print, and books in particular. Copy editing is more than proofreading, more than the ferreting-out of errors by the author, the typesetter, or another editor. It embodies the skills needed to ensure that errors that detract are omitted or corrected, but that usages the author intended, for any reasons whatever, are faithfully retained, even if some might think them erroneous.

By this I mean much more than the unenviable job of Mark Twain's copy editor, making sure the dialect-ridden text of Huckleberry Finn was as Twain intended it. Not only novels employ "variations" on English usage for effect. Essayists, for example, whose texts require clarity, might employ word order or punctuation in ways that do not exactly fit a journal's preferred rules of style. I've had a couple of battles with copy editors, particularly those in England: one peeve I have is that they want to move every adverb to a standard location in a verb phrase. I might write, "…they were desperately seeking to find…" and have the proof come back, "…they were seeking desperately to find…". Such usage is a hangover from Norman French. It has largely been abandoned in the American language, but is clung to by many copy editors of journals published in England. Then there is the serial comma. Do you prefer to write, "In grammar school I learned reading, writing, and arithmetic", or "…I learned reading, writing and arithmetic"? The former example uses the serial comma, and the latter example leaves it out. There are strong proponents of both usages, just as there are several opinions about the way I placed the question mark in the prior sentence.

Mary Norris has been a copy editor—and worn a few other hats—at The New Yorker since 1978. Her book Between You & Me: Confessions of a Comma Queen drags the somewhat secretive vocation of the copy editor into the daylight for us all to enjoy. She broke into the field when she pointed out an error in something James Thurber had written on his office wall. He was delighted.

Writing and punctuation styles change over time. I learned to use many commas in my sentences, having been taught to "Write for someone reading aloud; show where to breathe." A copy editor set me straight about more modern usage in about 1974, and I've gradually learned to use about one-third as many commas as before. Peruse a few pages of an issue of The New Yorker from about 15 years ago, and you'll see more commas than you might in your daily newspaper. Ms Norris writes of the colorful persons found among the warrens of The New Yorker's offices, none more distinctive than Lu Burke, whose "comma shaker" was famous. It reminded her colleagues to make stylishness subservient to clarity, and not to dogmatically expunge every comma for some doctrinal reason.

Punctuation marks and the foibles of their usage seem to fill about half the book. Chapters treat of hyphens, the other three kinds of dashes (en, em, and long: – — ―), apostrophes, and semicolons as compared to colons and other designators of an author's thought changing direction or focus. In the chapter about dashes, she tells us of Emily Dickinson, who used dashes for nearly everything. A careful student of her handwritten papers could probably find six or seven lengths of dash, and it is quite likely that Dickinson had something quite definite in mind when producing any of them. And there is the question of using spaces around a dash, or not, or whether it is proper to follow a dash with a comma or other bit of punctuation (nearly never in Norris's view). –I just went back and deleted a comma after the word "never"; I still have certain instincts from the 1960's.

And what of the other half of the book? The title illustrates a pet peeve of hers, that people who might usually say, "between you and me," which is proper, tend to say, "between you and I," which is not, if they think they are speaking with someone who has a better education. Somehow, the proper usage takes on a common tinge in their mind, and is therefore suspect, as though "common usage" might be frowned upon by a person of excessive education. Just in case you were wondering, it isn't. Some common usages are certainly incorrect, but most are quite correct. And language changes over time. Today's common usages that are thought to be errors will become standard over a generation or two.

If you can find an edition of Shakespeare that retains his original orthography, you'll find it hard to read. Go back another 400-500 years, and "Old English" is really quite incomprehensible:
Fæder ure þu þe eart on heofonum;
Si þin nama gehalgod.
The letter þ is the Thorn, pronounced as an unvoiced "th"; its companion, the Eth (ð), is the voiced version. The Thorn, which later scribes wrote in a shape resembling a "y", is the source of the "y" used in faux-colonial signs such as "Ye Olde Curiosity Shoppe", where "Ye" is to be pronounced "the", with the "th" voiced. Have you figured out the two lines above? Here they are circa 1729:
Our Father, which art in heaven;
Hallowed be Thy name.
That ought to be more familiar. The punctuation of the Old English version is according to the 1729 editing of the King James text of 1611. If the Anglo-Saxons of the 12th Century punctuated the prayer at all, it is likely they used a dash or a comma. If you are familiar with the King James Bible in print today, it is the fourth edition, revised in 1729, not the 1611 version, which is almost as unreadable as Anglo-Saxon to most modern readers. Even the orthography of 1729 is looked upon by today's younger set as a nearly foreign language.

Proofreading and copy editing are a conservative enterprise. Readers are most comfortable with the kind of writing they grew up with, if not in content, at least in form. So most authors write in a style not far removed from that of their formative years, and are quite OK with a copy editor who ensures that the same style is adhered to. But some authors experiment with new forms and have new ideas and want them expressed just so. Emily Dickinson without her dashes would seem enervated; they give a breathless rush to her verse. Ms Norris uses an example handwritten by Jackie Kennedy, complete with dashes among its run-on sentences. You simply get a more intimate feel from it as compared to something shoehorned into the straitjacket of "correct usage".

So words, though their treatment takes up but half the book, are the meat, the nourishment of the mind, and the punctuation marks the bones and joints. There is even a chapter on "curse words", particularly the "f-bomb", and on a competition among certain writers at The New Yorker to see how many they could fit on a page (and say something halfway useful in the process). I was reminded of a Mythbusters episode from a few years back, in which they tested the emotional impact on the speaker of cursing loudly to alleviate pain, compared to shouting more innocuous strings of words such as "kittens, raspberries, elephants!" and so forth. Cussing worked better. There really is some utility to it!

Without saying it directly, Ms Norris confesses to a certain level of OCD. She devotes half a chapter to her love of soft #1 pencils, and her inability to achieve comfort with anything harder, such as the ubiquitous #2. She often can't enjoy something when her eye/mind keep tripping over errors. Other times she'll find it entertaining to see a large printed sign that reads "Hunters's Rest", and wonder whether the sign maker was working with a family named "Hunters", or simply covering all the bases of possessive usage.

As you might expect, the writing style is excellent, easing a reader's enjoyment of her insight, wit, and humor. It is quite enjoyable to peek behind the scenes to see that, at least at The New Yorker, a substantial series of editors and readers awaits an author's prose, to ensure that what the magazine prints is, firstly, exactly what the author intended, and secondly, as error-free as is humanly possible.

Wednesday, September 23, 2015

Speculation Unbound!

kw: book reviews, nonfiction, scientific miscellany

What would happen to the Earth if the Sun suddenly switched off? Randall Munroe answers that question beginning on page 248 of What If?: Serious Scientific Answers to Absurd Hypothetical Questions. Munroe created the webcomic xkcd, which includes a What If? section in which he answers questions of all kinds sent in by readers of the web site or, more recently, the book.

I can't believe I didn't stumble across this sooner. It is a step beyond the "Fermi Questions", so beloved of the young victims of Science Olympiad. Answering the really absurd questions requires a skill akin to that of Enrico Fermi, who was famous for taking on a query with no more than a pencil and the back of an envelope. He is also remembered for his method of measuring the yield of the original Trinity atomic bomb. While others did whatever they were doing in their trench a mile or so from Ground Zero, he was seen busily tearing a sheet of notebook paper to small bits. A second or two after the blast was triggered, just before the shock wave hit, he tossed the handful of confetti as high as he could. After the shock hit, and it was deemed safe to exit the trench, he walked around, mapping the outline of the scattering of paper bits, did a calculation or two, and announced how many kilotons the yield had been.

So what would happen to us if the Sun switched off? Randall's take on it is mainly positive. He catalogs nine consequences, including "no need to force your children to wear sunscreen" and "better astronomy" with a quieter (and soon, nonexistent) atmosphere. Of course, his tenth consequence? "We would all freeze and die."

Interestingly, there are two ways to look at the Sun switching off. He chose to work with an immediate cessation of all energy flow from the Sun. One could also consider a sudden cessation of the fusion reactions powering the Sun. That leads to a more drawn-out scenario, because it would take a long time, hundreds of thousands of years, for the outer layers of the Sun to dim appreciably. It would take several tens of millions of years for the Sun to cool to invisibility. Perhaps that would give us the motivation to really ramp up the space program!

There was an interesting short story I read a couple of decades ago, in which a young man (or so he seemed) walked into a reporter's office and informed him that the reason so few neutrinos were coming from the Sun was that Jehovah had left the place in a huff a couple of thousand years ago, and being a thrifty sort, had turned off the fusion furnace. He said he was the newly-assigned deity and asked the reporter to run a provocative, cagey story that "perhaps" scientists would find a more "normal" level of neutrino activity from the Sun, starting in a few days, and to give no reason other than "informed by someone in the know". He intended to re-start the Sun. Sure enough, a week later the neutrino level rose to what the scientists had calculated it "ought to be". Of course, this was during the period when "neutrino oscillation" was being theorized; it is now the accepted reason that solar neutrino activity is observed to be about 30% of what was originally expected.

So, what kinds of questions get asked? Things like, "How many laser pointers do you have to point at the Moon so that we could see it?" or, "How much force power did Yoda produce (when lifting the X-wing from the swamp)?" There are also several short sections in which questions are listed but not explicitly answered; they are in the "Weird (and Worrying)" category: "What is the total nutritional value of a human body?" or, "Is there sound in space (There isn't, right?)?"

Actually, that last question has an answer (so does the first: the same as a pig of the same weight). Yes, there is sound in space. Sound requires a medium in which to travel, and although the gas density in "outer space" is very low, it is never zero, anywhere. But the frequency of sound that is transmitted with little loss needs to be low enough that the wavelength is longer than the mean free path of the gas molecules as they bounce off one another. So the sounds in space that travel any useful distance have very low frequencies. For example, in "interplanetary space", the average gas molecule travels a few meters before encountering another. The speed of sound is different at low pressure, but not by a great amount, so we can still use 300 m/s for rough calculations, and we find that a wavelength of 10 m corresponds to about 30 Hz. The trouble is, the sonic volume would be low, because so little gas is carrying the sound, but a sensitive microphone could detect low hum-type sounds "out there". In interstellar space, the pressure is lower, perhaps a thousand times lower, meaning that the mean free path is a thousand times as long, and frequencies higher than 0.03 Hz would not travel far. So the sounds in interstellar space would be at very low frequencies indeed. But they are there.
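For anyone who wants to fiddle with these numbers, here is a minimal Python sketch of the same back-of-envelope estimate. The 300 m/s sound speed and the mean-free-path figures are the rough assumptions used above, not measured values.

    # Sound in a very thin gas propagates usefully only when the wavelength
    # exceeds the mean free path of the gas molecules.
    def cutoff_frequency(sound_speed_m_s, mean_free_path_m):
        # Highest frequency whose wavelength still exceeds the mean free path
        return sound_speed_m_s / mean_free_path_m

    c = 300.0                         # m/s, the rough sound speed used above
    interplanetary_mfp = 10.0         # m, "a few meters", rounded up
    interstellar_mfp = 10.0 * 1000    # m, roughly a thousand times longer

    print(cutoff_frequency(c, interplanetary_mfp))   # 30.0 Hz
    print(cutoff_frequency(c, interstellar_mfp))     # 0.03 Hz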

Rather than go on about things like using a Gatling Gun to propel a car (watch out, anyone behind!), I suggest you read the book, and check out the web site. Randall Munroe is an entertaining writer and, with a background in robotics, a deft hand at off-the-cuff mathematics (and a stable of helpful scientists' phone numbers in his Rolodex, no doubt). You'll love it.

Monday, September 21, 2015

Producing a depauperate Earth

kw: book reviews, nonfiction, extinction

I collected butterflies and other insects as a child. For a couple of years, when we lived in Utah, I mainly collected locusts, the ones with colorful wings. There were many different wing color patterns. Now, fifty years later, I find that both butterflies and colorful locusts (when I visit Utah) are quite a bit scarcer. Where I live now, in the suburbs southwest of Philadelphia, I have seen more butterflies than I did for a long, long time. But nothing matches those young years in Utah and Ohio. I also recall, during high school years in Sandusky, Ohio, recording morning bird song. I wish I still had the tapes! The "morning chorus" that began half an hour before sunrise in the Spring was a rich symphony. I could recognize the calls of 6 or 8 kinds of birds, and heard several calls I didn't know, every time. There is a pretty good morning chorus here, these days, but again, it pales by comparison with what I recall. Two to three kinds of bird calls are the usual fare.

This is not just me, remembering some "golden age" that never existed. Things are dying out, lots of them. I have been hearing about a "sixth extinction" for some years now. This is the title of a sobering, well-researched book by Elizabeth Kolbert, The Sixth Extinction: An Unnatural History.

What is the "normal" rate of species extinction? To jump in with the conclusion, it is probably close to one species yearly. It is closely allied to the normal rate of species turnover. That is, "extinction" can mean one of two things. Firstly, one species changes to another under the pressure of environmental change. When the two species are things like shellfish that leave good fossils, what a geologist would notice is that in rocks of a certain age, only shells of "type 1" are found, and in the next layer, only those of "type 2". An ecologist might notice that certain "recent fossils" are not found among living shellfish. I saw an example of this in Bear Lake, Idaho 60 years ago, snail shells in abundance, but a ranger told us they no longer lived in the lake; there was a different species now.

In geologic terms, the time span across a couple of millimeters of sedimentary rock might be a million years, so the speciation event could be quite gradual as seen from a human perspective. Observations of animals under selection pressure indicate that one may be replaced by another in much less than a million years: 50 to 100 years is sometimes sufficient. Many, many animal species live out their lives within one year, so this represents 50-100 generations. Longer-lived creatures are a different matter. Horses, for example, can reproduce as early as two years, but have a fertile lifetime of about ten years, sometimes more. So a "horse generation" is probably about 6-8 years. A human generation is commonly thought of as 25 years, though in very early times it was probably closer to 20. Anyway, species transformation (I dislike the popular conception of "mutation") can occur in hundreds or thousands of years, to perhaps tens of thousands of years. This is synchronous extinction.

The second kind of extinction is that a species dies out when the environment changes too rapidly for it to adapt, and it is no longer suited to its surroundings. It may or may not be replaced, in ecological terms, by an unrelated (or more distantly related) species, which may have evolved about that time, or maybe not. This is asynchronous extinction. Depending on the kind of animal, a species that makes fossils is seen to last between one and ten million years, though some that we call "living fossils" are found to have lasted for tens or hundreds of millions of years. Though I wonder if a coelacanth living today could actually be bred with one somehow brought to the present from 300 million years ago; perhaps there have been a hundred synchronous extinctions along the line, as the animal changed in profound ways that did not materially affect what its fossil form would be.

"Mass Extinction" refers to the sudden disappearance of many species over a shorter period of time. A mass extinction is thought to happen because of a great and widespread change in environmental conditions. These are, of necessity, asynchronous extinctions. One thing that can utterly transform the environment worldwide, at least for a time, is the fall of an asteroid a few miles wide. An asteroid impact eliminated the dinosaurs (those that hadn't become birds already), in what is called the end-Cretaceous extinction event. There have been five major mass extinction events in the last 500 million years, and a few dozen lesser ones. Each of the "Big 5" drove at least half of all species out of existence, pretty much overnight. They mark the boundaries between geologic ages. The lesser ones were of less significance only in comparative terms, and also mark the boundaries of geologic ages or significant geologic periods.

The major divisions of geologic time are called:
Paleozoic, including Cambrian, Ordovician, Silurian, Devonian, Carboniferous (Mississippian + Pennsylvanian in America), and Permian
Mesozoic, including Triassic, Jurassic, and Cretaceous
Tertiary (now divided into Paleogene and Neogene)
Quaternary
The Big 5 ended the Ordovician, Devonian, Permian, Triassic and Cretaceous. Other named ages and periods also ended with lesser mass extinctions.

The trouble with geologically sudden events is that, on a human scale, they may not appear sudden at all. While the dinosaur-killing asteroid changed all of Earth's environments in at most a few days, the other mass extinctions seem to have taken more time, in the range of years to centuries, and perhaps tens of millennia. On a scale that considers a thousand-year transformation as "sudden", something that takes only a century is lightning-fast.

That is what we see happening. Synchronous species turnover, and most cases of asynchronous extinction, make up the background rate. Roughly speaking, if species last on average a couple million years, and there are about a million species, then some species or other will go extinct every year or every second year. That's a ballpark estimate of background extinction: one per year, or half that. If the greatest of the Big 5, the Permian catastrophe, took 1,000 years to run its course, and nearly a million species were wiped out, that is a rate 1,000 to 2,000 times greater than the background.
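The same ballpark arithmetic, as a tiny Python sketch using the round numbers assumed above:

    # Background extinction rate: species count divided by average species lifespan.
    n_species = 1_000_000              # rough count of species that leave fossils
    avg_species_lifespan = 2_000_000   # years a typical species persists

    background_rate = n_species / avg_species_lifespan   # extinctions per year
    print(background_rate)             # 0.5 -- one extinction every couple of years

    # End-Permian catastrophe: call it a million species lost over 1,000 years.
    permian_rate = 1_000_000 / 1_000
    print(permian_rate / background_rate)   # 2000.0 -- roughly 2,000x the background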

The chapters of The Sixth Extinction each focus on one species, as an example of a group of related species that is greatly reduced or already extinct. Most of these declines are known or strongly suspected to be due to human influence. The first example is a Panamanian tree toad, a "poison dart frog", that is probably already extinct. It represents amphibians in general, which are vanishing at a stunning rate. Of 6,200 species of amphibian (frogs, toads, newts, salamanders, and a couple of similar odd critters), about 1,800, or nearly 30%, are declining rapidly in number, and at least 440, or 7%, are likely to become extinct within very few years. Just among amphibians, the extinction rate is about 100 times the background rate for all species!

One example cannot possibly be due to human influence (unless you are a strict, young-Earth creationist), the Ammonites. These spiral-shaped critters actually survived the biggest mass extinction, the one at the end of the Permian, 251 million years ago, but were wiped out later on by the end-Cretaceous event, the one that famously ended the "age of reptiles", but let some few mammals and birds (small, feathered dinosaurs) sneak through and repopulate Earth. The chapter focuses on the consequences of an Asteroid Winter, and compares it with other possible causes of mass extinctions. It sets the stage for discussing the massive environmental changes we humans are bringing about. If we are indeed the major actor in the environmental shift called Global Warming, and I think we probably are, that is but one large-scale change in worldwide habitats that we have produced, and the one most likely to kill us along with so many other species.

One chapter dwells on rain forests ("jungles"), and the species-area relationship as determined by counting the number of species found in variously sized plots of forest. Within a large, continuous forest, species count varies with plot area, an effect that is partly statistical. But in a dissected forest, with forested plots of various sizes surrounded by barren land or farms, there is a similar relationship, though it is steeper. The species of focus for the chapter, a small tree in the genus Alzatea, is not found at all in an isolated plot if its area is below a specific number of acres. When a large forest is broken up into isolated plots, at first the S/A relationship follows that of the original forest. But over time, species are lost, most rapidly from the smallest plots, until a steeper S/A relationship develops. Sometimes, keeping plots from total isolation by having forested "highways" between them will preserve some species, but this is not true for all; it is as though some species die out if their members cannot get far enough from the forest boundary.
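The relationship the researchers are fitting is usually written S = cA^z, where A is the plot area and the exponent z controls how steeply species counts fall as plots shrink. Here is a minimal Python sketch; the exponents and the 100-species reference tract are illustrative guesses of mine, not figures from the book.

    # Species-area relationship: S = c * A**z.  A steeper exponent z means
    # small plots hold disproportionately fewer species.
    def species_count(area_ha, c, z):
        return c * area_ha ** z

    A_ref, S_ref = 10_000.0, 100.0   # pretend a 10,000-hectare tract holds 100 species
    z_contiguous = 0.15              # illustrative exponent, continuous forest
    z_fragmented = 0.30              # illustrative, steeper exponent after isolation

    c_contig = S_ref / A_ref ** z_contiguous
    c_frag = S_ref / A_ref ** z_fragmented

    for area in (1, 10, 100, 1_000, 10_000):
        print(area, "ha:",
              round(species_count(area, c_contig, z_contiguous), 1), "species vs",
              round(species_count(area, c_frag, z_fragmented), 1), "after isolation")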

Twelve chapters, twelve species plus a thirteenth, about Homo sapiens, about us. We are very likely to be the ultimate victims of the great mass extinction that we are carrying out. It is not known just how many species go extinct every year. I once read of an experiment with "tree fogging", in which researchers used insecticide fog throughout the canopy of an entire tree, and collected all the insects, particularly beetles, that fell onto sheets spread under the tree. Dozens of new species were found and described. Excitedly, they fogged a second tree. Many more new species were found. But they were sobered to find that most of the new species from tree #1 were not found at all on tree #2, even though they were within a hundred meters of each other. They concluded that many of those, perhaps 100 species, were endemic not just to that forest, but to that specific tree, and found nowhere else. They had made 100 species extinct in an afternoon! They canceled future experiments of that type.

It is hard to know everything that exists without going out and finding it. But when the finding destroys what you are trying to find, what good is that? I don't know how to measure how fast species are going extinct, but I think it very, very likely that this human-induced mass extinction is proceeding at a rate that exceeds that of the Permian event, the biggest of the Big 5, by a large margin. I do believe this needs to be more widely known.

Wednesday, September 16, 2015

Faster than the wind, and perhaps he saved your life

kw: book reviews, nonfiction, biographies, scientists, safety, rocket sled experiments

There is a name you need to know: John Paul Stapp. If you have been in a car accident, it is likely that you owe your life and health to him. That is, if you were wearing a seat belt.

Step back about 70 years. World War II had just ended, and a young physician was wondering why so many military pilots were dying, when they didn't have to. During that war, getting shot down was a death sentence in one of two ways: you died when the plane crashed, or you died trying to exit the plane. After the war, ejection seats were found to be, far too frequently, tickets to oblivion. Their design was based on, at best, random guesses about the amount of stress the human body could survive, and the forces the aircraft frame could handle.

Dr. Stapp set out to gather accurate and usable data. What he did and how he did it are detailed in the first half of Sonic Wind: The Story of John Paul Stapp and How a Renegade Doctor Became the Fastest Man on Earth, by Craig Ryan. The second half shows what he, and the country, did as a result.

Before the 1940s, a smattering of centrifuge experiments had established that, with training and with minimal support from a flight suit, a fighter pilot could avoid blacking out at accelerations of about 6 G's. The G is a one-gravity acceleration force. If you weigh 150 lbs (68 kg), that is the force a mattress must apply to hold you up. If you and the mattress are put in a centrifuge and spun so as to apply a 6 G acceleration, the centripetal force the mattress (and the frame holding it) must now apply to hold you is 900 lbs (408 kg). When your body weight is spread out by a mattress, if the area of your body against the mattress is about 5.4 sq ft (0.5 m²), you'll feel a pressure of about 28 lb/ft² or 136 kg/m². That comes to about 0.19 psi. Now, multiply that by six, and you'd feel almost 1.2 psi. If your normal blood pressure is 120/75 (what doctors currently recommend, but maybe yours is higher), that 120 mm translates into 2.3 psi, and the 75 mm into 1.5 psi. So you can see that sustained acceleration of 6 G's tends to draw the blood in your body towards the mattress. If you are sitting rather than lying down, it doesn't take long for an acceleration of 6 G's to pull the blood from your brain, and you black out.
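For anyone who wants to check that arithmetic, here is a small Python sketch of the unit conversions, using the same rough body weight, contact area, and blood pressures as above.

    # Pressure felt while lying on a mattress, at 1 G and at 6 G,
    # compared with typical blood pressures.
    weight_lb = 150.0
    contact_area_ft2 = 5.4
    PSI_PER_LB_PER_FT2 = 1.0 / 144.0    # 144 square inches in a square foot
    PSI_PER_MMHG = 14.696 / 760.0       # 1 atmosphere = 760 mmHg = 14.696 psi

    lying_pressure_1g = weight_lb / contact_area_ft2 * PSI_PER_LB_PER_FT2
    print(round(lying_pressure_1g, 2))       # ~0.19 psi at 1 G
    print(round(6 * lying_pressure_1g, 2))   # ~1.16 psi at 6 G

    print(round(120 * PSI_PER_MMHG, 2))      # systolic 120 mmHg is ~2.3 psi
    print(round(75 * PSI_PER_MMHG, 2))       # diastolic 75 mmHg is ~1.5 psi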

At this point it is all about sustained G forces. It makes sense that you could survive larger forces if they occurred briefly and were rapidly abated. Somehow, a factor of three became dogma, so that a brief acceleration of 18 G was considered the threshold of death. Yet, common observations of people surviving falls call this into question. One of my brothers fell 20 feet out of a tree, landed on his back on the lawn, and had the breath knocked out of him. But he got up after a minute or so and was OK. Now, a grassy lawn is softer than landing on concrete, but it doesn't have much give. The main thing keeping this from being an "instant stop" (physically impossible) was the flexibility of the body, which squishes out briefly. I calculate that my brother's body touched the ground going about 24 mph (39 kph) and stopped in a distance of about 4 inches. That works out to a stopping force of 60 G's. If instead we allow a little more squish, perhaps the stopping distance was 6 inches, and he experienced 40 G's. Either number is a far cry from 18 G's.
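The estimate is simple kinematics; here is a sketch using my assumed figures (a 20-foot fall, a 4-to-6-inch stopping distance), which anyone is welcome to adjust.

    # Impact deceleration from a fall: speed-squared at impact is 2*g*h,
    # and the stopping deceleration is v^2 / (2*d) over stopping distance d.
    G = 9.81  # m/s^2

    def impact_gs(fall_height_m, stop_distance_m):
        v_squared = 2 * G * fall_height_m
        return v_squared / (2 * stop_distance_m) / G

    height = 20 * 0.3048                  # a 20-foot fall, in meters
    speed = (2 * G * height) ** 0.5
    print(round(speed * 2.237, 1), "mph")             # ~24 mph at impact
    print(round(impact_gs(height, 4 * 0.0254)), "G")  # ~60 G stopping in 4 inches
    print(round(impact_gs(height, 6 * 0.0254)), "G")  # ~40 G stopping in 6 inches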

Over about a decade, Dr. Stapp used himself as the primary experimental subject (not the only one; he also used chimpanzees and, on rare occasions, another volunteer) in rocket sled experiments. The rockets would get the sled up to some high velocity, and a braking system would then stop it over a prescribed distance. Here are parameters that might describe a typical experiment (a short calculation after the list shows how the numbers fit together):

  • Rocket acceleration: 4 G's
  • Burn time: 4.6 s
  • Burn distance: 410 m (1,340 ft)
  • Peak speed: 644 kph (400 mph)
  • Stop distance: 83 m (272 ft)
  • Stopping time: 0.92 sec
  • Average stop G's: 20
  • Peak stop G's: 30 (measured by camera)
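Those entries hang together by ordinary constant-acceleration kinematics. Here is a minimal Python sketch using the illustrative figures above, not data from any actual run.

    # Constant-acceleration kinematics tying the illustrative sled numbers together.
    G = 9.81  # m/s^2

    boost_gs, burn_time = 4.0, 4.6
    peak_speed = boost_gs * G * burn_time                # m/s
    burn_distance = 0.5 * boost_gs * G * burn_time**2    # m

    avg_stop_gs = 20.0
    stop_distance = peak_speed**2 / (2 * avg_stop_gs * G)
    stop_time = peak_speed / (avg_stop_gs * G)

    print(round(peak_speed * 3.6), "kph peak")           # ~650 kph (about 400 mph)
    print(round(burn_distance), "m of powered run")      # ~415 m
    print(round(stop_distance), "m to stop in", round(stop_time, 2), "s")  # ~83 m, 0.92 s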

Early experiments were conducted with the seat on the sled facing backward, so the subject was pressed into the seat by the stopping forces. Experiments were also conducted with the seat in various orientations, including "butt forwards", to determine the forces of an ejection seat's kick-off blast.

Later experiments were conducted with the seat facing forward, and the subject exposed first to the wind blast, and then to deceleration against the webbing holding him into the seat. Dr. Stapp used chimps to determine the edge of lethality, though it turned out that they are much, much tougher than humans, so getting the calibration right for human experiments was tricky. With humans (mostly himself), he gradually raised the G forces and observed his own feelings and had doctors note what injuries he sustained. Thus, as time went along, the design of the seat was improved to avoid points that exerted extra forces and were causing injury. Over time these design changes were implemented in pilot seats.

The final, most definitive experiment was conducted with a chase plane flying above the rocket sled, to observe and film it from above. The pilot was astounded when the sled outraced the plane, reaching a top speed of 639 mph (1,028 kph), or Mach 0.9. This earned Stapp the title of "fastest man on earth" in a ground-bound vehicle. The title stood for about 30 years. On this run he sat forward-facing, getting the full wind blast, and during the stop, which lasted less than 1.5 seconds, he was jammed against the seat restraints with a crushing 45 G's, peak. He was a mess when he was helped out of the seat. His eyes looked like pools of blood; he was lucky they had stayed in his head. It took weeks for all his sight to return. He had several broken bones. Though he had the ambition to go 1,000 mph, or at least Mach 1 (about 715 mph; authorities vary), it was not to be. He had advanced through Captain and Major, was now a Colonel, and was moved by the Air Force command to a more administrative role. His sled, named the "Sonic Wind", was retired.

What he did next is the subject of the second part of the book. Dr. Stapp had performed his experiments, often against opposition, on a shoestring. He had to scrounge and cadge for equipment and apply verbal tricks to get some semblance of permission. Such skills were even more necessary after about 1956. He had long lobbied and clamored to Air Force brass about safety, and the lack of it, in fighter aircraft and in transports. One result of his nagging was that many transports in war zones had the seats for the troops facing backwards; troops seated that way were much more likely to walk away from a crash. But even during his earlier experiments he was also lobbying for the use of seat belts in automobiles.

By 1956, about 36,000 Americans were dying every year in automobile crashes. The population was about half what it is today, so in proportion, there could now be 72,000 auto deaths yearly, but instead, there are about 33,000. It took Colonel Stapp and his allies another 14 years to bring about the changes, primarily in laws, that have, since about 1970, saved at least 800,000 lives. Over the last 17 years, some of the difference is also due to airbags, something Stapp heartily approved of; he died in 1999, the year after airbags were mandated.

During his "lobbying years", he fought resistance in both government and industry against mandatory seat belt installation and use. The auto manufacturers were a lot like the tobacco lobby of the same era, denying that their products' quality had anything to do with the deaths that were occurring. Fortunately, there were at least aftermarket seat belts available, and many members of the public didn't wait for Washington or anyone else. Over a decade's time, sufficient statistics were compiled that a growing number of lawmakers became convinced of the belts' value, and in 1968, factory-installed seat belts were required by law. I remember an ambulance EMT who said he'd never unbuckled a dead body.

I bought my first car in 1967, a 1964 VW beetle. A couple of years later I bought a set of aftermarket 3-point lap/shoulder belts and installed them. Fortunately, Europe had been ahead of the curve, and though the car didn't have belts already installed, it did have threaded mounting holes, so the installation was easy. I have used seat/shoulder belts ever since. But growing up, we did many road trips, hundreds of miles yearly, in a big station wagon with no belts, and a mattress in the "back-back" for us boys to nap on. We were lucky.

Since 1984, one after another of the U.S. states has passed laws requiring seat belt use. Compliance varies, but averages 85%. Nearly all of those 33,000 highway fatalities in recent years have come from the 15% who don't wear seat belts. In spite of the air bags in most vehicles, the unbelted are either thrown around inside during a collision or ejected. Driving in California with my brother several years ago, we saw an SUV hit the median barrier on the freeway, and the driver burst through the side window and landed on the highway almost in front of us, on his head. One of us (I don't recall who) said, "We just saw someone die."

Two things to remember about Colonel Dr. John Paul Stapp: He risked his life, incidentally becoming the fastest man on earth, to gather safety data; then he used those data and traffic statistics to practically crowbar the United States into becoming quite a bit safer as a place to drive or fly. Craig Ryan's exciting biography brings us the man and the stories, a portrait of someone to whom you just might owe your life.

Monday, September 07, 2015

Creators need armored underwear

kw: book reviews, nonfiction, creativity, sociology

Build a better mouse trap, and the world will beat a path to your door – with tar and feathers. More simply put, "No good deed goes unpunished."

In How to Fly a Horse: The Secret History of Creation, Invention, and Discovery, Kevin Ashton fully exposes and expounds the greatest hypocrisy we face: Nearly everyone loudly touts their allegiance to "innovation" and "creativity", but in the face of any truly new thing, they will hide, protest, or punish the "perpetrator". The book is filled with stories of what really happened to many inventors and innovators. Very few received anything like acceptance, at least during the first few decades after making their discoveries known. Dr. Ignaz Semmelweis showed that requiring doctors to wash their hands before examining each patient nearly eliminated hospital-borne infections. Even today, there's only a 40% chance that your doctor washes up before examining you, unless you vociferously insist. Semmelweis was hounded by those opposed to his discoveries and died in an asylum; Joseph Lister, who later carried antisepsis into the operating room, fought the same resistance.

Why do we create? Why this continual drive for making something new? It is built into us. To be human is to create, or at least, to desire, yearn and long to create. We are defined by our tools. Humans are not the only tool-using animals, but we are the only animals that keep modifying and refining our tools, making more progress in tool development during one lifetime than chimpanzees or ravens have made in tens of thousands of years. In the opening chapter of the book, we read that, prior to about 50,000 years ago (70,000 according to other accounts I have read), innovations in toolmaking occurred over spans of thousands of years. Something changed in the human brain, and there followed an explosion of improvement and modification of human technology that continues to accelerate. Ashton's core thesis is that we are all creators, creation is common, and that the notion that only geniuses can be creators is so false that he denies there is any true meaning to the word "genius". We must all create, or die. In a late chapter he points to stifled creativity as a feature in many kinds of addiction and criminality.

Yet, as Jesus said (Luke 5:39), "No one after drinking old wine wants the new, for he says, 'The old is good enough.'" No matter how much people may say they value creativity and innovation, their instinctive reaction to anything new is, "It was good enough for grandpa and it's good enough for me." (Apologies to the composer of "Gimme That Old Time Religion"). You might say, "Oh, how about Einstein? He did Relativity and all that stuff, and got a Nobel Prize, and everyone loves him." He had a couple of lucky breaks, and got four articles published in 1905, giving him the stature to generalize his discoveries about Relativity in 1915, but his work is roundly misunderstood by most people who have not done the work to understand it, and every year many "private researchers" try to get articles published that challenge some aspect of Relativity or Quantum Mechanics.

Not all innovators are "punished". The book opens with the story of a 12-year-old slave on a French island who discovered how to pollinate the Vanilla plant artificially. "Edmond's Gesture" is still used, by workers with small enough hands. Only in its native area can the plant be pollinated naturally by a small green bee that lives nowhere else. But there was no assurance that Edmond would get credit for his discovery. Only persistent protection and advocacy by his owner assured that the "Gesture" would be associated with Edmond and nobody else. Later in the book we read of the team that developed America's first fighter jet, in about five months! Only thanks to a particularly clear-headed leader and his "Show Me" principle was it even possible.

Before going on, I want to take partial issue with one portion of the second chapter, which purports to show there is no value to the notion of "incubation." Various studies are remarked upon, in which it was shown that giving people various creative tasks, and having them take a break of 15 or 30 minutes in the middle, did not improve the results, and usually hindered them. I am not sure that my experience falls under the title "incubation", but here is a practice upon which I built a career of nearly 40 years as a software developer:
When faced with a conundrum, I'd work at it for a day, and if I didn't see how to make it function properly, I would step back and think through every aspect of the problem, building a kind of flowchart in my mind. Then I would go home, do whatever needed doing there, and sleep on it. Some time between 3 and 6 AM I would awake with a critical idea, and that would directly solve, or lead to the solution of, the key problem with that part of the software. I called it, "throwing it over the wall to the right brain".
I think the studies of "incubation" did not give nearly enough time for incubation to work, nor were the problems to be solved sufficiently difficult. Half a day (including the overnight sleep) seems to be required, at least the way my own mind works. By the above practice I produced a great deal of software that nobody else had been able to write.

Oh, and just by the way, I was also a very unusual computer programmer in this regard: I always documented my work in internal "comment lines" that typically numbered about 1/3 of the total bulk of the code. This saved a great deal of time when I needed to re-use some code later and needed to remember how to hook it up.

I once had a boss who gave me a file of subroutines (modules) that he had written related to tracing contour lines on maps. I began reading through the FORTRAN code, and some things were very obscure, so I was having a hard time even figuring out how to pass data into and out of the modules. There was not a single comment line in many, many pages of FORTRAN code! I asked him why, and he responded, "Can't you read FORTRAN?" I said, "If I were your supervisor, you'd have just been fired." Of course I can read FORTRAN, but every programmer uses clever constructions that make sense during the writing, but are very hard even for the original programmer to decode later. I decoded his code and was able to make good use of it, but it could have taken much less time had he been a thoughtful programmer. 'Nuff about that.

Right along with Ashton's thesis that we all create, he dwells much on the incremental nature of innovation. Remember Einstein? One of his huge innovations was Special Relativity, based on the speed of light as an absolute limit to motion. It includes things like time dilation (a speeding traveler ages more slowly), relativistic mass increase (things get harder to push as velocity increases), and length compression (faster yardsticks are shorter). All this from his thought experiments about riding a light beam or driving a very fast carriage with a lamp and clock aboard. But the points just listed are based on others' work, particularly that by Lorentz, who was building on work by others. Einstein had some key insights, it is true, but the power of his system was bringing all the parts together, and showing how they worked together. In a small way, I've had a similar experience. I have published a few articles in journals. The one of which I am most proud uses methods from astronomy and civil engineering, plus a 200-year-old technique first published by Leonhard Euler. How many recent scientific articles have you seen that cite literature from the 1700's?

The chapter "Chains of Consequence" trace out a few key innovations, including the chain of innovation leading to the aluminum soda can, which can be produced for a few cents (of the 25 cents you spend for a cheap brand of soda, or the dollar you spend for a Coca-Cola). He shows the origin of all the known ingredients in the syrup that is added to carbonated water to make that Coke. Later on, Ashton even digs into Isaac Newton's statement about standing on the shoulders of giants. Similar statements were made for generations before Newton. We find that there are no giants. Instead, we are standing atop a pyramid of people like us, who made one innovation after another after another.

This reminds me of a joke about engineers and mathematicians. An engineer is brought to a room in which he sees a saucepan containing water to his right and a stove, already lit, across the room in front of him. A piece of paper next to the pan says, "Heat the water". He picks up the pan and places it on the stove burner. Then he is taken to another room with the same setup, but the pan with the water is to his left. He picks it up and places it on the stove burner. Now a mathematician is taken to the first room, re-set to the original condition. He does exactly what the engineer did. Then, when taken to the second room, he picks up the pan, but sets it to his right, saying to the person who brought him, "I already solved that problem." Of course, this sounds silly and we snicker at mathematicians, but the mathematician's principle is how we actually innovate. Based on all the problems already solved, we find an unsolved problem, and add just a little to a known method that had almost solved it. You can get a PhD for doing that.

Politics rears its ugly head in a chapter on credit. According to Wikipedia, as of 2014, Nobel Prizes have been awarded to 817 men, 47 women, and 25 organizations. Only 15 women have won in Science, and two of them were Marie (twice) and Irène (once) Curie. Quite a number of male laureates won the prize for work not only performed, but invented, by women. This opens up quite a can of worms, so I'll leave it there, reported but not editorialized.

We do not see everything, but a most egregious example of not seeing because of not expecting is found in the saga of Helicobacter pylori, the organism that causes ulcers. Doctors looked at samples of ulcerated tissue from stomach and duodenal ulcers for more than 100 years, and didn't note the bacteria there. Some early "authority" had decreed the stomach too acidic to allow bacterial survival, let alone growth, so every single time bacteria were seen, they were ignored as "contamination". Not so the doctor who finally believed his eyes, and a colleague who was willing to "let the data talk." Ashton doesn't report this, but as I recall, Barry Marshall had to drink water laced with the bacterium, get an ulcer, then cure it with antibiotics, to convince even a few of his colleagues that H. pylori is the true cause of ulcers, not "stress". And it is now known that hundreds of species of bacteria inhabit our stomachs.

What we expect can determine what we see, and what we don't see. Many years ago I went with friends to a road cut in eastern Nevada, where they said we could collect trilobites. We stopped, got out of the car, and I looked at the weathered rock in the road cut. It looked like grainy limestone with a kind of salt-and-pepper texture. I said, "How do we find the trilobites?" One of the guys put his finger next to a black blob half an inch long and said, "Look closely". Suddenly I saw them. Thousands of them. Nearly every black blob on that hillside was a small trilobite! Having seen one, I had "eyes for them". Similarly, on the first fossil collecting trip that my wife and I brought our young son along—he was 5 years old—the leader started the day by showing us several specimens of shellfish that had been found there (it was an abandoned quarry). He pointed to one, saying, "This one is rare and hard to find." Our boy looked for a moment, then trotted off. He came back in 15 minutes or so with three of them. We joked that, of course it helped that his eyes were only 3 feet from the ground! But he was the only person to find any of that species that day.

Innovation is built into us. The hard part is getting over resistance to innovation. We need the new, but we fear it. My father calls it the Moses Principle: in his words, "It takes 40 years in the wilderness for all the old-timers to die off, before the next generation will embrace something new."

Tuesday, September 01, 2015

Who is really at sea, the character or the reader?

kw: book reviews, fiction, novels, animals, shipwrecks, mysticism

A friend gave me a paperback copy of Life of Pi by Yann Martel. This may be the most difficult book to review that I've encountered. It is described by the author in his introduction as a story to "make you believe in God." I would say, for anyone who thinks belief in God is a possible thing, it will make that possibility inevitable, and for anyone who thinks belief in God is either impossible or wrong, it will confirm that impossibility.

Near the end of the Revelation given to John, the last book in the New Testament, the angel's final words to John are:
Do not seal up the words of the prophecy of this scroll, because the time is near. Let the one who does wrong continue to do wrong; let the vile person continue to be vile; let the one who does right continue to do right; and let the holy person continue to be holy. (Rev 22:10b-11, NIV)
Earlier in the vision another angel had said, "There will be no more delay" (10:6), which the King James Version renders, "There will be time no longer." Together these passages show that once the end times truly arrive, it is too late to repent. Of course, the vile and the wrongdoers believe in neither end times nor in repentance.

Young Pi, the nickname for Piscine (French for swimming pool), leaves nothing to chance. Having an open heart and a keen desire to know God, he has accepted and diligently practices Christianity, Islam, and Hinduism. During the major section of the book, which tells the story of his nine months at sea in a lifeboat in the company of a tiger, Pi credits his triple faith for his survival.

The core idea of the story is ambiguity, even irrationality. But "irrational" has two meanings. As a schoolboy, Piscine Molitor Patel, tired of hearing his name mispronounced as "pissing", begins a new school year by insisting that he is to be called Pi, writing on the blackboard of every classroom, "Pi ≈ 3.14". Pi (shown in formulas as π) is the best known of the so-called irrational numbers, those quantities that cannot be expressed as a ratio of integers. In fact, π is the leading member of a special class of irrational numbers called "transcendental numbers": they cannot be reached by any algebraic operation, for no polynomial equation with integer coefficients has π as a root. They are sources, not productions. They transcend algebra.

The first number proven to be irrational was the square root of 2, shown as √2. It is the archetype of the other class of irrational numbers, the algebraic numbers. Algebraic and Transcendental: these are the two kinds of irrational numbers. And the other meaning of "irrational"? A sort of synonym, even an evil twin, of "illogical".
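For anyone who likes the distinctions spelled out, here they are in compact LaTeX form; this is standard mathematics, nothing from the novel.

    % Rational: a ratio of integers; irrational: not expressible that way.
    x \in \mathbb{Q} \iff x = p/q \ \text{for some integers } p, q\ (q \neq 0)

    % Algebraic: a root of some polynomial with integer coefficients.
    % Example: \sqrt{2} is algebraic because it satisfies x^2 - 2 = 0.
    x \ \text{is algebraic} \iff a_n x^n + \dots + a_1 x + a_0 = 0
    \ \text{for some integers } a_0, \dots, a_n \ \text{(not all zero)}

    % Transcendental: irrational and not algebraic; \pi and e are the famous examples.
    x \ \text{is transcendental} \iff x \ \text{is not algebraic}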

The transcendental numbers intrigue me. Though few are known, they seem to outnumber all other kinds of numbers. And so I recall Isaiah 45:15, "You are a God who has been hiding Himself." As metaphors, they are even more fascinating, because "transcendental" is a kind of good-angel twin of "divine". If in reading Life of Pi we believe his story throughout (at least, prior to the second half of Chapter 99), he becomes a kind of god to us. Or, at least, an avatar leading us to God. Mr. Pi Patel becomes transcendental. If, instead, we believe the second half of that chapter, seemingly fabricated on the spot to satisfy overly-rational shipping-disaster investigators, we fall from heaven to earth.

Perhaps this story cannot make absolutely everyone believe in God. But it makes clear to a reader that the choice of heaven or hell is ours to make, and we will surely attain our choice.

Friday, August 28, 2015

His cure is much worse than the disease

kw: book reviews, purported nonfiction, medicine, thumbs down

From the book's title, it was clear that it was anti-doctor. I just wasn't prepared for how thoroughly it is anti-doctor, anti-medicine, anti-just about anything except, of course, the products of the author's business. The Great American Health Hoax: The Surprising Truth About How Modern Medicine Keeps You Sick is by Raymond Francis, and as it turns out, he is the president of Beyond Health International. The book is a moderately subtle advertisement for the company's products. By the way, I compared his online catalog with other highly reputable brands; his products cost four times as much as anything comparable I could find. Caveat emptor!

There is a proverb I heard a few times when I was young, "The devil will tell you the truth seven times to get you to believe one lie." That seems to be the strategy here. I suppose I ought to give Mr. Francis the benefit of the doubt in most cases; perhaps he really believes what he has written. But I'd like to examine three items where I find such a supposition to be rather incredible. All are found in the book's 11th Chapter, "Death by Medicine".

First, concerning Ebola, and infectious disease in general. Ebola is a miserable illness, caused by a virus, that causes bleeding from many bodily tissues. If the body's immune response does not subdue the virus soon enough, the range of "leaky" tissues increases until the patient bleeds out from just about everywhere and dies of blood loss. With intensive interventions, developed by doctors using a combination of intuition and trial-and-error, some patients' lives can be saved. With no treatment, depending on the strain of virus, 50% to 90% of those who get it die from it.

Beginning the chapter's section on infectious disease (which builds on earlier attacks against antibiotics and vaccines), we read, "If antibiotics and vaccinations are inappropriate, how then do we deal with infectious disease? The answer is simple: keep your immunity strong. The 'bug' is not the problem." Throughout the book, the author has made dozens of recommendations for supporting health and a healthy immune system. Most of them are pretty standard fare, some are more wacky, but all sooner or later circle back to this: we need appropriate nutritional supplementation. He scarcely mentions the products; he is more clever than that. But he so frequently writes of "pure ingredients" and suchlike that it is clear, nobody but he has truly "pure" products. Then he has this to say of Ebola in particular, on page 263:
The existing literature indicates that vitamin C, in sufficient quantities, has never failed to cure any virus infection…What appears to be happening to Ebola patients is that the body mounts an excessive immune response to the infection, producing a flood of inflammatory chemicals that massively damage every tissue in the body, causing the blood vessels to leak. These inflammatory chemicals use up vitamin C and cause the need for vitamin C to go sky high…this deficiency causes acute scurvy…
Ebola may seem like an exaggerated case of scurvy, but it is not. The "existing literature" statement is totally false. A post at ScienceBlogs gives a full debunking of the very small number of published pieces that make such claims. Most were by a doctor who claimed to cure polio with vitamin C. The post also debunks the notion that vitamin C has any measurable effect on Ebola. Every point of Mr. Francis's discussion is false. Period.

Second, let's back up to what he says about antibiotics. He calls antibiotics "one of the greatest medical blunders of history." He doesn't even bother to try to explain away the millions upon millions of lives saved and diseases cured by antibiotics. He just ignores them. He makes much of the well-known facts that antibiotics are being overused and misused, which has resulted in great problems with antibiotic resistance and with the destruction of gut bacteria. The sad history of antibiotic misuse provides plenty of fodder for the "seven truths". The string of lies that follows includes, "…there is no need for antibiotics at all", "Immune-enhancing nutrients [a list follows] will take care of most infections", and, "The right amount of vitamin C will stop almost any infection." Boy, I wish the things he claims were true! Sure would be nice!! But they are false. He closes with a recommendation for the Rife machine. Do you remember Dr. Andrew Weil? He is a great proponent of alternative treatments…but not this one. Weil's own discussion of the machine is blunt: it was quackery in 1932 and it is quackery today.

Here is my own take on antibiotics, because I lived through much of their history. The two main problems are resistance developed by bacteria and the collateral damage, which is the slaughter of many of the bacteria in your gut. Both are a problem mainly with oral antibiotics. Antibiotic pills were developed to make administration easier, not needing a hypodermic injection. When I was a child, if we had an infection we got a penicillin shot. My brothers and I all hated the needle. But there was no other way to administer penicillin, although penicillin powder was pretty good for sprinkling on an open wound (but painful!). Oral antibiotics became a focus because parents didn't like screaming children or children who couldn't be dragged to the doctor's office. Sure, that's over-simplified, but it explains a lot. When an antibiotic is injected, and intravenous injection is the most effective (and most painful) way, little of the drug is excreted so it doesn't get into the sewer and induce resistance in whatever bacteria encounter the sewage. Actually, no matter how you take an antibiotic, you can reduce resistance problems by storing and incinerating all the wastes your body produces during the time you take it, and for 3-4 days thereafter. Smelly, though. But IV antibiotics will not kill your gut flora. This isn't a total solution, but points the way to some useful steps in the right direction.

OK, now to #3. Vaccines. The first thing I did after reading the book's introduction—which is full of red flags to any medically literate person—was check the index for "Vaccine". An extended quote from page 256 is warranted here:
Another of conventional medicine's historic fiascos [sic] is vaccination. Vaccines are ineffective and dangerous. Unlike other drugs, which undergo basic testing prior to approval and recommendation, vaccines do not have to be proven safe or effective before hitting the market. While there is no scientific evidence that immunizations prevent disease, there is plenty of evidence that they are not safe. No vaccine has ever been scientifically proven in double-blind, placebo-controlled studies to be effective; the existing evidence indicates that they are only marginally effective or not effective at all.
That paragraph contains five sentences. Each is false. In order:

  1. A fiasco? A very few vaccines were later found to be ineffective or to cause problems. Most are clearly effective, and they are the most effective public health tool we have to reduce the number of cases of the most deadly infectious diseases.
  2. Ineffective and dangerous? See prior statement. Also, let's see, how many children do you know who have had measles? Even one? I had it, as did my brothers, and every one of my schoolmates. Luckily, we were all of European extraction, and had immune systems that could deal with measles before we died an Ebola-like death! When Europeans first went to Hawaii, there soon followed a measles epidemic that killed most who caught it. The vaccine was introduced in 1963. In 1960, 380 Americans died of measles, among more than 400,000 who caught the disease. In 2007, there were only 43 cases of measles, and no deaths. Cases have been rising more recently, due to the efforts of anti-Vaxxers (including Mr. Francis, no doubt).
  3. Untested before marketing? Not true. This isn't just a lie, it is a huge lie. Maybe he's talking about some other country, in which the government doesn't mandate three-phase testing of all medications. It is dietary supplements that are untested and don't have to prove their effectiveness. Like his company's products.
  4. He repeats the "ineffective and unsafe" charges more vehemently. He is even more wrong. Some vaccines that were found over time to be unsafe for small numbers of patients have been withdrawn. But they had earlier passed safety and efficacy tests mandated by the US government. Some medicines need extra time in their tests, it is true, but predicting which ones isn't yet feasible.
  5. He pulls out all the stops with scientific language intended to muddle your mind. Here is a true statement: Every vaccine on the market has been subject to double-blind, placebo-controlled studies and has proven its effectiveness. The yearly "flu" shots have a measured effectiveness, so they can tell you that, for example, the vaccine being produced right now, for use during the fall of 2015 and winter of 2016, is about 65% effective at preventing infection by the strains of virus that are even now moving out of the tropics into North America; and for those who still get influenza, the case will nearly always be milder in a vaccinated person than in one who is not vaccinated.

I went to a private school for grades 1-3. We lived in Salt Lake City, Utah at the time. During exactly those years, Dr. Jonas Salk conducted a double-blind, placebo-controlled study of his Polio vaccines (there are 3 of them). I was one of his 50,000 experimental subjects. My name was in the newspaper in 1955 (the names filled a few pages), in the article announcing the spectacular success of the vaccine. If this vaccine were ineffective, there'd be hundreds of thousands of "iron lung" machines in use today in America.

The anti-vaccine drumbeat really rankles me. Do you really think the disease is safer than the medicine? People scream about some of the vaccines using a mercury chemical as a preservative. The amount of mercury in a vaccine shot is tiny. Do you know the earliest somewhat effective remedy for syphilis? Metallic mercury, about 2cc, injected right into your hip through a large needle (it won't go through a small one). But fewer and fewer vaccines continue to use mercury compounds; better preservatives have been developed. That's how science works. You develop something that works most of the time, but causes a few people some problems. So you keep on developing, to reduce both the numbers who don't get help, and the numbers who have problems. Neither of these numbers will ever reach zero. But it sure beats having the disease!

The diseases I had as a kid: measles, mumps, chicken pox.

The diseases my mother had as a kid: those plus scarlet fever and Rubella (it was called German Measles at the time).

Diseases I avoided because of vaccination: Whooping cough (pertussis), diphtheria, tetanus (which killed the brother of Henry David Thoreau among many others of that era). I could add Polio, but I'd actually had that in my first year, when it is rather mild; the older you are when you get polio, the more severe it is. This wasn't known in 1952 when I became a "polio lab rat". More recently, I have had the shingles vaccine, which is about 2/3 effective I am told. My mother had shingles. I'd risk a great many side effects to avoid that! I've also had the Pneumovax injection; the pneumonia it combats used to be called "the old man's friend", because immobile patients in nursing homes were its typical victims. And, since age 67, I've begun getting a yearly flu shot. It's a whole lot better than even ten years ago.

In this case I say, "Don't buy this book." I'm glad I read a library copy. Don't read it. I did already so you don't have to. The guy has products to sell that are very similar to other products but cost much, much more. He is either a cynical, manipulating charlatan, or is bamboozled himself.

Monday, August 24, 2015

Cataloging cryptosleuths

kw: book reviews, nonfiction, monsters, investigations

We love our monsters (most of us). Whatever the local mystery, it has a large following, and a few of the more charismatic legends, such as Nessie and UFO's, have a worldwide one. Writer Tea Krulos set out to learn about the whole range of monsters/demons/ghosts/UFO's and the people who hunt them. Doing so, he followed along with a group known as PIM (Paranormal Investigators of Milwaukee) on a number of such hunts, and as he relates in the last chapter, got a little more than he'd bargained for in one instance. He also tagged along with or interviewed members of other groups and researched the field among scientists, skeptics, and believers. The result is his book Monster Hunters: On the Trail with Ghost Hunters, Bigfooters, Ufologists, and Other Paranormal Investigators.

Other than "the boogieman", the only popular monsters I recall from childhood are the Loch Ness Monster ("Nessie") and the Abominable Snowman (Yeti). I find that each represents a category. I don't know if the book's span is all-inclusive, but it seems so:

  • Cryptozoology, or the study of "cryptids", elusive creatures that include
    • Bigfoot, Sasquatch, Yeti, the Skunk Ape and other man-apes.
    • Two kinds of Chupacabra (sometimes written Chupacabras, following a Spanish idiom), whose name means "goat sucker". One variety is thought to be responsible for animals found drained of blood. The other looks like an extremely mange-afflicted dog.
    • Lake monsters such as Nessie, "Champ" in Lake Champlain, and other large snake- or dinosaur-like swimming animals.
    • Werewolves, but seldom of the shape-changing kind (no Vampires are mentioned, though, and Vampire Bats are all too real, if rather small).
  • Demonology, including ghost hunting and activities of professional exorcists.
  • Chimeras, such as MothMan. Nobody seems to hunt Griffins or the Basilisk these days.
  • UFO's, and not all ufologists are convinced they are ET's.

The middle point above is a tricky one. The Roman Catholic Church employs an official Exorcist in each of the 50 U.S. states, and others stationed in provinces around the world. They take demon possession very seriously. Krulos interviewed one of them for his chapter on demon possession, and was given a reasoned account; also a scathing commentary on the flashy exorcist he describes in detail, a certain "reverend", whose activities I disdain at least as much as the Catholic exorcist does. A certain Bible verse about "making merchandise of the word of God" comes to mind.

A point to ponder: If the demons of the Bible actually exist, they are quite capable of impersonating the "ghosts" of the dear departed, which is the view the Bible takes. Their abilities and activities would explain all manner of occult manifestations, although the great majority of "spiritualist" activities are easily shown to be the work of charlatans. Some well respected theologians consider that demons are also behind the UFO activities that are not hoaxes or misunderstandings of natural phenomena.

Cryptozoology in particular dwells on the line between fact and fantasy. From time to time a new, largish species is discovered that nobody knew was there. A famous one is the Coelacanth, a rare fish that was thought to be extinct until one was caught off South Africa in 1938 (a second species turned up near Indonesia decades later). I remember as a boy the reports of the first Okapi, a kind of smallish zebra/giraffe. Finding a new mammal is rare. Though new species of many kinds are found every year, they are nearly all small or even tiny, and reclusive. Pretty much everything that isn't reclusive has already been discovered and described in journals. But some cryptozoologists do try to take a very scientific approach, and may study science so as to do it better. Maybe someday a living Yeti will show up in some indisputable way.

Tea Krulos became a kind of meta-hunter, hunting the hunters and bringing us glimpses of their lives. Not many are mild-mannered sorts, though a very few seem to be. It generally takes an outsize personality to go from being "interested" in various monsters or ghosts, or even fascinated with them, to being an active hunter, in an organization like PIM or on one's own. But considering the reputed powers of their prey, it seems safest to hunt in groups.

Monday, August 17, 2015

Even more digital caution

kw: book reviews, nonfiction, sociology, computers, computer revolution

A Canadian Indian chief was talking with a visiting geologist, who was describing his work and the chances of mining in that area. At one point the chief said, "The first time white men came to Canada, they shot all the big game and hauled away the meat. The second time white men came to Canada, they trapped all the small game and hauled away the furs. The third time white men came to Canada, they cut down all the big trees and hauled them away to make lumber. The fourth time white men came to Canada, they cut down all the small trees and hauled them away to make paper. Now they are coming for the rocks!"

Observing the sweep of human history, I get a similar feeling, or perhaps it is the kind of building dread embodied in a Vaudeville routine that began, "Slowly I turned. Step by Step…"

  1. At various times mainly between 15,000 and 5,000 years ago, the Agricultural Revolution made possible a great increase in the human population and the size and density of settlements-towns-cities. In some ways life was better, but there also arose supervisors and nobles and kings, epidemics of cholera and TB, and harder and longer work for nearly all. Those few foraging cultures that remain seem to have an easier time of it, at the cost of not being able to accumulate more goods than they can carry on their person or drag with a travois (foragers don't build roads, so the wheel is no use to them).
  2. Beginning about 200 years ago in England, and spreading to about a third of humanity so far, the Industrial Revolution made possible a great increase in productivity of goods manufacture and travel and literature/literacy/education. There also arose sweatshops (still almost universal outside Europe and the USA) and all the associated ills detailed in Upton Sinclair's "muckraking" books.
  3. Though Charles Babbage and Ada Lovelace and others made a false start at developing computing machinery in the "real steam" era that is adulated in "steampunk" fiction, true general-purpose computers began a bit more than 60 years ago with electronic computation, first using vacuum tubes, then transistors, then small-scale integrated circuits, and now "chips" about the size of postage stamps that can do a few billion operations per second.
  4. The first Blackberry, the 850, appeared just 16 years ago; it was the precursor of the "smart phone", which really took off once touch screens became economical to produce. The level of computing power needed to run these devices also powers, on a larger scale, all the "big data" processes that have addicted so many of us to a pocket device that pretty much runs our lives, and informs advertisers and government staffers alike about everything happening to nearly everybody.
This environment of ubiquitous computing is what the word "digital" means in the title of Andrew V. Edwards's new book Digital is Destroying Everything: What the Tech Giants Won't Tell You about How Robots, Big Data, and Algorithms are Radically Remaking Your Future. That's not quite the longest title I've ever seen, but it is the longest this year.

Digital processes are not all that new. They date to the invention of counting numbers thousands of years ago. The world has always been divided into things you can count and stuff you can't count. You can't say "I'll have one water with lunch and two waters with dinner", except if you are talking about bottles of filtered water. But water itself is treated as a continuous substance. Apples or sheep, on the other hand, are unitary. It is quite legitimate to have one apple with lunch and two apples with dinner. Or even two "fruits" if you intend to have one apple and one pear. Though most "unitary" items are divisible, and cutting an apple in half to share with a friend is OK, half a table or half a chair or half an automobile is not so useful. So if we want to do the work involved, it is possible to count exactly how many automobiles (that run) are in the city of Houston, or how many oranges are on a particular tree in Florida. But water? or even apple juice? You can't "count" it unless you containerize it, and then you can count quarts or gallons or whatever.

That's a long way to say that the human race has actually become comfortable working with some stuff that is analog and thus can be measured but not counted, and other stuff that comes in natural packets and can thus be counted, and is digital. Though there are super-microscopes that can see atoms, we still don't worry much about how many atoms of helium are in a particular balloon, or how many molecules of water are in a particular drinking glass. On the human scale, atoms (from "a tomos" meaning "can't be divided") just don't matter. For many, many purposes, analog is king. Most of us only care about exactitude when getting change from the cashier or balancing our checkbook. Or counting that there are indeed 12 eggs in that carton of a dozen.

All that is changing. In the four points above, I didn't pay much attention to the radical changes of occupation that accompanied each revolution. Midway through the Industrial Revolution, autos rapidly replaced carriages, and the proverbial "buggy whip makers" nearly all went out of business. Only one in ten thousand still remains, making the whips for funky carriages used for giving rides to tourists in historic Philadelphia or Williamsburg. My wife was once a Telex operator. That has long been superseded by at least three technologies in sequence, until they all fell to e-mail and now texting. CEO's text or e-mail almost everything except contracts that need signing, and even then, the signable PDF is taking over for paper contracts.

So what does Mr. Edwards wish us to beware of? I could be cute and say, "All of it!", but that would do him poor justice. Digital with a capital D provides abundant conveniences. We just have to achieve some kind of balance, because convenience is not all there is to life. Before there was Digital there were already "couch potatoes", for whom convenience, if not quite everything in life, made up as much of it as they could manage. Before there was TV, there were already over-avid spectators, going back at least to Roman times, when the Emperor's formula for a contented population was "bread and circuses". In between it was "beer and football (either kind)".

The book has 17 chapters that cover everything from the music industry (rapidly dying away), screen addiction (people who text the person sitting across the table at the eatery), the job market (or lack thereof), and retail (Amazon and eBay and their Mafia-esque ways to grow into monopolies), to the tension between using the Twitterverse to overcome authoritarian rule and its use by the authorities to track us all in real time (I knew there was a reason I've eschewed getting a Twitter account!).

The saddest chapter has the title, "Obsessive Compulsive: Digital is Destroying Our Will to Create Anything Not Digital." Having spent 40 years writing software, because I was better with code than I was in the lab (I majored in Chemistry, Physics and Geology), I well know the allure of, "Computer programs can do anything!". But early on I recognized that they can't. So I've given equal effort to analog pursuits: music performance (voice and several instruments), art (mobiles, the kind with wires and hanging things), and essay writing (generally speaking, half an essay is of no more use than half a chain saw). I wonder what I did right? So many people I know have no hobbies that don't fit on a 4-inch screen.

Now, some things we're better off without. Music aficionados who have a really good ear can tell what a great improvement CD's are over vinyl records. Of course, if they really, really like the third harmonic emphasis created by older equipment, they use the CD player to drive a vacuum tube amplifier. So there is still a very tiny market for manufacturers of vacuum tubes! But most of us are happier without them (though I still own a ham transmitter with driver and transmitter tubes). Digital controls in aircraft and autos have steadily reduced traffic and air fatalities. Digital libraries, though they are putting pressure on brick-and-mortar libraries (still my favorite places: FYI, I don't own an e-reader), afford instant access to an increasing fraction of all human knowledge, and provide the only way to index all of it.

I suspect if advertisers could not track us via the click rate on their banner ads, there would still be a larger market for paper "newspapers" and magazines. But those markets continue to shrink. Digital is also destroying education "as we know it", but I favor that to some extent. Different people learn different ways: I learned FORTRAN II better in two days using a "programmed instruction" book, than if I'd sat for 12 weeks in some "Comp Sci" classroom trying to learn it from an instructor (Oh, yeah, Comp Sci didn't exist in 1968; I was among those who invented it). The Khan Academy caters to people like me…usually! But for some subjects, I do better with a talented instructor. I needed a really good one to learn Differential Equations while getting an Engineering degree. Two attempts with less talented teachers led me to drop those classes, so the "third time" really was the charm. But if Digital takes over the classroom for many subjects, I, for one, will not cry the loss. Only the most talented teachers will remain as teachers. That is a good thing. The most talented today are moving toward massive online courses, which spread their expertise to a great many more students than could be taught just a decade or two ago.

For some things, we prefer less human interaction. I'm a typical "hunter" type when buying something. Once I know what I want, it is like, "go to forest (store), find prey (the shirt I've decided to buy), kill (buy) it, and go home." My best "shopping" trips last ten minutes. The last person I want to interact with is a store clerk. Of course, that's if I really know what I want. If I don't, and online research hasn't proven helpful, I really do want a knowledgeable store clerk's help. Then I shop differently: "Go to forest, find hunting mentor to lead me to where the best game is. Then kill the prey and take it home." I'm OK if such an event takes half an hour instead of ten minutes. But if the "mentor" is a dullard with little interest in being of genuine help, he/she'd better duck! Actually, no violence, I just find the supervisor, who's more likely to be a good "mentor".

The last couple of chapters get into "What do we do about it?", and I'll avoid stealing the author's thunder, except to say, the Hippies were right, that we ought to get out and smell the flowers more often. Do you ever take a walk and turn off your phone until you return home? Try it.

Friday, August 07, 2015

Our life in bits and bytes

kw: book reviews, nonfiction, algorithms, prediction, sociology

What would life be like if the atoms that make us up were just big enough to see, if we could witness directly how they slide, merge and separate? How complex could our life be if the sum total of our lives could be described by, say, 1,000 characteristics, or perhaps 100? How about 10?

Yet how quick we are to pigeonhole people according to one or two, or at most five, distinguishing items! What do most of us know about, for example, Yo Yo Ma? Male, Chinese, famous musician (maybe you know he is a cellist), … anything else? How about that he is French-born, a Harvard graduate, and has earned 19 Grammys? That's six items, more than most people probably know about him.

To what extent do you think you could predict his tastes and buying habits from these six items? If another person shares these six characteristics, to what extent will he also share certain tastes in clothing or food or books to read? Some people wish us to think, "to a great extent". In The Formula: How Algorithms Solve All Our Problems and Create More by Luke Dormehl, some of the people he interviewed claim to do just that. (Maybe you've made a profile on a dating site that starts matching you up when you've entered no more than four or five items. And how fully have you completed your FaceBook profile?). But some go to quite an extreme in another direction, using "big data" to pry inside our skulls.

What kind of big data? All your searches on Google, Yahoo, Alta Vista, Bing, or whatever; every click, Twitter text, FaceBook or LinkedIn post, blog post, or online chat. We create tons of data about our day-to-day, even moment-by-moment activities. There was recently an item on the noon radio news about a company that aggregates such data and sells "packages" to companies, who pay $1 million to $2 million on some periodic basis for it (That's all I remember; I was listening with half an ear while folding laundry). Why is all that data so valuable? Because businesses believe they can better predict which products will sell to what kind of people if they crunch it.

A few months ago a handle on a drawer broke. Naturally, the cabinet is decades old and nothing even remotely similar in style could be found at Home Depot or a decorator's salon. So of course I looked online for something with the right spacing of mounting holes, with an appearance that would be compatible with the cabinet, in a set of four, so the handles would all match. It took a few days. I bought a set I liked, online, and installed them. For the next several months, however, ads about cabinet door handles appeared everywhere I went online: Google, FaceBook, Amazon, eBay. They all knew I'd been looking for door hardware. None of them knew I was done looking! (Google, are you listening? Do, please, close the loop and collect purchase data also.)

What is The Formula? Luke Dormehl calls it an Algorithm. What is an algorithm? To anyone but a mathematician it is a Recipe or a Procedure. I used to have a book, which I used into unusability: How to Keep Your Volkswagen Alive: A Manual of Step-by-Step Procedures for the Compleat Idiot by John Muir and Richard Sealey. With its help I kept my 1966 Bug alive into Moon Unit territory. The "procedures" were recipes, or algorithms, for things like setting valve clearances, changing a wheel bearing, or overhauling an engine. In computer science, an algorithm is the detailed set of instructions that tells a computer, very exactly, what you want it to do.
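To make the recipe analogy concrete, here is a tiny sketch in Python (my own example; the gap and tolerance figures are invented, not taken from the Muir manual): a repair-manual style check written so unambiguously that a machine can follow it.

def valve_needs_adjustment(measured_gap_mm, spec_mm=0.15, tolerance_mm=0.05):
    """Compare a measured valve gap with the spec; return True if the gap
    is out of tolerance and the valve should be adjusted."""
    return abs(measured_gap_mm - spec_mm) > tolerance_mm

for gap in (0.10, 0.15, 0.25):
    print(gap, valve_needs_adjustment(gap))   # only the 0.25 mm gap needs work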

Here is the kicker. A traditional algorithm is carried out in a procedural manner (pay no attention to the claims of non-procedural, object-oriented computer language gurus; at the root, a computer CPU carries out a series of procedural instructions), according to a "computer code" or "program", written in one or more formal languages. Some time ago I looked at the internal release notes for the Android OS used in many cell phones. That version, at least, released in 2009, had modules written in 40 computer languages. No matter how complex the program or program system, the instructions are written by a person, or perhaps by many persons, and no matter how many, their knowledge is finite. There are also time constraints, so that the final product will be biased, firstly by the limitations of the programmer(s), secondly by tactical decisions of what to leave out for the sake of time or efficiency, and thirdly by the simplifications or shortcuts this or that programmer might have made so that some operation was easier to write the code for. It may also be biased by the inner prejudices of the programmer(s).

Another kicker: A kind of start-stop-start process has been going on around Neural Networks. They try to mimic the way our brains are wired. There are two kinds, hardware and software. Hardware neural nets are difficult to construct and more difficult to change, but they have much greater speed, yielding almost immediate results. Because people who can wire up such hardware are quite rare compared to people who can write computer software, hardware nets are also rare, and nearly all the research being done with them is being done using software simulations. "Machine learning" by neural nets can be carried out by either hard- or software nets, but I'll defer remarks on one significant difference for the moment.

A neural network created for a specific task—letter recognition in handwritten text, for example—is trained by providing two kinds of inputs. One is a series of target images to "view", perhaps in the form of GIF files, or with appropriate wiring, a camera directly attached. The other is the "meaning" that each target image is to have. A training set may have five exemplars of the lower-case "a", along with five indicators meaning "that is an a", five of "b" and their indicators, and so forth. The innards of the net somehow extract and store various characteristics of the training data set. Then it is "shown" an image to identify, and it will produce some kind of output, perhaps the ASCII code for the letter.

The inner workings of neural nets are pretty opaque, and perhaps unknowable without extremely diligent enumeration of all the things happening at every connection inside. But at the root, in a software neural network there is a traditional algorithm that describes the ways that the network connections will interact, which ones will be for taking input or making output, which ones will store things worth "remembering", and so forth. This is one reason that software nets are rather slow, even on pretty fast hardware. The simulation program cannot produce the wholly parallel processing that a hardware net uses (brains use wholly parallel processing, and are hard-put at linear processing, the opposite of computer CPU's). If the net is small, with only a few dozen or a few hundred nodes, the node-by-node computations can be accomplished rapidly, but a net that can recognize faces, for example, has to be a lot bigger than that. It will be hundreds of times slower.
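To make the training idea concrete, here is a minimal sketch (my own toy, in Python with NumPy; it comes from neither the book nor any real recognizer, and is vastly smaller than one). A tiny software net learns to tell two 3x3 "glyphs" apart from a few noisy exemplars, using the ingredients described above: example images, target labels, and repeated adjustment of connection weights.

import numpy as np

rng = np.random.default_rng(0)

# Training set: flattened 3x3 images of an "X" and an "O", plus noisy copies.
X_glyph = np.array([1,0,1, 0,1,0, 1,0,1], dtype=float)
O_glyph = np.array([1,1,1, 1,0,1, 1,1,1], dtype=float)
inputs, labels = [], []
for _ in range(20):
    for glyph, label in ((X_glyph, 0.0), (O_glyph, 1.0)):
        inputs.append(glyph + rng.normal(0, 0.1, 9))   # a slightly smudged copy
        labels.append(label)
X = np.array(inputs)                 # shape (40, 9)
y = np.array(labels).reshape(-1, 1)  # 0 means "X", 1 means "O"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, one output unit, weights started at random.
W1 = rng.normal(0, 0.5, (9, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)

lr = 0.5
for step in range(2000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backward pass, squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X)
    b1 -= lr * d_h.mean(axis=0)

# "Show" the trained net a fresh noisy O; output near 1 means "O", near 0 means "X".
test = O_glyph + rng.normal(0, 0.1, 9)
print(sigmoid(sigmoid(test @ W1 + b1) @ W2 + b2))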

Now for the other significant difference. The computer running the simulation is digital, while a hardware network is analog. I remember that the first time I used a computer, I was quite impressed to see calculations with 7-8 digits of significance, or, if I used double precision, 15 digits. That sounds very precise, and for many uses, it is. Fifteen-digit precision means one can specify the size of something about the size of a continent to the nearest nanometer, which is about the width of five or ten atoms. However, a long series of calculations will not maintain such a level of precision. For many practical uses, calculations of much lower precision are sufficient. Before computers came along, buildings and bridges were built, and journeys planned; a slide rule was accurate enough to do the calculations. My best precision using a slide rule was 3-4 digits. But "real life" systems are typically nonlinear, and the sums tend to partly cancel one another out. You might start with very accurate measurements (but it's quite unlikely they are more accurate than 4-6 digits). Run a simulation based upon those figures for a few dozen steps, and somewhere along the line there might be a calculation similar to this:

324.871 659 836 648 - 324.860 521 422 697 → 0.011 138 413 951 016 4

If you've been counting digits, you might notice that the trailing digits 016 4 are superfluous...where did they come from? They are rounding error, both that which arose from representing the two numbers above in binary format, and that from the conversion of the result back into decimal form for display. But the bigger problem is that, setting those spurious digits aside, only 11 useful digits remain. Four have been lost. Further, if you were to start with decimal numbers that can be represented exactly in binary form, such as 75/64 = 1.171 875 and 43/128 = 0.335 937 5, multiplying them results in 3,225/8,192 = 0.393 676 757 812 5, which has 13 digits of precision, whereas the original numbers had seven each. Thus the result of a multiplication typically needs about as many digits as the two multiplicands combined.
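If you want to watch both effects happen, here is a minimal sketch in Python (my own example, reusing the illustrative numbers above):

from fractions import Fraction

# 1) Subtracting nearly equal numbers: the leading digits cancel, far fewer
#    meaningful digits survive, and binary floating point appends spurious
#    trailing digits (rounding error) when the result is printed.
a = 324.871659836648
b = 324.860521422697
print(a - b)   # close to 0.011138413951; any digits beyond those are rounding noise

# 2) Multiplying two exact 7-digit values: the exact product needs about
#    13 digits, roughly the sum of the two inputs' digit counts.
x = Fraction(75, 64)    # 1.171875
y = Fraction(43, 128)   # 0.3359375
print(x * y)            # 3225/8192
print(float(x * y))     # 0.3936767578125 (both factors and the product fit exactly in binary)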

I could go on longer, but an interested person can find ways to determine error propagation in all kinds of digital systems, many of which have long been studied already. By contrast, an analog system is not limited by rounding errors. Rather, real wires and real electronic components have thermal noise, which can trouble systems that run at temperatures we might find comfortable. Further, extracting the outputs in numerical form takes delicate equipment, and the more accurate you want those output numbers to be, the more delicate and expensive the equipment gets. However, until readout, the simulation runs with no errors due to subtraction or multiplication, other than gradual amplification of thermal noise.

Suffice it to say, both direct procedural algorithms and neural network machine-learning systems are in use everywhere, trying to predict what the public is going to do, be it buying, voting, dating, relocating, or whatever. That is the main reason for science, after all: predicting the future. Medical science in the form of a doctor (or more than one) looks at a sick person and first tries to find a diagnosis, an evaluation of what the problem is. The next step is a prognosis, a prognostication or prediction; it is the doctors' expectation of the progress of the disease or syndrome, either under one treatment or another, or under none. A chemist trying to determine how to make a new polymer will use knowledge of chemical bonding to predict what a certain mixture of certain chemicals will produce. Then the experiment is carried out, either to confirm the expectation (the prediction) or, if it fails to, to learn what went against expectation and why. The experiments that led to the invention of Nylon took ten years. But based upon them, many other kinds of polymers later proved easier and quicker to develop. It is even so in biological science. Insect or seashell collecting can be a fun hobby, but a scientist will visit a research museum (or several) to learn all the places a certain animal lives, and when various specimens were collected, and then determine if there is a trend such as growing or shrinking population. Is the animal going extinct? Or is it flourishing and increasing its range worldwide?

In the author's view, The Formula represents the algorithms used in the business world, broadly construed, to predict what you might like, and thus present you with advertising to trigger your desire for that thing. My experience with cabinet handles shows that they often get their timing wrong. Many cool and interesting ads showed up, but it was too late. However, that isn't the author's point. The predictive methods, whether they choose which product ads to show us, which prospective dating partners eHarmony or OK Cupid suggests, or how a politician's image is managed, all tend to narrow our choices. A case in point from the analog world: one of the best jobs I had before going into Engineering came about because an Employment Agent, leafing through job sheets, muttered, "You wouldn't be interested in that," but I quickly said, "Try me!"

Try making some Google searches while logged in to Google, and then (perhaps using a different browser, and if you're really into due diligence, on a different computer network such as a library), making the same searches while not logged in. The "hits" in the main column will be similar, or possibly the same. But the ads on the right are tailored to your own search history and other indicators that Google has gathered.

Is all this a bad thing? Maybe. You can game the system a little, but as time goes on, your history will more and more outweigh things you do differently today. Sure, I got a sudden influx of ads about cabinet handles after searching for same, but if I had a history as a very skilled handyman (I don't!), the exact ads I saw might have been quite different. And I might have also seen ads about certain power tools intended to make the mounting of new cabinet handles even easier.

The author has four concerns and spends a chapter on each.

  1. Are algorithms objective? They cannot be. Programmers are not objective, and machine learning is dependent on the training set, which depends on the persons who create it, and they are not objective.
  2. Can an algorithm really predict human relationships? We have proverbs that give us pause, such as, "Opposites attract", and "If you're not near the one you love, you'll love the one you're near".
  3. Can algorithms make the law more fair? I was once asked by a supervisor if I thought he was fair. I replied, "Too much concern for fairness can result in harshness. We (his 'direct reports') wish to be treated not just fairly but well. We'd like a little mercy with our justice." Mr. Dormehl cites the case of an experiment with an inflexible computer program, given the speed records from a car on a long-distance trip. It issued about 500 virtual tickets. A different program, which averaged speed over intervals just a little longer, issued one ticket. (A toy version of this contrast appears in the sketch just after this list.)
  4. Can an algorithm create art? Since all the programs created to date operate by studying what makes existing artworks more or less popular, they can only copy the past. True creation means doing what has not been done. Picasso and others who developed Cubism did so against great opposition. Now their works sell for millions. It was art even before it was popular, but the "populace" didn't see it that way for a couple decades.
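To see how sensitive such judgments are to small details of the algorithm, here is a toy re-creation in Python (my own made-up data; the book does not publish the actual trip records). One rule issues a virtual ticket for every one-second reading over the limit; the other averages speed over ten-minute stretches before judging. Same car, same trip, wildly different verdicts.

import random

random.seed(1)
LIMIT = 65.0
# One reading per second for a 10-hour trip: cruising a bit under the limit,
# with normal jitter that occasionally pokes above it.
speeds = [62.0 + random.gauss(0.0, 2.0) for _ in range(10 * 3600)]

instant_tickets = sum(s > LIMIT for s in speeds)

WINDOW = 600   # seconds per averaging window (10 minutes)
averaged_tickets = sum(
    sum(speeds[i:i + WINDOW]) / len(speeds[i:i + WINDOW]) > LIMIT
    for i in range(0, len(speeds), WINDOW)
)

print(instant_tickets, averaged_tickets)   # thousands vs. zero with this toy data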

The book closes with a thoughtful section titled "How to Stay Human in the World of the Formula." While he has some suggestions, I think the best way is to avoid being totally predictable. In many ways, that is hard for me, because I am a man of regular habits. I'm quite happy eating the same meat-and-cheese sandwich for lunch day after day, taking the same route to a work place (or these days, a place I volunteer), eating at a certain kind of restaurant and eschewing most "fine dining" places, wearing a certain kind of garb depending on the season, playing (on acoustic instruments, not electronic devices) certain kinds of music to the exclusion of others, and so forth. But I am also the kind of guy who, when making a mobile, makes it quite different from any other I have ever made: different materials, different color schemes, and different numbers of hanging objects clustered—or not—in various ways. I made one out of feathers once; not my most successful mobile. When I write a formal document or a letter for sending via snail mail, though I type it because handwriting is so slow, I usually pick a new typeface in which to print it; I have a collection of nearly 2,000 font files, carefully selected either for readability or as specialized drop caps (I love drop caps, though I am careful in their use). I haven't bothered to try alternate typefaces for this blog, because there are only 7 available anyway, and the default is as good as any.

The author proposes that we "learn more about the world of The Formula". Sure. But as long as Google's ranking algorithm (PageRank and its successors) is a black box, and as long as everyone out there from FaceBook and LinkedIn to Amazon and NetFlix keeps tweaking their own black box "recommendation engines", it will be a kind of arms race between the cleverest consumers and the marketers. But, hasn't that always been true?

Saturday, August 01, 2015

And you thought sharks were dangerous

kw: book reviews, nonfiction, animals, animal rights, performing animals, orcas

What is black and white, smarter than a chimpanzee, but can't walk? It is the apex predator of the oceans, the orca or "killer whale". Until about 90 years ago, an orca was called a "grampus" by many whalers; that's the word used in Moby Dick. Others called them "blackfish".

I once recalculated the Encephalization Quotient of various whales and porpoises, based on subtracting out the blubber so they'd be more comparable to land mammals. Human EQ ranges from 7 to nearly 8, for fit persons (Take a bright individual with an EQ of 7.5, but a sweet tooth. If he fattens up and his weight doubles, his EQ will drop below 7). Porpoises and other "small" toothed whales have EQ's mostly above 4. But even the fittest bottlenose is 40% blubber, so divide 4 by 0.6, and you get – surprise! – something above 6.6! Raw EQ for orcas ranges from nearly 3 to nearly 4, but their percentage of blubber is a bit less, so I'd put their "real" EQ into the 4.5-6.5 range.
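For anyone who wants to redo that arithmetic, here is a quick Python sketch of the back-of-envelope adjustment (the orca blubber fraction, "a bit less" than the dolphin's 40%, is my own guess at what was meant):

def blubber_adjusted_eq(raw_eq, blubber_fraction):
    # Rescale the published EQ as if the blubber were excluded from body mass.
    return raw_eq / (1.0 - blubber_fraction)

print(blubber_adjusted_eq(4.0, 0.40))   # bottlenose dolphin: 4 / 0.6 = 6.67
print(blubber_adjusted_eq(3.0, 0.33))   # orca, low end of raw EQ: about 4.5
print(blubber_adjusted_eq(4.0, 0.33))   # orca, high end of raw EQ: about 6.0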

I've been to SeaWorld to see orcas perform, including "rocket hop" stunts that throw a human trainer 30+ feet above the water. This was before 2010, when a widely publicized killing of a trainer led to a ban on human trainers (who are also performers) entering the water with orcas throughout the US.

How is it even possible for humans to work with these creatures? We need some comparisons, and John Hargrove, an orca trainer for 14 years, provides them in his new book Beneath the Surface: Killer Whales, SeaWorld, and the Truth Beyond Blackfish, written with Howard Chua-Eoan. John was one of several orca trainers interviewed in the film Blackfish, which probed the real story of the death of Dawn Brancheau.

After a performance in February 2010, tensions were high between several orcas, for reasons not clear to the trainers, and Dawn was not in the water, but on a platform next to the pool right at water level, talking to Tilikum, when he moved forward, bit her arm and pulled her into the pool. He didn't try to eat her, but rammed and bit her body repeatedly. She didn't live long, and was partly dismembered. She was the third trainer that Tilikum had killed, but the first to be made known publicly, because this took place in public view.

Do orcas consider us prey? Not really. We don't resemble any of their usual prey animals. Seas around the world are inhabited by a dozen or more loosely related populations of orcas. Some prefer seals, some attack primarily baleen whales, and others eat mainly fish. Being quite a bit smarter than sharks, orcas can tell we aren't seals when they encounter swimming humans, and typically leave them alone. That is in their natural environment. A theme park is hardly a natural environment!

The salt water tanks at SeaWorld parks are big, really big, from our perspective. Roughly the size of a football field, and 40 feet deep, they hold 10-20 million gallons of water, each. But a mature male orca is 30 feet (9m) long and he can weigh 10 tons, though the heaviest male in captivity weighs 6 tons. Take that 6-ton animal, compared to a 200# (90kg) man: the ratio is 60:1. The length ratio is about 5:1. A 15 million-gallon tank sounds like a lot; that is about 2 million cubic feet, but:

  • Comparison 1: 2 million cu.ft / 60 ≈ 33,000 cu.ft. (about 940 cu.m.)
  • Comparison 2: 300 ft (football field length) / 5 = 60 ft (18m)
The average house in my neighborhood is a 3-bedroom, 1- or 2-bath bungalow, colonial style, with a full basement but no garage. Indoor volume is about 15,000 cu.ft. The two largest houses on this block are 4-bed and include an attached 2-car garage. Their volume is just under 25,000 cu.ft. That's in the same ballpark as Comparison 1. On a length basis (Comparison 2), the usual house around here is 30-35' long (10m or less), and the two large houses are 40 ft long (12m). So a captive orca in a SeaWorld park has a pretty big "house" to live in, one might think. Almost a McMansion, perhaps. But he can never go out into the yard. It is perpetual house arrest. A free orca often swims 50-100 km daily, and may dive to 1000 ft (~300m) after prey. You, in house arrest, might take advantage of a treadmill or StairMaster. They don't make those for orcas.
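For anyone who wants to redo the arithmetic, here is a back-of-envelope sketch in Python using the rough figures quoted above (a 15 million-gallon tank; a 6-ton, 30-foot orca; a 200-pound, roughly 6-foot human; a 300-foot pool):

GALLONS_PER_CUBIC_FOOT = 7.48

tank_cubic_ft = 15_000_000 / GALLONS_PER_CUBIC_FOOT   # about 2 million cu. ft.

mass_ratio = 12_000 / 200    # 6-ton orca (12,000 lb) vs. 200-lb man -> 60
length_ratio = 30 / 6        # 30-ft orca vs. roughly 6-ft man -> 5

print(round(tank_cubic_ft / mass_ratio))   # Comparison 1: about 33,000 cu. ft.
print(round(300 / length_ratio))           # Comparison 2: 60 ft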

The book has another apt analogy. For about 70 years, some people who fear "flying saucers" or ET's have expressed anxiety over "alien abductions" and "experiments", including breeding experiments, being done by aliens on human captives. Now imagine, you are suddenly subject to imprisonment in your own home, by captors the size of Guinea Pigs…except they are smarter than you are, they have weapons you can't understand, and they control your food supply. Oh, and they teach you silly "behaviors" that you must perform 4, 5, 6, even 7 times daily. They take away your children, restrain you in order to perform artificial insemination, and take away the children that result, once they are weaned. They also bring in, from time to time, other children and adults with whom you are supposed to share your living space and "all get along." Except these others don't speak your language and have cultural habits you can't fathom. We're not talking moving a Frenchman into an American home. More like tossing Mr. Joe Sixpack in with a warrior from the New Guinea uplands, a Congolese tribeswoman who has never heard the English language, and Quechua-speaking twin toddlers from Peru.

I analyzed orca intelligence above in terms of EQ. The batch of disparate humans described in the prior paragraph would soon develop some kind of patois so they could communicate, and the young twins would probably learn all three other languages available. Orcas in "forced communities" in SeaWorld parks never seem to learn one another's languages. Those captured wild were just a year or two old, so they were never raised or socialized by parents. Those born in the parks were, if they were very lucky, somewhat "raised" by their mothers, at least for a couple of years, but were typically moved to another park by year 5, only partway through whatever socialization their mother could teach them. Captive orca groups are thus socially abnormal, extremely so. They cannot communicate vocally to defuse emotional tensions, so they are more violent among themselves than wild orcas.

It is a testament to their intelligence, curiosity, and general good nature, that captive orcas tolerate humans swimming with them at all. If you were in the house described above, and a Guinea-Pig-sized ET came within arm's reach, do you think it would live long? The incidence of "aggressive incidents" and trainer deaths caused by orcas is actually stunningly small.

John tells us there are at present 30 orcas in captivity at SeaWorld parks. Contrary to park press releases, the typical captive orca has a life span of 15-20 years, if they survive beyond a year or two, which half don't. Wild orcas live 30 (male) to 50 (female) years. But a few captive orcas are in the range of 30+ years old. Even if orca "performances" become outlawed, the captive orcas cannot be put into the ocean. They don't know how to behave around wild orcas and would be either shunned, and thus die of loneliness, or killed outright. So we have a 20-to-40-year commitment to these animals, to provide some kind of living for their lifetimes. They are roughly half as intelligent as we are, perhaps more. But that intelligence is different. We cannot ethically keep them as performers, and especially, we cannot keep breeding them. The day must come when no whales are captives in tiny tanks ("tiny" meaning smaller than the Gulf of Bothnia).

John Hargrove loves the whales. He details his life, learning what he had to learn to qualify as an apprentice trainer, getting opportunities that rapidly moved him up the ladder until he was Senior Trainer. He spent a couple of years in France, training orcas who had not yet learned how to work with humans in "their" water…and training the French trainers. He does not shrink from telling of the occasional problems with an aggressive or angry orca. I am again impressed that these animals have levels of self-control, and an ability to cool down quickly, that beats nearly any human I've known. John tells of the injuries and traumas, physical and mental, that led him to take a medical leave and then to resign from training, as badly as that hurt him emotionally.

He is deeply conflicted, wishing he could be with the whales, yet knowing that, were SeaWorld management truly ethical, there could be no further human-orca contact beyond caretaking, and many people he loves would lose their jobs. He has become an advocate for the whales. They are the real victims here. No human has yet learned to communicate in depth with an orca or any other sea mammal, in spite of the fact that orcas not only have language among themselves, but several languages around the world. So we have little hope of learning the language of space aliens if any ever show up. Meanwhile, in this "alien encounters of the third (or even fourth!) kind" scenario, we are the aliens, and our captives are totally dependent on us. We have them in a space they cannot escape, and have changed them so they cannot be introduced to "normal" society, anywhere on this globe.

Every one of us needs to spend a few minutes daily pondering that fact.