Friday, July 12, 2019

Biography of the Cat in the Hat

kw: book reviews, nonfiction, biographies, dr seuss, theodor geisel

When I was a very early reader, among the many "Little Golden Books" and other "early readers," two books stand out in my memory: The 500 Hats of Bartholomew Cubbins and McElligot's Pool, both by Dr. Seuss. I read them again and again. Later, when we obtained more books by Dr. Seuss, I gave them the same treatment. I find ten books by Dr. Seuss on the shelf, though one is a newer reprint of McElligot's Pool, bought so that my old, much-bedraggled copy doesn't get worn to nothing.

Fast-forward a lifetime: When our son was in high school, a decade after adding much wear and tear to my old books and his newer ones, his school put on Seussical the Musical. He was lucky to be in a high school that didn't just have a drama club, but also put on one full-blown musical production yearly, recruiting younger siblings and a few parents to assist. For a special performance, they invited a local operatic bass to sing the Grinch's part, which he did with great delight (and ours also!).

The first thing I learned while reading Becoming Dr. Seuss: Theodor Geisel and the Making of an American Imagination, by Brian Jay Jones, is that "Seuss" is pronounced "soice", to rhyme with "voice"! Well, that may be the way Ted Geisel pronounced it as part of his name. As his pen name, however, millions (maybe over a billion!) of us know him as "Doctor Soose", and that's the way it will continue.

I had a tendency to feel a little sad, reading of the life of Ted Geisel. He literally agonized over every word and every picture of his books. He had a hard time when younger getting his work placed with publishers, of magazines or books alike, that would pay him more than a pittance, if they would pay him at all. He didn't achieve financial comfort until he was in his fifties. But I realized that he was doing what he loved, and that, agony and all, he'd have created his fantastical characters and animals whether they made him wealthy or not, or even paid the bills.

Indeed, for decades he paid the bills with advertising illustrations; he coined the phrase "Quick, Henry! The Flit!" (Flit was a bug spray brand he promoted with ads like the one shown here; image courtesy of the UCSD Mandeville Special Collections Library).

This is from 1927. These ads, with their humorous animals and fantastical creatures (the mosquito in particular), show where he honed his craft during the long years when his books didn't yet sell all that well. It took America a while to get used to him.

Get used to him we did! In the 1950's and early 1960's, a good year might see sales of one of his books approaching 10,000 copies. By the 1970's a new Dr. Seuss book might sell millions of copies in just a few years.

An aside I kinda regret: Biographer Jones dwells much too much in early chapters on the "failure" of Ted Geisel to meet the "sensitivity" standards of this generation. He bemoans an apparent bias against women; he repeats—a few too many times—a description of Japanese as caricatured by Geisel during World War II, as slit-eyed dwarves; and he has several other such quibbles. He could have made his point much more effectively with a glancing notice, and just gone on. This stuff detracts from the book. Furthermore, it is dramatically unfair to judge the actions of someone in the 1930's or 1950's by the standards of the 1990's or 2010's. How will our great-grandchildren of the 2060's look back at us, and our comparatively "primitive" behavior? And further-furthermore, today's "standards" are procrustean and overly censorious. I predict they will be the subject of future ridicule!

Considering that millions of words have already been written about Ted Geisel, and possibly about this biography, I think it best simply to say that, knowing his purported faults (heavy smoker, persistent social drinker), I loved the genius that was Dr. Seuss, and I still do. He had an imagination like no other. Younger writers of children's books who tried to emulate his style, and whom he frequently mentored, sometimes produced marvelous books of their own, but had to find their own voice to truly excel. His best-loved character, the Cat in the Hat, was his alter ego: impish, sly, subversive, and messy, but willing to clean things up once the fun was done. The Cat is his fitting companion in this statue at the Geisel Memorial Library on the campus of UCSD, La Jolla, in the town he made his home.

Monday, July 01, 2019

Three for the ash can

kw: rejected books, policy

It has been too long since my last post. In that time I have read major portions of three books, and in each case I stopped reading and decided not to review the book. In such cases I decline to give the author or title any exposure at all. I have a few reasons for not continuing to read a book: writing too poor to hold my interest, subject matter explained too poorly to be of use (in my estimation), and, quickest of all, deciding that I don't want to take in what is being offered. It seems all three of these criteria "hit the fan," one after another. There are other reasons, but those three are chief.

So I have begun reading another book, a biography of Dr. Seuss (Ted Geisel). It's long, so at least another week must pass before I finish it and review it here. I can happily report that this book is one I am sure to finish, and with much enjoyment.

Saturday, June 22, 2019

The Russian spiders get subtler

kw: blogs, blogging, spider scanning

Hmm. I let 12 days pass, and this showed up:

The Google gnomes have updated the look of the Stats pages, but left most of the colors alone. When I notice a big green blob somewhere besides America, I look more deeply. Russia stands out, since it is so huge. The Audience focus shows more:

The usual count for Russia in a week is a dozen or so. The four 20-high spikes in the chart (note that three are double width) total about 140 hits: one single plus three doubles is seven columns, and 7 × 20 = 140. Pull out the 136 from Russia, and the basic level is about right.

It seems the spider has been tuned to snarf up 19 or 20 hits at a time, with an hour or so between. Although a cluster of smaller spikes would be less obvious in stats for a more popular blog, they are still pretty obvious here. Even when I was blogging daily or oftener, and getting 15-20 hits hourly, these spikes would have been pretty visible.

Весело ли тебе? ("Are you having fun?")

The livingest languages

kw: book reviews, nonfiction, language, languages

My youthful conclusion has been confirmed by a linguist and the experts he cites: it is harder to learn German than most other languages. Not nearly as hard as Vietnamese, perhaps, but hard enough. I learned French rather well in three years in high school, and carried on "pen pal" relationships in French for a number of years, with great enjoyment. In my first year of college, I found I needed to learn German if I wished to be a Chemistry major. No substitutions would be allowed. I flunked German 101, twice. After my sophomore year, during which I truly enjoyed the year of Organic Chemistry, I changed majors…I also moved to another state and changed schools, but mainly for other reasons.

Reading Babel: Around the World in Twenty Languages by Gaston Dorren, I had the fun of dabbling in his riffs on what he calls the "Babel 20". These are the languages needed to converse with at least half the people on Earth in their mother tongue, and with about 3/4 of all people in a language they know fluently.

I don't recall what I anticipated when I began to read the book. I found that each section, from chapter 20 (the first, on Vietnamese) through 2 (Mandarin), 2b (the Japanese writing system) and 1 (the last, on English) was focused on one thing or a few things that the author found interesting about the language. He speaks six languages and can read nine more, but they are all Euro-based. He spent six months, much of it in Vietnam, trying to learn Vietnamese, and failed. He got so he could sort-of read small portions of the newspaper, in a language that has the most diacritics of any. Even then, there would be at least a couple of words in every sentence he would have to look up.

A word on diacritics. They are the accents, dots, and other appurtenances added to our "familiar" Roman letters; it is generally considered that they turn them into different letters. Written or printed English hardly has any. We have retained a very few, but they are dropping out of use. How many write "resume" when they mean "résumé" (a curriculum vitae, to use the Latin synonym), or "naive" rather than "naïve"? European languages have them, so that "Do you speak French?" becomes "Parlez-vous Français?". Notice the little squiggle below the "c"; it is called a "cedilla" (French cédille, pronounced roughly "say-DEE-yuh"). Both "résumé" and "naïve" are also from French. Scandinavian languages also have letters like "ø". Consulting Google Translate, I find that "Please show me the way to the restroom" becomes "Xin chỉ cho tôi đường đến nhà vệ sinh" in Vietnamese. Both "d" letters have a slash, some of the vowels have two "things" near them, and one "e" has a dot below plus a circumflex above. I don't know what kind of keyboard can be used to write it!
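
Out of curiosity, I poked at how those letters are actually encoded. A short Python sketch (using only the standard unicodedata module; the word "đường" is from the sentence above) shows that each accented Vietnamese vowel is a base letter plus combining marks, while "đ" is a letter in its own right. And as I understand it, no special keyboard is needed: input methods such as Telex turn ordinary keystrokes into these letters.

    import unicodedata

    word = "đường"  # from the Vietnamese sentence above
    for ch in word:
        # NFD normalization splits a letter into base + combining marks
        parts = unicodedata.normalize("NFD", ch)
        names = " + ".join(unicodedata.name(c) for c in parts)
        print(f"{ch!r}: {names}")

Running it shows, for example, that "ờ" is LATIN SMALL LETTER O plus COMBINING HORN plus COMBINING GRAVE ACCENT, while "đ" (D WITH STROKE) does not decompose at all.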

Two things seem to have conspired to make the language impenetrable to Mr. Dorren: firstly, the pronunciation is subtle, such that at the end of his half year of study he still could not understand a simple spoken Vietnamese sentence; and secondly, the many levels of respect and politeness affect the terms used, and the way they are written and pronounced. I learned a little of this kind of thing with Japanese (#13 of the Babel 20; my wife is Japanese), which has four levels: abrupt, ordinary (inside the home), polite (common street lingo; what you mostly learn in language classes), and super-polite. There is also a level of Japanese spoken only by the Emperor, and there are male and female ways of speaking. That's complicated enough, but Vietnamese is rather extreme by comparison.

Vietnamese also has six tones. To those who know a little about it, the four tones of Mandarin might be familiar. The way they distinguish the various meanings of words pronounced "ma" (as in "mama" for mother) is a common "fun story". Depending on tone, "ma" can mean "mother" (mā), "hemp" (má), "horse" (mǎ), or "scold" (mà), and the toneless "ma" is the "pronounced question mark". Thus, with appropriate tones, the phrase "Ma ma ma ma" means "Is Mom scolding (the) horse?". Knowing that "dui" (pronounced "dway") means "yes", the answer is "Dui, ma ma ma." Tonal languages abound, particularly in Asia and Africa. Cantonese has nine tones. Luckily, you don't have to have perfect pitch to hear the languages. The tones are relative to the general pitch of the sentence and the person's voice, and some tones are moving, such as the way English speakers tend to end questions with a rising tone or give one-word answers in a falling tone.

As I mentioned above, Japanese is #13. My nemesis, German, is #11, spoken as a first or second language by 200 million. So it isn't actually all that hard to learn. It is just hard for this Francophile English-speaker. But the author does riff for a third of the chapter on how German is indeed harder than average. Each chapter begins with a one-page summary of language characteristics. It includes such things as language family ("German belongs to the Germanic branch of the Indo-European family", while "Arabic is the most widely spoken of the Semitic languages, … of the Afro-Asiatic family"), script (German uses Roman script, but used to be written in Fraktur, which looks a lot like "Old English" blackletter; my grandmother's grandfather was from Germany, and he wrote in Fraktur), and sounds (including tones, and how many vowel and consonant sounds are in common use).

A word about sounds. This is one area in which I was disappointed. Although little variations can cause the number of "distinct" sounds in any language to exceed 100, they cluster around a more limited number. "Standard" English, primarily the language as described in the large dictionaries, has either 43 or 44 sounds or "phonemes". According to my enormous second edition of Webster's Unabridged (published in 1979), there are 30 vowel sounds, but by my study, several of them collapse into one another, leaving 23 or 24, depending on whether a following "l" or "r" actually changes the sounds of "a" and "o". There are 20 consonants, but the number of "nonvowel phonemes" is probably 24, which underlies the claim I have seen of 48 sounds in spoken English. Most languages have fewer than this, usually fewer than 40. But Mandarin (I have been told by several Mandarin speakers) has 88 sounds. Most of them are vowels that English speakers who didn't hear Mandarin as a small child (and keep it up) cannot distinguish, and a number of the consonant sounds are also opaque to us. Thus the common question "Are you Chinese?" becomes "Zhōngguó rén ma?". That accented "e" in "rén" is a sound I can neither hear accurately nor properly pronounce (my Chinese friends giggle behind a hand). And "zhōngguó" sounds to me like "choo-go", but I am assured that the "ngg" sound is actually nasalized, though to a lesser extent than ñ in Spanish. At the other end of the spectrum, I understand that Hawaiian has 13 sounds, five vowels and eight consonants. Anyway, the sections on "sounds" for each language do not provide a satisfactory guide.

Dropping that quibble, though, the book is a delight to read, and it ends on a thoughtful note. English has become the modern lingua franca, the go-to language for diplomacy and international trade (and internet communications), a role French filled a few generations ago. Will Mandarin replace it any time soon? The chapter on English, posed as an interview with a native English speaker, reaches the conclusion that the Chinese writing system(s) would prohibit that. Computer-expedited communication works better when an alphabetic script is used. My Chinese friends, when texting in Mandarin, use a phonetic input method, and the software in their phones picks out possible characters for them to choose. A smaller number use software that lets them draw the character with a finger. That method is actually slower! And, as a final conclusion, automated translation software continues to get better, so we may not need a lingua franca in the future. I have no idea whether the Vietnamese sentence I used above is an accurate translation. In another 10-20 years, the translations will be of greatly improved accuracy. I already have an app on my phone that lets me say something in English, and it speaks a Chinese, Spanish, Korean, or whatever translation, and it works pretty well. The person's reply is turned into English that might be a little stilted, but is clear.

Too much fun. I understand the author wrote an earlier book about 60 languages. Worth looking into.

Monday, June 10, 2019

Trying out a new creation myth

kw: book reviews, nonfiction, origins

Dr. David Christian is among the handful of people who founded, and promote, the concept of Big History. Of course, the biggest history available is that of the whole Universe, which has been presented from various angles by none other than Prof. Stephen Hawking via the books A Brief History of Time (1988), The Universe in a Nutshell (2001), The Grand Design (2010, with Leonard Mlodinow), and Brief Answers to the Big Questions (2018, posthumous). A pretty hard act to follow!

In his book Origin Story: A Big History of Everything, Dr. Christian proposes to replace Genesis and other creation stories with a simplified but comprehensive account of scientific knowledge regarding the Universe, the Earth, and us.

I like his approach. He identifies 8 milestones that he calls Thresholds. One could say they represent successive crystallizations of the flow of energy. One author (I no longer recall who) wrote of "hangups" in the otherwise steady flow of energy from the extreme contrast represented by the Big Bang, to the eventual heat death of the Universe as it reaches maximum possible entropy. For example, stars "hang up" energy by forging hydrogen into helium at a measured rate; also, gravity causes galaxies, galaxy clusters, and superclusters to form and retain their integrity for billions of years, rather than everything falling straight back together. The simplest hangup is the minuscule angular momentum found in a hypothetical universe of just two particles being attracted by gravity. If either particle has even a trace of sideways momentum, not directly on the line between them, they will miss one another at closest approach, and some sort of orbit will be achieved instead. If they bear an electric charge, they will emit photons as they are accelerated, so that over time the orbit will shrink until they collide. But the process will take eons longer than if they were to fall directly into one another.
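
That two-particle case can be checked on the back of an envelope, using nothing but conservation of energy and angular momentum. Here is a minimal Python sketch, in made-up units of my own choosing (not anything from the book), showing that even a tiny sideways nudge yields a small but decidedly nonzero closest approach:

    import math

    # Two point masses falling together from rest-plus-a-nudge.
    # Units are arbitrary; GM is G times the total mass (assumed values).
    GM = 2.0      # G times total mass
    r0 = 2.0      # initial separation
    vt = 0.02     # tiny sideways (transverse) relative velocity

    E = 0.5 * vt**2 - GM / r0   # specific orbital energy (negative: bound)
    L = r0 * vt                 # specific angular momentum (nonzero!)

    # Closest approach solves E = L^2/(2 r^2) - GM/r, a quadratic in r:
    # 2*E*r^2 + 2*GM*r - L^2 = 0
    r_min = (-2*GM + math.sqrt(4*GM**2 + 8*E*L**2)) / (4*E)
    print(f"E = {E:.4f} (bound), closest approach = {r_min:.6f}")
    # prints ~0.0004: the particles miss each other and orbit instead

Set vt to zero and L vanishes, and so does the closest approach; any nonzero nudge, however small, produces an orbit rather than a collision.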

A little under half the book is taken up with the story of the Universe from the Big Bang, and the energy threshold of the Big Bang itself, followed by those represented by the formation of stars, galaxies, molecules, and life, leading up to Threshold 6, Humans. Somehow, the capture of electrons by nuclei to form atoms, which I would think of as a significant threshold, and which certainly had a great effect on energy flow in the Universe, is glossed over. Naturally, this would come before molecules. In fairness, I must note that the writing in this portion is less than compelling. I went to YouTube to watch some of Dr. Christian's lectures and a TED talk. He is an excellent speaker. It just doesn't show in this portion of the book; there is a lack of passion.

He hits his stride once he begins to discuss agrarian civilizations, which we tend to call just Civilization, as if there were no other. The transition from foraging to farming, which took place over several thousand years, erupting in various places, is quite a crystallization of the human species, from a more fluid state to the settledness of farms and cities. The clear passion shown by the author in this and later portions makes for much more agreeable reading. The slog through the earlier portions mostly explains why it took me so long to read this book.

I would call Origin Story a useful first step to making Big History accessible to the bulk of us. There is a long way to go.

Monday, May 27, 2019

This mystery is almost incidental

kw: book reviews, mysteries, fiction, animal fiction

In the past I read a couple of the Sneaky Pie Brown mysteries by Rita Mae Brown, in which the cat is an essential element in the solution of the crimes. Homeward Hound is the latest in her Sister Jane series, set in the Virginia horse country, among the fox hunting set.

Ms Brown's rollicking prose carries a story along, no matter the venue. I found it fascinating reading, not so much for the gradual solution to two murders, but as a window into a subculture about which I know next to nothing. As it happens, the author is a Master of Fox Hounds (MFH, a designation I didn't know was a "thing"). I suspect a number of her friends find themselves limned in the story, only thinly disguised.

Reading of the fox-and-hounds set, to a surprising level of detail, added to my understanding that, whatever formal religion folks may belong to, most have a real "church" that some might call a hobby. So these riders' dedication to the hunt is their actual religion, as is the desert-scanning and mineral/agate-gathering of many rockhounds and the finely-honed model-building of model railroad enthusiasts.

In a Sneaky Pie story, the cat does some of the talking, but only to other cats and dogs, who also talk. In a Sister Jane story, cats, dogs, foxes, and owls get into the act. It seems they can all understand one another's "speech", and also what the humans are saying, but no human knows what the animals are "saying". However, the animals don't do much to solve the crime. Sister Jane does most of that. The animal chatter seems to serve more as a series of asides that help the reader gather background information before Sister Jane becomes aware of it. For example, the primary murder in this volume occurs during a snowed-out fox hunt, when a rider is pulled off his horse and killed, but only the horse notices…and "talks" about it with another horse later on.

The story gets pretty complex, with many characters (about twice as many as most writers would employ, just of the human kind, let alone a couple dozen animals). There are a couple of love stories going on. The tying-up-the-knots scene at the end ties up a few knots nobody knew were in the works. To me, it detracted a bit. But all in all, quite an enjoyable read.

Wednesday, May 22, 2019

Refraining from a roll in the gutter

kw: book reviews, short fiction, short stories

"You can't rassle with a skunk and come out smelling like a rose." —anonymous Texan.

The blurb on the slipcover says of the characters in Thom Jones's stories, "…grifters and drifters, rogues and ne'er-do-wells, would-be do-gooders…", but I decided to give the book a try anyway: Night Train, a posthumous collection of his stories, both published and unpublished.

Jones became famous in his forties when he published "The Pugilist at Rest", which opens the volume. It is indeed a pretty good story. There is an interesting mix of erudition and the argot of the perennial bottom-feeder. The second story, "Black Lights", on the same theme of a washed-up boxer, goes downhill from there. I didn't finish it. I popped around the volume, and found little worth reading for more than a minute or so. I usually checked the last paragraph to see if a story or the character went anywhere. Nope.

I don't mean to say that Jones wrote badly. He spent (squandered?) his skill on getting us inside the heads of people whose heads I don't like being in.

There are 19 stories from three prior volumes, and 7 new ones. Only one of the new ones caught my interest for more than a minute or two: "Diary of My Health". It is evidently autobiographical, with flights into lunacy, from the viewpoint of a 62-year-old who doesn't expect to live much longer. It describes Jones's medical issues, but how inflated or deflated I cannot tell. He lived to age 72, dying, as he expected, mainly from diabetes. He also suffered from temporal lobe epilepsy, which features strongly in many of the stories, and figures in his bursts of creativity.

There are a lot of well-known writers (and many others) who almost idolize his work. Most of the writers are people whose writing I don't read, so I'll leave them all in the bubble I found them in.

Tuesday, May 21, 2019

VR and AR - Becoming the newest IE's

kw: book reviews, nonfiction, virtual reality, augmented reality, surveys, history

I must have been twelve when my family and I went to a Cinerama theater to see Windjammer. It was quite a special event, partly because it cost a lot more than going to an ordinary movie. The theater's program discussed what made Cinerama extra-special: the ultra-wide screen (more than 140°) and the fully-surrounding sound system. Though the movie had some kind of plot and plenty of action (mostly sailors-in-training climbing rigging, at least as I remember), the sheer spectacle was the main attraction. It was my first Immersive Experience (IE).

The hype surrounding Virtual Reality (VR) and Augmented Reality (AR)—and several other bits of jargon that boil down to these two terms—promises us better IE's. So it is about time for a historical and technological survey of the VR/AR field, ably supplied in a new book by David Ewalt: Defying Reality: The Inside Story of the Virtual Reality Revolution.

The author digs deep into history for the forebears of motion pictures and TV, and various efforts, with varying levels of success, to immerse the viewer in the experience. Considering how prone we are to see animals and scenery in clouds, pictures in half-melted butter on a piece of toast, and maps and other images on Holstein cattle, it doesn't take much to prepare us to be immersed in whatever is going on in front of us. If you look now at a video made for "standard" TV (called NTSC), it seems surprisingly blurry. Although the format had 525 scan lines (about 480 of them visible), horizontal resolution seldom exceeded 200 dots per line. A TV screen larger than 25" (diagonal; 20"x15") looked blocky. Yet we could easily become immersed in the action on the screen and "zone out" everything else, particularly in a darkened room.

Fast-forward through the two HD formats, which look good on bigger screens, up to 50" or so, and 4k, which comes close to the resolution that most people can see when the screen's horizontal dimension fills about 70° of their visual field (that means being about 50" from the center of an 80" screen). But our actual visual field is more than 200° wide by 150° high, and the zone of overlap for binocular vision is 120° wide. Thus, for a fully immersive visual experience, it takes a heck of an optical system!
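For the curious, that 50-inch figure is simple trigonometry. A quick Python sketch (my own arithmetic, not the author's):

    import math

    # How far from an 80-inch (diagonal) 16:9 screen must you sit
    # for its width to fill 70 degrees of your visual field?
    diag = 80.0
    width = diag * 16 / math.hypot(16, 9)      # ~69.7 inches
    distance = (width / 2) / math.tan(math.radians(70) / 2)
    print(f"width = {width:.1f} in, sit about {distance:.0f} in away")
    # prints: width = 69.7 in, sit about 50 in away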

Every VR system or AR system so far produced cuts into this, sometimes greatly. As described in Defying Reality, the Oculus Rift, first prototyped by Palmer Luckey nearly ten years ago (on sale for about 3 years now), comes the closest. It is almost exactly matched by the HTC Vive. Both claim 110° horizontal field of view, which is close to our stereo-vision field of 120°.

The author, as a journalist, had good access to Luckey and other developers throughout this recent history. He is a good subject for testing the equipment, because he is extra-susceptible to motion sickness. He bought a purported VR system 20+ years ago, got sick, and remained a skeptic about VR's possibilities until the Rift came along. That and a few equally recently developed headsets are the first ones he's been able to use without nausea. As we say in the computing field, "the iron didn't match the application" until about 2008. It takes computing power equivalent to a 1990's supercomputer such as the Cray Y-MP to do all the "physics" needed to keep up with our visual senses. A decent gaming computer (in the $1,500 price range, at the moment) now has that level of compute power.

More recent developments include the Magic Leap, the first capable AR system, which lets you see the world around you (through tinted glass, not by re-displaying video from external cameras), with virtual elements superimposed. It uses some amazing optical tricks to make that work. So far, though, the field of view of the Magic Leap system is only about 40°x30°, so it has some specialty uses, but is less capable of producing a full IE.

How soon will VR or AR be found in the average middle-class home? A lot depends on application development. It's a little like the year 1981, when the IBM PC was introduced. There wasn't much you could do with it at first, but within a couple of years, WordStar and Lotus 1-2-3 became the killer apps that pushed sales of the machines, even at prices exceeding $2,000 (or $3,000 if you got a 10-Mbyte hard disk). The $2,400 I spent on my first PC clone would come to $6,500 today. No killer app for VR or AR has arisen. When one appears, the field will attain some momentum.

David Ewalt is young enough to keep his finger on the pulse of the field for another decade or so, and write the next book, about how incredibly changed life is with such machinery in most homes, and perhaps in public spaces, trending toward a much better successor to the Google Glass (a great idea ahead of its time and far ahead of the tech needed to make it worthwhile). I'll keep a search agent out there for upcoming books by this author.

Tuesday, May 14, 2019

Quantum Mechanics brought up to date

kw: book reviews, nonfiction, quantum mechanics, overviews

I've just finished reading Beyond Weird: Why Everything You Thought You Knew About Quantum Physics is Different, by Philip Ball. Prior to reading it, every book I've read that "explained" quantum mechanics has used the knowledge and language of the 1940's. Philip Ball brings things up to date, and it is about time someone did so!! The field of quantum (we can leave off the various modifying nouns for the nonce) has been the hottest of hot topics in physics for more than a century, so incredible numbers of experiments have been performed, and astronomical numbers of hypotheses have been proposed and defended and attacked and re-defended in tons and tons of published articles. So folks have learned a thing or two.

To cut to the chase: Werner Heisenberg got things rolling with his Uncertainty Principle: you can't specify both the position and the momentum of any object with accuracy more precise than a tiny "delta" that is related to Planck's constant (h); the work yielded a Nobel Prize for him. Albert Einstein showed that light is quantized in his publication on the photoelectric effect, for which he received his Nobel Prize. In general terms, Heisenberg and Einstein agreed that a moving particle had a genuine position and momentum, but that no tool for determining these quantities could operate without changing them.
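
In symbols, the standard modern statement of the principle is

    Δx · Δp ≥ ħ/2, where ħ = h/2π ≈ 1.055 × 10⁻³⁴ joule-seconds

so the product of the uncertainties in position and momentum can never fall below half the reduced Planck constant.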

We can summarize the way this is explained in the Feynman Lectures on Physics thus:
A beam of electrons directed through two holes will produce an interference pattern on a screen. The electrons are behaving like waves. You want to determine which electron goes through which hole. Electrons can be 'probed' by shining short wavelength light on them, and recording where the scattered light came from. When you turn on the light beam, the interference pattern disappears, and is replaced by a smooth diffraction curve, as though there were one hole, not two. Weakening the beam doesn't help. So you try using a longer wavelength, then longer and longer wavelengths. Finally, you find a wavelength that will scatter off the electrons, but allows the interference pattern to form. However, the wavelength of the light you are now using is greater than the spacing between the holes, and you cannot tell which hole any electron goes through.
Niels Bohr thought differently. I have not determined whether he focused on the ubiquitous presence of diffraction patterns in focused beams (of light, electrons, or whatever; I'll come back to this). Whatever the case, he concluded that the observation of an electron's position or its direction of motion didn't just reveal that quantity, but created it. His published material shows that he dwelt on the matter of measurement: no quantum theory could be admitted that said anything more than what measurements could say. "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that it is the task of physics to find out how nature is. Physics concerns what we can say about nature." (p.73) He went beyond this, stating that nothing really happens until it is observed.

This led Erwin Schrödinger to make fun of Bohr with his Cat story: Put a cat in a box, along with a device that has a 50% chance of killing the cat via poison gas in the next hour. Just before opening the box, can you say whether the cat is alive or dead? He asked if Bohr would say, "Both," until the box is opened. Most likely, Bohr would say that the wave function, which Schrödinger had proposed for calculating probabilities of quantum events, would "collapse," and only then would you observe either a dead cat or a living cat (which you would whisk out of the box, closing it in case the mechanism were about to go off and kill you both).

Bohr was a bully. He didn't just promote what became the first of many versions of his Copenhagen Interpretation, he evangelized it, in a totally obtrusive way. He used no violence (of the fisticuffs variety, anyway). He'd just talk you to death. It was like water torture, a torture to which many of that generation of physicists eventually submitted. Still, fewer than half the physicists of any generation, then or since, really believe it. Murray Gell-Mann, for one, thought he simply brainwashed people.

The thing is, there is a bit of support for this view, though it calls into question the definition of "observer". And I confess, when I first heard of the Cat story, I asked, "Isn't the cat an observer?" Anyway, anyone who has done technical photography, and especially astronomical photography, knows about diffraction. The smaller the hole (the camera diaphragm, for instance) through which a beam of light passes, the more that beam spreads out. It isn't much, in most cases, because visible light has such short wavelengths, between 400 and 700 billionths of a meter (nanometers, or nm). However, the pixels in a modern digital camera are really small and closely spaced. In my Nikon D3200 camera, for example, there are about 24 million cells in a sensor that measures 12mm x 18mm, spaced 0.003mm apart. That is 3,000 nm. For technical photography I'd prefer the light to spread out no more than the distance from cell to cell, so that "every pixel counts". For a lens of focal length 100mm, that means the half-angle α of the spread must stay below arcsin(0.003/200) = 0.00086°, so that the blur radius at the sensor is no more than half the pixel pitch. For such calculations it is common to use a wavelength of 500 nm. We find that an aperture of 33.3mm, or larger, is needed to get full utility from the camera's sensor. That implies an f/stop of f/3. Make the aperture any smaller, and the picture gets successively fuzzier. If you have an SLR and can control the aperture, try taking a picture outdoors at f/22 or f/32 (many will allow that). It will be pretty fuzzy.
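
Here is that arithmetic in runnable form, a rough sketch using the simple single-slit-style spread sin α = λ/D (no Airy-disk factor of 1.22), which matches the numbers above:

    import math

    pixel_pitch = 0.003    # mm between sensor cells
    focal_len = 100.0      # mm
    wavelength = 500e-6    # mm (500 nm green light)

    # Blur radius at the sensor should stay under half the pixel pitch,
    # so the half-angle of spread must satisfy sin(alpha) < pitch/(2*f):
    alpha = math.asin(pixel_pitch / (2 * focal_len))
    D = wavelength / math.sin(alpha)    # minimum aperture diameter
    print(f"alpha = {math.degrees(alpha):.5f} deg")
    print(f"aperture >= {D:.1f} mm, i.e. f/{focal_len / D:.1f}")
    # prints: alpha = 0.00086 deg; aperture >= 33.3 mm, i.e. f/3.0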

This is why astronomers like telescopes with a wide aperture. Not just because they are efficient "light buckets", but because the bigger the hole the light goes through, the less it spreads out, and the sharper an image you can obtain. Of course, a telescope with a focal ratio of f/3 or less is hard to build and expensive. But for a large instrument with a long focal length, those little pixels are very small "on the sky", allowing you to see finer detail in distant galaxies.

Now, if there is no sensor at the focus of the telescope, does the diffraction still occur? Here I attempt a détente between Bohr and Heisenberg. Heisenberg would say that the aperture, no matter its size, is "disturbing" the light beam from some distant object, spreading it out. The effect is never zero, no matter how big the aperture. This implies that the whole universe affects what happens to every bit of light, every photon, as it makes its way from source to wherever it is "going". But, whether we are watching what happens or not, it must still be happening. Bohr would have to admit that the "observer" is effectively the aperture, and by extension, that the universe itself is an observer, or is constituted of observers. Effectively, everything is an "observer" of everything!

On page 84, the author writes, "…the idea that the quantum measurement problem is a matter of 'disturbing' what is measured is exactly what the Copenhagen Interpretation denies." (Author's emphasis) I think this example shows that such a position is untenable. For my part, I think the Copenhagen Interpretation, as stated by Bohr and most of his followers, is simply silly. Photons go where they go and do what they do. So does anything else in motion. It is amazing that the environment, all of it, has an effect on where they go. But diffraction experiments show that it is so: Everything disturbs everything. However, for most of the universe out there, the disturbance doesn't seem to amount to much.

One quibble I have with the book is that it lacks a table of contents. The 19 chapters just have a 2-page heading with a snappy title. In the chapter titled "The everyday world is what quantum becomes at human scales" (the 11th), the environment is brought in, and the matter of "decoherence". Prior chapters have discussed all the things we find "weird", such as "entanglement" (for example, two particles that carry equal but opposite values of some characteristic such as polarization, even to the ends of the universe if they don't bump into anything). They get us ready for this chapter, the key chapter of the book.

Entanglement, just mentioned above, is one kind of Coherence between quantum entities. A laser beam and a Bose-Einstein condensate are, or express, coherent states among numerous entities. Coherence is thought to be fragile. It is actually quite robust, and even infectious. Particles that interact with their environment spread their quantum states around. The problem is, any instrument we might use to measure such quantum states is part of the environment, and so partakes of that state, becoming unable to detect it. That is what is meant by Decoherence. It expresses our inability to keep a pair of quanta, for example, in a given entangled state, because they "want" to spread it around. The longer we want them to stay in coherence, the more it will cost. However, it is this phenomenon of decoherence that leads directly to the human-scale, everyday behavior of objects. The author concludes that the entire universe "observes" everything that goes on unless we take great pains to isolate something from the environment, so we can measure it. It is the error of Bohr and others in not recognizing the influence of the multitude of non-conscious "observers" known as the universe that led to the silliness of the Copenhagen Interpretation.

Or, perhaps I ought to be more charitable to Niels Bohr. Maybe he was right, that things only happen when they are observed by a conscious entity. But diffraction shows that every photon, of whatever wavelength, that passes through the universe, and every electron, neutrino, etc., etc., etc., is an observer, and produces the universe that we see, in which most quantum phenomena require costly apparatus to observe and maintain (except diffraction!). If the universe required conscious observers, without which things could not happen, that would imply that God made the universe only after there were observers to keep it functioning! And that's funny. It may even be true! The Bible, in the book of Job (38:4-7), mentions multitudes of angels that observed the "foundation of the Earth". The Copenhagen Interpretation agrees with Job.

A late chapter discusses quantum computing, and how it is being over-hyped (so what else is new?). Near the end of the discussion, I read that all the ways of making a quantum computer, so far discovered, are special-purpose. One can search, one can encode or decode, and so forth. It appears that, at the moment at least, no general-purpose quantum computer can be produced that would be analogous in its breadth of function to our general-purpose digital computers. So don't sell your stock in Intel or Apple just yet!

I was much refreshed by this book. The author's point of view is still somewhat "Copenhagenish", but that's OK with me. If decoherence is what he says it is, then it really is true that what we see at our scale of a meter or two is just the consequence of quanta doing what they do, in all their multitudes, and spreading their characteristics about quite promiscuously, so that the universe just keeps keeping on, as it has since the Big Bang, at the very least.

Tuesday, May 07, 2019

The robots are coming...aren't they?

kw: book reviews, nonfiction, artificial intelligence, essay collections, essays

I almost titled this post "Artificial Intelligence Through a Human Lens". But that describes only a few of the essays in Possible Minds: 25 Ways of Looking at AI, edited by John Brockman.

When I was an active programmer (these days I'm called a "coder"), my colleagues and I often spoke of "putting the intelligence into the code". By "intelligence" we really meant the logic and formulas needed to "get the science done". During that 40-plus year career, and for the half-decade since, I've heard promises about the "electronic brains" we worked with. They were going to replace all of us; they would program themselves and outstrip mere humans; they would solve the world's ills; they would develop cures for all diseases and make us immortal. The spectrum of hopes, dreams and fears shared among professional programmers exactly matched the religious speculations of earlier generations! To many of my former colleagues, the computer was their god.

The term "artificial intelligence" may have first hit print soon after ENIAC became public, but it really took off after 1960, nearly sixty years ago. As far as I can tell, there has been about one article expressing fears for every two articles expressing hopes, over that entire time span. But in the popular press, and by that I mean primarily science fiction, the ratio has been about 4 or 5 dystopian visions for every utopian one…even with the popularity of Isaac Asimov's I, Robot books.

An aside: Asimov was famously neurotic. In his Robot fiction, the people are neurotic while the robots are deified. However, a great many of the short pieces explore all the ways a robot can go wrong, either because of a misapplication of one of the famous "Three Laws", or by a dilemma that arises when two of the laws are in conflict. The scary old film Forbidden Planet has its own version, Robby the Robot, temporarily driven to catatonia by being ordered to shoot a person.

While the pros have been dreaming of the wonderful things "thinking machines" could do, the SciFi community has put equal thought into what they might do, that we don't want them to do. Colossus is probably the best example (the book is much better, and infinitely more chilling, than the film).

While I thoroughly enjoyed reading Possible Minds, I finally realized that the essays tell us a lot about the writers, and very little about machine intelligences.

Some of the essays limn the history of the field, and the various "waves" (or fads!) that have swept through about once a decade. The current emphasis is "deep learning", which is really "neural nets" from about 30 years ago, made much deeper—many more layers of connectivity—and driven by hardware that runs several thousand times faster than the supercomputers of the 1980's. But a few authors do point out the difference between the binary response of electronic "neurons" and the nonlinear response of biological neurons. And at least a couple point out that natural intelligence is embodied, while hardly anyone is giving much attention to producing truly embodied AI (that is, robots of the Asimovian ilk).

Another aside: Before I learned the source of the name "Cray" for the Cray Computer Corporation, I surmised that it represented a contention that the machine had brain power equal to a crayfish. Only later did I learn of Seymour Cray, who had named his company for himself. But even today, I suspect that the hottest computers in existence still can't do things that the average crayfish does with ease.

What is a "deep learning" neural net? The biggest ones run on giant clusters of up to 100,000 Graphics Processing Units (GPU's), and simulate the on-off responses of billions of virtual "neurons". Each GPU models several thousand "neurons" and their connections. They are not universally connected; that would be a combinatorial explosion on steroids! But so far as I can discern, each virtual neuron has dozens to hundreds of connections. The setup needs many petabytes of active memory (not just disk). Such a machine, about the size of a Best Buy store, is touted as having the complexity of a human brain. Maybe (see below). The price tag runs in the billions, and the wattage in the millions.

More modest, and less costly, arrangements, perhaps 1/1000th the size, are pretty good at learning to recognize faces or animals; others perform optical character recognition (OCR). The Weather Channel app on my phone claims to be connected to Watson, the IBM system that won a Jeopardy! match a few years back. The key fact about these systems is that nobody knows what they are really doing "in there", and nobody even knows how to design an oversight system to find out. If someone figures that out, it will probably slow the systems to uselessness anyway. At least the 40-year-old fad of "expert systems" was able to explain its own reasoning, but it proved incapable of much that might be useful.

To expand on the difference between virtual neurons in a deep learning system, and the real neurons found in all animals (except maybe sponges): A biological neuron is a two-part cell. One end includes the axon, a fiber that transmits a signal from the cell to several (or many, or very many) axon terminals, each connected to a different neuron. The other end has many dendrites, which reach toward cells sending signals to the neuron. An axon terminal connects to a dendrite, or directly to the cell body, of a neuron, through a synapse. That is one neuron. It receives signals from tens to hundreds to thousands of other neurons, and sends signals to a similar number of other neurons. Neurons in the cerebellum (which controls bodily functions) can have 100,000 synapses each! The "white matter" in the mid-brain consists of a dense spiderweb of axons that link almost everything to everything else in the brain. The spinal cord consists of long axons between the brain and the rest of the body.

A biological neuron responds (sends a pulse through the axon, or not) to time-coded pulses from one or more of the synapses that connect to it. The response is nonlinear, and can differ for different synapses. The activity of each neuron is very, very complex. It is very likely that, should we ever determine exactly how a particular type of neuron "works", its activity will be found to require one or more GPU's to simulate with any fidelity. Thus, a true deep learning system would need to have, not thousands or even millions of GPU's, but at least 100 billion, simulating the traffic among somewhere between 100 trillion to 1,000 trillion synapses.
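
To make the contrast concrete, here is essentially all that one "virtual neuron" in a deep learning system does, sketched in a few lines of Python (the numbers are made up; real frameworks simply run billions of these in parallel on GPU's):

    import math

    def virtual_neuron(inputs, weights, bias):
        # A weighted sum squeezed through a fixed nonlinearity --
        # that's the whole "neuron."
        s = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-s))   # sigmoid activation

    print(virtual_neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], 0.1))

Compare that one line of arithmetic with the time-coded, synapse-by-synapse behavior just described, and the estimate of a GPU or more per biological neuron doesn't look so extravagant.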

Folks, that is what it takes to produce a "superintelligent" AI. As an absolute minimum.

But let's look at that 100 billion figure. It has long been said that a human brain has 100 billion neurons. People think of this as the population of the "gray matter" that makes up the cerebral cortex, the folded outer shell of the brain, the stuff we think with. Here is the actual breakdown:

  • Total average human brain: 86 billion neurons (and a similar number of "glia", that help the neurons in poorly-understood ways)
  • Cerebral Cortex (~80% of brain's mass): 16 billion neurons
  • Cerebellum (18% of brain's mass): 69 billion neurons
  • Brain Stem (~2% of brain's mass): 1 billion neurons

Also, on average, a cerebellar neuron has 10-100 times as many synaptic connections as a cerebral neuron, even though it is much smaller. It takes most of the brain's "horsepower" to run the body! Your much-vaunted IQ really runs on about 19% of the neurons. About 1/3 of that is used for processing vision. (Another aside: This is why I claim that the human visual system is by far the most acute and capable; the human visual cortex exceeds the entire cerebral cortex of any other primate).

I wish that the essayists who wrote in Possible Minds had said more about different possibilities for producing intelligent machinery. Simulating natural machinery is a pretty tall order. The hottest deep learning system is about one-millionth of what is needed to truly replicate human brain function, and by "about" I mean, within a factor of 10 or 100. Considering that Moore's Law ended a decade ago, it may never be worthwhile. Even if Moore's Law restarted, a factor of one million in processing power means twenty doublings (2^20 ≈ 1,000,000), which at the historical rate of one doubling every two years requires 40 years. Getting the power consumption down to a few hundreds of watts, rather than several million watts (today) and possibly tens of billions of watts in the middle-future, may not be achievable. I hold little hope for "quantum computers". Currently they don't even qualify as a fad, just a laboratory curiosity with coherence times in the range of milliseconds. Quantum coherence needs lots of shielding. Tons. We'll see.

So, robot overlords? At the moment, I put them right next to faster-than-light starships, on the fantasy shelf.

Monday, April 29, 2019

Tea light of another color

kw: analytical projects, lamps, spectroscopy

Recently a friend gave us a goblet she made that she calls Tree of Life. Perhaps there are 12 colors on it to match the 12 fruits mentioned in the book of Revelation (I didn't count). She also gave us a candle to put in it, but the candle was in a large jar and didn't illuminate all of the goblet. So we tried a tea light candle, which worked nicely. However, we don't usually burn candles, and we wanted something safer (the cat might knock it over). So we tried an amber-colored LED tea light. That was a poor choice! We had a different kind of LED candle, with a whitish-yellow colored light, so we tried that, with mixed results. Here is the goblet with the three lights inside in order: flame, whitish LED, and amber LED.


These are the lights in the order shown above. The flame is clearly the best all around. The whitish-yellow LED candle is too tall to illuminate the whole goblet, but it shows the colors well. It also uses a moving reflector to make a flame effect, but that blocks most of the light that would go out the back. The amber LED tea light, while it illuminates the whole goblet, has no range of color! (There are blue reflections in the second and third photos from a nearby computer monitor.)

It is clear that the amber LED has a narrow spectrum. How narrow? I determined to find out. Here are the results of spectroscopy of the three lamps, and also an incandescent lantern bulb.


While the flame (second spectrum) is whiter than the whitish-yellow LED, it has a broader color spectrum, though not as broad as the incandescent lamp shown first. The bright peak at the blue end of the third spectrum is normal for an LED that uses phosphors to add the red through green and light blue colors. LED lamps for home use rely on the same principle.

The amber LED (fourth spectrum) does not use phosphors. It is a low-voltage LED that has a color peak in the orange-yellow area (near 590 nm). Its bandwidth is similar to that of the blue excitation band of the other LED that does use phosphors. There is just a trace of red and a bit of green, but they are overwhelmed by the yellow-orange peak. So all the colored blobs on the goblet just look yellowish.
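
To see why a bare 590 nm peak washes everything toward yellow-orange, here is a rough wavelength-to-RGB conversion in Python (a common approximation after Dan Bruton's algorithm; the break wavelengths are conventional, not measured):

    def wavelength_to_rgb(nm):
        # Piecewise-linear approximation of spectral colors
        if   380 <= nm < 440: r, g, b = (440 - nm) / 60, 0.0, 1.0
        elif 440 <= nm < 490: r, g, b = 0.0, (nm - 440) / 50, 1.0
        elif 490 <= nm < 510: r, g, b = 0.0, 1.0, (510 - nm) / 20
        elif 510 <= nm < 580: r, g, b = (nm - 510) / 70, 1.0, 0.0
        elif 580 <= nm < 645: r, g, b = 1.0, (645 - nm) / 65, 0.0
        elif 645 <= nm <= 780: r, g, b = 1.0, 0.0, 0.0
        else: r, g, b = 0.0, 0.0, 0.0
        return tuple(round(255 * c) for c in (r, g, b))

    print(wavelength_to_rgb(590))   # about (255, 216, 0): yellow-orange

Every colored blob on the goblet reflects some mix of wavelengths, but if the only light arriving is near 590 nm, the only light that can come back is near 590 nm.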

The lantern bulb has a full-width spectrum, from below 400 nm to beyond 700 nm; the visible portion is 300 nm wide. The candle and the whitish LED have bandwidths nearly as broad. But the amber LED's bandwidth is a mere 75 nm, and the brightest portion is no wider than half that. It mostly just makes amber-colored light and nothing else.

I plan to carry a hand spectroscope with me the next time I go to buy LED tea lights, to find a brand with a broad spectrum that'll illuminate this goblet properly!

Sunday, April 28, 2019

Seeking inner peace underneath

kw: book reviews, nonfiction, caves, mines, subways, tunnels, catacombs

From a distance the book proclaims boldly, "UNDERGROUND WILL HUNT". After a moment of cognitive dissonance, thinking "Hunt" was a verb, I realized the name of the book's author had to be Will Hunt. The full title is Underground: A Human History of the Worlds Beneath Our Feet.

Will Hunt stumbled across an abandoned tunnel system when he was 16. When he found an old map, he realized that the tunnel ran right under his family's yard. This led to his lifelong fascination with the places beneath our feet. Over the years he has explored subway tunnels—in use and not—catacombs, mines, a few caves, and underground cities such as the one in Cappadocia, Turkey. He writes also of Burrowers, those who excavate as a hobby, though he missed Seymour Cray, founder of Cray Computers, who excavated tunnels under his property, by hand, for about forty years. The labor relaxed him whenever he reached an impasse in computer design.

I can't pass up the chance to mention a few of my fondest underground memories. I was an active spelunker in the 1970's. I used a carbide lamp, which meant carrying in extra "loads" of carbide and plenty of water, because a "load" lasts no more than 4 hours, and a typical day underground lasts 12-16 hours. Three events from that period:

  1. Helping with an NSS (National Speleological Society) re-survey of a portion of Lilburn's Cave, a rare cave in marble rather than limestone. It is near Three Rivers in California. At the time, seven miles of passages had been mapped. The current total is 21 miles. It remains the longest cave in marble in the world (the longest cave in limestone is more than 10 times as long). In one long, high-ceilinged passage that needed re-mapping, we couldn't find the earlier survey markers, dots of colored spray paint. Finally looking upward, we realized that 15-20 feet of sand had been washed out of the passage since the prior survey. We had to chimney-climb up to the survey points and do our work while braced high above the floor.
  2. Mineral collecting in the Red Cloud Mine in New Mexico. (The NSS, that same year, had only taken me as a member after I affirmed that I never collect minerals in natural caves, just in mines. But they never entirely trusted me.) At one point, deep inside, my lamp went out. A girl sat nearby to afford light while I changed the carbide load in my lamp. Then her lamp went out! Everyone else was too far away for any light to reach us. She began to panic. I talked to her calmly while I finished working on my lamp by feel, and got it lit. Then I changed the carbide in her lamp also, and we were good for another 4 hours.
  3. While I was a geology student, the Mineralogy class went on a field trip to the Pine Creek Tungsten Mine in central California. This is essentially a hollowed-out mountain top with miles of tunnels blasted from the rock. It is unique, so far as I know, in that the lowest level of the mine is where you enter, via a mine train that goes 2 miles into the mountain. The mine workings go upward from there. With gravity to help, getting the ore out of the mine is much less costly than in any mine where you have to lift it out. We were helped to collect a few rare minerals that occur only in that mine. This is the only underground excursion in which I used an electric lamp (provided by our hosts).

The author's second chapter is amazing. He writes of making an underground traverse of the city of Paris, along with several experienced helpers. They also got the help of a number of Cataphiles, Parisians who make a hobby of exploring, sleeping, holding concerts and movie events, and dodging authorities, in the Paris underground.

The 7th chapter presents the use of caves and tunnels (the latter are artificial, whether by man or beast) in pre-history, which seems to go back at least 200,000 years, before there were Homo sapiens in existence. Caves everywhere are found to have numerous artworks, altars, pottery arrangements, and sculptures, usually in their deepest recesses. One might think that shallower stuff existed but has been lost, but at least some traces of such works outside the dark zone ought to remain. We don't find them. There is still today some kind of feeling of holiness about the deepest places.

Wherever it is economical to burrow and tunnel, that seems to be a favored way of making use of the third dimension. I recall strolling 200 feet beneath the streets of Tokyo, in a thriving marketplace (ichiba), almost a bazaar. There were at least three such levels above, between this one and the sunlit zone. I don't know how many different subway lines cross, at different depths, under that part of the city. There are ichiba to be found along every one of them.

The author makes a good case in his last chapter for a seemingly universal feeling of awe and humility that people feel for caves and other underground spaces. He thinks it the origin of religion. Perhaps it is, in part. In any case, this very enjoyable book conducts one on journeys in directions we seldom follow.

A spider spike in a quiet week

kw: blogs, blogging, spider scanning

Just about 8 hours ago, while I was at the church meeting, a bot in the UAE (or spoofing that location) snarfed up 114 posts:


This was in the 10:00 AM hour here, or the 6:00 PM hour over there.

Thursday, April 25, 2019

A bigger killer than car crashes

kw: book reviews, nonfiction, opioids, opiates, addictions, polemics

For most years I remember, the number of people killed in auto accidents in America was about 50,000. Then after about 2000 it got down into the low 40,000's, and since 2008 it has usually been in the mid-to-low 30,000's. From 2005 through the end of 2018, just over half a million deaths in America were auto fatalities. Yet car deaths have been on the rise since 2014, with a 10% increase from 2017 to 2018, and I'll get back to that. But first:

This chart shows data only from 2005 onward, to better focus attention on the crossing trends: In 2009, for the first time, deaths in America from drug overdoses exceeded those from car crashes. Since then the gap has widened dramatically.

Figures aren't complete for 2018, and they may not have risen as fast as they did in the prior year or two. But the fact remains that from 2005 to 2017, a span one year shorter than that of the prior paragraph, drug overdose deaths exceeded 570,000. These two causes together come to about 1.1 million untimely deaths in America in fourteen years.
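
Being the sort who checks arithmetic, here is that sum in a few lines of Python, using the rounded figures quoted above (my own toy check, not an independent pull of the data):

```python
auto_deaths_2005_2018 = 500_000   # "just over half a million"
od_deaths_2005_2017 = 570_000     # "exceeded 570,000"
total = auto_deaths_2005_2018 + od_deaths_2005_2017
print(f"about {total / 1e6:.1f} million untimely deaths")  # about 1.1 million
```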

Ryan Hampton, for one, wants to do something about that, at least the drug overdose part, as he writes in American Fix: Inside the Opioid Addiction Crisis—and How to End It. While I call this book a polemic, in this case I mean that in a very positive way! Nothing short of loud jeremiads seems able to get through the thick skulls of not just policy makers but also the majority of the public.

I must confess that, while Mr. Hampton is a very good writer, it was quite an exercise for me to read through the book. His aim is to pluck heartstrings, and he does it masterfully. He is in recovery himself, and pulls no punches about his path from an "innocent" pain prescription, to doctor-shopping (because the "safe" medication he was given wasn't safe after all), to heroin use, and then through several recovery programs to one that has been working for him. Two messages underlie this paragraph:

  1. The pharmaceutical companies have tragically underplayed the addictive potential of popular pain-killers. Criminally so.
  2. The public and political view of drug abusers is dramatically wrong, and increases their suffering greatly and needlessly.

Americans need to face the fact that drug addiction is a disease. It doesn't resemble other diseases such as cancer or tuberculosis, because the causes are more hidden. They are buried in brain chemistry and the way it is so easily hijacked by certain chemicals. We are also somewhat jaded by hearing numerous radio ads that mention "the disease of addiction", a phrase applied properly no more than half the time. It is not proper to speak of compulsions toward sex, gambling, or chocolate as "addictions", and particularly not proper to treat them as diseases. They are in a different category: they do affect the reward circuitry of the brain, but not nearly as powerfully.

Let me mention at this point my musing about the 10% rise in traffic deaths last year, and a similar rise a couple of years earlier. How much of it is because the current recreational drug of choice, marijuana, is being legalized?

When I was actively involved in drug rehabilitation programs, some 45-50 years ago, heroin and its relatives were being pushed into the background by psychedelic drugs. LSD was popular, followed by PCP and Meth. Kids I worked with weren't all that likely to OD on LSD. They were more likely to walk out a 4th floor window thinking it was a door to paradise, or fall into a creek and drown, or to have a bad trip or even a flashback and find a way to kill themselves. I was never involved in heroin recovery efforts. And there is a new factor in the OD picture. Heroin (once the strongest street drug, closely followed by cocaine) no longer holds that title. Its synthetic cousin Fentanyl is therapeutically effective as a painkiller at doses of around 1-1.5 mg, compared to 2.5-5 mg for morphine; however, its therapeutic range is narrow: 3-4 mg of Fentanyl can kill. There are much worse versions out there, such as Carfentanil. A few mg of that can kill an elephant or rhino.

Thus, I can't really speak intelligently about the author's experiences in recovery. He did not overstate the case. It is grim. If you aren't rich, you can't afford rehab. It's still rare for insurance to cover any part of such programs. The "affordable" rehab programs are so inadequate that they actually set people up for overdose. It works this way. A single heroin dose of 30 mg will usually kill someone who has never used it. The therapeutic dose of morphine (heroin is metabolized into morphine) is 2.5-5 mg per dose, and no more than 30 mg per day. Someone who has abused heroin for a couple of years is using 60-100 mg per "hit", and a total of between 500 mg and one gram daily.

Suppose you are such a user and want to get rehabbed, and you manage to get into a program. It typically ends in four weeks, and you are returned to the street with no more than a "Good luck". If you have at least a little emotional support from peers or parents, you may hold out for a few weeks or months. Then you enter a low spot, emotionally, and go for "just one more hit". But much of the habituation to the drug has been lost. The "nickel bag" you were used to is now enough to kill you. Many OD deaths are recently rehabbed young people, found with the needle still in the arm; they died that fast.

In a later chapter of the book, the author describes a few programs he found that are working better, because they abandoned old thinking and old ways of treating drug abusers. I think it better to leave it to you at this point. Read this book.

Count yourself lucky if there is absolutely nobody among your family, friends, and acquaintances who has used a hard drug. We have a lady friend in her late 70's who was successfully weaned from prescription painkillers, just barely! It was costly and pretty hard on her and her husband. Luckily, she had not graduated to heroin!! It is likely that you do know someone, even if you don't know about it yet. So, get this book and read it. Consider Mr. Hampton's call to action in the last chapter. Our voices are needed by those whom society formerly forced into silence.

Friday, April 19, 2019

The unexpected down side of better public health

kw: book reviews, nonfiction, public health, demographics

First impression: Thomas J. Bollyky should have employed a co-author. His book Plagues and the Paradox of Progress is very difficult reading, not just because of the emotional weight of the subject, but because he writes for PhD-level readers (I subjected certain paragraphs to the Gunning Fog Index algorithm, which yielded numbers of 19 and greater; that's worse than most legal contracts!). Yet the subject is very important. So much so that I read through it anyway.
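
For the curious, the fog index is simple enough to compute yourself. Here is a rough Python sketch (the fog_index function and its crude syllable counter are my own, not a published implementation) of the standard formula: 0.4 times the sum of the average sentence length and the percentage of words with three or more syllables.

```python
import re

def count_syllables(word: str) -> int:
    # crude heuristic: count runs of vowels, ignoring a trailing silent 'e'
    word = word.lower()
    if word.endswith("e"):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def fog_index(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

sample = ("The epidemiological transition redistributes morbidity and "
          "mortality toward noncommunicable conditions, complicating "
          "infrastructural prioritization in rapidly urbanizing, "
          "resource-constrained economies.")
print(f"fog index: {fog_index(sample):.1f}")  # far above 17, "college graduate"
```

A score of 19 means roughly 19 years of schooling are needed to follow the prose on a first reading; popular nonfiction usually lands well under 12.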

A thousand years ago, life expectancy at birth, almost anywhere on Earth, was in the range of 25-30. This was still true just 150-200 years ago, but that was beginning to change. Nonetheless, this picture of a cemetery in Missouri, where my ancestors are buried, is instructive (Payne is a family name). When I took this picture I inadvertently recorded the basic demographic fact about life prior to 1880 in America: half of these tombstones are half-size, indicating children five years of age or younger. The larger tombstones commemorate people who typically lived into their fifties to eighties. Even a millennium ago, if a child survived at least five years, remaining (average) life expectancy could exceed sixty years.
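
The arithmetic behind that last claim is worth spelling out. Here is a toy life-table calculation (round numbers of my own choosing, not figures from the book):

```python
# If half of all children die young, at an average age of two, yet life
# expectancy at birth is thirty, the surviving half must average about
# 58 more years -- death in one's early sixties, matching the full-size
# tombstones in that cemetery.
e0 = 30             # life expectancy at birth
died_young = 0.5    # share dying in early childhood
avg_age_young = 2   # their average age at death
survivor_avg = (e0 - died_young * avg_age_young) / (1 - died_young)
print(survivor_avg)  # 58.0
```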

The advances in public health have been very uneven. As described in the book, the countries of today's "first world", mainly the U.S., Europe and Japan, began making changes to infrastructure and health practice that reduced infant and youth mortality, and the spread of infectious diseases in general, more than 200 years ago. Progress was slow and not always steady. But the one big effect of all this was a gradual "younging" of the adult population. More children survived childhood and grew into young workers. Because the changes occurred over decades or even a century or more, employment for this glut of young workers grew at a rate that employed most of them. Much of that employment was in the growing cities, which grew into many of the mega-cities of today.

That's oversimplified, but the major and obvious effect of better public health is a demographic shift. If a country or region has a growing employment base, which typically means growing cities and more manufacturing, the economic benefits multiply. A major problem with citification is that crowded conditions foster the spread of infectious diseases, so for a generation or two (or three), the burden of disease shifted from early childhood to the working years. Many of the major cities of the Western world grew only because in-migration exceeded the appalling death rate from cholera, tuberculosis, and other diseases of crowding. Further public health measures, such as quarantine, and later, effective medications (beginning in the early 1900's), reduced the toll of plagues and other infections. Thus the "first world" has average life expectancy at birth in the range of 80 years!

A second demographic effect has been that families got smaller. A couple didn't need to have five or more children to ensure that a few would survive to become their parents' "retirement plan". Of course, some couples just like to have a lot of kids anyway; a neighbor of ours has 18 siblings, all from the same mother and father! But few Western families have more than two or three children, a great many have only one, and among Millennials, the trend is toward having none.

The "third world" (is there a "second world"?) has had it different. And worse. Rich nations began helping poor nations with medical and financial aid around a century ago. So the life expectancy in most countries exceeds 70 years (160 out of 223). I looked up a few in the CIA World FactBook: Albania, 79y; Bangladesh, 74y; Thailand, 75y; Libya, 77. However, other places still lag: Angola, 61y; Ethiopia, 60y; Laos, 65y; Botswana, 64y. The lowest life expectancies are to be found in sub-Saharan Africa, ranging down to the mid-fifties.

The book describes what has happened. Death rates among infants and children, and generally from infectious disease, have been dropping. As a consequence, there is a glut of younger people trying to make a living. The mass migrations, for primarily economic reasons, that began in the 1700's and 1800's, are accelerating as the poor countries of the world struggle to (and usually fail to!) employ their teeming millions.

The last chapter has a "what do we do about it?" message. The answer: much more...but what? So far, there is a little hope here and a little there. Malaria and AIDS and cholera and TB get all the press. Even though there is still a ways to go, progress to date has led to a demographic crisis that is being mainly ignored by the various bodies that have been addressing the diseases. Nobody has broad-based plans to address the resulting glut of young workers.

Well, that just presents the problem. Who will solve it? We're at risk of many good trends of the past few centuries being reversed. At this point I'd usually say, "Read the book." But many won't be able to, unless they are ready to cope with prose that you need 19 years of education to read without difficulty. I hope someone takes this book and its deficiencies as a challenge to write a better one, a more accessible one.

Friday, April 12, 2019

The real masters of the planet are underfoot

kw: book reviews, nonfiction, natural history, termites, technology

This interesting aerial view shows a portion of an area in northeastern Brazil, the size of England, that is entirely covered with this kind of patterned landscape. The bumps are termite mounds; there are millions of them, each about 30' (9m) wide and 8' (2.5m) tall. Their ages have been measured at between 700 and nearly 4,000 years. This is just one major area occupied by just one termite species. For comparison, the British Isles cover about 122,000 square miles, while the amount of the Earth's surface that we humans have covered with roads, buildings, and other infrastructure comes to 2 million square miles.

It is safe to say that our "underground footprint" is smaller than our surface works. Not so for termites. Even mound-building termites have substantial subsurface workings. Of course, mound-builders are not the subterranean termites that plague suburban homeowners in the US. Their nests around here are entirely below ground, plus inside the wood of our buildings and of dead and fallen trees. Mound-building termites gather grass or other plant material and return it to the mound, where they either digest it directly or use it to make compost to grow a fungus that they consume.

Here is another view, of an area in Australia, populated with termite mounds, of a different genus and species from those in Brazil. These can be 15' (4.5m) tall.

There are about 2,000 species of termites. Added together, the world's termites weigh ten times as much as the entire human population. For all that, termites are little critters. Typically half the length of a grain of rice, they are similar in size to the little "sugar ants" and "grease ants" that might invade your kitchen (not the big carpenter ants that occasionally come indoors).

This is what Eastern Subterranean Termites (Reticulitermes flavipes) look like, magnified about 10x. The big soldier (with the orange head) is 5mm long, and the workers are mostly 3mm long, although you can see at least one extra-small worker that is no more than 1.5mm long. This image is by Gary Alpert of Harvard University, and is licensed under a Creative Commons Attribution-Noncommercial 3.0 License; see more images at bugwood.org.

Termites are practically ubiquitous, living nearly everywhere except in areas of permafrost. Lisa Margonelli became interested, then fascinated, then obsessed with termites. In particular, she wanted to know what the latest termite research had to say. As she writes in Underbug: An Obsessive Tale of Termites and Technology, most termite research, being funding-driven, is focused on two areas. Firstly, knowing that termites can digest wood, scientists want to know how it is done, with the goal of replicating the feat to turn woody and grassy plant matter into fuel. The general term has morphed from "biofuel" to "grassoline". Secondly, termites and their mound-building skills are studied to inform the technology of swarming robots. It has been calculated that the amount of neural matter in a large termite nest or mound is comparable to the brain of a Labrador Retriever. Can multitudes of nearly brainless critters carry on a kind of collective cognition? We are not talking self-awareness here, just calculating power.
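
To make "collective cognition" concrete, swarm researchers often begin with toy models such as the classic termites-and-wood-chips simulation (after Mitchel Resnick's StarLogo demo; the Python sketch below is my own illustration, not anything from the book). Each simulated termite follows two mindless rules, yet scattered chips end up gathered into piles:

```python
import random

SIZE, N_CHIPS, N_TERMITES, STEPS = 40, 200, 30, 200_000

# True means a wood chip occupies that cell of a wrap-around grid
grid = [[False] * SIZE for _ in range(SIZE)]
for cell in random.sample(range(SIZE * SIZE), N_CHIPS):
    grid[cell // SIZE][cell % SIZE] = True

# each termite is [x, y, carrying-a-chip?]
termites = [[random.randrange(SIZE), random.randrange(SIZE), False]
            for _ in range(N_TERMITES)]

for _ in range(STEPS):
    t = random.choice(termites)
    t[0] = (t[0] + random.choice((-1, 0, 1))) % SIZE  # wander one step
    t[1] = (t[1] + random.choice((-1, 0, 1))) % SIZE
    if grid[t[0]][t[1]] and not t[2]:
        # rule 1: bump into a chip while empty-handed -> pick it up
        grid[t[0]][t[1]] = False
        t[2] = True
    elif grid[t[0]][t[1]] and t[2]:
        # rule 2: bump into a chip while loaded -> drop yours on a
        # neighboring empty cell; piles grow with no plan at all
        for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nx, ny = (t[0] + dx) % SIZE, (t[1] + dy) % SIZE
            if not grid[nx][ny]:
                grid[nx][ny] = True
                t[2] = False
                break

on_grid = sum(cell for row in grid for cell in row)
clustered = sum(
    grid[x][y] and any(grid[(x + dx) % SIZE][(y + dy) % SIZE]
                       for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)))
    for x in range(SIZE) for y in range(SIZE))
print(f"{clustered} of {on_grid} chips on the grid now sit beside another chip")
```

No individual knows where the piles are, or even that piles are the goal; the order emerges from rule plus environment, which is exactly the trick the robotics people hope to borrow.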

Now, guess who funds the latter effort the most. The Military. Midway through the book, a researcher opines that if millions or even billions of tiny, flying robots can be 3D printed for $1 each, and each can carry a tiny shaped charge weighing 1g, capable of piercing the human skull, they might be able to wipe out an entire army without any of "our" folks coming into harm's way. It occurred to me that it might take a rather sophisticated CPU to reliably distinguish friend from foe. And now that we're at the limits of Moore's Law for single-CPU power, it's unlikely that a robot "brain" lighter than a small cell phone's motherboard could do the trick, now or for years into the future. To make such a 'bot fly, you'd need wings large enough that it would be the size of a seagull rather than a big wasp. It's hard for a seagull to sneak up behind someone so as to crack their skull. But then, if it is that big, you could give it a bigger bomb.

Ms Margonelli spent nearly ten years visiting and working alongside scientists in Namibia, Australia, South America, and a few places in the US, being instructed tirelessly by them as she struggled to grasp the concepts. In this image the bent-over witch's hat shape is not random. The angle and its direction optimize the mix of sun and wind exposure. Though one theory has such tall mounds acting like chimneys to facilitate the flow of gas and vapor, the research she witnessed did not bear that out. It is still a mystery, but millions of mounds throughout Namibia have the same shape and orientation. The more vertical mounds in Australia (second picture above) must operate differently, perhaps partly because they are at a different latitude.

So far, grassoline cannot be efficiently produced. The metabolic pathway for digesting cellulose and lignin includes a huge assist from bacteria and protozoa in a termite's gut. There are a great many steps, and yields are low. Consider: If you can only produce a gallon of liquid fuel by also producing 100 gallons of unusable waste—stuff nothing will eat and that can't be burned—is that worth it? In recent years, Americans consumed on average a little more than one gallon of gasoline per day, each. Nearly 400 million gallons daily, more than 110 billion gallons yearly. What do you do with well over 11 trillion gallons of waste each year? In addition to what we already produce…
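
Checking that with round numbers (a back-of-envelope sketch of my own; the 100:1 waste ratio is the hypothetical posed above, not a measured yield):

```python
US_POPULATION = 330e6          # rough 2019 figure (my assumption)
GAL_PER_PERSON_PER_DAY = 1.2   # "a little more than one gallon"
WASTE_PER_FUEL_GALLON = 100    # the hypothetical 100:1 ratio

daily_fuel = US_POPULATION * GAL_PER_PERSON_PER_DAY
yearly_fuel = daily_fuel * 365
yearly_waste = yearly_fuel * WASTE_PER_FUEL_GALLON
print(f"fuel: {daily_fuel / 1e6:.0f} million gal/day, "
      f"{yearly_fuel / 1e9:.0f} billion gal/yr")
print(f"waste: {yearly_waste / 1e12:.0f} trillion gal/yr")
# -> fuel: 396 million gal/day, 145 billion gal/yr
# -> waste: 14 trillion gal/yr
```

Whatever the exact figure, a process that buries each gallon of fuel under a hundred gallons of inedible, unburnable residue does not scale.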

No matter what we do with termite research, rampantly increasing pollution of all kinds, including "heat pollution" caused by greenhouse warming, will make the future less pleasant than the present. Will we be driven back to a new "stone age"?  That is quite unlikely, but there are certain to be disruptions. One side finding from all the termite research is that the land above termite workings is more fertile and grows more vegetation. The little critters may have a hand in saving the planet for us, for they work steadily to make it better for themselves, with too little brain to indulge in politics.