Saturday, February 17, 2018

Free money for all — Not!

kw: book reviews, nonfiction, socialism, welfare

It was reported in passing in a National Geographic article that people in some Third World country were being paid an ordinary daily wage to clear the way for a road. The work was being funded privately, and the on-site managers decided to "do their bit" to improve the lot of the local people: they raised the daily pay by 35%. Four days later hardly anyone showed up to work. It took a lot of questioning to find out the reason: the workers had been very pleased and surprised to get about another third of a day's pay every day. At the end of three days, they had the money they used to earn in four. Their daily needs had not changed, so they took the day off! The few who had shown up were mostly the local drunks, who had used their extra cash to go on a roaring bender and needed the ready money.

This story came to mind the minute I saw the title of Basic Income: A Radical Proposal for a Free Society and a Sane Economy, by Philippe Van Parijs and Yannick Vanderborght. In a book of 247 pages plus an enormous number of notes (another 74 pages), they seek to convince a primarily American audience of the necessity, efficacy, and liberating effect of an unconditional national income, distributed by dividing about one quarter of a country's GDP amongst all of that country's citizens and legal residents. It would be paid for by eliminating nearly all social welfare programs, which it would replace.

Here are the nuts and bolts for Americans. U.S. GDP per capita in 2016 was about $52,200, so a quarter of that is $13,050, or a little under $1,100 monthly, to be paid to everyone from babies to billionaires. The total U.S. GDP was $18.7 Trillion that year. Non-"entitlement" social programs cost about $1 Trillion (both Federal and state spending); Social Security was slightly less, nearly $900 Billion; and Medicare spending is pushing $750 Billion. These total more than $2.6 Trillion, or 14% of GDP. On the face of it, if all three segments of the U.S. "social safety net" were replaced by a basic income program, the total cost would be about $4.7 Trillion, and we'd need an increase in taxes sufficient to generate another 11% of GDP.

The math gets a little trickier at this point, because the basic income would not be taxed. If that is all you received, you would pay no income taxes. But all earned income anyone received would need to be taxed at about 12% above whatever is needed to fund other programs such as the military, infrastructure projects, the running of government itself, and so forth. Some savings would be realized because of one huge fact: There would be no means test. That is, everyone would get a check for about $1,100 monthly, from the homeless woman sleeping under the bridge to Warren Buffett, tax free. Then, whoever earned "their own money" above that would pay taxes on it. Perhaps some progressive taxation method would be used, but to be clear, some kind of tax would be needed to gather the required $4.7 Trillion. But—here is the savings item—there would be no need to employ some large number of people to check whether people are working, or are looking for work, and to run all the other intrusive mechanisms needed to administer programs such as Unemployment Insurance or AFDC. I wonder how many officers they'll need to employ to make sure people don't keep getting checks after they die! The authors hardly mention the impact of fraud, nor how to make a person's basic income unstealable.
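The arithmetic above is easy to check in a few lines of Python. This is only a back-of-envelope sketch using the post's own round numbers, not official statistics:

```python
# Figures quoted in the post (2016, approximate)
GDP = 18.7e12             # total U.S. GDP
GDP_PER_CAPITA = 52_200   # per-person share used in the post

# A basic income of one quarter of per-capita GDP
basic_income_annual = GDP_PER_CAPITA / 4
basic_income_monthly = basic_income_annual / 12

# Programs the basic income would replace (rough totals from the post)
social_programs = 1.0e12    # non-"entitlement" Federal + state spending
social_security = 0.9e12
medicare = 0.75e12
current_total = social_programs + social_security + medicare

# Total program cost = a quarter-share of GDP for everyone,
# so the tax gap is the difference from current safety-net spending
program_cost = GDP / 4
extra_needed = program_cost - current_total
extra_share_of_gdp = extra_needed / GDP

print(f"Monthly check: ${basic_income_monthly:,.0f}")
print(f"Current safety-net spending: {current_total / GDP:.0%} of GDP")
print(f"Program cost: ${program_cost / 1e12:.2f} Trillion")
print(f"Additional taxes needed: {extra_share_of_gdp:.0%} of GDP")
```

Running this gives a monthly check just under $1,100, a program cost of about $4.7 Trillion, and a tax gap of roughly 11% of GDP, so the numbers in the paragraph above hang together.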

The authors do their best to tout the added freedom experienced by someone who has a guaranteed, if modest, stipend. Someone working a dead-end job could afford to find more congenial work, even if it pays less. Many other examples are described. But here is the question to ask yourself: Would you keep working if you could afford to "live the leisure life"? How many would prefer to spend their days fishing, surfing, hunting, beach combing, painting, sculpting, composing, and other pastimes that many wage earners dream of doing full time if they could? Would enough people desire to work, at jobs sufficiently well compensated, to support all the "preferentially unemployed"?

Were such a program initiated tomorrow, I suspect that people now in the early-to-middle years of a career, who are used to working and who have developed a kind of identification with their work, would tend to keep working because that is what they know how to do. What would their children do? The story in the first paragraph is my guess. It is not without reason that one passage in the Bible exhorts, "If anyone is unwilling to work, let him not eat." We are not as "good" as we'd like to think we are.

I don't remember the author, nor an exact quote, but I recall reading that once a large segment of the population figures out how to extract largesse from the public purse, the downfall of the republic is unavoidable. In the U.S.A., that point was probably reached during the Johnson administration with the "War on Poverty", and for 50 years the downhill slide has continued almost unabated. The authors of Basic Income have a European viewpoint, and are accustomed to a lot more Socialism than U.S. citizens in general. I think we ought to just sit back and wait until some country or group of nations actually implements a basic income program, then give it a generation and see in what form the collapse comes…but be sure, collapse is certain.

Thursday, February 15, 2018

Getting well plus cabin fever equals fixing a chair

kw: photo essays, home maintenance, repairs, diy

I was sick with a bad cough and recurring fever for about a week, the longest bout of "cold" I can remember. I needed lots of "horizontal time" in bed. A visit to the doctor three days ago brought the cheery news that I had a sinus infection and middle ear infection tagging along, which bought me antibiotics and steroids. Oh Joy! By yesterday morning I had too much energy to sleep all day, but I was still coughing too much for public consumption, so I took apart a wobbly dining room chair and re-glued it. Just for fun, here is the process.

Disclaimer: I am no pro, as you'll easily see. But I've done this enough over the years to be able to get it done in a couple of hours (of my work time, not total time) instead of several days. The glue I use is Titebond II Premium Wood Glue. All these photos are either 800 or 1024 pixels in the long dimension. Click on a photo to see it that size. (There are two pairs that I montaged.)

1) The seat (bottom right, upside down) attaches with 4 screws. The corners of the main frame have 8 more screws holding 4 corner braces. I started by removing all those.

2) Then I used the tip of a chisel to pry open joints that were loose, starting at the front of the chair.

All the pegs holding the joints were firmly glued at one end and loose at the other, except for the two holding the lower brace (the "H" shape at upper right), which was solidly together. At top left are the two front legs; below them the sides and front of the seat support frame. The three bits scattered inside the "H" are corner braces. One corner brace was very firmly attached on one side and I left it alone.

3) This closeup shows that the first gluing needs to be done to close cracks in the structural members. I started by applying some force to open the cracks a little more and then taking a finger-ful of glue and rubbing it into each crack. Then it is ready to clamp and wipe clean.
This shows a joint in the clamp, with glue being squeezed out, ready to wipe up. If the clamp must press onto a crack that will leak glue, I pre-squeeze it by hand and wipe what I can, and use wax paper between the piece and the clamp. The next two photos show various clamps and a vise holding these pieces while they dry. It takes about an hour. With the metal C-clamp I use pine wood blocks to protect the work piece.



4) Once all three seat frame pieces were solid, I assembled the two side supports onto the chair back, and also added the "H" strut. This entailed putting glue on all the pieces for one side at a time. In the case of the glue for the corner braces, I was careful not to get glue in the screw holes, so in the future the screws can be removed. After spreading the glue, I tightened the screws and those were set. I looked along them from the side to be sure they were lined up.

For the "H" brace, I just spread glue on the mating surfaces and the pins and pressed them into place, also sighting to get it square.
5) I assembled the front next, the front legs and the front seat support. There was a little wrinkle in getting the legs on, which I'll get into in a moment. But first, I was clamping them with a furniture clamp when one of the legs twisted and cracks opened in its top. So I put glue in the cracks and used the vise to hold that while all the joints dried. I made sure the legs were parallel. Note the blocks of wood in the vise, holding the leg being clamped. Pine blocks are softer than oak so they won't bruise the wood or mar the finish (though that is pretty beat up already from decades of use).
6) Once the front assembly was set, I put glue on all remaining joints and pressed it into place. Then I put the chair on the floor and used a furniture clamp to clamp the seat support, and a rope, looped four times, to clamp at the bottom to hold the "H" brace and provide added support while everything dried. I used a section of floor that I already know is flat so I could press down the chair every-which-way so it would not rock when it was finished.

7) Now to the wrinkle I mentioned. In one end of the front assembly there was glue at the bottom of the holes the pegs go into, that held them back from seating fully. Yeah, I know they came out of there, but wood can subtly shift, and in the two pictures below the one on the left shows a gap that I could not close. Trying to close it caused the cracks in the leg I mentioned in step 5.

I made thin shims by splitting veneer from scraps of paneling and glued them into place all around to fill the gaps and strengthen the joint.


8) Finally, I glued the corner braces and screwed them into place. And now, voila! The finished chair, ready for the seat to be screwed on.


Another week of Russian spider activity

kw: blogs, blogging, spider scanning


Just for the record: somebody (or several somebodies) in Russia has kicked up blog scanning for about a week now.

Friday, February 09, 2018

I guess it is boring in Russia just now

kw: blogs, blogging, spider scanning

The net spiders sourced in Russia have been quiet for about a year. Over the past day or so, they've ramped up again. During the calm, this blog had few readers, 40 or so daily. Now I see that just in the 7:00 am hour today (7:00 pm in Novosibirsk), there were 90 hits. It's too bad this doesn't indicate actual popularity.

Tuesday, February 06, 2018

Some random members of family Orthalicidae

kw: natural history, natural science, museums, research, photographs

I have been in the midst of an inventory of terrestrial snails of a large family that is popular with shell collectors, the Orthalicidae. The family is named for the genus Orthalicus, but the family contains numerous species in many genera. In recent years taxonomy professionals have split certain genera out into new families, but we tend to call all these species "Orthalicids". Today I just present a few that I ran across recently, showing some of the breadth of attractive shell forms in these families. Each image is followed by a caption.
These are two species in the genus Placostylus, P. scarabus (Albers, 1854) and P. seemani (Dohrn, 1861). They are found on the islands of the south Pacific: the former in New Caledonia and the latter in Fiji. These island nations are about 850 miles apart (~1,350 km), so there is little natural opportunity for these species to encounter one another. The Fijian shells are visibly narrower than the Caledonian.
These are two more species of Placostylus, P. strangei (Pfeiffer, 1858) and P. stutchburyi (Pfeiffer, 1860). Both are found on the Solomon Islands. The third row consists of five lots of shells that have been identified as Placostylus, but no species is yet assigned. I am particularly intrigued by the one shell with aperture showing, that is bright orange inside.

This closeup shows one lot of P. scarabus. I purposely turned one shell to show the aperture, which shows a pale orange inside, less prominent than the one in the previous picture. This also shows the variety of coloration to be seen in a single species, from quite mottled and brownish to smoothly creamy.

I turned two of these shells, of the more distantly related species Auris melastoma (Swainson, 1820), to show the nearly black interior. "Melastoma" means "black mouth". These inhabit Brazil.

Finally, this is a closeup into the plastic box containing one lot of Berendtia taylori (Pfeiffer, 1861). These are from a little closer to home, for us Americans at least: on the Baja peninsula of Mexico. I wanted a closeup of these, to show the fine ridges that cover the shells. You can also see a relic of museum practice in three of the shells: Munroe Walton had written his own number inside the apertures, and these have been crossed out and the DMNH catalog number written there.

Friday, February 02, 2018

Yeah, somebody is looking - should you care?

kw: book reviews, nonfiction, privacy, surveys

At a very early age we learn that the things people like and dislike differ. We learn that what some approve, others disapprove. When we find that what we like—any object, food, behavior, hobby, or whatever—is disliked or disapproved by someone who is powerful, or by large numbers of others we must spend time with, we begin to keep secrets. I remember, in second grade, excitedly telling a classmate of something I saw while watching The Mickey Mouse Club on TV. He said, scornfully, "That's for two-year-olds!" After that I never disclosed that I continued to watch the show.

I didn't have that episode in mind when I began to read Privacy: What Everyone Needs to Know® by Leslie P. Francis and John G. Francis (Nowhere in the book could I find why the subtitle is trademarked). I was simply interested in the subject, one so very popular today. Once I dug in, I found the ride rather difficult. Why?

The authors write well, but the subject is difficult. It is also so broad and all-pervasive that no treatise containing a mere 100,000 words can do more than touch on its many facets. Thus the book is composed of a few paragraphs each about some 130 topics, grouped into 10 chapters. It is actually a pretty good ontology of the subject to two levels. But it is a resource or reference, and I don't think it is intended to be read through. So I treated it as such, reading the opening section of each chapter, and then dipping into topics that most interested me. My interests are rather broad, so I still read through a lot of the book, but this is my disclaimer that I did not read every word.

The older I become, the more I realize how exceedingly diverse the human race is. I suspect that for every choice I make, were I to broadcast it on FaceBook and invite comment, someone would object or blame or scold me for it. In America, at least (and how often do I consider whether it is really OK to write "America" to mean the United States of America, rather than using "The U.S." or some other locution and thus avoid slighting the other 20 or 21 nations that constitute The Americas?), the climate of Political Correctness that has been a-building for some 50 years makes us all paranoid about "offending" a nation full of hair-triggered people.

A side note here, folks: Political Correctness has become a pervasive form of Censorship, and all the idiots out there who moan or scream, "Oh, I am so offended!" a few dozen times a day really need to find a more productive hobby, and grow a useful thickness of skin. So there.

OK, I'm back. Given the current cultural climate, a strong dose of paranoia is entirely justified. You can get shot at for honking your horn at the wrong time. Nobody accepts a simple apology if they think they can wring out an abject one, or even get you fired. American culture as it now stands constitutes an assault on the privacy of our own thoughts.

The subject of the novel 1984 was pervasive surveillance by the State. These days, that is just the beginning of our worries. My computer-jock colleagues and I used to joke that, if a company like Seagate were to develop a hard disk with infinite capacity, the government would order two of them. To me and my colleagues in the 1980's, a disk drive holding a few hundred Mbytes was a big (and costly) device. Now for $50 I can get a pocket-size Tbyte or two (and I have an "old" 2-Tbyte drive; it is as big as the book I just read). The big data center the NSA keeps in the Utah desert has a capacity of millions of Tbytes (a million Tbytes is called an Exabyte, or EB). But it is not just the government. Data-hungry commercial enterprises store similar quantities of data…about us. About you and me, their customers (or critics, or whatever we are to them). George Orwell might be astonished, or he might say, "Why didn't I think of that?" And just you wait: Moore's Law for storage devices isn't slowing down yet, so a pocket Exabyte for $50 or so is probably just a few years away. And with network speeds pushing Gbyte/sec and beyond, plus cameras everywhere, just everywhere, we live in a social surveillance environment. The primary difference between you and me, and the big actors—governments and large corporations—is that they can afford to employ programmers to write software to actually sort, scan, and analyze these massive data stores and create useful intelligence.

Is privacy dead? The authors, the Francises, don't think so. But many aspects of privacy are indeed dead. They are about to get deader. Some things are still humorous. If I neglect to go into InPrivate mode when I search for products and product reviews, or when I buy what I've researched, I'll see ads for such products appearing in all kinds of places for the following several months, in spite of the fact that I already bought it. But I fully expect the day to come that Google and FaceBook and everybody will know I bought it, and the ads will instead target follow-ons. If I begin using cooking or recipe web sites a lot, will the sudden up-tick trigger ads for cookware and blenders and spatulas and toaster ovens? Maybe. And after a couple more years, "they" will likely know I am thinking about upgrading my kitchen range before I even begin my research.

What is there, in your life, that you most keenly desire to be known to nobody, but nobody, except perhaps your partner? What if your pattern of purchases—even if you never, ever buy anything online—reveals your deepest secret to "somebody" whom you'd rather didn't know it? Or, if not purchases, just the streets you drive down or walk along, tracked by the phone in your pocket? What if the traces of DNA you leave on the paper from the table in the doctor's exam room lead to a pre-diagnosis of an embarrassing or dangerous condition you didn't know you were prone to having? … but that information somehow made it to your insurance company before you even knew it? Will technology eventually make it an almost all-revealing act to simply walk through a certain doorway while breathing? Yet you have no idea which doorway it might be? The current trend in DNA sequencing can be projected to the point where doing a total genome sequence will cost a dollar. Then what? Do you really want to know you might not have the same Y chromosome as your "father"? Or your…son?

The last chapter, comprising the final 10 vignettes of the book, considers privacy and democracy. How much secrecy is required for a democracy to function? Conversely, how much transparency is also required? (Would it change your vote to learn that a certain political candidate has a large collection of antique torture devices? or reads 2-3 romance novels every single week (or writes one every 2 months)? or never buys meat, preferring to shoot it personally? or has raw eggs for breakfast every day? or is a total Vegan? Come up with your own list.) We have had a society that functioned, oh, reasonably well, having a certain mix of privacy and transparency. That mix is being forcibly shifted. Like it or not, more transparency is in our future. And the PC culture is accompanied by a trend that asks, "If you are so hell-bent on privacy, what're you trying to hide?"

I, for one, am glad that I have reached curmudgeonhood and will not likely live long enough to see, for example, the $1 genome sequence. There is a point beyond which we can no longer adapt. I am an introvert, with no more than the average amount of paranoia (so I tell myself - 😁!). I'd hate to be pushed until I "go postal" just because of societal nosiness. It is not entirely out of the question, folks. How about you?

So hey, that was a bunch of good riffs from a book that does no more than discuss a hundred-odd questions we will find ourselves asking more and more in the years ahead. Read it only if you can withstand a boost to your paranoia quotient!

Friday, January 26, 2018

Need a daily dose of dirt?

kw: book reviews, nonfiction, health, colon health, microbiome, therapies

RePOOPulation — What a lovely word! Coined by Dr. Emma Allen-Vercoe of the University of Guelph in Canada, it is one culmination of the research outlined in a new book by Drs. Brett Finlay and Marie-Claire Arrieta: Let Them Eat Dirt: How Microbes Can Make Your Child Healthier. For that matter, this implies that our inner bugs can help or harm us at any stage of life. Dr. Allen-Vercoe grows bacterial populations in "fermenters" (they make for a smelly building), for use in helping patients restore a healthy inner "farm" of bacteria and other microbes.

I grew up when a certain proverb was popular: "They can't grow up right without eating a peck of dirt." Considering how dirty and messy kids could get in the 1950's, '40's and earlier, a peck might just be the beginning (it's about 15 pounds, or a 2-gallon volume). Rightly understood, the parallel maxim "Cleanliness is next to Godliness" didn't refer to never getting dirty, but to washing well, particularly before meals.

Let Them Eat Dirt is a book of advice, but so well written that I didn't mind. And while it is about raising children, actually it is about raising a good (meaning virtuous!) crop of the hundreds or thousands of microbes that populate the gut of everyone. Even kids such as the "bubble boy" are not microbe-free; they are just being extremely well protected from pathogenic ones.

For all you germophobes out there: Experiments with germ-free mice (GF mice) show that animals with no internal or external population of microbes are fatter, shorter-lived, and more prone to all the chronic diseases that seem to characterize our "clean" Western social system, such as asthma and diabetes.
Definition: microbiome. "a community of microorganisms (such as bacteria, fungi, and viruses) that inhabit a particular environment and especially the collection of microorganisms living in or on the human body. Your body is home to about 100 trillion bacteria and other microbes, collectively known as your microbiome." [Merriam-Webster]
The book's chapters take us through all the stages of a child's life, beginning with the various ways a newborn's microbiome is formed, nurtured, and possibly damaged and restored. (Antibiotics effectively carpet-bomb our microbiome. Being nursed at the breast helps build a baby's microbiome, and at least partially restores the microbiome if the baby had to have antibiotics as an infant.) A baby born vaginally ingests its mother's vaginal and fecal microbes. All the cuddling, kissing, and even pre-chewing food a mother does for her baby continually adds to the microbes that colonize her baby's gut. Don't think that is icky! Unless the mother is desperately ill, that is very, very good. Later on, letting a kid play outside helps as well, including the inevitable dirty-hand-in-mouth. (The book's cover shows a grinning boy with really dirty hands, but a spotless face. Ironic!)

The book also outlines research that shows the relationship between the microbiome and many diseases that were formerly very rare or unknown but are now common, at least in the "advanced" societies of the industrialized countries. Not only allergies, asthma, and type 2 diabetes, but also many cases of autism are associated with a microbiome whose genetic signature can be detected by analysis of the bugs found in the feces. With what is known now, many chronic diseases can be diagnosed by analyzing a stool sample. Although this presently costs more than more traditional diagnostic methods, that could change very soon. In fact, it may soon be possible to mail off a stool sample and get back a list of the diseases a person either has now or is prone to getting, plus suggestions how to change one's microbiome so as to forestall them. That's probiotics at a whole new level!

Whatever stage of life you may be in, whether or not you'll be raising children soon…whatever. This book is well worth the read, and even worth taking notes on for later reference. Enjoy!

Wednesday, January 17, 2018

It grabs you where you live

kw: book reviews, nonfiction, addictions, technology

When I saw the book Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, by Adam Alter, the verse 1 Corinthians 16:15 came to mind. It speaks of a certain family that "have addicted themselves to the ministry of the saints". At least, that is how the King James Version and two others translate the word εταξαν, a form of τασσω, which in the First Century meant "to set, appoint or ordain", but has lost that meaning in the centuries since. Becoming curious about the English usage of the King James era (early 1600's), I found that "addiction" referred mainly to fascination and devotion. Thus many English versions of the verse use either "set" or "devoted". The term was neither positive nor negative prior to the mid-1800's.

Addiction has a much stronger and more focused meaning today. To be addicted is to be in the grip of a compulsion or obsession that harms one, or may eventually kill. Since the early 1900's or a little earlier, "addiction" has referred to a compulsion to use substances such as cocaine. As author Adam Alter tells us, there was quite a struggle in the later Twentieth Century among psychiatrists and psychologists about whether to recognize "behavioral addictions". But the modern phenomena—from binge-watching of TV episodes to online game playing, online gambling, 12- to 24-hour FaceBook sessions and even "checking in" so compulsively that people walk into fountains, manholes and lampposts—have convinced nearly all that behavioral addiction is real and can be really, really bad.

Note the phrase above, "…or may eventually kill." I do not mean just the shortening of life due to bad health from being a "couch potato" or "FB zombie". Suicides have resulted, not just from being trolled online, but from despair over falling behind the social media rat race.

In a fascinating busman's tour through history, we find that addictive tendencies are with us for very good reasons: our distant ancestors did not become ancestors by ignoring the siren call of pleasurable experiences. In pre-agricultural days, over most of the Earth, eating everything that tasted good kept you alive, and getting all the sex you had opportunity to obtain gave you a chance at having descendants. Also, our ancestors traveled, and the dopamine-fueled thrill of seeing what is over the next ridge motivated many of them to seek new pastures and far horizons. Those who traveled the farthest may have been subject to extra risks, but the chance to populate a new and empty landscape was a benefit not to be ignored.

Our tendencies to fall prey to obsessions, compulsions, and addictions are a direct result of the tens of thousands, even millions of years, that humans lived with scarcity. Now about half the human race lives with relative abundance. What happens then? We overdo it; we overdo it big time.

The author describes many behavioral hooks that turn a potentially enjoyable experience into a compelling one. Unsteady rewards are a big, big factor. As Pavlov learned, once a dog has learned to associate receiving food with the ringing of a bell, it will salivate when the bell rings, whether food is given or not. But if food is given roughly every third time, the dog will salivate more and more. Rats given the chance to push a bar to get a food pellet will do so, of course. But if pushing the bar doesn't always yield a pellet, they will push the bar again and again, gathering pellets far beyond their need to eat them. Uncertainty is a big hook.

The most addicting games are those that you win about 1/3 of the time. If you win every time, you get bored. If you win less than 1/10 of the time, you look for a "better" game. This is just one example. Apparently, the most addicting computer game to come along, at least up to the time the book was written, is World of Warcraft. The second-most is probably League of Legends, which my son plays more than he should…though so far it hasn't affected his work enough to cut into his income. I hope that day doesn't come, but for many others it has come already (Cue a stereotypical video of a jobless Millennial who lives in the parents' basement and plays games all day).

So, can we do anything about this? Friends of ours despaired of even slowing down their daughter's FaceBook addiction. Her grades suffered badly. She almost dropped out of college. Nobody knows quite what happened, but she somehow developed a backbone, and a level of resistance, so that her grades improved, she graduated, and now has a responsible job. I don't know how much she may still read her News Feed on FB but I don't see a lot of posts from her. There are other folks—well, I just shake my head. I wonder how they have time to put one or two or three dozen posts in their News Feed every single day. Maybe we just have to let people outgrow it. Pity those who never do.

At the end of Irresistible the author discusses one "thing" (I can't think of another word) that seems to make positive use of the hooks that draw us in: Gamification. This is adding an element of fun into otherwise mundane, boring or unpleasant tasks. In the modern era, technological hooks can be used to trigger our compulsions, just enough, but breaks or "units" are inserted so we won't binge out. The FitBit is a potential gamification of exercise, but it doesn't have any checks, so some people damage their health trying to achieve ever-increasing goals. It needs some work.

But even without FitBit and its kin, overdoing it is a risk. I used to exercise a lot, including certain body-weight strengthening routines, and began keeping records. As it happens, that might have been a mistake. Or, at least, I ought to have obtained a buddy or coach to help me keep track and not ramp up my routines too fast. One day I did too many dips and pulled a muscle in my chest. It took five months to heal (I was about 40; were I younger it might have taken only a month or two). By then, the cycle was broken, and since then I primarily walk. There was no FitBit involved, nor have I ever owned one.

I am also reminded of Zooniverse, with more than 70 somewhat gamified "citizen science" projects. There aren't even any bells and whistles, just accumulating numbers of tiny projects completed, but that is enough that millions of people (myself included) enjoy sorting galaxies, counting penguins, or transcribing hand-written museum labels. Without a few little hooks in the projects, it is actually deadly dull work!

I consider the matter unfinished. We don't yet know how to cope with behavioral addictions. As the author writes, we are in the foothills of addictive technology. But not everyone is equally prone to addiction, whether to substances or behaviors. Perhaps Darwinism will run its course, and a future generation will consist mostly of people who are largely immune to the allure of the Like button.

Wednesday, January 10, 2018

Is evidence-based medicine dead?

kw: book reviews, nonfiction, medicine, medical research, critiques

Research incentives are messed up, big time. So much so that Sturgeon's Law, born of fiction writing, applies doubled: when someone protested to Theodore Sturgeon that 90% of the fiction presented at a science fiction convention was crud, he replied, "90% of everything is crud!" When people's careers are on the line, when jobs, promotion, tenure and salary all depend on "publish or perish," virtue vanishes. Young, idealistic researchers become jaded, cynical cheaters. One medical author has written that as much as 99% of published medical research is valueless or even damaging. Another wrote,
"One must not underestimate the ingenuity of humans to invent new ways to deceive themselves."
This quote is found on page 192 of Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions by Richard Harris. Author Harris admits that his title is a bit tongue-in-cheek, because rigor mortis literally means the stiffness of a corpse, while "rigor" also means strictness in carrying out a procedure. While it might be more accurate to title the book Mortis Rigoris (the death of rigor) or Mortuus est Rigor (rigor has died), it wouldn't resonate with doctors and others of us who know Latin.

More accurately, however, while experimental rigor is neglected more than adhered to, and may be on the ropes, it isn't quite dead yet. The ten chapters in Rigor Mortis illustrate and document every major aspect of medical research, from experimental design (The "gold standard" of the double-blind trial is nearly always compromised to save expenses, and frequently foregone entirely) to animal studies (Suppose you were told that a certain medicine was tested exclusively on women pregnant in their first trimester, of ages between 22 and 25, all from a specific ethnic group in Scandinavia? That's the analogy to a typical mouse study) to statistical analysis (The p-test is dramatically misleading, and we'll get into that one anon).

Have you ever heard of the "desk drawer file"? It is a lot like a Roach Motel; experiments with "negative results" check in, and are never checked out. Some of the few honest researchers left in the field are agitating for a requirement that every study funded with tax dollars be published, no matter what the outcome. The good news: transparency. The bad news: a ten- to 100-fold increase in the number of papers published. There is already an overwhelming deluge of publication! Gack!!

We need look no further than this to validate Sturgeon's Law. Consider the much-overused p-test, or p-value. You take a bunch of numbers, grind the formula (found in every statistical software package out there, including Excel), and out pops a number. Is it smaller than 0.05? Publish! That number gets inverted into "95% statistical probability that the result shown is not due to chance." Hmm. But what is it due to? Sunspots? Batch effects (perform run 1, clean equipment, perform run 2; do they differ because of the cleaning?)? Something you would never think of in your wildest dreams (all too frequently, yes)? But just suppose all those "95% chance it's right, 5% it's wrong" papers actually do have the "correct" cause and effect. How many experiments went to the "desk drawer" since the last time you published? 5, 10, 20, 100? The average is (wait for it) about 20! So, ignoring the desk drawer, five out of 100 publications will be reporting a chance association or correlation. Add the desk drawer factor of 20, and now at least half of them are reporting a correlation due to chance. By the way, it is telling that the vast majority of studies reporting a p-value have a number just under 0.05: "Dig around until you get a p-value you like, then stop looking."
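The desk-drawer arithmetic is easy to check with a toy simulation (my own sketch, not from the book). The one-true-hypothesis-in-21 prior and the 80% detection rate ("statistical power") are illustrative assumptions; null experiments yield a "significant" p-value 5% of the time purely by chance.

```python
import random

random.seed(42)

def published_false_fraction(n_experiments=200_000, prior_true=1/21,
                             power=0.8, alpha=0.05):
    """Simulate the 'desk drawer' effect: only p < alpha gets published.

    Under the null hypothesis a p-value is uniform on [0, 1], so a null
    experiment looks 'significant' with probability alpha; a real effect
    is detected with probability `power`.
    """
    published, false_pub = 0, 0
    for _ in range(n_experiments):
        real_effect = random.random() < prior_true
        if real_effect:
            significant = random.random() < power
        else:
            significant = random.random() < alpha  # pure chance
        if significant:
            published += 1
            false_pub += not real_effect
    return false_pub / published

# With 20 desk-drawer results for every real effect tested,
# more than half of "significant" publications report noise.
print(published_false_fraction())  # roughly 0.55
```

Under these assumptions, well over half of the "significant" results are false positives, matching the "at least half" figure above.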

Add in other factors, all detailed in Rigor Mortis, and there is little chance that more than a tiny fraction of published research results will stand the test of time. And that is a problem. Surviving only a little time would be tolerable; but if a lot of further research, and even development and marketing, is based on a faulty result, and it takes "medical science" 5, 10, 20 years or more to find and correct the mistake, how many people die or suffer needlessly?

Is there a way out of it? Only partially. Transparency is part of the answer. But bureaucrats are lazy, so even with a law on the books that all studies funded by NIH must publish all results, for example, it is poorly enforced. There are a lot of partial answers out there.

Here is my answer: We must live with what we have now. Things may be getting better, but today is today. When I must choose a new doctor or specialist, I inspect the waiting room, and later the exam room. How many drug company trinkets can I find (pens, calendars, note pads, posters, and many more)? The fewer the better. My current doctor's rooms don't have anything with a logo on it. That's a great start; it means the doctor has better-than-usual resistance to high-pressure sales. Such a doctor is more likely to make a decision on medical grounds. Secondly, whom do I actually see? Curiously, I prefer to be seen by a PA or NP, rather than a DO or MD. They haven't had all their good sense educated out of them yet. In my experience they are also a lot more willing to answer questions, and do so more meaningfully. Also, I ask a lot of questions, because a brusque doctor in the office is likely to be impatient in the operating room also. In medicine, patience isn't just a virtue, it is a necessity! There is more, but if you aren't doing these things, start there.

What else can you or I do? Educate yourself. Not from medical journals, but from summary materials on things that are known to work. WebMD and Healthline are just the beginning. Don't limit your reading to a single source. When offered a "new" drug, always ask, "Is there an older one that works well enough, perhaps with fewer side effects?" There are always side effects. Some you can live with, some you can't. Do avoid, desperately, a drug that needs another drug to deal with side effects. My wife takes a statin drug for high cholesterol. She was originally prescribed the strongest one, and even taking a tiny dose, had troubling side effects. Her "undrugged" total cholesterol is 240, but that drug is best used for folks in the 400+ range. She demanded a weaker one, and even then, splits the pill in thirds. She has no noticeable side effects, and her "drugged" total cholesterol is about 160. Good enough!

I've learned to tell a doctor, "I am not a patient. I am a customer. You and I will collaborate. I will never cede my right to make decisions, except during anesthesia that we have agreed upon together." Call it an intelligence test. For the doctor. Occasionally a doctor fails it, and then I get another doctor. When needed, I make a doctor aware how skeptical I am of the "evidence" presented in modern journals.

Rigor Mortis is scary. Is it right? Sadly, yes, it is more right than the average published medical study. But don't let that drive you to the amorphous world of "alternative medicine", at least not wholesale. Allopathic medicine has produced amazing health in most Americans and others in the First World. For a generation or so medical research has gone astray. Will it return? Maybe. Until it does, we must be our own best doctors.

Thursday, January 04, 2018

Sleep, beautiful sleep

kw: book reviews, nonfiction, sleep

What a way to start the year: with a book about sleep. Michael McGirr, a former Jesuit priest and a victim of sleep apnea, writes about sleep and sleeplessness from a few unique perspectives in his book Snooze: The Lost Art of Sleep. I read the book hoping to re-connect with this lost art, but found instead a travelogue, a book of "what" but not "how".

There is no table of contents and I didn't count as I went, but I reckon there are upwards of a dozen chapters. Each is titled by a time and a year (or a few related years), thusly:

10:45pm

[2004]

in which chapter he writes of his diagnosis of sleep apnea and the invention of the CPAP machine, and about his marriage to Jenny who loved him anyway (this after he left the Jesuit order), or

2:15am

[2007BC]

riffing on Jacob son of Isaac, one of history's celebrated sleepers, he of the dream of angels on a ladder, but one who nonetheless complained to his father-in-law,
… by day the heat consumed me, and the cold by night, and my sleep fled from my eyes.
He writes of Edison, who was too busy inventing to sleep; of Florence Nightingale, who slept little but spent some 3/4 of her life directing matters worldwide from her bed; and of coffee and its use to ward off sleep, so much so that Balzac, who fueled his amazing literary output with sixty cups of coffee daily, died of caffeine poisoning at age 51. Balzac might have lived a lot longer on half the coffee, and while writing less daily, his total production might have been even greater.

I am reminded of Alfréd Rényi, who said, "A mathematician is a machine for turning coffee into theorems," a quote usually attributed to Paul Erdős, and tangentially of Leonardo da Vinci, who is said to have kept to a regimen of three hours and 40 minutes of work followed by a 20-minute nap, day in and day out (that comes to two hours in each 24-hour period). I read about one man who tried working on this schedule and did so for a few years, but then gave it up because he ran out of things to keep him busy. I guess to keep Leonardo's schedule you have to have Leonardo's creativity. I wish McGirr had included these also in his travelogue of sleep and its variations, but he did not.

Regardless, his own studies of sleep, restful or not, led him in many directions, including into those antonyms of caffeine, the various sleep-inducing drugs, from Benadryl® to Ambien® and beyond. During a hospital stay, a nurse gave me two Benadryl®, which worked well. My father used a prescription sleep aid that turned out to be a double dose of diphenhydramine in one pill; the exact equivalent of taking two Benadryl®, but a lot more costly. But the more recent drugs induce sleep by messing with the normal sleep cycle, which can put you into a deep sleep without the total sleep paralysis needed to keep you from acting out your dreams. Lots of sleepwalking (and sleep driving, etc.) incidents are known, some with fatal results.

In the last chapter, he writes that reading in bed can help us drowse, but only if we are reading from printed paper. The reflected light from a page with dark ink does not inhibit melatonin production. The light from a computer or phone screen has a different quality, and does interfere.

Overstress is a primary enemy of sleep. We need a certain amount of stress to keep life interesting, but overwhelming, chronic stress just burns us out. Some folks respond with depression and may take to their beds, sleeping much or most of the day. Most of us have trouble getting to sleep, wake too early, and feel tired much of the time. While a few chapters of Snooze address chronic insomnia, a broader affliction is that many of us get some sleep each night, but never seem to get enough. Many, many of us have an experience like mine.

During the last ten or so years of employment at DuPont, I seldom slept more than four hours nightly. For some of that time, I was also on one or another medication to address my bipolarity, but they didn't do much, so I learned to cope with it unmedicated. During those medicated periods, I usually napped up to two hours daily, so you could say I had six hours of sleep, but not in one installment. However, without medication with a sleep-promoting side effect, four hours was it. No naps. I had work I enjoyed a lot, a congenial boss (the last 8 of the 10 years), and even told my boss I might work until I was 75. But when the company offered a retirement incentive, I retired at age 66.

After retirement, two important things happened. Within a few weeks, I was sleeping 6-7 hours nightly, and over about half that first year I lost 15 pounds. I remember looking back one day, and saying to myself, "I didn't realize the level of stress I was under!" I had also been using a lot of "cold caffeine" (Pepsi Max), up to a liter daily.

Now that four more years have passed and I am over 70, I get 5-7 hours of sleep, and if I wake early I simply get up, read my Bible a while, have breakfast, then have a morning nap for another hour or two. There aren't a lot of conclusions to draw from that. I am thankful that, though I snore some nights (not all), I don't have apnea; I have part time work that keeps some structure in my life, but is incredibly less stressful than any job I had before; I practically eliminated caffeine, using caffeinated cola only for driving alertness on road trips.

You'll have to look elsewhere for advice and information on how to sleep longer and better. For an enjoyable survey of how humans have been sleeping, or not, Snooze is the book for you.

Tuesday, December 26, 2017

Through space and time with a different mind

kw: book reviews, science fiction, multiple genres

I read through most of this book on an airplane from Phoenix to Philadelphia. Sometimes when I fly I work puzzles the whole time in the air, first whatever is in the airline's magazine, then in a puzzle book. I like the books with a great variety of different kinds of puzzles, not just crosswords or Sudoku. This time I began to read right after push-back, and read pretty steadily through most of the flight.

Do you remember Apple's "Think Different" motto of about 20 years ago? They were criticized for not using "Differently", but the word was not intended as an adverb; it was a noun: "Think [things that are] Different". When I first saw it I recalled the century-old NCR/IBM motto "Think". But that one meant "Think [because nobody else is doing it]".

Well, Hugh Howey thinks Different. Though he has published more than 20 novels and novellas, and a passel of short stories, reading the collection Machine Learning was my first exposure to him. I'll make sure it is not the last.

The volume contains short stories and at least one novella made up of short story-length vignettes, in a few SciFi genres. The author supplied endnotes about the stories, describing what he was thinking at the time. He'll think inside a character: What is going through the mind of a truly bug-eyed, tentacled alien in a force bent on attacking Earth? ("Second Suicide"); he takes a riff on his friend Kevin Kelly's statement, that when a machine first becomes self-aware, the first thing it will do is hide ("Glitch"); he considers the consequences of love between human and robot (Algorithms of Love and Hate, a 3-story sequence). This last reminded me a little of The Bicentennial Man by Isaac Asimov, but with a very different take on societal reactions. Finally, "Peace in Amber" is the author's memoir of going through 9/11 in the actual shadow of the twin towers (until they fell), interspersed with a truly weird alien zoo story. Based on his endnotes, I think the zoo story was needed to "spread out" the memoir so he could handle the flood of emotions.

The word "gripping" comes to mind. Read the book and see what word it evokes in you.

Sunday, December 24, 2017

Take Tyson's tour

kw: book reviews, nonfiction, science, astrophysics, popular treatments

What's not to like about Neil deGrasse Tyson? He has become the public face of science today. I love his updated Cosmos series. I have privately studied astrophysics and cosmology enough that perhaps I could have passed by his new book, but I couldn't pass by the enjoyable way he treats his subject. Astrophysics for People in a Hurry is well worth anyone's time, whether you know anything about the subject or not...particularly if not!

This is a rather small book, on purpose. Dr. Tyson knows that today's young adults want everything fast, they want it now, and they want it without fuss. If anyone can deliver up a basic survey of astrophysics and cosmology that meets these requirements, he can. He does so in 12 chapters.

When I think of astrophysics, I think mostly of stellar interiors, but there is much more to it than that. Clearly, from the flow of the book, astrophysics includes cosmology in its purview; probably 2/3 of the book's content is cosmological. But he really does cover all the bases, from the reasons for roundness (gravity wins), to the shapes of galaxies (the tug-of-war between gravity and angular momentum), and to the reasons for modern cosmological theory to include both "dark matter" and "dark energy". Chapters 5 and 6 present these mysteries as well as I have ever seen, and explain why they seem to be required for the universe to work the way we observe it working.

I had the great pleasure to encounter a professional cosmologist on an airplane flight four days ago, and we had the chance to talk a little (he wasn't in my row, so our time was limited by physical endurance of turning heads rather sharply). I asked him a question I'd have asked Tyson if I had the chance, "If a unified quantum theory requires a quantum of gravity, how can a graviton get out of a black hole so as to interact with the rest of the universe? What is the emitting surface for a graviton?" He admitted that he hadn't thought of that before. After we talked a while of other things, then broke off for a while, he nudged me, saying, "Consider this. A black hole has three qualities: gravity, angular momentum, and electric charge, right?" I agreed. He continued, "The electric charge is carried by virtual photons, the bosons of electromagnetic force. Real photons cannot escape a black hole; that is why it is black. But the electric charge remains in effect anyway. Thus, the virtual photons do escape—and return to—the black hole to keep the electric charge in place." I thanked him for providing a marvelous "hole" in my considerations of gravitons and black holes. I suspect this is the same answer Tyson would give. Now, upon further thought, I wonder if the electric charge is held within the black hole, or remains attached somehow to the event horizon. From there (or very slightly above it), even real photons could escape if needed. But if virtual photons can indeed escape a black hole, then virtual gravitons could also.

This matter doesn't enter into the book. What does enter in, is how all the pieces fit together. Tyson gives us plenty of food for thought. One of my favorites is playing a numbers game with molecules and time. Here is my version of "Whose air are we breathing?":

Part 1
  • The air above 1 cm² of Earth weighs 1 kg.
  • The average molecular weight of air is about 29.
  • Thus each kg of air contains about 34.5 gm-moles.
  • 1 gm-mole contains 6.02×10²³ molecules (or atoms) of any substance.
  • That comes to just over 2×10²⁵ air molecules above each cm².
  • The surface area of Earth is 510 million km², or 5.1×10¹⁸ cm².
  • Thus the atmosphere contains a bit more than 10⁴⁴ molecules.
Part 2
  • Our total lung capacity is around 6 liters (with a rather wide range).
  • Our "tidal" capacity, the amount we usually take in with each breath, is about a half liter.
  • That is about 0.022 gm-moles, or 1.3×10²² molecules.
  • An average person breathes about 23,000 times daily, when not exercising a lot, or about 8.4 million breaths yearly.
  • Napoleon Bonaparte lived 51 years.
  • In a 50-year span, the number of breaths comes to about 420 million.
  • All those breaths add up to about 5.5×10³⁰ air molecules.
Part 3
  • All the air that Napoleon breathed amounts to about 1/19 trillionth of the atmosphere.
  • 1/19 trillionth of one tidal breath is about 700 million air molecules.
Conclusion: Every breath you take contains hundreds of millions of the air molecules once breathed by Napoleon…or by anyone else who has lived at least 50 years! Tyson didn't go into all this gory detail. He names a couple of the figures in a two-sentence riff on the subject. I just went through the figures to work it out for myself, and to share it here.
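These figures are easy to verify in a few lines of Python (my own sketch, not from the book; Napoleon actually died at age 51, so I use a 50-year span of breathing):

```python
AVOGADRO = 6.022e23   # molecules per gm-mole
MOLAR_VOLUME = 22.4   # liters per gm-mole at standard conditions

# Part 1: molecules in the whole atmosphere
# 1 kg of air above each cm^2, average molecular weight 29
atmosphere_moles = (1000 / 29) * 5.1e18      # moles/cm^2 times Earth's area in cm^2
atmosphere_molecules = atmosphere_moles * AVOGADRO   # a bit over 1e44

# Part 2: molecules breathed over a 50-year span
tidal_molecules = (0.5 / MOLAR_VOLUME) * AVOGADRO    # about 1.3e22 per half-liter breath
lifetime_breaths = 23_000 * 365 * 50                 # about 420 million
lifetime_molecules = lifetime_breaths * tidal_molecules  # about 5.5e30

# Part 3: expected overlap with any single breath of yours,
# assuming Napoleon's exhalations are now mixed evenly through the air
fraction_of_atmosphere = lifetime_molecules / atmosphere_molecules
shared_per_breath = fraction_of_atmosphere * tidal_molecules

print(f"{shared_per_breath:.2e}")  # several hundred million molecules
```

The even-mixing assumption, and the double counting of molecules breathed more than once, keep this an order-of-magnitude estimate, which is all the argument needs.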

A particular aim of Dr. Tyson in everything he writes, and says in his programs, is to impress us with the power of the scientific method. We don't learn "how the world works" by guessing. We observe, make tentative conclusions based on observations, argue with others about it, eventually turn the conclusions into a hypothesis that we can test, and then repeat as needed. Now, in cosmology, a "test" would take billions of years. This isn't chemistry, for which you can mix a few things in a jar and take a measurement in a matter of seconds or minutes. Neither is it biology; we have no cosmological Gregor Mendel, crossbreeding stars as though they were peas. But we can work out the math and see how it squares with the things we see.

In science, more than in any other endeavor, "No man is an island." No woman either. The popular trope of the loner in a stained lab coat making a major discovery is simply unknown to real science. Even a few centuries ago, when chemistry was emerging from alchemy and astronomy was emerging from astrology, a "lonely genius" was really a highly social being, surrounded by helpers, colleagues, opponents, and many others. The quintessential scientific loner, Isaac Newton, spent much more time discussing his findings and theories with members of the Royal Society, including friends, "frienemies", and enemies, than he did carrying out observations or even thinking out his theories. Without a helpful gadfly-friend to prod him, he'd never have finished writing his Principia. So although Newton was famously anti-social, he still had to interact socially for his science to have usefulness and meaning. But that's the beauty of science. It is our great, collaborative enterprise of looking back at the Universe that birthed us, to see how it was done, and a great many more things of interest also.

This isn't a textbook. It provides not an education in the subject but a vision of what astrophysics is. If you treat it sort of like a textbook, and write down ideas that interest you as you go along, you'll gather fodder for any further studies you might wish to carry out. That's the kind of thing I've done all my life.

Monday, December 18, 2017

Growing up unique

kw: book reviews, science fiction, space opera, child prodigies

Fiction authors frequently write to explore. I first recognized this while reading one of Isaac Asimov's Robot stories, in which he explored the boundaries of the Three Laws of Robotics. He had hinted at them in Robbie and first stated them clearly in Runaround (both collected in I, Robot). Years later I realized he was also exploring the boundaries of neurosis. As I learned of his life, including what he wrote in several memoirs, I understood that he was profoundly neurotic and he used his characters—the ever-more-perfect and godlike robots in contrast to the all-too-faulty humans—to work through the ramifications of neurosis in himself.

I have read novels by Orson Scott Card for about thirty years, beginning with Ender's Game. I don't know if I have read all the Ender series books. I did read all of the Homecoming books, and it is more than clear that in those Card is exploring the boundaries of morality and altruism. His character Nafai is pathologically altruistic.

When I read Ender's Game I wasn't ready for it. I was a mere 40-year-old. I took it at face value, as a coming-of-age novel in a space opera setting. Speaker for the Dead and other Ender series books also left me bemused. Now, just this year, more than thirty years later, Children of the Fleet adds another layer to the Ender saga, and I think I am beginning to understand.

The children in this novel, including the protagonist, Dabeet Ochoa, resemble those in earlier books in that they think rather consistently at an adult level, and perform certain adult tasks, though with some limitations because they are, after all, mostly pre-teens. None has yet hit the pubertal growth spurt, so they wear child-sized space suits, for example.

I was forcibly struck in this novel (and in retrospect, in Ender's Game) that Ender and Dabeet are victims of profound child abuse. Each is massively distorted from what he might have been in a more usual environment. Ender completed his mission, one supplied by others without his knowledge, by becoming the "Xenocide", the one responsible for annihilating the Formics, an insectile alien species. Dabeet's mission is only partly concealed, and he initially conceals it from others. In carrying it out, he brings life, not death (except indirectly, to a couple of all-too-human evildoers), and he prevents massive death.

Rather than dig further into the novel, I want to riff on the meaning of intelligence. We all think we know what intelligence is, but if asked to describe it, none can do so. For a few generations, tests of IQ (Intelligence Quotient) were thought to measure it, but they really tend to measure a small collection of cognitive and memory feats that are more machinelike than I care for. I wonder how the supercomputer Watson would fare on a Stanford-Binet test.

Further, the meaning of IQ has changed over the years. Originally, an IQ test was used with children ten years old and under, to compare their performance with sixteen-year-olds. I don't know how the test was normed (normalized), but apparently youngsters of ages between six and sixteen were tested to establish the "normal" performance of each year cohort. Then higher or lower performance could be compared with these norms to establish an IQ score: 100 for "normal for one's age". Based on the scatter displayed within each cohort, a Gaussian distribution was fitted and a standard deviation of 16 (later 15) was applied. So, when I was given an IQ test in third grade, at age 7, and my IQ score was 170, that supposedly meant that, in the memory and cognitive skills that were measured, I was performing at the level of a 12-year-old (11.9 to be precise). All I knew at the time was that, having begun to learn to read on my own when I began first grade as a 5-year-old (I turned 6 three months later), as a third grader I was indeed reading books usually seen in the book bags of seventh graders.

But how do you measure the IQ of an adult? When I was 20 did I have the "smarts" of a 34-year-old? Does such a question even have meaning? I think not. Others who considered cognitive psychology their calling thought about this quite deeply, and re-normed the test, making the standard deviation (σ) meaningful as a measure of scarcity. Thus, in any Gaussian distribution, the p statistic for ±2σ is 0.9545, or about 21/22. With σ = 15 and a mean of 100, the range ±2σ is from 70 to 130. So if you have a "normal" group of 44 people, one is likely to have an IQ of 70 or less, and one is likely to have an IQ of 130 or more.
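The Gaussian figures in that paragraph can be checked with the standard normal CDF, using nothing beyond the Python standard library:

```python
from math import erf, sqrt

def phi(z):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Fraction of a population within +/-2 sigma (IQ 70 to 130,
# with mean 100 and sigma 15): about 0.9545, i.e. 21 in 22.
within_2_sigma = phi(2) - phi(-2)
print(round(within_2_sigma, 4))

# Rarity of +4 sigma (IQ 160): roughly 1 person in 31,600.
print(round(1 / (1 - phi(4))))
```

So in a "normal" group of 44, about one person falls below 70 and one above 130, just as described.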

I can tell you from experience, though, that IQ has little relation to street smarts. As an adult, my IQ has settled to 160, or 4σ above "average", a level achieved by one person in about 31,000. As a pre-teen and early teen, I finally realized I was not very likable. I began to work toward fixing that. I felt that if I did not have good social reactions automatically, as my age-mates did, I would have to observe, learn, and calculate those reactions. I did so. I used to look at that 170-to-160 shift as "giving up 10 IQ points for a better SQ" (Sociability Quotient). Thus, this paragraph found near the end of Children of the Fleet hit me with special resonance:
Maybe making and keeping friends will always require me to think through the steps of it … Maybe it will never be natural for me, never reflexive, never easy. So be it. I can't live without it, can't accomplish anything without it, so I will become adequate at forcing myself, against my inclinations, to be a friend to my friends. If I'm good at it, they'll never guess the effort that it requires.
Dabeet's musings match mine at just about the same age. Now I'll tell you what happened after I was 40. No details, just this: I had occasion to learn, through a personality test, that my "calculated person" was pretty good; but also, because a part of the test elicited reactions that had to be too fast for my calculations, I learned that a "natural" personality was truly there, and it was also pretty good! I came away with a proverb, "You cannot build a tree." I had found out, after a few decades of tree construction and maintenance, that a perfectly adequate tree had grown up beneath my notice and could be relied upon to be a "me" that didn't need all the effort. I am happier and calmer as a result.

If Dabeet is a reflection of Card's view of himself, as I suspect, maybe he is in the midst of learning, or will soon learn, the same thing. Let's see where the next of Card's novels takes us, and him.

Thursday, December 14, 2017

Bill Nye the Climate Guy

kw: book reviews, nonfiction, scientific method, climate change, polemics

Bill Nye is one of my all-time favorite people. The fact that I was dismayed by some aspects of his recent book doesn't diminish my admiration for him. He is a top-notch science educator and a writer I enjoy reading.

Bill Nye's new book, Everything All At Once: How to Unleash Your Inner Nerd, Tap into Radical Curiosity, and Solve Any Problem, is ostensibly about that first phrase: "Unleash your inner nerd." It is primarily an evangelical work, aimed at anyone on the fence between those who "believe" in climate change and the climate-change "deniers". Along the way, though, he offers great examples and advice for many folks who may be a bit tech-averse, to see how humans are by nature technical beings, and that solving problems is what we do best—or we can, if we go about it right.

I hope a great many people will indeed read this book. It is very well written. The author manages to press his pro-climate change case pretty hard without becoming entirely disagreeable. I will address my concerns in a moment.

Let me first state my background in the matter; it is a subject I have followed for nearly sixty years.

When I was a child I heard about the "Greenhouse Effect". It was already old news, because the term was used by Svante Arrhenius in 1896 to describe his calculations that a doubling of CO2 concentration in the atmosphere would raise average global temperature by about 5°C (that is 9°F to us Americans). At the age of twelve I was able to learn enough math to reproduce Arrhenius's result.
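Arrhenius's relation is logarithmic: each doubling of CO₂ adds the same increment of warming. A minimal sketch, using the roughly 5°C-per-doubling figure from his calculation (the logarithmic form is the standard statement of his rule):

```python
from math import log

DELTA_T_PER_DOUBLING = 5.0  # degrees C, Arrhenius's figure

def warming(co2_ratio):
    """Logarithmic rule: equal warming for each doubling of CO2."""
    return DELTA_T_PER_DOUBLING * log(co2_ratio) / log(2)

print(warming(2.0))            # one doubling: 5.0 degrees
print(warming(560 / 280))      # 280 ppm to 560 ppm is also one doubling: 5.0
print(round(warming(1.5), 2))  # a 50% increase warms proportionally less
```

The logarithmic form means going from 280 to 560 ppm warms the planet as much as going from 560 to 1120 ppm would.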

In actuality, "greenhouse effect" is not an entirely accurate metaphor. In a greenhouse, the glass physically traps air warmed by the sun, while its spectral properties (transparent to visible light, largely opaque to infrared) enhance the effect. A "greenhouse gas" cannot physically trap warm air; it causes extra heating solely through its spectral emissivity.

The terms "Global Warming" and "Climate Change" began to be used by some in about 1975, and their use ramped up greatly after 1985. "Greenhouse Effect" also took off about that time, when the atmospheric effects they all refer to became a political football. Then a funny thing happened. Looking at the Google Ngram Viewer, I find that since 1992 "Greenhouse Effect" rapidly fell out of favor, "Climate Change" became the term of choice, with "Global Warming" running a rather distant second.

The problem with all this is that "Greenhouse Effect" denotes a possible cause, while the other two terms refer to effects. So now let us back up and examine the term I threw in earlier, "spectral emissivity". For solid materials, this refers to a departure from the spectral behavior of a blackbody or graybody. If we could produce a paint that was perfectly gray—at any level of grayness—throughout the electromagnetic spectrum, we could paint it on a surface and it would cause an amount of heating, when the sun shone upon it, directly correlated to the total emissivity. To be specific, a perfect blackbody surface will heat up to a temperature that depends only on the energy being radiated to it. It has an emissivity of 1. A perfect reflector will not be heated at all. It has an emissivity of 0. A perfect graybody surface with an emissivity of 0.5 will heat up to an intermediate temperature, following the Stefan-Boltzmann law: radiated power is proportional to emissivity times T⁴.

Now, consider a "step-spectral" surface. Suppose it has an emissivity of 1 for visible light, and an emissivity of 0 for infrared light. Let's put the cutoff at 700 nm. A surface with this characteristic, in a vacuum so air will not carry off any heat, and with only visible light shone upon it, would heat up until it was hot enough to radiate away that same amount of radiant energy. In visible light it would appear black. It absorbs light, but if it is cool, emits nearly none. Thus it must heat up. You might know from experience that the heating element in an oven gets to about 600°C before it begins to glow reddish, and at 800°C it is getting orange-red. The great majority of its radiation, however, is at infrared wavelengths longer, much longer, than the 700 nm radiation we call "deep red". If it is prevented by the step-spectral emissivity from radiating at those longer wavelengths, it must, perforce, heat up until it is radiating a lot of visible light, to balance the incoming light. Thus a step-spectral surface tends to get very hot indeed, hotter than an oven element.
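To put a number on "very hot indeed", here is a sketch of my own (not from any published source): it integrates the Planck function over the visible band only, then bisects for the temperature at which visible-band emission alone balances an assumed absorbed flux of 1000 W/m², roughly full sunlight:

```python
import numpy as np

# Physical constants, SI units
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23
CUTOFF = 700e-9     # step-spectral cutoff: emissivity 1 below, 0 above
FLUX_IN = 1000.0    # assumed absorbed sunlight, W/m^2 (illustrative)

def planck(lam, T):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    return 2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

def emitted_below_cutoff(T):
    """Flux radiated in the visible band only (emissivity 0 elsewhere)."""
    lam = np.linspace(200e-9, CUTOFF, 4000)
    y = planck(lam, T)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lam))  # trapezoid rule
    return np.pi * integral    # pi converts radiance to hemispheric flux

# Bisect for the temperature where visible-band emission balances the input.
lo, hi = 300.0, 5000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if emitted_below_cutoff(mid) < FLUX_IN:
        lo = mid
    else:
        hi = mid

print(f"Equilibrium temperature: {lo:.0f} K")   # roughly 1700 K
```

Roughly 1700 K, well above the 800°C (about 1075 K) oven element, just as the paragraph above argues.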

Now we can consider gases. Oxygen and nitrogen hardly absorb any light at any wavelength of interest to us as we consider the heat balance of our atmosphere. There is a common gas, however, that does absorb a lot of light, at a range of wavelengths that make it a strong greenhouse gas. That is water vapor. Surprised? We will look at some spectra in a moment. First, qualitatively, we find that water vapor absorbs some ultraviolet light, but absorbs far more strongly in several ranges throughout the infrared, with narrow absorption bands at about 1.2 and 1.9 microns, a wider band from 2.5 to 3 microns, and a wide, almost total absorption feature from 5 to 7.5 microns. The result of this is that if Earth's atmosphere had no greenhouse gases, the surface would be about 33°C (roughly 59°F) cooler than it is. A perpetual ice age without the ice. So water vapor is by far the strongest greenhouse gas, and it is responsible for life being able to exist on Earth.

"Climate Change" is all about carbon dioxide (CO2). What does this gas do? It also has spectral emissivity, with an absorption band at about 2.7 microns, a stronger one near 4.3 microns, and a third between 12 and 16 microns. This last one is of primary interest. It is perfectly placed to absorb about 10% of the thermal radiation from warm dirt, meaning that the dirt has to get a little warmer to radiate that extra energy at other wavelengths. And that is what is behind Arrhenius's greenhouse effect calculation.

Greenhouse gases operate a little differently from painted surfaces. Dirt and other stuff on Earth's surface have spectral emissivity, of course, but nowhere near the perfection of the step-spectral material discussed earlier. So the surface reflects a lot of light, absorbs some, and gets warm enough to radiate some infrared. In a vacuum, dirt with sunlight shining on it would have some specific temperature. Now put a layer of greenhouse gas above it, an atmosphere containing water vapor. The incoming sunlight is not affected much. But the outgoing infrared from the warm dirt is partly absorbed by the water vapor, which heats up and radiates also, with half going up and half going down. This causes the dirt to get warmer, until it can balance its thermal outflow against the radiative inflow from sunlight plus the re-radiated infrared from the warm air above it. How does CO2 modify this picture? It absorbs a little more infrared radiation, in portions of the spectrum in which water vapor is rather transparent. So CO2 strengthens the greenhouse effect. Now, here are the spectra:
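The "half going up and half going down" picture is the classic one-layer greenhouse model, and it can be worked out in a few lines. In this sketch of mine, the layer is assumed (for simplicity) to pass all sunlight, absorb all outgoing infrared, and radiate equally up and down; the 238 W/m² figure is the standard average absorbed sunlight:

```python
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
F = 238.0           # average absorbed sunlight per unit area, W/m^2

# Top of atmosphere: the layer's upward emission must balance the sun:
#   sigma * T_layer^4 = F
T_layer = (F / SIGMA) ** 0.25

# Surface: it receives sunlight plus the layer's downward emission:
#   sigma * T_surface^4 = F + sigma * T_layer^4 = 2 * F
T_surface = (2 * F / SIGMA) ** 0.25

print(f"Layer:   {T_layer:.0f} K")    # about 255 K
print(f"Surface: {T_surface:.0f} K")  # about 303 K, 2^(1/4) times the layer
```

The real surface averages about 288 K; the model overshoots because the real atmosphere is only a partial absorber. But it captures the essential point: adding infrared absorbers above a surface forces the surface warmer.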

I don't know the original source of this graph. It is found all over the place. It also shows a tiny contribution from oxygen and ozone, but we won't consider those here (in the "ozone layer" the temperature goes up significantly, however).

The blue line is for water vapor. The curve marked 255K shows the thermal radiation from a piece of ice at -18°C or 0°F. "Room temperature" is close to 300K or 27°C (81°F). Its radiation curve would be a little to the left of the one shown.
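That leftward shift follows from Wien's displacement law: the wavelength of peak thermal emission is b/T, with b about 2898 micron-kelvins (a textbook constant, not from this article). A quick check:

```python
B_WIEN = 2898.0   # Wien displacement constant, micron-kelvins

for label, T in [("ice (255 K)", 255.0), ("room temperature (300 K)", 300.0)]:
    peak = B_WIEN / T    # wavelength of peak thermal emission, microns
    print(f"{label}: peak near {peak:.1f} microns")

# ice (255 K): peak near 11.4 microns
# room temperature (300 K): peak near 9.7 microns
```

Both peaks sit right in the thermal infrared, beside the 12-16 micron CO2 band mentioned earlier.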

The point is, water vapor intercepts a lot of the radiation from the earth, and even from glaciers, and re-radiates part of it back downward. Yes, glaciers radiate infrared also. The blue line is for water vapor with a content near 0.3% of the atmosphere, or near saturation (100% relative humidity) at ice temperature. The CO2 curve is for a few hundred ppm; the sources I read didn't state exactly. The result of increasing the amount of CO2 would be to widen the bands, as their "wings" absorbed more and more. This is how these two gases produce greenhouse warming.

Now it is a separate issue, whether this is actually causing climate change. "Deniers" say not so; proponents of the idea that CO2 is a "pollutant" say it is. I won't get into that. We have measured that between the time I was a little child, when there was less than 300 ppm of CO2 in the atmosphere, and today, when the amount is 400 ppm, the global average atmospheric temperature has risen just under 1°C.
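That observed rise is at least consistent with the widely used logarithmic forcing fit, forcing = 5.35 × ln(C/C0) W/m² (the Myhre et al. formula). The sensitivity value in this sketch of mine is an assumed, illustrative low-end number, not something from this article:

```python
import math

F_COEFF = 5.35          # W/m^2 per natural log of the CO2 ratio (Myhre fit)
C0, C = 300.0, 400.0    # ppm CO2, then and now, as in the text
SENSITIVITY = 0.5       # K per W/m^2; an assumed low-end value

forcing = F_COEFF * math.log(C / C0)   # about 1.5 W/m^2
warming = SENSITIVITY * forcing        # about 0.8 K

print(f"Forcing: {forcing:.2f} W/m^2, implied warming about {warming:.1f} K")
```

About 0.8 K, the same ballpark as the "just under 1°C" that has been measured.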

Is that a lot, one degree C? Let's look at one factor. Water expands when heated. Heating water by 1°C yields an expansion of 0.000214, or 0.0214%. The ocean averages four km in depth. If the entire ocean were warmed by 1°C, it would be 0.000214 × 4,000 m = 0.856 m deeper (33.7 inches). That is enough to force the evacuation of some low-lying areas and certain island nations such as Tuvalu. "Climate evacuation" has already started. But has the whole ocean heated by that much? Not yet. Give it time. The early evacuations were the result of less than one-third of this figure.
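That arithmetic can be sketched directly; the numbers below are the ones given in the paragraph above:

```python
ALPHA = 2.14e-4    # fractional volume expansion of water per degree C (from the text)
DEPTH_M = 4000.0   # average ocean depth used in the text, meters
DELTA_T = 1.0      # warming, degrees C

# With a fixed sea floor and surface area, volume expansion shows up as depth.
rise_m = ALPHA * DEPTH_M * DELTA_T
rise_in = rise_m / 0.0254            # meters to inches

print(f"Sea-level rise: {rise_m:.3f} m ({rise_in:.1f} inches)")  # 0.856 m, 33.7 in
```

In reality only the upper ocean has warmed appreciably so far, which is why, as noted above, the rise observed to date is well under this full-mixing figure.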

I'll stop there. These are not easy points to make with a public that largely doesn't care. Thus, Bill Nye's passion. He wants to make everyone care. But as I read I took careful note: will he mention water vapor? He does not, except for a throwaway phrase in a late chapter. We can't ignore water, for another reason. Trapping a little more heat means adding energy to the system. That means more water could evaporate. Whether it will or not is a huge area of controversy in the climate modeling arena. Water is complex. It might be the most complex substance there is. It is possible that the added energy will yield a net drying rather than adding more water. We might see more rain, or less rain, overall, and nobody yet has a good handle on which areas might experience greater or reduced rainfall. Oh, I've seen a few predictions, but none is well supported by robust evidence.

I agree with Bill Nye, though, that we need to be reducing our dependence on "convenient" energy from burning stuff (mainly fossil fuels), and toward solar, wind and other "alternatives". A generation ago the oil companies began calling themselves energy companies. But they are really still oil and coal and gas companies, with only tiny amounts being spent on non-carbon energy production. They could become the heroes of the 22nd Century. But I fear they will more likely be the goats. I just don't know who else has money enough to do the research to make solar and wind as ubiquitous as they need to become. And there, I think the Science Guy might agree. Read the book. Agree with Bill Nye or not, you're in for a fun ride.

Friday, December 08, 2017

The most popular snails

kw: species summaries, natural history, natural science, museums, research, photographs

For the current series of projects at the Delaware Museum of Natural History, I have worked through several families of terrestrial gastropods (land snails and tree snails). Many of these are quite inconspicuous, being small and not colorful, though they are in general a little more various than the little brown "mud snails" (freshwater gastropods) I worked with for most of 2016.

You know, in any group of creatures, most are rather inconspicuous and poorly known. The "typical" mammal is a "little brown furry thing" such as a mouse, vole, shrew or lemming. The "typical" bird is a "little brown feathered thing" such as a wren or sparrow. The "typical" insect is a "little dark beetle" about the size of a grain of rice. The world is full of little brown things and we hardly notice them.

But we really like the colorful "charismatic" ones. Among the land snails, that would be the tree snails of Florida and the Caribbean, of the genus Liguus.


This is part of a drawer of "unidentified" lots of Liguus fasciatus, the poster child for pretty tree snails. Though these have been identified as to species, L. fasciatus has many "forms" or "varieties", which we provisionally catalog as subspecies, but they probably aren't really subspecies. We usually call them color forms. They hybridize freely, but a particular color form is usually physically separated from most others, being endemic to a few "hammocks", as small patches of raised and heavily vegetated ground are known in the area.

These are mostly from an area of the Everglades called Pinecrest, named for a ghost town tucked away in the middle of a couple of hundred hammocks. You can clearly see that most of these lots are in need of splitting into their color forms. Any particular hammock may be inhabited by a few color forms. A collector in a hurry will gather a couple of dozen shells, put them in a box or bag with a label (date, provisional ID, and location, at the least, to be a useful specimen lot), and move on to the next hammock a few minutes' walk away in the dry season, or a short airboat ride away the rest of the year.

Here is the prettiest of the color forms, in my opinion:

On your computer screen this may be a bit larger than life size. The paper label is 3 inches long, so these shells are about 2 inches long, a little bigger than the average for the species. Liguus fasciatus splendidus Frampton, 1932, must have been Henry Frampton's favorite also. These are indeed splendid! This lot was collected by Erwin Winte a few years after Frampton described them, and in the 1980's it wound up at DMNH.

These shells are so sought after that, though they are prolific and widespread, many color forms are getting hard to find. In the southeast U.S. and the Caribbean, a whole subset of shell collectors is known as "Liguus collectors". We are loving them to death!

This only serves to introduce these lovely shells. I hope soon to gather pictures of several color forms, and also to compare L. fasciatus with its sister species in the genus.