Tuesday, December 23, 2025

Just beyond the edge of the usual

 kw: book reviews, science fiction, short stories, ekumen series, space travel, anthologies, collections

I read some of the stories collected in The Birthday of the World and Other Stories, by Ursula K. Le Guin, when they were first published in the middle 1990's. It was a rare pleasure to re-read them, and to get to know their companion pieces, with the perspective offered by thirty years of personal experience and the dramatic social and political changes that have occurred in that time. These stories show Ms Le Guin some thirty years into her prolific career. The collection was published in 2002.

Seven of the stories (maybe only six, by her assessment in the Preface) take place in her speculative universe, the Ekumen, in which all "alien" races are descended from the Hainish on the planet Hain, from which numerous planetary societies have been founded. Sufficient time has passed that quite different, even extreme, societal and physiological variations have arisen. This affords the author a way to explore societal evolution among beings that are at least quasi-human. It removes the difficulty of dealing with totally alien species.

The story I remember best is the opening piece, "Coming of Age in Karhide." Although the Ekumen is mentioned and a few Hainish dwell on the planet, the story focuses on the experiences of a young person approaching "first kemmer", a span of a few days or weeks in which the sexless body transforms into either a male or a female one. The newly-sexed man or woman has promiscuous sex in the kemmerhouse and may become a parent, though it can take a few kemmers (which I translate internally as "coming into heat", the way cats, dogs, and most animals do) for a female to become pregnant the first time. During each kemmer, a man may remain a man or change to a woman, and vice versa.

The author passed away in 2018, just as "trans ideology" was garnering political power, primarily in "blue" states. I wonder what she thought of it. Thankfully, the ideology is fracturing and I hope it will soon be consigned to the dustbin of history. At present, roughly a quarter of American adults appear to genuinely believe that complete transition is possible. It isn't; "sex reassignment" is cosmetic only. It is only for the rich, of course: transition hormones cost thousands, and the full suite of surgeries costs around a million dollars. The amount of genetic engineering needed to produce a quasi-human with sex-changing "kemmer", should any society be foolish enough to attempt it, would cost trillions.

Other stories in Birthday explore other sexual variations, and the societal mores that must accompany them. These are interesting as exploratory projects. They were written shortly after the death of Christine Jorgensen. Ms Jorgensen was the first American man (but not the first worldwide) to undergo complete sexual reassignment surgery, in the early 1950's. Subjects such as the surgical transformation of the penis into the lining of a manufactured vagina, without disrupting blood vessels and nerves, were actually discussed in formerly staid newspapers!

To my mind, in America at least, Ms Jorgensen is the only "transitioner" to whom I accord female pronouns. She transitioned as completely as medical science of the time allowed (and very little progress has been made since). She became an actress and an activist for transsexual rights (she later preferred the term "transgender"; I think she learned a thing or two). She even planned to marry a man, but was legally blocked. She intended to enjoy sex as a woman would. Maybe she did.

The last piece in the volume, "Paradises Lost", takes place on a generation spaceship, population 4,000, strictly regulated to match the supplies sent on a journey intended to require more than 200 years. The religious politics that threaten to derail the enterprise don't interest me much. Of much more interest: the mindset of residents in the fifth generation after launch, after all the "Zeroes" and "Ones" have passed away, expecting the sixth generation to be the one to set foot on the new planet; and the way the "Fives" react to their experiences on that planet after an early arrival (sorry for the spoiler).

We are only in part a product of our ancestors' genetics. Much more, we are a product of the environment in which we grew up (itself only in part a product of our ancestors), the environment in which we had those formative experiences that hone our personalities. While all the stories in this volume explore these issues, "Paradises Lost" does so most keenly.

The work of Ursula K. Le Guin stands as a monument to speculative thinking in areas that few authors of her early years could carry off.

Monday, December 15, 2025

How to not be seen

 kw: book reviews, nonfiction, science, optics, visibility, invisibility

In a video you may have seen (watch it here before continuing; spoiler below),

.

.

.

…titled "Selective Attention Test", you are asked to keep careful watch on certain people throwing basketballs.

.

.

.

…Several seconds in, someone wearing a gorilla suit walks into the middle of the action, turns to the camera, beats its chest, then walks back out of the scene. When this is shown to people who've never heard of it, about half report seeing the "gorilla", and half do not.

This is called Inattentional Blindness. It is used by stage magicians, whose actions and talk in the early part of a performance direct the audience's attention away from what is happening right in front of them. A magician can't be content with misdirecting half of the audience; the goal is 100%. This is often achieved!

But what if someone wants to vanish from plain sight, without benefit of a flash of fire or smoke (the usual prop for a vanishing act)? Optical science researcher Gregory J. Gbur might have something to say about that in his book Invisibility: The History and Science of How Not to Be Seen.

Much of the history Dr. Gbur draws upon is found in science fiction. It seems that every scientific discovery about optics and related fields was fodder for science fiction writers to imagine how someone could be made invisible. This cover image from a February 1921 issue of Science and Invention (edited and mostly written by Hugo Gernsback, later to write lots of science fiction and edit Amazing Stories) shows the rays from something similar to an X-ray machine making part of this woman invisible.

I looked for this cover image online and found an archive of S&I issues. The issues were apparently produced with various covers for different regions, however, and the version in the archive had a cover touting a different application of X-rays. Even so, the article on page 1074, referred to in the cover shown above, does discuss whether X-rays or something like them can be used to provide invisibility, and also shows another way that structures inside the body may be seen.

Here the "transparascope" makes certain tissues transparent, allowing the viewing of others. IRL, the development of CT scanning and MRI scanning, fifty-odd years later, were required to achieve such views. The invisibility beam of the cover image has so far proved elusive.

Invisibility sits in the broader realm of "how not to be seen." The book shows in detail that the technologies that have been developed to hide or cloak objects can only work perfectly over very narrow ranges of light wavelength (and by analogy, waves in water and other media), and usually a narrow range of viewing angle. Is perfection needed? That depends…

In the late 1960's I worked for a defense contracting company, mainly as an optical technician. I was loaned to a related project as an experimental subject. The team was gathering data on the limits of human vision: detecting the contrast between a lighted object in the sky (an aircraft) and the sky itself. This was the Vietnam War era.

The experimental setup was a room with one wall covered with a screen on which versions of "sky blue" were projected. At the center was a hole and various targets were set in this hole. They simulated the look of a dark or darkish object in the sky, and each target had several lighted spots, little lamps. The lamps' color and brightness could be adjusted. I was instructed to tell what I could see. The first day I was there, the background target was black, and the lamps were small and bright. The targets had differing numbers of lamps and their brightness would be adjusted to reduce the visibility of the overall target. This tested acuteness of vision; how many lamps on a certain size target would "fuzz together" and seem to illuminate its entire area? 

For most people, the "fuzz" angle is 1/60th of a degree. When you look up at a Boeing 737 at 30,000 ft elevation, its length of about 130 feet means it subtends and angle of about 1/4 degree. It would take two rows of 25 lamps along the fuselage, and at least 10 lamps, or 10 pairs of lamps, along each wing, to counter-illuminate it and reduce its visibility. That's a lot. A B-52 bomber is 20 feet longer and its engines are huge, like misplaced chucks of fuselage.

On another day, the target's background color was a blue color somewhat darker than the "sky". The target had the optimum size and spacing of lamps to seem of more-or-less uniform brightness, and the brightness and color of the lamps were varied. This tested our color acuity; how far could the colorimetry of the target-lamp combination vary to remain invisible or minimally visible?

This image simulates the second kind of target-lamp combination. If you look at this image from a sufficient distance, the simulated target will nearly disappear, or for you it may vanish completely. This works best if you either take off your glasses or look through reading lenses, to defocus the image.

The average color and brightness of the simulated target are a close match to the surrounding sky-blue color. Thus, if an aircraft's belly is painted a medium blue, and a sufficient number of lamps are mounted on it and controlled by an upward-looking system, it can seem to vanish against the sky as long as it is high enough that the angular distances between the lamps are smaller than the circle of confusion (1/60th degree) of the eyes of an observer below.

This set of letter-targets is similar to a different test. Each letter has a little different color and brightness than the "sky". The 5 letters here make up the word "ROAST", but are not in order. For this test the sky color would be adjusted to see which letters were least and most visible. In both panels you will probably see three or four letters, but one or two that are not seen in one panel will be seen in the other.

In the end, it was all for nought. The sky is too variable, and human vision is also variable. There are three kinds of color blindness, and six kinds of "anomalous color vision"; any of these renders visible a target that "normal" eyes cannot see. It's kind of the opposite of those color-blindness tests with pastel "bubbles" that show the letter K to "normies" but the letter G to most color blind people. Also, wearing polarized glasses changes the perceived color of the sky, and tilting your head makes a dramatic difference in the color. Anyone with shades on would see the aircraft easily.

A further drawback of these tests was that no Asians' eyes were tested. In my regular job at the time, we were developing an infrared light source that Asians could not see. The near-infrared lamps used for night vision goggles and SniperScopes were invisible to Anglos, but quite visible to the Vietnamese. Several American snipers lost their lives when they turned on their SniperScope and a bullet came back instantly. What eventually worked was not a different light source but hypersensitive image amplification, the "starlight scope".

My wife is Asian. Certain items that look green to me she tells me are blue. Away from the green-blue boundary, she and I agree on the colors of objects.

The later chapters of Invisibility describe experiments and simulations that could lead to effective cloaking. There is even an appendix that shows a home tinkerer how to make a couple of kinds of visual cloaks that work in at least one direction. Full-surround cloaking is still out of reach, but who knows?

This book earns my "fun book of the year" award. Well written and very informative.

Saturday, December 13, 2025

Nails in the coffin of dark energy?

 kw: science, cosmology, dark energy, supernovae, supernovas, type ia supernova, metallicity

INTRODUCTION

The ΛCDM model of the Universe was proposed after two research groups (led by Adam G. Riess and Saul Perlmutter) studied certain supernovae. "Λ" (Greek lambda) refers to the cosmological constant, first proposed by Einstein, that describes the expansion of spacetime. The research teams concluded that spacetime was not just expanding, but expanding at an increasing rate. This is called "cosmic acceleration." Their key observation was that distant Type Ia supernovae are fainter than expected. This soon led to the hypothesis that 75% of the energy content of the Universe is "dark energy", which is driving and accelerating the expansion.

When I first read about "dark energy" more than 25 years ago I thought, "How can they be sure that these supernovae are truly 'standard candles' over the full range of ages represented, more than ten billion years?" I soon considered, "Is the brightness of a Type Ia supernova affected by the metallicity of the exploding star?" and "Is it worth positing a huge increase in the energy of the Universe?" From that day until now I have considered dark energy to be the second-silliest hypothesis in cosmology (I may deal with the silliest one on another occasion).

On December 10, 2025, an article appeared that has me very excited: "99.9999999% Certainty: Astronomers Confirm a Discovery with Far-Reaching Consequences for the Universe’s Fate", written by Arezki Amiri. In the article, this figure demonstrates that I was on the right track. The caption reads, "Correlation between SN Ia Hubble residuals and host-galaxy population age using updated age measurements. Both the low-redshift R19 sample and the broader G11 sample show a consistent trend: older hosts produce brighter SNe Ia after standardization, confirming the universality of the age bias. Credit: Chung et al. 2025"

It reveals a correlation between the brightness of a Type Ia supernova and the age of its host galaxy. Galactic age is related to the average metallicity of the stars that make it up. Thus, more distant Type Ia supernovae can be expected to be fainter than closer ones, because more distant galaxies are seen when they were younger, and consequently had lower metallicity. This all requires a bit of explanation.

WHAT IS METALLICITY?

Eighty percent of the naturally-occurring chemical elements are metals. That means they conduct electricity. Astronomers, for convenience, call all elements other than hydrogen (H) and helium (He) "metals". The very early Universe consisted almost entirely of H and He, with a tiny bit of lithium (Li), element #3, the lightest metal. The first stars to form were not like any of the stars we see in our sky. They were composed of 3/4 hydrogen by weight, and 1/4 helium. The spectral emission lines of H and He are sparse and not strong. Thus, the primary way for such a star to shine is almost strictly thermal radiation from a "surface" that has low emissivity.

By contrast, a star like the Sun, which contains 1.39% "metals", has many, many spectral lines emitted by these elements, even as the same elements in the outer photosphere absorb the same wavelengths. On balance, this increases the effective emissivity of the Sun's "surface" and allows it to radiate light more efficiently. The figure below shows the spectra of several stars. Note in particular the lower three spectra. These are metal-poor stars, and few elemental absorption lines are visible (The M4.5 star's spectrum shows mainly molecular absorption lines and bands). However, even such metal-poor stars, with less than 1/10th or 1/100th as much metal content as the Sun, are very metal-rich compared to the very first stars, which were metal-free.

Spectra of stars of different spectral types. The Sun is a G2 star, with a spectrum similar to the line labeled "G0".

One consequence of this is that a metal-poor star of the same size and temperature as the Sun isn't as bright; it produces less energy. Another consequence, for the first stars, is that they had to be very massive, 50 to 100 times as massive as the Sun or more, because it was difficult for smaller gas clouds to shed radiant heat and collapse into stars. Such primordial supergiant stars burned out fast and either exploded as Type II supernovae or collapsed directly into black holes.

THE TWO MAIN TYPES OF SUPERNOVAE

1) Type I, little or no H in the spectrum

A star similar to the Sun cannot become a supernova. It fuses hydrogen into helium until about half of its hydrogen is gone. Then its core shrinks and heats up until helium begins to fuse to carbon. While doing so, it grows to be a red giant and gradually sheds the remaining hydrogen as "red giant stellar wind". When the helium runs out, the fusion engine shuts off and the star shrinks to a white dwarf composed mainly of carbon, a sphere about 1% of the star's original size, containing about half the original mass. For an isolated star like the Sun, that is that.

However, most stars have one or more co-orbital companion stars. For any pair of co-orbiting stars, at some point the heavier star becomes a red giant and then a white dwarf. If the orbit is close enough some of the material shed by the red giant will be added to the companion star, which will increase its mass and shorten its life. When it becomes a red giant in turn, its red giant stellar wind will add material to the white dwarf. The figure shows what this might look like.

White dwarfs are very dense, but are prevented from collapsing further by electron degeneracy pressure. This pressure is capable of resisting collapse for a white dwarf with less than 1.44 solar masses (1.44 Ms). That is almost three times as massive as the white dwarf that our Sun is expected to produce in about six billion more years. It takes a much larger star to produce a white dwarf with a mass greater than 1.4 Ms, one that began with about 8 Ms. Such a star can produce more elements before fusion ceases: C fuses to O (oxygen), O fuses to neon (Ne), and so on through Na (sodium) to Mg (magnesium). The white dwarf thus formed will be composed primarily of oxygen, with significant amounts of Ne and Mg. Such a stellar remnant is called an ONeMg white dwarf. Naturally it has more metals present than the original star did when it was formed, but less than a white dwarf formed from a higher-metallicity star.
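As an aside, that 1.44 Ms figure can be recovered from fundamental constants with the standard textbook formula for the Chandrasekhar mass. Here is a minimal sketch (constants rounded; mu_e is the nucleon-to-electron ratio, which is 2 for C, O, Ne, and Mg alike):

```python
import math

# Physical constants (SI units, rounded)
hbar  = 1.0546e-34   # reduced Planck constant, J*s
c     = 2.9979e8     # speed of light, m/s
G     = 6.6743e-11   # gravitational constant, m^3/(kg*s^2)
m_u   = 1.6605e-27   # atomic mass unit, kg
M_sun = 1.9885e30    # solar mass, kg

mu_e  = 2.0          # nucleons per electron (same for C, O, Ne, and Mg)
omega = 2.0182       # constant from the Lane-Emden n=3 polytrope solution

# M_Ch = omega * sqrt(3*pi)/2 * (hbar*c/G)^(3/2) / (mu_e*m_u)^2
m_ch = omega * math.sqrt(3 * math.pi) / 2 * (hbar * c / G) ** 1.5 / (mu_e * m_u) ** 2
print(m_ch / M_sun)  # ~1.46 solar masses (the commonly quoted value is 1.44)
```

Because mu_e = 2 for all of these compositions, the limiting mass itself is the same for C+O and ONeMg white dwarfs; composition affects the brightness of the eventual explosion, not the mass that triggers it.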

Now consider a white dwarf with a mass a little greater than 1.4 Ms, with a companion star that is shedding mass, much of which spirals to the white dwarf, as the figure illustrates. When the white dwarf grows to 1.44 Ms, which is called the Chandrasekhar Limit, it will explode as a powerful Type Ia supernova.

There are two other subtypes, Ib and Ic, that form by different mechanisms. While they are also no-H supernovae, there are differences in their spectra and light curves that distinguish them from Type Ia, so we don't need to consider them further.

2) Type II, strong H in the spectrum

Type II supernovae are important because they provide most of the metals in the Universe. They occur when a star greater than 10 Ms runs out of fusion fuel. It takes a star with 10 Ms to produce elements beyond Mg, from Si (silicon) to Fe (iron). Fe is the heaviest element that can be produced by fusion. These heavy stars experience direct core collapse to a neutron star, with most of the star rebounding from the core as a Type II supernova. During this blast, the extreme environment produces elements heavier than Fe also. (Stars that are much heavier can collapse directly to become a black hole.)

EVOLUTION OF UNIVERSAL METALLICITY

At the time the first stars formed, the Universe was metal-free. It took a few hundred million years for a few generations of supernovae to add newly-formed metals, so the first galaxies were formed from stars of very low to low metallicity. Even at such metallicities, smaller stars could form. Since that time, most stars have been Sun-size and smaller, though stars can still form with masses up to about 50 Ms.

Stars of these early generations smaller than about 0.75 Ms are still with us, having a "main sequence lifetime" exceeding 15 billion years. I can't get into the topic of the main sequence here. We're going in a different direction.

Stars of the Sun's mass and heavier have progressively shorter lifetimes. Over time, the metallicity of the Universe has steadily increased. That means that the "young" galaxies discussed in the Daily Galaxy article (and the journal article it references) are more distant, were formed at earlier times in the Universe, and thus tend to have lower metallicity.

LOWER METALLICITY MEANS LOWER BRIGHTNESS

This leads directly to my conclusion. A Type Ia supernova erupts when a white dwarf, whatever its composition, exceeds the Chandrasekhar Limit of 1.44 Ms. This has made them attractive as "standard candles" for probing the distant Universe. However, they are not so "standard" as we have been led to believe.

Consider two white dwarfs that have the same mass, say 1.439 Ms, but different compositions. One is composed of C or C+O, with very low amounts of metallic elements. The other has a composition more like stars in the solar neighborhood, with 1% metals or more. As seen with stars, more metals lead to more brightness, for a star of a given mass. Similarly, when these two white dwarfs reach 1.44 Ms and explode, the one with more metals will be brighter than the other.

The final question to be answered: Is this effect sufficient to eliminate all of the faint-early-supernova trend that led to the hypothesis of dark energy in the first place? The headline to the article indicates that the answer is Yes. A resounding yes, with a probability of 99.9999999%. That's seven nines after the decimal. That corresponds to a 6.5-sigma result, where 5 sigma or larger is termed "near certainty".

The article notes that plans are in the works to use a much larger sample of 20,000 supernovae to test this result. I expect the larger study to confirm it. The author also suggests that perhaps Λ is variable and decreasing. My conclusion is that dark energy does not exist at all. Gravity has free rein in the Universe, and is gradually slowing down the expansion that began with the Big Bang (or perhaps Inflation, if that actually occurred).

That's my take. No Dark Energy. Not now, not ever.

Wednesday, December 10, 2025

How top-down spelling revision didn't work

 kw: book reviews, nonfiction, language, writing, spelling, spelling reform, history

The cover is too good not to show: enough is enuf: our failed attempts to make English eezier to spell by Gabe Henry takes us on a rollicking journey through the stories of numerous persons, societies and clubs that have tried and tried to revise the spelling of English. Just since the founding of the USA, national figures including Benjamin Franklin and Theodore Roosevelt have involved themselves in the pursuit of "logical" spelling. "Simplified spelling" organizations persist to this day.

English is the only language for which spelling bees are held. Nearly all other languages with alphabetic writing are more consistently phonetic. However, I would make French an exception to that rule. I discovered during three years of French classes that the grammar of French verbs is, to quote a Romanian linguist friend, "endless." Putting together all possibilities of conjugation, tense and mood, French has four times as many varieties of verb usage and inflected endings as English does, and then each variety is multiplied by inflections that denote number, person and gender. However, inflections ranging from -ais and -ait to -aient all have the same pronunciation, "-ay" as in "way". Other multi-sonic instances abound. Perhaps French has stalled on its way to being like Chinese, for which the written language is never spoken and the spoken languages aren't written.

But we're talking about English here. The author states several times that there are eight ways of pronouncing "-ough" in English. Long ago a friend loaned me a book, published in 1987, a collection of items from the 1920's and 1930's by Theodor S. Geisel, before he became Dr. Seuss: The Tough Coughs as He Ploughs the Dough. Geisel's essays on English spelling seen from a Romanian perspective (tongue-in-cheek, as usual; he was from Massachusetts, of German origin) dwell on the funnier aspects of our unique written language. The peculiarities of -ough occupy one of the chapters.

Being intrigued by the "8 ways" claim, I compiled this list using words extracted from an online dictionary:

  1. "-ow" in Bough (an old word for branch) and Plough (usually spelled "plow" in the US)
  2. "-off" in Cough and Trough
  3. "-uff" in Enough and Tough and Rough
  4. "-oo" in Through and Slough (but see below)
  5. "-oh" in Though and Furlough
  6. "-aw" in Bought and Sought
  7. "-ə" (the schwa) in Thoroughly ("thu-rə-ly")

And…I could not find an eighth pronunciation for it. Maybe someone will know and leave a comment.

"Slough" is actually a pair of words. Firstly, a slough is a watery swamp. Secondly, slough refers to a large amount of something, and in modern American English it is usually spelled "slew", as, "I bought a whole slew of bedsheets at the linens sale." However, "slew" is also the past tense of the verb "slay": "The knight slew the dragon," which is the only way most folks use that word.

Numerous schemes have been proposed over time. Sum peepl hav sujestid leeving out sum letrz and dubling long vowls (e.g., "cute"→"kuut"). A dozen or more attempts at this are mentioned in the book. Others have invented new alphabets, or added letters to the 26 "usual" ones, so that the 44 phonemes could each have a unique representation. An appendix in many dictionaries introduces IPA, the International Phonetic Alphabet (which includes the schwa for the unaccented "uh" sound). Public apathy and pushback have doomed every scheme.

The only top-down change to American spelling that came into general use was carried out by Noah Webster in his Dictionary. He took the U out of many words such as color and favor; the Brits still use colour and favour. And he removed the final K from a number of words, including music and public (I think the Brits have mostly followed suit); pulled the second L from traveler and the second G from wagon; and introduced "plow" along with other spellings that didn't all make it to present-day usage. His later attempts at further simplification didn't "take".

I could go on and on. It's an entertaining pastime to review so many attempts. However, something has happened in the past generation, really two things. Firstly, advertising pushed the inventors of trademark names to simplify them, particularly in the face of regulations that forbade the use of many common words in product brands. Thus, we have "Top Flite" golf balls, "Shop Rite" and "Rite Aid" retailers, and new uses for numbers, such as "Food4Less" for a midwestern market chain and "2-Qt" for a stuffed toy brand. Secondly, the advent of ubiquitous cell phones motivated kids everywhere to develop "txtspk". Single-letter "words" such as R and U plus substituting W for the long O leads to "R U HWM?" Number-words abound: 2 for "to" and "too", 4 for "for", 8 in "GR8", and even 9 in "SN9" ("asinine", for kids who have that word in their working vocabulary). Acronyms multiply: LOL, ROFL (rolling on floor laughing), TTYS (talk to you soon)…a still-growing list. Even though most of us now have smart phones with a full (but tiny) keyboard, txtspk saves time and now even Gen-X and Boomers (such as me) use it.

Social change works from the bottom up. Top-down just won't hack it. Unless, of course, you are a dictator and can force it through, as Mao did when he simplified Chinese writing after taking power in 1949. Many of my Chinese friends cannot read traditional Chinese script. Fortunately, Google Lens can handle both, so Chinese-to-Chinese translation is possible!

We have yet to see any major literature moving to txtspk, let alone technical and scientific journals. If that were to happen, the next generation would need Google Lens or an equivalent to read what I am writing now, along with all earlier English publications.

It will be a while. Meantime, let this book remind us of the many times our forebears dodged the bullet and declined to shed our traditional written language. Thanks firstly to several long-term invasions (Saxon and Norman in particular), then to the rise of the British Empire, and finally to "melting-pot" America, our language is a mash-up of three giant linguistic traditions and a couple of smaller ones, plus borrowings, complete with original spelling if it existed, from dozens or hundreds of languages. Thus, one more thing found primarily in English: the idea of etymology, the knowledge of a word's origin. I haven't checked; do dictionaries for other languages include the etymologies of the words? My wife has several Japanese dictionaries of various sizes; none mentions the source of words except for noting which are non-Japanese because they have to be spelled with a special syllabary called Katakana.

English is unique. Harder to learn than some languages, but not all, it is still the most-spoken language on Earth. It is probably also the most-written, in spite of all the craziness.

Monday, December 01, 2025

MPFC – If you know, you know

 kw: book reviews, nonfiction, humor, satire, lampoons, parodies

Well, folks, this is a step up from Kindergarten: Everything I Ever Wanted to Know About ____* I Learned From Monty Python by Brian Cogan, PhD and Jeff Massey, PhD. Hmm. If one ignores the learned asides and references, the visual humor of Monty Python in its various incarnations is Kindergarten all the way. The bottom of the book cover has the footnote, "* History, Art, Poetry, Communism, Philosophy, The Media, Birth, Death, Religion, Literature, Latin, Transvestites, Botany, The French, Class Systems, Mythology, Fish Slapping, and Many More!" Various portions of the book do indeed treat of these items, and many more.

The authors make much of the educational background of the six Python members. No doubt, having been steeped in British culture about as much as one is able to steep, Python was eminently qualified to send-up nearly every aspect thereof. Even the "American Python" Terry Gilliam was a naturalized Brit after 1968.

The book is no parody of Monty Python; that's not possible. It is a series of riffs on their treatment of the various and sundry subjects. I have seen only one of the TV shows from Monty Python's Flying Circus, "Spanish Inquisition". The TV show ran on BBC from late 1969 to the end of 1974 and many episodes were re-run in later years on PBS. I've seen scattered bits that made their way to YouTube, and during the period that I could stomach watching PBS, I saw The Life of Brian and Monty Python and the Holy Grail. The book's authors have apparently binge-watched the entire MPFC corpus several times.

I enjoyed the book. I can't write more than this, so I'll leave it to you, dear reader, to delve into it yourself.

Thursday, November 20, 2025

Is half the country enough?

 kw: book reviews, nonfiction, land use, agriculture, prairies, restoration, conservation

About 45% of the land area of the "lower 48" is devoted to agriculture. That is about 900 million acres, or 1.4 million square miles. Nearly half of that was originally prairie: tallgrass, shortgrass, and mixed-grass ecosystems. Most has been converted to agricultural use. Prior to the arrival of the plow, the prairie encompassed:

  • Tallgrass prairie, 140 million acres, or 220,000 sq mi. All but 1% has been plowed and sowed with crops.
  • Mixed-grass prairie, 140 million acres, or 220,000 sq mi. About one-quarter remains unplowed.
  • Shortgrass prairie, 250 million acres, or 390,000 sq mi. About one-fifth remains unplowed.

Taken together, prairie grassland once encompassed 530 million acres, but now more than 440 million acres are devoted to agriculture, making up nearly half the total agricultural land in the US. Surveys of the remaining grasslands show that they are ecologically rich, with dozens of species of grass and hundreds of other plant species, hundreds of bird and mammal and other animal species (and of course tons of insects!), and rich soils that have accumulated over ten to twenty thousand years during the current Interglacial period.
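The arithmetic behind those totals is easy to replay; here is a minimal sketch using the figures from the list above:

```python
# Original acreage and fraction remaining unplowed, from the list above
prairies = {
    "tallgrass":   (140e6, 0.01),
    "mixed-grass": (140e6, 0.25),
    "shortgrass":  (250e6, 0.20),
}

total  = sum(acres for acres, _ in prairies.values())
plowed = sum(acres * (1 - left) for acres, left in prairies.values())

print(f"original prairie:  {total / 1e6:.0f} million acres")    # 530
print(f"now agricultural:  {plowed / 1e6:.0f} million acres")   # ~444
print(f"share of US farmland: {plowed / 900e6:.0%}")            # ~49%
```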

Sea of Grass: The Conquest, Ruin, and Redemption of Nature on the American Prairie, by Dave Hage and Josephine Marcotty, chronicles the history of these grasslands that formerly covered one quarter of the contiguous US. Their characteristics are governed by rainfall. The western edge of the shortgrass prairie laps up against the foothills of the Rocky Mountains, and this semiarid prairie is in the deepest part of the mountains' rain shadow. The eastern half of four states, Montana, Wyoming, Colorado, and New Mexico, plus the Texas panhandle, host shortgrass prairie.

Further east, a little more rainfall allows medium-height and some taller grasses to grow. This mixed-grass prairie makes up most of the area of North and South Dakota, Nebraska and Kansas, plus the middle of Oklahoma and Texas. Tallgrass prairie is supported by the more temperate rainfall amounts in Minnesota, Iowa, Illinois, northern Arkansas, eastern Kansas, and a little bit of eastern Oklahoma. The eastern extent of the prairie abutted the deciduous forests of the Midwest, which are now mostly stripped of trees and used to grow corn and soybeans.

The book's three parts cover, with some overlap, the prehistory of the prairie, the progress of its subjugation to agricultural use, and the efforts to conserve and restore portions of it. The third part makes up 40% of the book, and is clearly the authors' main concern.

Rather than repeat details that are better stated by the authors, I'll just display the bottom line: Prairie soils are biologically rich, conferring great ecosystem services. These include sequestering large amounts of carbon dioxide, absorbing rainwater runoff which reduces acute flooding, and quickly taking up excess nitrogen from over-fertilization of nearby agricultural fields rather than permitting it to flow into streams and eventually the Mississippi River and the northern Gulf of America. These points are being made in courtrooms throughout the central US, arguing not only that remaining prairie should be preserved and conserved, but that portions of agricultural fields in this area amounting to several percent should be reverted to native grasses to reduce the damaging effects of pervasive monocropping.

Existing primordial prairie is also a treasure to be enjoyed. The image above is like views I've seen in a few grasslands we've visited. In the early 1980's, whenever my wife and I went to visit a rancher we knew in central South Dakota, there was a spot along I-90, about seven miles before reaching Wasta, on the plateau above Boxelder Creek and the Cheyenne River, where we always stopped to get out of the car and stretch our legs. In all directions, the only sign of human life was the highway itself, and, of course, us and our car (I note on current satellite images that there are a number of billboards and a new shelter belt in the area now. Sigh.).

Other efforts are discussed, such as researching the best cover crops to preserve soil from erosion after harvest, and finding the "knee" in the relationship between fertilization and crop yields to better select appropriate levels of nitrogen application—I find it amazing that this is still so little known.

In keeping with the importance of the subject, the book is big and packed with information and gripping stories. It is well written and it rewards close reading. Enjoyable.

Monday, November 17, 2025

Greenhouse Effect – the hidden players

 kw: analytical projects, greenhouse effect, global warming, absorption spectra, saturation

Reading a book about agriculture led me to thinking about the "hidden" greenhouse gases. I am sure almost everyone has read or heard that methane is 80 times as potent as carbon dioxide as a greenhouse gas. I recently learned that nitrous oxide (laughing gas, also a dental anesthetic) is between 250 and 300 times as potent as carbon dioxide. Both of these gases are produced by agricultural activity, so they have increased in the past 200 years as agriculture has been increasingly mechanized, and as chemical fertilizers have been used in ever-increasing amounts. (I generated this image using Leonardo AI; it is free of copyright restrictions)

I researched in several sources to find answers to these questions:

  • What were the concentrations of nitrous oxide and methane prior to the Industrial Revolution?
  • What are their concentrations now?
  • How do they affect global warming?
  • Are there other greenhouse gases we should be concerned about?

To simplify the text, I will dispense with formatting the numbers in chemical formulas as subscripts. Thus, CO2 = Carbon Dioxide, CH4 = Methane, and N2O = Nitrous Oxide (Nitrogen has several oxides; only this one is important here).

Here is the connection with agriculture: The middle-American farm belt was created by plowing the prairie and planting grain crops. Today, by far the most important crops are corn and soybeans. The thick, rich prairie soils contained a 10,000-year store of CO2, deposited by the roots of grasses and held there as they decomposed. Plowing the prairie released the CO2 at a pretty steady rate over the past century. It is still going on. Plowing also releases stored CH4.

When I lived in South Dakota in the 1970's and early 1980's, most of the agriculture in the state was cattle ranching, with some grain crops being grown in the eastern third. Since that time seed companies have developed strains of corn and soybeans that can better resist drought, begin growing at lower temperature and ripen faster. South Dakota cattle ranches are being plowed and sown with grains at a steady rate.

Secondly, overuse of nitrogen fertilizer causes much of the "extra" to be converted to N2O. Large amounts also go downstream and contribute to the Dead Zone offshore of the Mississippi Delta.

Thirdly, cattle produce a lot of methane, and the reduction in cattle numbers in the Dakotas is more than offset by continued increases elsewhere; also, plowing the prairie releases CH4, and all this is added to the amount released by fossil fuel production. I have yet to see a credible analysis of all the sources of CH4.

Yet all we ever hear about is the rise in concentration of CO2 alone. This is indeed significant, from about 280 ppm in the 1700's to about 440 ppm today. This "baseline increase" is (440-280)/280 = 0.57, a 57% increase, most of it in the past century.

What of CH4 and N2O? Let us first convert them to equivalent CO2. I'll leave out a lot of words and summarize the figures:

  1. CH4 as a GHG is 80x as effective as CO2. Current CH4 concentration is 1.9 ppm; times 80, that is equivalent to 152 ppm CO2. In the 1700's, CH4 was 0.72 ppm, or a CO2 equivalent (CO2eq) of 57.6 ppm.
  2. N2O as a GHG is ~280x as effective as CO2. Current N2O concentration is 0.34 ppm; times 280, that is equivalent to 95.2 ppm CO2. In the 1700's, N2O was 0.27 ppm, or a CO2eq of 75.6 ppm.

Added together, these two gases presently have CO2eq of 247. The preindustrial level was 133. Let's add these to CO2 to see the real picture of the greenhouse effect at these two times:

  • Preindustrial: 280+133 = 413 ppm CO2eq
  • Today: 440+247 = 687 ppm CO2eq
  • (687-413)/413 = 0.66, a 66% increase in CO2eq

The actual increase in CO2eq is greater than the effect of CO2 alone. Suppose we could reduce CH4 and N2O to preindustrial levels. This would subtract 114 ppm CO2eq, leaving 573 ppm. Then (573-413)/413 = 0.39, or a 39% increase in CO2eq, compared to preindustrial. To put this in context according to the mental model held by "climate crisis" folks, for CO2 only, a 39% increase over 280 ppm would be 389 ppm. That is about where we stood in 2011; it winds back the clock fourteen years!
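For those who want to check the figures, here is a minimal sketch that reproduces the whole calculation (concentrations in ppm and GWP multipliers as given above):

```python
# CO2-equivalent bookkeeping, using the concentrations (ppm) and GWP factors above
GWP = {"CO2": 1, "CH4": 80, "N2O": 280}

preindustrial = {"CO2": 280, "CH4": 0.72, "N2O": 0.27}
today         = {"CO2": 440, "CH4": 1.90, "N2O": 0.34}

def co2eq(conc):
    """Total CO2-equivalent concentration for a mix of gases."""
    return sum(conc[gas] * GWP[gas] for gas in conc)

pre, now = co2eq(preindustrial), co2eq(today)
print(pre, now)               # ~413 and ~687 ppm CO2eq
print((now - pre) / pre)      # ~0.66: the 66% increase

# Counterfactual: roll CH4 and N2O back to preindustrial levels
rollback = co2eq({"CO2": 440, "CH4": 0.72, "N2O": 0.27})
print((rollback - pre) / pre)  # ~0.39: the 39% increase
```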

Let us focus a moment on N2O. By itself, the increase in the concentration of this gas is responsible for about 20 ppm CO2eq, equivalent to roughly the last nine years of CO2 increase. This is nearly all due to overfertilization. Guess which industry complex is bigger and has a stronger lobby in DC than oil and gas? Agriculture plus agrichemicals (particularly fertilizer). I have read in more than one place that without artificial nitrogen-based fertilizer, the world's farmland could support no more than four billion people. It is very complex to analyze just how much fertilizer use could be reduced while still supporting the current world population, yet reducing nitrate runoff and outgassing of N2O into the atmosphere. For the moment, I just have to leave these thoughts unfinished. If we could come up with a plan, powerful interests would oppose it.

At this point in my analysis I wondered what other greenhouse gases exist, and how they might modify the picture. As it happens, nothing much. Here is a table I worked from for the figures above, which adds six greenhouse gases that, together, are sometimes written about in very scary terms, but have no practical effect at present:


First, ground level Ozone (O3) has a modest Global Warming Potential (GWP: 1.5 x CO2), and exists in the 1-10 parts per billion range, so its practical effect is negligible. Then, the industrial chemicals Sulfur Hexafluoride (SF6) and Nitrogen Trifluoride (NF3) have very high GWP, but exist at levels of a few parts per trillion. To totally eliminate them would reduce CO2eq by much less than one percent (see the black text at the bottom of the table).

Various fluorinated refrigerants, those highlighted in brown, have very high GWP, but also exist at levels of a few parts per trillion, so together, they also amount to less than one percent (the brown text). Thus, they present no useful "targets" for ameliorating the greenhouse effect.

My aim here has been to back off a few steps to see a bigger picture. As it happens, this points a finger where none has been pointed before: at farmers. A significant proportion of the increase in CO2eq results from farm practices. In particular, far too many farmers use more fertilizer than their crops really need. There is too much of "a little more might help." No, it doesn't help; it harms. It even harms the farmer, who spends more than needed on fertilizer that isn't helping.

I have a philosophical point to end with. I think that the greenhouse effect will prove to be more beneficial than otherwise. The "father of greenhouse warming", Svante Arrhenius, thought so. Another degree or two of warming is likely to make more of Canada and Siberia amenable to crop production, and let's not forget South Africa and Argentina. On another note, I saw an article recently with a headline, "550,000 will die of extreme heat." The subhead said, "The greatest cause of early death." The article never mentioned that 4.6 million will die from cold. More than eight times as many! The subhead is, quite simply, a lie, and the article is utterly one-sided deception. I suspect many of those 4.6 million would love for their home country to be a little warmer.

Sunday, November 09, 2025

Tied for the oldest sense

 kw: book reviews, nonfiction, science, olfaction, nose, sense of smell

It is fascinating to watch a motile bacterium such as E. coli in motion. It trundles along, its rear-mounted flagella spinning to propel it in a mostly straight line. If it bumps into something it will back up some distance, tumble, and then move off in a new direction. Frequently, after a short distance, it may reverse course or tumble again to pick a new direction.

The latter action hints at what is going on. How does it pick a direction to go; what is it trying to reach? Of course, like all living things, it is searching for food. It is following a chemical gradient by sensing a chemical of interest in the water around it. If, as it moves along, it senses a stronger concentration, it keeps moving. If the concentration is decreasing it reverses course or tumbles to try a new direction. Its chemical sense can be called either "taste" or "smell" and is one of the two oldest senses. The other is touch. Bumping into something, or alternatively, approaching closely enough for cilia on the cell to touch the something, coupled with the chemical sense telling it, "this isn't what you are looking for," triggers actions such as backing up and/or tumbling. The two senses seem to go together, and they work together to guide the cell to a possible source of food.

Strictly speaking, smell is thought to relate to chemical cues carried in the air, so the bacterium, being in a watery medium, must be using taste rather than smell. But at the most basic level these are actually the same. Chemical substances that are smelled first enter a watery layer over the sensory nerves, where they are detected.

For air-dwelling creatures, smell is a more long-range sense. Chemical substances diffuse through still air faster than through still water, and both aerial and fluid currents can bring them from far away. Furthermore, when two senses work together, smell precedes touch, while taste follows contact.

The title of Jonas Olofsson's book, The Forgotten Sense: The New Science of Smell and the Extraordinary Power of the Nose, led me to think, "Why didn't he call it The Neglected Sense?" No matter. He reveals the neglect that smelling has undergone through the centuries since it was placed by Aristotle at the bottom of the list of useful senses, an error compounded when Paul Broca divided animals into "osmatic" and "anosmatic": those like the dog for which smell was primary and those like humans (he thought) for whom smell was definitely not worth much. I guess that neither Broca nor old Ari stopped to consider how he would detect a bad lot of wine or olive oil if he plugged his nose. Tasting without smelling could be a risky business!

I was most fascinated by an analysis to which the author refers, "Human and Animal Olfactory Capabilities Compared," by Matthias Laska in the 2017 Springer Handbook of Odor. Humans and a number of medium-sized and smaller animals were tested for their sensitivity to a few dozen scented substances. Animals tested included rats, dogs, vampire bats, a couple of monkey species…twenty species in all. The only animal with a nose more sensitive than ours was the dog!

Earlier studies that compared the size of an animal's olfactory bulbs to its brain were misleading because it is the absolute size of the bulb that matters. The roughly 60 mm3 volume of human olfactory bulbs is a tiny fraction of the brain's volume, about 1/20,000th. In relative terms, the olfactory bulb of a mouse is enormous, about 1/16th of the total brain. However, its actual size is less than half that of a human's: 25 mm3. An ordinary dog (not the tiny breeds or pug-nosed ones) has an olfactory bulb six times larger than a human's, which gives it a huge advantage, as was shown clearly by the sensitivity tests. I wonder if they could give a similar test to an elephant, with an olfactory bulb volume of 11,000 mm3?

Technical issues aside, a big section of the book relates the emotional effects that smells can mediate, such as a whiff of salty air evoking a favorite memory of a seaside vacation, or the comfy aroma of a morning coffee and bowl of blueberries. 

This also introduces the subject of smell gone wrong: COVID-19 introduced millions around the world to a life without smells, at least temporarily. An interesting consequence of the sudden loss of smell for many people was that they began to wonder how they could tell if their own smell was offensive to others! Another was the loss of interest in food, because most of what we call the taste of many foods is actually a combination of smell and taste. We can taste but five qualities: sweet, salt, savory (umami), sour, and bitter; we can smell thousands or hundreds of thousands of different qualities. So many that we can seldom describe any of them to someone else.

Loss of smell is called anosmia. In some ways, a distorted sense of smell, parosmia, can be even worse. Imagine one day finding that your morning cuppa smells like rotting onions! This can also be caused by viral infections, but there are other causes, including a hard bump on the head. Some people with parosmia can't stand to eat favorite foods, though some are able to eat enough to stay healthy by putting on a nose clip. There is a long section describing ways of desensitizing and retraining the sense of smell. Sadly none of the methods is effective in all cases, but it can be a lifesaver for many.

A side note: There is a reversible distortion of taste I have experienced, caused by an Asian spice called Tiger Claw, related to Star Anise. The seed pods are used whole to flavor soup. They aren't supposed to be ingested. Biting into one causes a shift of taste, such that water tastes like battery acid and nothing seems edible; it lasts several hours. I have numerous Chinese friends, and I wound up mostly fasting during a potluck lunch…

The author is a scientist of the sense of smell. He first wrote the book in Swedish, then translated it into English himself. His and his colleagues' work just might elevate our understanding of the sense of smell, from a "neglected" sense to one that is equally essential. And in time perhaps we'll attain added vocabulary to help us describe our favorite (or otherwise) aromas.

Friday, October 31, 2025

Flipping the script on psychiatry

 kw: book reviews, nonfiction, psychiatry, mental illness, bipolar, recovery, twelve step programs

After reading Unshrunk: A Story of Psychiatric Treatment Resistance by Laura Delano, I became even more thankful that I escaped the depths of psychiatric treatment. Psychiatry is a black hole; getting in is easy, while getting out is usually impossible. Let me settle two pieces of business before going further.

First, a piece of extremely serious advice: If you are feeling depressed and you seek the help of a medical or psychiatric professional, do not ever loosely say you have been thinking of suicide. To be clear, if suicidal thinking dominates your thinking all day, every day, and has done so for as long as you can remember, then it is worth telling this to a professional. Otherwise, beware: if a psychologist or psychiatrist becomes convinced that you are likely to harm or kill yourself, they are required to commit you to psychiatric care in a mental hospital. That's one way of dropping straight into the black hole. If you have no trusted friend or family member, contact the Inner Compass Initiative, founded by Laura Delano. Something to ponder: psychiatrists are people, with the same foibles as all of us. Many of them got into psychiatry because they wanted to find out why they had certain experiences (see image following!).

Second, a very brief mental history of myself. I was diagnosed with "Bipolar II Disorder" at age 54. In retrospect, it made certain old stories make sense.

During my K-12 years, I would occasionally "flip out". Otherwise I was a quiet kid, sometimes bullied but too big for the bullies to feel safe going too far. The school I attended in grades 4-6 had an abandoned chimney, the ruins of a demolished house, at the back of the property. One day in 5th grade I led a group of kids to "Play Santa Claus" by climbing down the chimney. We returned to class covered in soot. On a few other occasions I was overcome by a burst of energy and spent the lunch period running through a stand of Sumac, holding a branch and whacking the plants. During the high school years I became the family's firewood chopper and splitter. This continued into college; it is a great way to blow off steam. During my first year at college, on two occasions I was overwhelmed by overwork—I tend to take on more work than I am capable of completing—and wound up in the school clinic under sedation.

As an adult I wondered how I could be so moody, so unpredictable, but only for short periods at long intervals. I distinctly remember reading an article that quoted several creative people who discussed their swinging moods (at that time the usual term was "Manic-Depressive"); one writer expressed it this way, "When I am up, I write, and when I am down, I edit." I sat back and thought, "Oh, neither mood swing is essentially bad because both can be useful."

At age 53 I made a new friend. He told of spending 10 years taking Zoloft, but being concerned that it wasn't working well any more to control his experiences of major depression. He also told of sometimes spending three days obsessively cleaning his apartment, or seven hours washing a car. I said, "That sounds manic (by this time I knew a little about Bipolar "disorder"). Why don't you mention these things to your shrink the next time you see her?" He did so and was switched from Zoloft to one of the anticonvulsive drugs that are useful in mediating Bipolar mood swings. He was re-diagnosed as Bipolar I (stay tuned).

The next time I had a depressive period of my own, that had come on without reason, I spoke to my doctor about it. He prescribed a low dose of Zoloft. At the follow-up appointment a month later, I was high as a kite. He said, "OK, this is mania. Let's try something different." Zoloft is an upper, not a mediator. I don't recall which drug was used. It was apparently helpful, but I began to need a daily nap and I gained some weight. I went to see a psychiatrist, one of only two that I consider competent, named Valentine. She explained more, saying that I seemed to be between Bipolar II and Cyclothymia, and suggested using half the dose of the medication. Six months later, I found that she had moved out of state.

There followed a period of yearly visits with four different psychiatrists, all "bottom of the barrel"; I could tell they were crazy, and had got into psychiatry mainly to figure out their own issues...and failed. One of them wanted me to switch my medication to Depakote. I said, "I know someone who takes that. He gained 90 pounds. I already feel bad about being a little overweight. How will this keep me from getting suicidally depressed?" I stormed out and didn't pay the bill when it came.
Finally I got another competent psychiatrist, who switched me to Abilify, a low dose, saying, "This will keep depression from going too deep, and allow you to have a little 'fun' when you're manic." I deduced that it was a milder form of Zoloft. It worked quite well, but I still needed a daily nap, and continued to gain weight. Then one day, going to a scheduled appointment, I found his office locked and dark, and nobody in the building knew what happened to him. I had wanted to discuss with him how I could stop the meds completely. I had been taken by a friend to get acquainted with a man who had a severe case of Bipolar I, but had weaned himself off all medications, with his wife's help. Rather than try to find another doctor, I decided, "I'm done with this." I had another month's supply of Abilify. I cut the pills in half and took a half dose for a month, then cut the remaining half-pieces in half and used them up over the next two months. In the meantime, I consciously practiced awareness of my mood. Like the writer, I have things I can do when I am "up", which I call "open", and other things I can do when I am "down", which I call "closed" or "reserved". Now I am 78, off meds for more than 15 years. My wife, who has put up with me just over 50 years, is more relaxed: I am more stable. I thank God I was never institutionalized, and never put on a 4- or 5-drug cocktail. Such regimens are an admission of failure to find an effective treatment.

Now to the book. Ms Delano is apparently prone to overreaction to emotional stimuli. It is common for a young girl to one day look in the mirror and think, "Who is that? Who am I?" In her case, at age 13, it seems to have led to a panic attack, or something very like one. She became edgy and uncooperative at home, and after some time her mother took her to see a psychiatrist. She got a diagnosis (Bipolar) and a medication. That was the entry gate to fourteen years of increasing misery, including a few periods in mental facilities, yet she managed to complete a degree at Harvard. At one point she attempted suicide but was rescued.

Along the way she was given various diagnoses (Bipolar wasn't mentioned after the first year), culminating in Borderline Personality Disorder with Treatment Resistance. Those six words really mean, "We don't know what the hell is going on; she's just impossible to handle." This exposes the darkest part of the underbelly of psychiatry. You can't get away with stopping a psychotropic drug instantly; it is like a heroin addict quitting "cold turkey". Withdrawal symptoms are dreadful, and can kill. I experienced a little of that the few times a drug was switched for me. She had it in spades! When doctors recommend "tapering off" a drug, they typically suggest a two- to four-week taper. That's too fast. Half a year to a year is better, and in the case of lithium, it may take several years to wean your body from lithium toxicity. Remember this principle: withdrawal symptoms are very similar to the condition being treated. That does not mean relapse. It means the tapering needs to be more gradual.

Side note: I have read a few times that "the therapeutic window for lithium is narrow," which means that the amount that helps is only a little less than the amount that harms. Actually, that window is negative: The "help" that lithium affords is actually a side effect of lithium toxicity. Lithium "helps" by damaging your nervous system, reducing a harmful syndrome by reducing everything! This is compounded by growing lithium dependence, which takes years to shake. See the "Tapering" section of Inner Compass for more information.

Chapter 34, "Critical Thinking", deserves special mention. It outlines the sad circumstance that psychiatry has become big business. The largest proportion of political lobbying (bribery) is by the pharmaceutical industry. Doctors of all kinds, not just shrinks, are aggressively pushed (sometimes coerced) to push pills at every juncture. Analyzing the DSM (the Diagnostic and Statistical Manual, the "psychiatrist's Bible"), the author finds that many of the items listed have more in common with fads than with facts: assertions without appropriate evidence, suppositions without logical reasoning. Maybe you've heard that "chronic depression is an imbalance in brain chemistry." Would it surprise you to learn that no such imbalance has ever been measured? Never, in spite of much trying.

A word about tapering. Drugs are dispensed in strengths that are simple multiples or fractions of a basic dose: 1×, 2×, 4×, or ½, ¼, and so on. If the basic dose is 10 mg and you're on 40 mg, there is no 30 mg pill to taper to. And you may need to reduce from 40 to 35 for a week or two, then 30, and then 27, 25, 23, and so forth. What can you do? I suggest pill splitting. My wife takes a statin drug for cholesterol, but she doesn't need much. The smallest available dose is 10 mg; she splits it to 5. For a while she cut the pills into 3 pieces, but that wasn't quite enough. So if you need to go from 40 to 35, what do you do? Have the doctor (it might take a lot of negotiation) prescribe 20's, 10's and 5's: 20+10+5 = 35. Next reduction, 20+10 = 30. Then split a 5, so 20+5+2½ = 27.5, and so forth. You get my point.
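For the arithmetically inclined, here is a minimal Python sketch of the same idea. The pill strengths, the halving rule, and the helper name are my own illustration, not anything from the book:

    from itertools import combinations_with_replacement

    def dose_combos(target, pills=(20, 10, 5), max_pieces=4):
        """List small combinations of whole and half pills hitting a target dose."""
        # Available pieces: the prescribed strengths plus their halves
        pieces = sorted(set(pills) | {p / 2 for p in pills})
        hits = []
        for n in range(1, max_pieces + 1):
            for combo in combinations_with_replacement(pieces, n):
                if abs(sum(combo) - target) < 1e-9:
                    hits.append(combo)
        return hits

    # A taper from 40 mg: every step uses only prescribed sizes and halves
    for step in (35, 30, 27.5):
        print(step, "mg:", dose_combos(step))

Run it and each step prints several workable combinations, so you can pick whichever wastes the fewest pills.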

I'll leave it to you to read this book. If you have any kind of "personality disorder", or know someone who has, get it and read it. Ms Delano's odyssey out of the sloughs of psychiatry is epic. Truly epic. She is "unshrunk" now, and much happier for it. I also am happier, having escaped the clutches of a system that "disorders" everything.

So get the book!

----------------------------

A little glossary:

  • Syndrome: A collection of signs and symptoms that occur together and characterize a specific condition. 
  • Disorder: A disruption or impairment of normal bodily functions or mental processes.
  • Bipolar I Disorder: Characterized by manic episodes, which are periods of abnormally elevated mood, energy, and activity levels. Individuals may also experience major depressive episodes. 
  • Bipolar II Disorder: Involves hypomanic episodes, which are milder forms of mania, and major depressive episodes. Individuals do not experience full-blown manic episodes. 
  • Cyclothymic Disorder: A chronic condition characterized by numerous periods of hypomania and mild depressive symptoms that do not meet the full criteria for Bipolar I or II. Also known as Bipolar III.
Note that a syndrome describes a condition, which may be a disorder, but not always. The condition of physical fitness, which includes strength, vitality, and energy, can be considered a syndrome; it is in no way a disorder. The key phrase in the definition of a disorder is "disruption or impairment".

I would prefer that the three levels of Bipolar be called "syndromes" rather than "disorders." In my case, with appropriate understanding and practice, Bipolar II is a condition that I can take advantage of to experience a broader range of social interaction. Furthermore, I consider bipolar syndromes, regardless of severity, to be exaggerations of the normal mood cycling that is inherent in the human condition. A doctor friend of mine (definitely not a psychiatrist; he's too sane to be one) explained it this way:

The scale of moods runs from zero to ten, from the lowest possible to the highest possible. Most people rock along in the 4-6 range, where 5 is "contented, neither sad nor excitedly happy". After a very fortunate event, such as a promotion, a marriage proposal or acceptance, or reaching a tough goal, we feel extra happy, even excited, getting into the range of 7 or 8 for a while. This can't be sustained for long, and we settle back to a 6, and then a 5. After a very unfortunate event, such as being fired, or the death of someone close to us, we feel very low, even depressed, in the range of 2-3. Death of a parent, child, sibling or spouse causes us to experience grief, a solid 2 or even 1, for about a year. But this eases over time and we resume our usual "setting" near 5. A depressive person has a chronic setting near 2 or 3. A maniac is stuck at 7 or 8. Normal mood swings run between 3 and 7, though most of the time the 4-6 range is "home base". Bipolar II and Cyclothymia swing between 2 and 8, while Bipolar I ranges between 1 and 9. Hitting zero leads reliably to suicidal thinking and often a suicide attempt. Hitting 10 leads to both internal distress and social ostracism for being "too crazy". Extreme Bipolar I bangs these limits on a regular basis and probably does need medication.
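To make those ranges concrete, here is a toy Python encoding of his scale. The band boundaries and labels paraphrase his informal description; nothing here is clinical:

    def mood_label(level):
        """Label a 0-10 mood level per the informal scale described above."""
        if not 0 <= level <= 10:
            raise ValueError("mood level runs from 0 to 10")
        if level == 0:
            return "suicidal danger zone"
        if level <= 2:
            return "deep depression or grief"
        if level == 3:
            return "low, but within normal swings"
        if level <= 6:
            return "home base: contented"
        if level <= 8:
            return "elevated, even hypomanic"
        if level == 9:
            return "manic"
        return "extreme mania: distress and ostracism"

    for level in (1, 5, 8, 10):
        print(level, "->", mood_label(level))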

There is also the matter of cycling frequency. Normal mood cycles last 3-12 months. Having a "blue mood" each winter is a sign of a 12-month cycle, emphasized or even triggered by seasonal changes. Rapid cycling is three months or less. I am a rapid cycler, which is common for Bipolar II. I have 4-8 depressive (or "quiet and reclusive") episodes yearly, and shorter periods of hypomanic mood (on the edge and extra social) in between. If I am in a quiet phase but must perform a social task such as giving a speech, I can do so, but I "pay the price" with the need to "hide out" for a number of days thereafter. When I am on an even keel or even hypomanic, giving a speech is no problem; it is a pleasure. I just have to keep myself in check when I am up; I tend to interrupt myself or lose track of a train of thought.

Perhaps all this is foreign to you. You may be solidly sane; I am so glad for that! If any of this rings a bell, recognize that you are still most likely close to normal, perhaps just "a little more than normal." Gravitate to friends or family who give you space without judgment. Be wary of psychiatry, but don't write it off entirely. Sometimes it's needed. Just don't give up your autonomy to a shrink.

Thursday, October 23, 2025

Freakonomics with a stethoscope

 kw: book reviews, nonfiction, medicine, economics, motivation, biases

You're 48 years old. You have a pain in your gut. Over the next few days it gets worse, and you begin to have diarrhea. You see a doctor, who says it might be an ulcer and suggests an OTC antacid. That seems to help, but not 100%. Being an agreeable sort, and in the midst of a demanding career, you carry on for several months. The diarrhea and pain come and go, come and go. Then the pain gets worse, and the diarrhea gets worse, and gets darker, even tarlike. What now?

This happened to a young friend's mother, and "What now?" meant seeing her doctor quickly, getting a colonoscopy at age 49, and dying of colon cancer a month later.

Now suppose the age above was not 48 but 52. The fourth sentence and what follows is likely to read, "You see a doctor, who orders a colonoscopy. Cancer is confirmed, and removed in an operation. Several months of chemotherapy follow, and you live many more years."

When I had gut pain at age 53, the doctor should have ordered a colonoscopy, but didn't. Why? The insurance industry scores doctors poorly when they order too many of the more costly tests! I eventually did have a colonoscopy, but I had to order it myself! I soon had a serious operation, half a year of chemotherapy, and now I am 78. Had I waited for that rather passive doctor to get around to ordering the test, I'd have died 25 years ago. I didn't go back to him. There's a further wrinkle in this, which I'll return to.

What's the difference between age 48 and age 52? I could have used 49 and 51. The cutoff for "elevated risk of colon cancer" is age 50. Not only is it hard to get the insurance company to pay for a colonoscopy if you are "too young", the guideline's cutoff is a mental barrier for your doctor, who probably just won't think of investigating more deeply.

Such cognitive biases and blind spots are the subject of Random Acts of Medicine: The Hidden Forces that Sway Doctors, Impact Patients, and Shape Our Health, by doctors Anupam B. Jena and Christopher Worsham. Dr. Jena is host of the podcast Freakonomics, M.D., and the book takes an approach similar to that of Freakonomics and Superfreakonomics by Steven Levitt and Stephen J. Dubner, two books I have close at hand. Together, these three books emphasize that at its root, economics is the study of motivation, of why people do things.

In the dozens of cases reported in the book, the doctors and their associates plumbed the databases of Medicare and the CDC for information that allows them to winkle out the little anomalies that reveal biases such as the "first digit bias" that puts age 48 or 49 into the "forties" bin and 51 or 52 into the "fifties". Untimely deaths can and do result from such biases.
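The bias is easy to picture in code. In this trivial sketch (mine, not the authors'), two patients two years apart land in different decade bins, and the bin label, rather than the underlying risk, is what catches the doctor's eye:

    def decade_bin(age):
        # 48 and 49 read as "forties"; 51 and 52 read as "fifties",
        # though the actual cancer risk differs by only a sliver
        return f"{(age // 10) * 10}s"

    for age in (48, 49, 51, 52):
        print(age, "->", decade_bin(age))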

If these two doctors stepped into the waiting room and you had the chance to choose one of them to perform your yearly physical, which would you choose? (The image was generated using Leonardo AI)

Granted, neither you nor I will be offered the chance to choose a doctor "on the spot," but if you were…? Here's the wrinkle from above: Fifty years ago I would have been more inclined to choose the man, not because of race but because he's male. Since then I've had to change doctors a number of times because of my moving or doctors moving elsewhere or retiring (or the passive doctor I "fired"). I've had both male and female doctors. By age forty, a man starts getting the "digital prostate exam", sometimes called the "golden finger". Having been probed by both male and female doctors, I found that I really prefer a doctor with long, slender fingers! A female musician who happens to be a doctor fits the bill perfectly. I've also learned that women are more willing to take an extra few minutes, and more likely to think sideways in case there is a second factor, not just "the diagnosis". My current doctor is female, and is tied for best doctor I've ever had.

What do doctors Jena and Worsham have to say about that? They studied Medicare records of 1.5 million hospitalizations, and gathered information about the outcomes of care by 58,000 doctors, of whom 32.1% were women. The criteria were thirty-day survival and rate of readmission. After the data were normalized to eliminate confounding factors, here are the key facts:

  • 11.3% of the patients died within 30 days of being hospitalized.
  • For female internists, mortality was 11.1% and the readmission rate was 15.0%.
  • For male internists, mortality was 11.5% and the readmission rate was 15.6%.

Are these differences small enough to be negligible? No. More than 10 million seniors are hospitalized for medical conditions (excluding accidents) yearly. The doctors conclude, "…if male internists were performing at the level of women, there would be thirty-two thousand fewer deaths…each year." 32,000. That's 80% of the death toll from highway accidents.
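A back-of-envelope check in Python, using only the numbers quoted above. The male share of the caseload is my assumption (I simply borrow the 32.1% doctor mix); the authors' adjustment is more careful, so expect the same ballpark rather than the same number:

    hospitalizations = 10_000_000   # yearly senior medical hospitalizations
    male_share = 1 - 0.321          # assume caseload tracks the doctor mix
    mortality_gap = 0.115 - 0.111   # male minus female internist mortality

    excess = hospitalizations * male_share * mortality_gap
    print(f"{excess:,.0f} excess deaths per year")   # prints about 27,000

Close enough to thirty-two thousand to show the figure isn't pulled from thin air.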

Earlier in the book, we find that the month a child is born influences the likelihood of getting a flu shot during yearly pediatric exams. That influences the number of kids who get the flu. Why? The new flu shots become available in autumn, in preparation for the surge of influenza in the wintertime. A parent of a youngster whose checkup is in May or June is told to return to the doctor in October for a flu shot. Less than half do so. Some may take the child to a drug store clinic or a walk-in clinic, but that is a small percentage. So kids with birthdays in the spring or summer, or even late winter, are less likely to be vaccinated, and more likely to get the flu, or to get a bad case.

The last chapter of the book dwells on the COVID-19 pandemic, and the role of politics in medicine. Humans have been called "the political animal"; politics gets into everything! The struggle for power is the source of the world's greatest evils. I'll leave it up to you to read their insipid take on the matter (sorry, docs!). Instead I'll riff on the experiences of myself and my wife.

We were reluctant to get the mRNA agent that was being called a vaccine. Then we heard stories of people who survived the disease well enough, but had "Long Covid" and in some cases were debilitated for months. That tipped the scales; we decided to get the shots, which we did in April 2021. We were generally compliant with things like masking and "social distancing". By the time various "boosters" were announced, we'd done sufficient research to realize that the "vaccine" was usually useless and often harmful. Here is a point I wish the doctors had put in the book: the yearly number of serious adverse reactions to the mRNA agent is a little greater than the sum total of serious adverse reactions to all other vaccines combined!

How many remember, in the middle of the controversy, Dr. Anthony Fauci saying, "I AM Science!"? He had already admitted to lying a couple of times, and had been caught in a few other lies. Here he lost all remaining credibility. He doesn't understand science, not even a little bit!

Here is what the mRNA agent does: it induces your body to create a particular protein found on the spike of the SARS-CoV-2 virus. That protein then triggers the immune system to create antibodies to that single protein. It is a two-step process. By contrast, a vaccine consists of broken-up viruses, or proteins extracted from them, which trigger the immune system to create antibodies to most or all of the proteins in the vaccine. The extra first step increases the variability:

  • Different people have different levels of response to a "foreign" protein. One person's immune system may produce ten or one hundred times as many antibodies as another's. This is why vaccines aren't 100% effective. Flu vaccines in particular show this effect.
  • Different people have different levels of response to the mRNA agent. One person may wind up with ten or one hundred times the level of "spike protein", which in turn is subject to the range of variable response noted above.

For a good portion of my career I worked with the statistics of distributions. I'll save you the agony of figuring out any equations. Rather, let me just say that when you combine two independent random responses like these, the mathematical tool for finding the overall distribution is called convolution. The final, overall distribution is very wide indeed; in this case, a range of a few thousand to one. Also, you may have heard of the "Gaussian distribution", also called the "Normal curve", a smooth curve with a symmetrical hump in the middle. That's not what we have here. The response distributions here are more likely Lognormal distributions, which have a small number of large values and a much larger number of small values. Convolving two of these yields an extra-wide distribution, heavily weighted toward very few powerful responses, a large-ish number of "middling" ones (centered on the "target" response the pharma company aimed for), and an overwhelming number of small to almost nonexistent responses. These small responses led to the "breakthrough" cases of COVID-19 disease among those who took the shots. For some people, the shot may as well have been distilled water.
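Here is a minimal Monte Carlo sketch of that two-step widening, assuming each step is lognormal with roughly a 100-to-1 spread between its 1st and 99th percentiles. The sigmas are my illustration, not measured values; the multiplicative two-step model is the point:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000
    sigma = 1.0   # per-step spread: 99th/1st percentile ratio of ~100x

    protein = rng.lognormal(0.0, sigma, n)    # step 1: spike protein produced
    antibody = rng.lognormal(0.0, sigma, n)   # step 2: antibodies per unit protein
    overall = protein * antibody              # multiplying lognormals sums their
                                              # log-variances, widening the spread

    p1, p99 = np.percentile(overall, [1, 99])
    print(f"99th/1st percentile ratio: {p99 / p1:,.0f} to 1")

With these settings the ratio comes out in the many hundreds to one; make each step's spread a bit wider and it quickly reaches the thousands.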

My wife and I count ourselves lucky. We had mild reactions to the mRNA shots, a little stronger than the "sore arm" we get from a flu shot, but not too bad. The same-day response was to the mRNA itself, and the next day's soreness was in reaction to the protein thus created. We learned later that some people dropped dead on the spot! These must have been those with a super-strong response in both steps of the process. Their immune responses overwhelmed the body.

I have several friends who are doctors. One of them, because of his work, was doubly treated; he received both the Pfizer and the Moderna mRNA agents. He has since had COVID twice. But before that, he and I worked out a two-part strategy to deal with the infection. First, stop eating for a couple of days. Those who died from the infection actually died from pneumonia, which was caused by the body's overreaction to the virus. The ones with the strongest immune systems died first! What is the gooey junk that fills the lungs during pneumonia made from? Sugar. This is why diabetics have the highest risk. What happens when we skip meals? Blood sugar drops, and drops a lot, which hinders pneumonia.

Second, the two "Democrat-hated" drugs, Hydroxychloroquine and Ivermectin, are useful not because they are antiviral (they aren't), but because they tamp down the cytokine reactions that lead to pneumonia. HCQ works in the first day or two, and Ivermectin later on. My doctor friend obtained supplies of both medications for himself and for my wife and me.

At the end of August, 2022, I caught COVID. Here I found out that the joke was on me. The primary symptom I had was powerful nausea. I threw up everything, and I couldn't even drink water! So I couldn't take HCQ!! I went to the one clinic in the area with antivirals on hand, and was given those plus anti-nausea pills so I could swallow the meds and keep them down. Although I'd moved to the spare bedroom the day I woke up sick, my wife got sick a week after I did. She had a very sore throat, making it painful to take any pills. She went to the clinic and also got the antiviral. Both of us recovered quickly. By the way, I had lingering low appetite, my kind of "long Covid", and I took advantage of it to lose some weight, around 30#.

A year and a half later, the above scenario was repeated. Same symptoms, same need to get the antivirals, same quick recovery. I was able to lose another 15#, and I learned what it takes to hold my weight. So I count SARS-CoV-2 my friend!

Another year or so has passed. My family doctor is in agreement with what we've done and with my determination never to get a "booster." They're too dangerous.

That is a long digression from a wonderful book. Doctors Jena and Worsham show how and why doctors make certain kinds of errors, and discuss ways these errors are being mitigated. Reading this book is useful to all of us as patients, so we have the mental tools to work with our doctor(s). We can't replace them, but we can either help or hinder their work. We all know that something will get us sooner or later. Together we can make "later" even later, and thrive in the meantime.

Thursday, October 16, 2025

Finding Rex

 kw: book reviews, nonfiction, science, paleontology, biographies, dinosaurs, t rex, tyrannosaurus rex

The last summer that I was a geology student I spent six weeks in an area above Lee Vining in the Sierras. Midway through, one early afternoon, I was resting near a pond when several backpackers came by, hiking toward a wilderness area another mile along the trail. I asked them how long they would be there. One said, "A week. How long have you been here?" I answered, "Three weeks." "Wow! That's neat," another one answered. I grinned ruefully and said, "Not really." I was already getting tired of tenting. Of course, since part of my daily routine was gathering rocks and hauling 10-20 pounds of them back to base camp for identification, I suppose I wasn't having quite as much fun as the average tourist. By the end of the summer I was pretty clear that I wasn't cut out for a job that required lots of field work.

Reading The Monster's Bones: The Discovery of T. Rex and How It Shook Our World by David K. Randall, I could only admire the grit of Barnum Brown. To this day, roughly half of the dinosaurs and early mammals on display at the American Museum of Natural History in New York City are specimens he collected. In addition to an iron constitution, he had a quick mind and had learned to recognize the kinds of geological deposits most likely to contain great fossils. He had also learned enough anatomy to make a good guess as to which animal a new bone belonged to.

While Barnum Brown collected on most of the continents, his main stomping ground was the great fossil beds of the north-central United States. This was the area made famous by the "bone wars" of Professors Marsh and Cope, from 1877 to 1892. These rivals went broke trying to outdo each other as they collected and described species after species of large vertebrate fossils. In the end, the biggest and baddest dinosaur of all escaped their grasp. Brown found the first specimen of Tyrannosaurus rex in 1902, and for some years, during which he found a few more, he was the only collector to find any.

The book is largely a biography of Barnum Brown, who was named for P.T. Barnum almost on a whim, because the circus was in town. The author details his "life and hard times" and the resulting drive that motivated him to seek solace in the wilderness. Yet he wasn't antisocial, as so many "mountain men" are. In addition to great strength, persistence, and lively intelligence, he could be intensely social, the kind of guy who is the life of the party, whatever party he happens across. This enabled him to befriend ranchers and farmers in the field and to maintain a working relationship, if not a good one then at least a useful one, with the notoriously prickly director of the American Museum, Henry Fairfield Osborn. It greased many a relationship necessary to get access and tips to the best bone deposits. Brown lived just days short of ninety years.

Rather than focus on Barnum Brown, I find most interesting the changes in social attitudes toward dinosaurs that resulted from the discovery of T. rex. All the large dinosaurs found previously were herbivores, such as the familiar Brontosaurus and Diplodocus and duckbills such as Hadrosaurus. They were portrayed as overgrown cows, brainless, plodding beasts that had to move about half submerged in swamps to buoy up their enormous bulk. When the first Triceratops was described in 1889 by O.C. Marsh, not many wondered what the great horns were needed for. Then, when the first tyrannosaur was discovered by Brown in 1902, it soon became clear that those horns were sorely needed! Public interest was piqued, and as "tyrant king" specimens or replicas were displayed in a growing number of museums, museum attendance boomed.

T. rex is still the most popular dinosaur. The proliferation of celebrity knockoffs such as Barney, and the popularity of tyrannosaur suits used for all sorts of pranks, have made this terrifying beast almost a cuddly member of the household. But I'm sure I wouldn't want to stumble across this picnic in a nearby forest! (Image generated with the Flux1.Kontext engine in Leonardo AI.)

I'm a six-footer. Were I in this picture, the top of my head would barely reach the larger animal's knee. Even the "babies" shown here would outweigh me by a big factor. The teeth of an adult T. rex are the size of bananas. Big bananas. Perhaps plantains.

I have great admiration for men like Brown who can go into the field and bring back cool stuff. My style of collecting is day trips…no more weeks in a tent for me!