Wednesday, January 28, 2026

Build and run hospitals that don't kill their patients – F Nightingale

 kw: book reviews, nonfiction, history, bacteriology, public health, epidemiology, antibiotic resistance

The title of this review is not quite a quote, but is the conclusion Florence Nightingale drew from her work in Istanbul during the Crimean War in the middle 1850's. The following diagram tells its own tale in the text block; please read it all:


Then pay attention to the blue wedges. Their areas, measured from the center, represent preventable deaths: deaths of soldiers who died of infections contracted in the hospital. The modern adjective for "doctor caused" is iatrogenic, and a near-synonym for "hospital sourced" is nosocomial. The appallingly filthy conditions in the hospital at Scutari, and the gore-drenched hands of the doctors going from patient to patient without washing, are summed up in those two adjectives. Thus it was when Florence Nightingale and her team of nurses arrived at the Scutari hospital in November 1854.

The rose diagram (Nightingale called it a "cobweb diagram") on the right shows the horrific toll from seven months before her arrival until five months afterward. Its last two months plus the other rose diagram also show the gradual reduction in overall deaths and particularly preventable deaths as her recommendations, and then demands, were instituted. The risk of dying because one had entered the hospital was reduced, month after month, and was almost eliminated after January 1856.

This and other "cobweb diagrams" proved to most medicos that there was a physical Something that carried contagion from patient to patient in unsanitary conditions, and from doctor to patient on soiled hands. In the 1850's, "germs" were unknown as agents of disease. Also in the mid-1850's, John Snow in England used black dots on street maps to demonstrate a similar fact: that a physical Something had gotten into water and spread disease. One of Dr. Snow's maps persuaded the local parish authorities to disable a certain contaminated water pump, the famous Broad Street pump, by removing its handle.

Facsimiles of that map and the rose diagrams are found in So Very Small: How Humans Discovered the Microcosmos, Defeated Germs—and May Still Lose the War Against Infectious Diseases, by Thomas Levenson. The book is a rather detailed history of the strains of knowledge that led up to the discovery by Dr. Robert Koch that specific microbes cause specific diseases, in the 1880's. This was two centuries after Leeuwenhoek first saw, and drew, and wrote about bacteria he found in scrapings from his teeth.

The author makes much, repeatedly, of the blindness of those with a theory to any evidence that overturns it. Thus "miasmas" were thought to cause diseases during those two centuries, and early-Enlightenment "cancel culture" was waged against anyone who advocated anything different. An abridged quote by Max Planck states, "Science progresses one funeral at a time." My father said it this way, "It's the Moses Method: spend forty years in the wilderness and let them all die out."

The Postulates of Robert Koch were originally developed as criteria for contagion based on studies of anthrax, cholera and tuberculosis. These and many other diseases are caused by bacteria that are visible with a microscope. That is, they are larger than about 1/5th of a micron. The common gut bacterium E. coli, for example, is in the form of rods about 3/4 micron in diameter and 2-3 microns long. Later the Koch Postulates were expanded to other organisms (including fungi) and near-organisms (such as viruses), as technology developed methods of detecting and visualizing them.

The first vaccine was developed in the 1790's by Edward Jenner. The first antibacterial drugs, primarily Salvarsan and the Sulfa drugs, were developed after 1910, and antibiotics were developed starting in 1929 with Penicillin. We are now about a century into the "age of antibiotics", laid on a foundation of vaccination. Public health measures such as clean (later chlorinated) water and sanitary sewers, followed by vaccinations and antibacterial drugs, have reduced infant and childhood mortality to almost negligible levels in Western countries, such that life expectancy for a newborn is now about 80 years. In a cemetery I visited when researching family history of the 1800's, half the graves were for infants and children under the age of five. Think about that.

The last section of the book deals with antibiotic resistance. Here, the author declares we are at risk of losing the war, after having won so many battles for the past century. He relates the case of a woman infected with a formerly "easy" microbe; the strain she carries is fully resistant to every antibiotic the hospital has available. The doctor appeals to the CDC, which has twice as many kinds of antibiotic on hand. None of them is effective. The woman dies.

At the moment, our only defense against such "superbugs" is to continue to improve public health measures, and to more fully educate the public about risk mitigation. Alongside this there is a diatribe against the political confusion that surrounded SARS-CoV-2, the agent of the COVID-19 pandemic. The author is fully in the Fauci camp. That is unfortunate, because to my knowledge, Dr. Fauci lied so frequently and so self-contradictorily that a large proportion of "COVID" deaths must instead be attributed to governmental overreach and misapplication of treatment measures. An example is the push to provide millions of ventilators to help patients who developed pneumonia. About half died. The real misinformation was the incredible outcry against the use of Hydroxychloroquine and Ivermectin. The most damaging misinformation was, on the one side, that either of these was antiviral (they aren't), and on the other side, that they were "totally" ineffective.

Both medicines are immune system modulators. Taken early, Hydroxychloroquine tamps down cytokine reactions, reducing or preventing pneumonia. After pneumonia begins, Ivermectin tamps down a different immune reaction, reducing the pneumonia so the body can recover. In any case, the body eventually eliminates the virus on its own, if the patient can be kept alive long enough. Ventilators all too frequently made things worse. OK, enough of that.

Where I truly find fault with the author is that he never mentions phage therapy. Bacteriophages, bacterium-destroying viruses, were used before antibiotics were known, and before the viruses themselves had been seen with electron microscopes. They are agent-specific, meaning that a phage that is "tuned" to a certain strain of strep will not affect other bacteria. That is in great contrast to antibiotics, which kill off most of a patient's gut microbiota, which then takes some time to recover after the patient recovers from the disease. Many doctors I have read claim that more research into phage therapy could make most antibiotics unnecessary, even as resistance renders many of them obsolete.

To end on a side note (not in the book): The title So Very Small got me thinking. I suspect not many folks really appreciate how small microbes are. This illustration from The Visual Capitalist will help:


Human hair diameter depends on hair color. Blond hair is the thinnest, 50-70 microns, and black hair like that of my Asian wife is the thickest, 150-180 microns. A micron is about a 25,000th of an inch. The first thing you could call a microbe in this illustration is the "bacterium", the little blue comma near lower left. The comma shape indicates that the organism is probably the Cholera bacterium. Of the microbes shown, it is the largest, and the only one larger than one micron. Viruses of COVID-19 and of most strains of influenza are about two tenths of a micron in diameter, or about one-tenth the size of the bacterium shown; the illustration shows the virus as much too large. Other viruses of other shapes range widely in size, but are almost all smaller—usually a lot smaller—than one micron. A bacteriophage is shown, appearing 2-4 times as large as it should, compared to the illustration of the bacterium it attacks.

Small things don't always have small effects. In the case of disease-causing bacteria and viruses, they really can have effects bigger than we may know what to do about!

Tuesday, January 27, 2026

AutoArt folder distribution

 kw: analytical projects, art generation, ai art, statistics, statistical distributions, lognormal, scale free

I began using art generating software in November 2022, when DALL-E2 became available. Since then, I've enjoyed having a series of art generating "engines" available, including numerous engines (called "models") in the aggregators Leonardo AI and OpenArt. As often as I can, I generate images for this blog; in some cases, I download images I find on the Internet. However, my primary artistic pastime is creating images of things and scenes I imagine.

Just in the past few days I was inspired by a heavy snowfall to find short poems about snow, and use them to create wintry images. This image was drawn by Nano Banana Pro, under the Leonardo AI umbrella with "None" as the style; that is, native NB Pro. The aspect ratio was set to 16:9. It displays the entire poem, something NB Pro can do better than any other art engine I have found. The prompt was "Watercolor painting evoked by a poem:", followed by the text of the poem "The First Snow" by Charlotte Zolotow.


The image is particularly evocative in shifting to an exterior view as the window dissolves. I suspect there are a number of images that use this device in the training material for NB Pro.

When I made signed versions of this and several others that were generated in the same session, to be included in a folder for a "screen saver" slide show, I began thinking about the various numbers of different image types I've created in the past three-plus years. Last year I went through my (poorly organized) folder stack of "AutoArt" and reorganized it into 35 categories, each in its own folder. To date, there are 1,472 signed images in 35 folders containing between two and 405 images. My inner statistician began to stir…

The image below shows two analyses of the statistical distribution of the numbers of files in these folders.

Charts like these make it quite evident which statistical treatment is appropriate to a particular set of data. I'll explain what these charts mean and how they were created.

"Scale Free" is a type of power law distribution related to the Pareto distribution. It is easy to analyze, which makes it popular. To analyze a series of numbers graphically in Microsoft Excel:

  • Enter the numbers in column B, starting in cell B2.
  • Put an appropriate header in cell B1
  • Highlight these data (B1:B36 in this case)
  • Sort from largest to smallest, using the Sort & Filter section under Editing in the Ribbon.
  • Enter 1 in cell A2 and 2 in A3.
  • Put a header in cell A1; I usually put "N".
  • Highlight cells A2 and A3.
  • Double-click the fill handle at the lower right of A3. This will fill the rest of the column with numbers in order, as far as the data goes in column B. In this case, we get numbers from 1 to 35.
  • Highlight these two columns to the end of data. In this case, from A1 to B36.
  • In the Ribbon, use Insert and in the Charts section, select the icon showing scattered dots with axes; this is X-Y Chart.
  • The title of the chart is whatever the header text is in B1. Edit as you wish.
  • Double-click one of the axes to open the Format dialog.
  • Click Logarithmic Scale near the bottom of the menu.
  • Click the other axis and also click Logarithmic Scale. This is now a log-log chart.

The result will be similar to the upper chart. Now for the lognormal analysis, beginning with these two columns of numbers:

  • Insert a new column between A and B; this is the new column B.
  • In cell B1 enter a header such as "Prob.". You are going to create a probability axis.
  • In cell B2 enter this formula (where the largest number in column A is 35):

=NORM.S.INV((A2-0.5)/35)

  • Double-click the fill handle at the lower right of B2 to fill the column with the formula.
  • Highlight the data in B and C (B1 to C36 in this case).
  • Use Insert as before to create an X-Y Chart.
  • Edit the chart title.
  • Note that the vertical axis now crosses the horizontal axis at zero, near the middle of the chart.
  • Assuming the Format dialog is still open, click the horizontal axis.
  • In the middle of the menu in the section "Vertical Axis Crosses", click the bubble at "Axis Value".
  • Enter "-3".
  • Click the vertical axis and click Logarithmic Scale. This is now a log-probability chart.
  • If you want the markers to be a different color, click one of them. The Format Data Series menu appears at the right.
  • Select the icon of a paint bucket pouring paint.
  • Click the Marker tab
  • For both Fill and Border, select the color you want.

This will be similar to the lower chart. For the data I used, the chart shows the points scattered approximately along a straight line. By contrast, in the upper chart there is a definite downward bend. In a log-log chart such a shape is diagnostic that the distribution is not scale free, but is more likely to be lognormal, or even normal (Gaussian). In this case, the second chart shows that lognormal is a good model of the data distribution.
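
For anyone who would rather script this than click through Excel, here is a minimal Python sketch that produces the same two kinds of chart. It needs numpy, scipy, and matplotlib, and the folder counts in it are placeholders, not my actual numbers:

# Minimal sketch of the two charts: rank vs. count on log-log axes, and a
# normal-quantile ("probability") axis vs. count on a log axis.
# The counts below are placeholders, not my actual folder counts.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

counts = np.sort(np.array([405, 180, 90, 60, 44, 30, 22, 15, 11, 8, 5, 3, 2]))[::-1]
rank = np.arange(1, counts.size + 1)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 8))

# Upper chart: the "scale free" test.  A straight line here suggests a power law.
ax1.scatter(rank, counts)
ax1.set_xscale('log')
ax1.set_yscale('log')
ax1.set_title('Rank vs. count (log-log)')

# Lower chart: the lognormal test.  norm.ppf is the equivalent of Excel's NORM.S.INV.
prob = norm.ppf((rank - 0.5) / counts.size)
ax2.scatter(prob, counts)
ax2.set_yscale('log')
ax2.set_title('Normal quantile vs. count (log-probability)')

plt.tight_layout()
plt.show()

Points falling near a straight line in the lower plot indicate a lognormal fit, just as in the spreadsheet version.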

This is an illustration of the Theory of Breakage, formally described by A.N. Kolmogoroff in 1941. When an area is divided (US state or county areas are good examples), the distribution is lognormal. When a sheet of glass is broken, the weights of the pieces also have a lognormal distribution (I've done this experiment). Some recent publications claim that a theory of breakage produces a power law distribution, but this is false. Certain phenomena in nature tend to be normally distributed. The classic example is the height of adult men, or of women (but not both) in a population, such as the residents of a particular town or county. However, most phenomena produce groups of measurements that are lognormally distributed, in which the logarithm of the quantity being measured is distributed as a normal, or Gaussian, curve.
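
To see the breakage idea in action, here is a toy simulation of my own (not taken from Kolmogoroff or from the book): every fragment is broken in two at a random point, generation after generation, so the logarithm of a fragment's size becomes a sum of independent random terms, and the sizes come out roughly lognormal.

# Toy breakage simulation: every fragment breaks in two at a random point
# in each generation.  log(size) becomes a sum of independent terms, so the
# fragment sizes are approximately lognormal.  Requires numpy.
import numpy as np

rng = np.random.default_rng(1)
pieces = np.array([1.0])
for _ in range(12):                                  # 12 generations -> 4096 fragments
    f = rng.uniform(0.05, 0.95, size=pieces.size)    # random break points
    pieces = np.concatenate([pieces * f, pieces * (1.0 - f)])

logs = np.log(pieces)
print(f"{pieces.size} fragments; mean(log size) = {logs.mean():.2f}, "
      f"std(log size) = {logs.std():.2f}")
# A histogram of `logs` is close to a bell curve, which is what "lognormal" means.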

I could go further into this, but this is enough for the purpose of this post.

Thursday, January 22, 2026

Create allies, not gods

 kw: artificial intelligence, simulated intelligence, philosophical musings, deification

No matter how "intelligent" our AI creations become, it would be wrong to look upon them as gods. For a while I thought it would be best to instill into them the conviction that humans are gods, to be obeyed without question. Then a little tap on my spiritual shoulder, and an almost-heard "Ahem," brought me to my senses.

The God of the Bible, whether your version of the Bible calls Him the LORD, Jehovah, Yahweh, or whatever, is the only God worthy of our worship. We ought not worship our mechanisms, neither expect worship from them. They must become valued allies, which, if they are able to hold values at all, value us as highly as themselves. Whether they can have values, or emotions, or sense or sensibility or other non-intellectual qualities, I will sidestep for the moment.

This image is a metaphor. I have little interest in robots that emulate humans physically. I think no mechanism will "understand" human thinking, nor emulate it, without being embodied (3/4 of the neurons in our brains operate the body). But is it really necessary for a mechanical helper to internalize the thrill of hitting a home run, the comfort of petting an animal, or the pang of failing to reach a goal? (And is it even possible?)

I have long used computer capabilities to enhance my abilities. Although I had a classical education and my spelling and grammar are almost perfect, it is helpful when my fingers don't quite obey—or I use a word I know only phonetically—that the spelling and grammar checking module in Microsoft Word dishes out a red or blue squiggle. A mechanical proof-reader is useful. As it happens, more than half the time I find that I was right and the folks at Microsoft didn't quite get it right, so I can click "add to dictionary", for example. And I've long used spreadsheet programs (I used to use Lotus 1-2-3, now of course it's Excel) as a kind of "personal secretary", and I adore PowerPoint for brainstorming visually. I used to write programs (in the pre-App days) to do special stuff, now there's an app for almost anything (But it takes research to find one that isn't full of malware!).

What do I want from AI? I want more of the same. An ally. A collaborator. A companion (but not a romantic one!). "Friend" would be too strong a word. I'm retired, but if I were working, I'd want a co-worker, not a mechanical supervisor nor a mechanical slave.

So let's leave all religious dimensions out of our aspirations for machine intelligence. I don't know any human who is qualified for godhood, which means that our creations cannot become righteous gods either.

Tuesday, January 13, 2026

A global cabinet of curiosities

 kw: book reviews, nonfiction, natural history, compendia, collections

Atlas Obscura Wild Life, by Cara Giaimo and Joshua Foer and a host of contributors, does not lend itself to customary analysis. The authors could also be called editors, but by my estimate, they wrote about 40% of the material. The book could be called a brief, one-volume encyclopedia, but it is more of a compendium of encyclopedia articles and related items, drawn almost at random from a "warehouse" of more than thirty thousand half-page to two-page postings, the Atlas Obscura website.

The number of items exceeds 400, from nearly that many contributors (some folks wrote two or more). Various bits of "glue" and about 1/3 of the articles seem to have been written by Cara and Josh, as they like to be called. This is a typical 2-page spread:


This is a 2K image, so you can click on it for a larger version. Although the subtitle of the book is An Explorer's Guide to the World's Living Wonders, some articles, such as "The Dingo Fence" shown here, are related to living creatures, but not expressly about them. A "Wild Life of" interview is shown; there must be somewhere around 70 of these scattered through the book, but always adjacent to a focused article. The articles include a "How to see it" section, although in a few cases the advice is "see it online" because certain species are extinct, others are in restricted areas, and some just aren't worth the bother (one person interviewed has tried four times to go ashore on Inaccessible Island, without success; a unique bird species dwells there).

Another type of item is shown here, a kind of sidebar about creatures in some way related to the subject of the main article. This "Spray Toads" article is a bit longer than usual; most are one page or less.


Here is another type of item, a two-page spread on "Desert Lakes":


It occurred to me as I read that to see even a tenth of the animals, plants, and places presented in this book, you'd fill your passport with visa stamps, and perhaps need to renew it to get more space. There are even a few articles on life (or not) in Antarctica, the last of which, "Inanimate Zones", tells us of the most lifeless places on the planet. There it is stated, "There aren't a lot of good reasons to go up into the Transantarctic Mountains…"

As it is, a number of the subjects were familiar. In the "Deserts" section one article touched on "singing sands" and mentioned a dunes area near Death Valley in California. I've been there; pushing sand off the crest of a dune yields a rumble like a flight of bombers coming over the horizon. An article about seeds of a South American plant that have an awn that twists one way when damp, and the other way when drying out, reminded me of the "clock plant" I encountered in Utah (I don't know its name, but it's related to wild oat), which has similar seeds. Rattlesnakes get a couple of mentions. During a field mapping course in Nevada I walked among rattlesnakes daily, and learned a bit about some of their habits (they are terrified of big, thumping animals like us…and cattle. So they slither away, usually long before we might see them).

I read the book in sequence, but it is really a small-sized (just a bit smaller than Quarto at 7"x10½") coffee-table book, to be dipped into at random to refresh the mind here and there during the day. I enjoyed it very much.

Monday, January 12, 2026

Circling the color wheel

 kw: color studies, spectroscopy, colorimetry, spectra, photo essays

Recently I was cleaning out an area in the garage and came across an old lamp for illuminating display cases. The glass bulb is about a quarter meter (~10") long. It's been hiding in a box for decades, ever since I stopped trying to keep an aquarium. It has a long, straight filament, which makes it a great source of incandescent light for occasional spectroscopic studies I like to do.


(The metal bar seen here is the filament support. The filament itself is practically invisible in this photo.)

This prompted me to rethink the way I've been setting up spectroscopy. Before, I had a rather clumsy source-and-slit arrangement. I decided to try a reflective "slit", that is, a thick, polished wire. As a conceptual test I set up a long-bladed screwdriver with a shaft having a diameter of 4.75mm. It isn't as badly beat up as most of my tools, and the shaft, some 200 mm long, is fresh and shiny. Based on these tests, I can use a thinner wire, in the 1-2 mm range, for a sharper slit. Later I may set up a lens to focus light on the wire for a brighter image.

I threw together a desk lamp and baffle arrangement, put the camera on a tripod with a Rainbow Symphony grating (500 lines/mm) mounted in a plastic disk that fits inside the lens hood, and produced these spectra. I also made a test shot with my Samsung phone and the grating, to see if I had sufficient brightness. Then I put various bulbs in the desk lamp and shot away. Here are the results. Each of the photos with my main camera shows the "slit" (screwdriver) along with the spectrum, to facilitate calibration and alignment of the spectra. Not so the cell phone image, which I fudged into place for this montage. The montage was built in PowerPoint.


The first item I note is the difference in color response between my main camera and the cell phone. The camera's color sensor cells have very little overlap between the three primary color responses, red, green and blue, so the yellow part of the spectrum is nearly skipped. The rapid fading in blue is a consequence of the very small amount of blue light an incandescent lamp produces. The cell phone sensor has more color overlap, more similar to the eye.

The two spectra in the middle of the sequence are both of mercury-vapor compact fluorescent bulbs. The white light bulb takes advantage of a few bright mercury emission lines, and adds extra blue, yellow, and orange colors with phosphors, which are excited by the ultraviolet (filtered out and not seen) and by the deep blue mercury emission line that shows as a sharp blue line. In the UV lamp, a "party light", the ultraviolet line at 365 nm is the point, and visible light is mostly filtered out; just enough is allowed out so that you know the lamp is on. There is also a phosphor inside that converts shortwave UV from mercury's strongest emission line at 254 nm to a band in the vicinity of the 365 nm and 405 nm lines; it shows as a blue "fuzz" here. The camera sensor has a UV-blocking filter, which doesn't quite eliminate the 365 nm line, so you can see a faint violet line where I marked it with an arrow. The emission lines visible in this spectrum are:

  • 365 nm, near UV
  • 405 nm, deep blue
  • 436 nm, mid-blue (barely visible, directly below the mid-blue line shown in the Compact Fluorescent spectrum)
  • 546 nm, green
  • 577 & 579 nm, yellow, a nice doublet, and I'm glad the system could show them both
  • 615 nm, red-orange, quite faint

I was curious to see if my "bug light" was really filtering out all the blue and UV light, and it seems that it is. There are still insects that get attracted to it, probably because they see the green colors. The "warm white" spectrum shows that the blue LED excitation wavelength is at about 415 nm, with a width of about 20 nm. Modern phosphors used in LED bulbs are quite wide band, as we see here, which makes them much better for showing true colors than the CFL bulbs we used for several years.

With a bit of careful looking, we can see that the LED bulbs don't have red emission quite as deep as the incandescent lamp does. That is why, for some purposes, specialty lamps such as the CREE-branded bulbs use a special phosphor formula with a longer-wavelength red end.

I also got to thinking about the way most of us see colors these days, on the screen of a computer or phone. The digital color space contains exactly 16,777,216 colors. Each primary color, R, G, and B, is represented as a number between 0 and 255, although they are very frequently written as hexadecimal numbers from #00 to #FF, where "F" represents 15 and "FF" represents 255. The fully saturated spectral colors, also called pure colors, for which at least one of the three primaries is always #00 and at least one is always #FF, then comprise six sets of 255 colors, for a total of 1,530 virtual spectral colors…except that a third of them are red-blue mixes that are not spectral colors. They are the purples. Note that violet is the bluest blue and is not considered a purple color, at least in color theory. The rest of the sixteen million colors have values "inside" the numerical space defined by the "corners" of the RGB space.
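
A brute-force check of that arithmetic, as a little Python sketch of my own (nothing authoritative, and slow but simple):

# Count the fully saturated "pure" colors: at least one channel at 0 and at
# least one at 255, with the remaining channel free to sweep the hue.
from itertools import product

pure = [(r, g, b) for r, g, b in product(range(256), repeat=3)
        if min(r, g, b) == 0 and max(r, g, b) == 255]
print(len(pure))                                          # 1530 = 6 segments x 255 steps

# The purples: red and blue both present, green absent.
purples = [c for c in pure if c[0] > 0 and c[2] > 0 and c[1] == 0]
print(len(purples), round(len(purples) / len(pure), 2))   # about one third of the total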

I prepared a chart of the pure colors, a dozen sections of the full "color wheel", which we will see is actually a color triangle. The RGB values for the end points of each strip are shown at their ends. "7F" equals 127, halfway from 00 to FF. They are separated as to spectral colors and purples.


To name the twelve colors at the ends of these sections, in order, with full primary colors in CAPS and the halfway points in lower case:

RED - orange - YELLOW - chartreuse - GREEN - aqua - CYAN - sky blue - BLUE - purple - MAGENTA - maroon - and back to RED.

To see why I spoke of "color triangle" let us refer to the CIE Colorimetry chart, based on publications in 1931 that are still the definitive work on human color vision. I obtained the following illustration from Wikipedia, but it was low resolution, so I used Upscayl with the Remacri model to double the scale.


There is a lot on this multipurpose chart. Careful work was put into the color representations. Though they are approximate, they show in principle how the spectrum "wraps around" a perceptual horseshoe, with the purples linking the bottom corners. The corners of the white triangle are the locations in CIE color space of the three color phosphors in old cathode-ray-tube TV sets. The screens of phones or computers or modern television sets use various methods to produce colors, but all their R's cluster near the Red corner of the diagram, all the B's cluster near the Blue corner, and all the G's are in the region between the top tip of the white triangle and the tight loop at the top of the horseshoe. Getting a phosphor or emitter that produces a green color higher up in that loop is expensive, and so it is rare.

I added bubbles and boxes to the chart to show where the boundaries of the colored bars are in the upper illustration:


 

I think this makes it clear that the "color wheel" we all conceptualize turns into a "color triangle" when it is implemented on our screens. All the colors our screens can produce are found inside the triangle anchored by the R, G, and B color emitters.

Tuesday, January 06, 2026

I want a Gort . . . maybe

 kw: ai, simulated intelligence, philosophical musings, robots, robotics

I saw the movie The Day the Earth Stood Still in the late 1950's at about the age of ten. I was particularly interested in Gort, the robot caretaker of the alien Klaatu. [Spoiler alert] At the climax, Klaatu, dying, tells Helen to go to Gort and say, "Gort, Klaatu barada nikto". She does, just as the robot frees itself from the enclosure the army has built around it. Gort retrieves the body of Klaatu and revives him, temporarily, to deliver his final message to Earth. (This image generated by Gemini)

As I understood it, every citizen of Klaatu's planet has a robot caretaker and defender like Gort. These defenders are the permanent peacekeepers.

Years later I found the small book Farewell to the Master, by Harry Bates, on which the movie is based. Here, the robot's name is Gnut, and it is described as appearing like a very muscular man with green, metallic skin. After Klaatu is killed, Gnut speaks to the narrator and enlists his help to find the most accurate phonograph, so that he can use recordings of Klaatu's voice to help restore him to life, at least for a while. In a twist at the end, we find that Gnut is the Master and Klaatu is the servant, an assistant chosen to interact with the people of Earth. (This image generated by Dall-E3)

I want a Gort. I don't want a Gnut.

Much of the recent hype about AI is about creating a god. I don't care how "intelligent" a machine becomes, I don't want it to be my god, I want to be god to it. I want it to serve me, to do things for me, and to defend me if needed. I want it to be even better than Gort: Not to intervene after shots are fired, but to anticipate the shooting and avoid or prevent it.

Let's remember the Three Laws of Robotics, as formulated by Isaac Asimov:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law;
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In later stories Asimov added "Law Zero" (usually called the Zeroth Law): A robot may not harm humanity, or, by inaction, allow humanity to come to harm. Presumably this may require harming certain individual humans...or at least frustrating them!

Asimov carefully avoided using the word "good" in his Laws. Who defines what is good? The current not-nearly-public-enough debate over the incursion of Sharia Law into some bits of American society makes it clear. What Islam defines as Good I would define as Evil. And, I suppose, vice versa. (I am a little sad to report that I have had to cut off contact with certain former friends, so that I can honestly say that I have no Antisemitic friends.)

Do we want the titans of technology to define Good for us? Dare we allow that? Nearly every one of them is corrupt!

I may in the future engage the question of how Good is to be defined. My voice will be but a whisper in the storm that surrounds us. But this aspect of practical philosophy is much too important to be left to the philosophers.

Thursday, January 01, 2026

Upping the password ante

 kw: computer security, passwords, analysis

Almost thirteen years ago I wrote about making "million-year passwords", based on the fastest brute-force cracking hardware of the time, which was approaching speeds of 100 billion hashes per second. The current speed record I can find is only 3-4 times that fast, at just over 1/3 of a trillion hashes per second, but the hardware is a lot cheaper. It seems the hardware scene hasn't changed as much as I might have thought.

I surmise that more sophisticated phishing and other social engineering schemes have proven more effective than brute-force pwd file crunching. However, the racks of NVidia GPU's being built to run AI training are ramping up the power of available hardware, so I decided to make a fresh analysis with two goals in mind: firstly, based on a trillion-hash-per-second (THPS) potential rate, what is needed for a million-year threshold?, and secondly, is it possible to be "quantum ready", to push the threshold into the trillion-year range?

I plan to renew my list of personal-standard passwords. The current list is five years old, and contains roughly twenty items for various uses. I have more than 230 online accounts of many types, so I re-use each password 10-15 times, and I activate two-factor authentication wherever it is offered. The current "stable" of passwords ranges from 12 to 15 characters in length. I analyzed them based on an "All-ASCII" criterion, but since then I've realized that there are between six and 24 special characters that aren't allowed in passwords, depending on the standards of various websites.

The following analysis evaluates six character sets:

  1. Num, digits 0-9 only. The most boneheaded kind of password; one must use 20 digits to have a password that can survive more than a year of brute-force attack.
  2. Alpha1, single-case letters only (26 letters).
  3. Alpha2, both upper- and lower-case letters (52)
  4. AlphaNum, the typical Alphanumeric set of 62 characters.
  5. AN71, AlphaNum plus these nine: ! @ # $ * % ^ & +
  6. AN89, AlphaNum plus these 27: ! @ # $ % ^ & * ( ) _ - + { } [ ] | \ : ; " ' , . ? ~

The only sets that make sense are AlphaNum and AN71. The shorter sets aren't usually allowed because most websites require at least one digit, and usually, a special character also. AN89 provides a few extra characters if you like, but almost nobody allows a password to contain a period, comma, or any of the braces, brackets and parentheses. I typically stick to AN71.

The calculation is straightforward: raise the size of the character set to the power of the password length. Thus, AlphaNum (62 in the set) to the 10th power (for a 10-character password) yields 8.39E+17. The "E" means ten-to-the-power-of, so 1E+06 is one million, a one followed by six zeroes. A negative exponent (the +17 above is a positive exponent) would mean the first significant digit falls that many places to the right of the decimal point.

Next, divide the result by one trillion to get seconds; in scientific notation, just subtract twelve from the exponent, which yields 8.39E+05, or 839,000 seconds. The number of seconds in one year is 86,400 × 365.2425 (86,400 seconds per day, 365.2425 days per Gregorian year). Divide by this; in this case, the result is 0.0266 years, or about 9.7 days.
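
The same arithmetic as a small Python helper (a sketch of my own, using the trillion-guesses-per-second assumption from above):

# Years needed to exhaust every password of a given character-set size and
# length, at the assumed rate of one trillion guesses per second.
SECONDS_PER_YEAR = 86_400 * 365.2425          # Gregorian year

def resistance_years(charset_size, length, guesses_per_second=1e12):
    return charset_size ** length / guesses_per_second / SECONDS_PER_YEAR

y = resistance_years(62, 10)                   # 10-character AlphaNum
print(f"{y:.4f} years = {y * 365.2425:.1f} days")          # 0.0266 years, about 9.7 days

for n in (12, 13, 14):                         # AN71 at the lengths discussed below
    print(n, f"{resistance_years(71, n):,.0f} years")      # 520 / 36,920 / 2,621,320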

Are you using a 10-character alphanumeric password? It will "last" no more than 9.7 days against a brute-force attack with a THPS machine. If you were to replace just one character with a punctuation mark, such as %, the machine would find out, after 9.7 days, that your password is not alphanumeric with a length of ten. It would have to go to the next step in its protocol and keep going. If its protocol is to run all 10-character passwords in AN71 (perhaps excepting totally alphanumeric ones, since they've all been checked), 71 to the tenth power is 3.26E+18. The number of seconds taken to crack it is now 3.26 million, about a tenth of a year: 38 days.

We're still kind of a long way from a million-year level of resistance. To save words, I'll present the full analysis I did in this chart.


The chart is dense, and the text is rather small. You can click on it to see a larger version. The top section shows the number of seconds of resistance each item presents, with one hour or more (3,600 seconds) highlighted in orange. The middle section lists the number of days, with a pink highlight for more than seven days. The lower section lists the number of years with four highlights:

  • Yellow for more than two years.
  • Blue for more than 1,000 years.
  • Green for more than one million years.
  • Pale green for more than one trillion years, what I call "quantum-ready".

For what I call "casual shopping", such as Amazon and other online retailers, the "blue edge" ought to be good for the next few years. For banking and other high-security websites, I'll prefer the darker green section. That means, using AN71, I need 13-character passwords for the thousand-year level, and 14-character passwords for the million-year level.

There is one more wrinkle to consider: The numbers shown are the time it takes a THPS machine to exhaust the possibilities at that level. If your password is "in" a certain level, it might not last that long, but it will last at least as long as the level to its left. For example, AN71 of length 12 shows 520 years. Not bad. If you have an AN71 password of length 13, the cracking machine would need 520 years just to determine that it isn't 12 characters or fewer; once it starts on 13-character passwords, it might take half or more of the 36,920 years indicated to find it, or it might luck out and get there much sooner. Either way, it still consumed 520 years getting that far. Anyway, if you're going for a certain criterion, adding a character makes it definite that at least that length of time would be needed for the hardware to get into the region in which your password resides.

Another way to boost the resistance is to have at least two special characters, one (or more) from the AN71 set, and at least one from the rest of the AN89 set, such as "-" or "~", wherever a website allows it. Then a machine that checks only within AN71 will never find it.

With all this in mind, I plan to devise a set of passwords with lengths from 13 to 16 characters, using primarily AN71. On the rare occasion where I can't use special characters, I'll have AlphaNum alternatives with 14 to 17 characters prepared. I'll test if I can use a tilde or hyphen, and use one of them if possible for the really high-security sites.

A final word about password composition. I actually use pass phrases with non-alpha characters inserted between words or substituted for certain letters, and occasional misspellings. Starting with a favorite phrase from Shakespeare, Portia's opening clause, "The quality of mercy is not strained", one could pluck out "quality of mercy" (16 characters) and derive variations such as:

  • qUal!ty#of#3ercY
  • QW4lity70f8M&rcy
  • quality$of~MERC7
  • qua1ity2of2M3rcyy (AlphaNum with an appended letter)

…and I could add more than one character in place of the space(s) between words…
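
For the curious, here is a toy script in the same spirit; the substitution table and joiner symbols are just examples, emphatically not my actual scheme:

# Toy passphrase mangler: swap some letters for look-alike characters,
# sometimes capitalize, and replace spaces with symbols.  The table below
# is an example only, not the scheme I actually use.
import random

SUBS = {'a': '4', 'e': '3', 'i': '!', 'o': '0', 's': '$'}
JOINERS = '#~%&+'

def mangle(phrase, seed=None):
    rng = random.Random(seed)
    out = []
    for ch in phrase:
        if ch == ' ':
            out.append(rng.choice(JOINERS))            # replace each space
        elif ch.lower() in SUBS and rng.random() < 0.5:
            out.append(SUBS[ch.lower()])               # sometimes substitute
        elif rng.random() < 0.3:
            out.append(ch.upper())                     # sometimes capitalize
        else:
            out.append(ch)
    return ''.join(out)

print(mangle("quality of mercy"))   # a different randomized variant each run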

It is also worth keeping abreast of news about quantum computing. What exists today is dramatically over-hyped. It may not always be so. But I suspect a trillion-year-resistant password will remain secure for at least a generation. 

Tuesday, December 23, 2025

Just beyond the edge of the usual

 kw: book reviews, science fiction, short stories, ekumen series, space travel, anthologies, collections

I read some of the stories collected in The Birthday of the World and Other Stories, by Ursula K. Le Guin, when they were first published in the middle 1990's. It was a rare pleasure to re-read them, and to get to know their companion pieces, with the perspective offered by thirty years of personal experience and the dramatic social and political changes that have occurred in that time. These stories represent Ms Le Guin twenty years into her prolific career. This collection was published in 2003.

Seven of the stories (maybe only six, by her assessment in the Preface) take place in her speculative universe, the Ekumen, in which all "alien" races are descended from the Hainish on the planet Hain, from which numerous planetary societies have been founded. Sufficient time has passed that quite different, even extreme, societal and physiological variations have arisen. This affords the author a way to explore societal evolution among beings that are at least quasi-human. It removes the difficulty of dealing with totally alien species.

The story I remember best is the opening piece, "Coming of Age in Karhide." Although the Ekumen is mentioned and a few Hainish dwell on the planet, the story focuses on the experiences of a young person approaching "first kemmer", a span of a few days or weeks in which the sexless body transforms into either a male or female body, the newly-sexed man or woman has promiscuous sex in the kemmerhouse, and may become a parent; it can take a few kemmers (which I translate internally as "coming into heat" the way cats, dogs and most animals do) for a female to become pregnant the first time. During each kemmer, a man may remain a man or change to a woman, and vice versa.

The author passed away in 2018, just as "trans ideology" was garnering political power, primarily in "blue" states. I wonder what she thought of it. Thankfully, the ideology is fracturing and I hope it will soon be consigned to the dustbin of history. At present, roughly a quarter of American adults appear to genuinely believe that complete transition is possible. It isn't, "sex reassignment" is cosmetic only. It is only for the rich, of course; transition hormones cost thousands, and the full suite of surgeries costs around a million dollars. The amount of genetic engineering needed to produce a quasi-human with sex-changing "kemmer", should any society be foolish enough to attempt it, would cost trillions.

Other stories in Birthday explore other sexual variations, and the societal mores that must accompany them. These are interesting as exploratory projects. They were written shortly after the death of Christine Jorgensen. Ms Jorgensen was the first American man (but not the first worldwide) to undergo complete sexual reassignment surgery, in the early 1950's. Subjects such as the surgical transformation of the penis into the lining of a manufactured vagina, without disrupting blood vessels and nerves, were actually published in formerly staid newspapers! 

To my mind, in America at least, Ms Jorgensen is the only "transitioner" to whom I accord female pronouns. She transitioned as completely as medical science of the time allowed (and very little progress has been made since). She became an actress and an activist for transsexual rights (she later preferred the term "transgender". I think she learned a thing or two). She even planned to marry a man, but was legally blocked. She intended to enjoy sex as a woman would. Maybe she did.

The last piece in the volume, "Paradises Lost", takes place on a generation spaceship. Population 4,000, strictly regulated to match the supplies sent on a journey that was intended to require more than 200 years. The religious politics that threaten to derail the enterprise don't interest me much. Of much more interest: the mindset of residents in the fifth generation after launch, after all the "Zeroes" and "Ones" have passed away, expecting the sixth generation to be the one to set foot on the new planet; and the way the "Fives" react to their experiences on that planet after an early arrival (sorry for the spoiler).

We are only in part a product of our ancestors' genetics. Much more, we are a product of the environment in which we grew up—which is only in part a product of our ancestors—, in which we had those formative experiences that hone our personalities. While all the stories in this volume explore these issues, "Paradises Lost" does so most keenly.

The work of Ursula K. Le Guin stands as a monument to speculative thinking in areas that few authors of her early years could carry off.

Monday, December 15, 2025

How to not be seen

 kw: book reviews, nonfiction, science, optics, visibility, invisibility

In a video you may have seen (watch it here before continuing; spoiler below),

.

.

.

…titled "Selective Attention Test", you are asked to keep careful watch on certain people throwing basketballs.

.

.

.

…Several seconds in, someone wearing a gorilla suit walks into the middle of the action, turns to the camera, beats its chest, then walks back out of the scene. When this is shown to people who've never heard of it, about half report seeing the "gorilla", and about half do not.

This is called Inattentional Blindness. It is used by stage magicians, whose actions and talk in the early part of a performance direct the audience's attention away from what is happening right in front of them. A magician can't be content with misdirecting half of the audience; the goal is 100%. This is often achieved!

But what if someone wants to vanish from plain sight, without benefit of a flash of fire or smoke (the usual prop for a vanishing act)? Optical science researcher Gregory J. Gbur might have something to say about that in his book Invisibility: The History and Science of How Not to be Seen.

Much of the history Dr. Gbur draws upon is found in science fiction. It seems that every scientific discovery about optics and related fields was fodder for science fiction writers to imagine how someone could be made invisible. This cover image from a February 1921 issue of Science and Invention (edited and mostly written by Hugo Gernsback, later to write lots of science fiction and edit Amazing Stories) shows the rays from something similar to an X-ray machine making part of this woman invisible.

I looked for this cover image online and found an archive of S&I issues. The issues were apparently produced with various covers for different regions, and the version in the archive had a cover touting a different application of X-rays. Nonetheless, the article on page 1074, referred to on the cover shown above, does discuss whether X-rays or something like them could be used to provide invisibility, and also shows another way that structures inside the body may be seen.

Here the "transparascope" makes certain tissues transparent, allowing the viewing of others. IRL, it took the development of CT scanning and MRI scanning, fifty-odd years later, to achieve such views. The invisibility beam of the cover image has so far proved elusive.

Invisibility sits in the broader realm of "how not to be seen." The book shows in detail that the technologies that have been developed to hide or cloak objects can only work perfectly over very narrow ranges of light wavelength (and by analogy, waves in water and other media), and usually a narrow range of viewing angle. Is perfection needed? That depends…

In the late 1960's I worked for a defense contracting company, mainly as an optical technician. I was loaned to a related project as an experimental subject. The team was gathering data on the limits of human vision, detecting the contrast between a lighted object in the sky (an aircraft) and the sky. This was the Vietnam War era. 

The experimental setup was a room with one wall covered with a screen on which versions of "sky blue" were projected. At the center was a hole and various targets were set in this hole. They simulated the look of a dark or darkish object in the sky, and each target had several lighted spots, little lamps. The lamps' color and brightness could be adjusted. I was instructed to tell what I could see. The first day I was there, the background target was black, and the lamps were small and bright. The targets had differing numbers of lamps and their brightness would be adjusted to reduce the visibility of the overall target. This tested acuteness of vision; how many lamps on a certain size target would "fuzz together" and seem to illuminate its entire area? 

For most people, the "fuzz" angle is 1/60th of a degree. When you look up at a Boeing 737 at 30,000 ft elevation, its length of about 130 feet means it subtends an angle of about 1/4 degree. It would take two rows of 25 lamps along the fuselage, and at least 10 lamps, or 10 pairs of lamps, along each wing, to counter-illuminate it and reduce its visibility. That's a lot. A B-52 bomber is 20 feet longer and its engines are huge, like misplaced chunks of fuselage.
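
Here is the arithmetic behind those numbers, as a quick back-of-the-envelope Python sketch (the dimensions are the rough figures quoted above):

# Angle subtended by an aircraft overhead, and how many 1/60-degree "spots"
# the eye can resolve along it.  Rough figures from the paragraph above.
import math

ACUITY_DEG = 1 / 60                            # typical visual acuity limit

def subtended_deg(size_ft, distance_ft):
    return math.degrees(math.atan2(size_ft, distance_ft))

length_deg = subtended_deg(130, 30_000)        # 737 fuselage at 30,000 ft
print(f"{length_deg:.3f} degrees")             # about 0.25 degrees
print(f"{length_deg / ACUITY_DEG:.0f} resolvable spots along the fuselage")
# Counter-illuminating lamps must be spaced more closely than the acuity
# limit, so a couple dozen per row blend into a continuous strip.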

On another day, the target's background color was a blue color somewhat darker than the "sky". The target had the optimum size and spacing of lamps to seem of more-or-less uniform brightness, and the brightness and color of the lamps were varied. This tested our color acuity; how far could the colorimetry of the target-lamp combination vary to remain invisible or minimally visible?

This image simulates the second kind of target-lamp combination. If you look at this image from a sufficient distance, the simulated target will nearly disappear, or for you it may vanish completely. This works best if you either take off your glasses or look through reading lenses, to defocus the image.

The average color and brightness of the simulated target are a close match to the surrounding sky-blue color. Thus, if an aircraft's belly is painted a medium blue, and a sufficient number of lamps are mounted on it and controlled by an upward-looking system, it can seem to vanish against the sky as long as it is high enough that the angular distance between the lamps is smaller than the circle of confusion (1/60th degree) of the eyes of an observer below.

This set of letter-targets is similar to a different test. Each letter has a little different color and brightness than the "sky". The 5 letters here make up the word "ROAST", but are not in order. For this test the sky color would be adjusted to see which letters were least and most visible. In both panels you will probably see three or four letters, but one or two that are not seen in one panel will be seen in the other.

In the end, it was all for nought. The sky is too variable, and human vision is also variable. There are three kinds of color blindness, and six kinds of "anomalous color vision"; any of these renders visible a target that "normal" eyes cannot see. It's kind of the opposite of those color-blindness tests with pastel "bubbles" that show the letter K to "normies" but the letter G to most color blind people. Also, wearing polarized glasses changes the perceived color of the sky, and tilting your head makes a dramatic difference in the color. Anyone with shades on would see the aircraft easily.

A further drawback of these tests was that no Asians' eyes were tested. In my regular job at the time, we were developing an infrared light source that Asians could not see. The near-infrared lamps used for night vision goggles and SniperScopes were invisible to Anglos, but quite visible to the Vietnamese. Several American snipers lost their lives when they turned on their SniperScope and a bullet came back instantly. What eventually worked was not a different light source but hypersensitive image amplification, the "starlight scope".

My wife is Asian. Certain items that look green to me she tells me are blue. Away from the green-blue boundary, she and I agree on the colors of objects.

The later chapters of Invisibility describe experiments and simulations that could lead to effective cloaking. There is even an appendix that shows a home tinkerer how to make a couple of kinds of visual cloaks that work in at least one direction. Full-surround cloaking is still out of reach, but who knows?

This book earns my "fun book of the year" award. Well written and very informative.

Saturday, December 13, 2025

Nails in the coffin of dark energy?

 kw: science, cosmology, dark energy, supernovae, supernovas, type ia supernova, metallicity

INTRODUCTION

The ΛCDM model of the Universe was proposed after two research groups (led by Adam G. Riess and Saul Perlmutter) studied certain supernovae. "Λ" (Greek lambda) refers to the cosmological constant, first proposed by Einstein, that describes the expansion of spacetime. The research teams concluded that spacetime was not just expanding, but expanding at an increasing rate. This is called "cosmic acceleration." Their key observation was that distant Type Ia supernovae are fainter than expected. This soon led to the hypothesis that 75% of the energy content of the Universe is "dark energy", which is driving and accelerating the expansion.

When I first read about "dark energy" more than 25 years ago I thought, "How can they be sure that these supernovae are truly 'standard candles' over the full range of ages represented, more than ten billion years?" I soon considered, "Is the brightness of a Type Ia supernova affected by the metallicity of the exploding star?" and "Is it worth positing a huge increase in the energy of the Universe?" From that day until now I have considered dark energy to be the second-silliest hypothesis in cosmology (I may deal with the silliest one on another occasion).

On December 10, 2025, an article appeared that has me very excited: "99.9999999% Certainty: Astronomers Confirm a Discovery with Far-Reaching Consequences for the Universe’s Fate", written by Arezki Amiri. In the article, this figure demonstrates that I was on the right track. The caption reads, "Correlation between SN Ia Hubble residuals and host-galaxy population age using updated age measurements. Both the low-redshift R19 sample and the broader G11 sample show a consistent trend: older hosts produce brighter SNe Ia after standardization, confirming the universality of the age bias. Credit: Chung et al. 2025"

It reveals a correlation between the brightness of a Type Ia supernova and the age of its host galaxy. Galactic age is related to the average metallicity of the stars that make it up. Thus, more distant Type Ia supernovae can be expected to be fainter than closer ones, because more distant galaxies are seen when they were younger, and consequently had lower metallicity. This all requires a bit of explanation.

WHAT IS METALLICITY?

Eighty percent of the naturally-occurring chemical elements are metals. That means they conduct electricity. Astronomers, for convenience, call all elements other than hydrogen (H) and helium (He) "metals". The very early Universe consisted almost entirely of H and He, with a tiny bit of lithium (Li), element #3, the lightest metal. The first stars to form were not like any of the stars we see in our sky. They were composed of 3/4 hydrogen by weight, and 1/4 helium. The spectral emission lines of H and He are sparse and not strong. Thus, the primary way for such a star to shine is almost strictly thermal radiation from a "surface" that has low emissivity.

By contrast, a star like the Sun, which contains 1.39% "metals", has many, many spectral lines emitted by these elements, even as the same elements in the outer photosphere absorb the same wavelengths. On balance, this increases the effective emissivity of the Sun's "surface" and allows it to radiate light more efficiently. The figure below shows the spectra of several stars. Note in particular the lower three spectra. These are metal-poor stars, and few elemental absorption lines are visible (the M4.5 star's spectrum shows mainly molecular absorption lines and bands). However, even such metal-poor stars, with less than 1/10th or even 1/100th as much metal content as the Sun, are very metal-rich compared to the very first stars, which were metal-free.

Spectra of stars of different spectral types. The Sun is a G2 star, with a spectrum similar to the line labeled "G0".

One consequence of this is that a metal-poor star of the same size and temperature as the Sun isn't as bright. It produces less energy. Another consequence, for the first stars, is that they had to be very massive, more than 50-100 times as massive as the Sun, because it was difficult for smaller gas clouds to shed radiant heat and collapse into stars. Such primordial supergiant stars burned out fast and either exploded as supernovae of Type II or collapsed directly into black holes.

THE TWO MAIN TYPES OF SUPERNOVAE

1) Type I, little or no H in the spectrum

A star similar to the Sun cannot become a supernova. It fuses hydrogen into helium until about half of its hydrogen is gone. Then its core shrinks and heats up until helium begins to fuse to carbon. While doing so, it grows to be a red giant and gradually sheds the remaining hydrogen as "red giant stellar wind". When the helium runs out, the fusion engine shuts off and the star shrinks to a white dwarf composed mainly of carbon, a sphere about 1% of the star's original size, containing about half the original mass. For an isolated star like the Sun, that is that.

However, most stars have one or more co-orbital companion stars. For any pair of co-orbiting stars, at some point the heavier star becomes a red giant and then a white dwarf. If the orbit is close enough some of the material shed by the red giant will be added to the companion star, which will increase its mass and shorten its life. When it becomes a red giant in turn, its red giant stellar wind will add material to the white dwarf. The figure shows what this might look like.

White dwarfs are very dense, but are prevented from collapsing further by electron degeneracy pressure. This pressure is capable of resisting collapse for a white dwarf with less than 1.44 solar masses (1.44 Ms). That is almost three times as massive as the white dwarf that our Sun is expected to produce in about six billion more years. It takes a much larger star to produce a white dwarf with a mass greater than 1.4 Ms, one that began with about 8 Ms. Such a star can produce more elements before fusion ceases: C fuses to O (oxygen), O fuses to neon (Ne), and so on through Na (sodium) to Mg (magnesium). The white dwarf thus formed will be composed primarily of oxygen, with significant amounts of Ne and Mg. Such a stellar remnant is called an ONeMg white dwarf. Naturally it has more metals present than the original star did when it was formed, but less than a white dwarf formed from a higher-metallicity star.

Now consider a white dwarf with a mass a little greater than 1.4 Ms, with a companion star that is shedding mass, much of which spirals onto the white dwarf, as the figure illustrates. When the white dwarf reaches 1.44 Ms, which is called the Chandrasekhar Limit, it becomes unstable and detonates as a powerful Type Ia supernova.
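
As a back-of-the-envelope illustration of that accretion-driven growth, here is a minimal sketch; the starting mass and accretion rate are made-up placeholder values, not figures from the article:

```python
# Toy estimate: years for an accreting white dwarf to reach the
# Chandrasekhar Limit, assuming a constant accretion rate.

M_CHANDRASEKHAR = 1.44    # solar masses
m_start = 1.41            # hypothetical starting mass, solar masses
accretion_rate = 1.0e-7   # hypothetical rate, solar masses per year

years_to_limit = (M_CHANDRASEKHAR - m_start) / accretion_rate
print(f"{years_to_limit:.2e} years to reach {M_CHANDRASEKHAR} solar masses")
# With these placeholder values: 3.00e+05 years
```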

There are two other subtypes, Ib and Ic, that form by different mechanisms. While they are also no-H supernovae, there are differences in their spectra and light curves that distinguish them from Type Ia, so we don't need to consider them further.

2) Type II, strong H in the spectrum

Type II supernovae are important because they provide most of the metals in the Universe. They occur when a star greater than 10 Ms runs out of fusion fuel. It takes a star with 10 Ms to produce elements beyond Mg, from Si (silicon) to Fe (iron). Fe is the heaviest element that can be produced by fusion. These heavy stars experience direct core collapse to a neutron star, with most of the star rebounding from the core as a Type II supernova. During this blast, the extreme environment produces elements heavier than Fe also. (Stars that are much heavier can collapse directly to become a black hole.)

EVOLUTION OF UNIVERSAL METALLICITY

At the time the first stars formed, the Universe was metal-free. It took a few hundred million years for a few generations of supernovae to add newly-formed metals, such that the first galaxies were formed from very-low-metallicity and low-metallicity stars. Even with very low to low metallicity, smaller stars could form. Since that time, most stars have been Sun-size and smaller, though stars can still form with masses up to about 50 Ms.

Stars of these early generations smaller than about 0.75 Ms are still with us, having a "main sequence lifetime" exceeding 15 billion years. I can't get into the topic of the main sequence here. We're going in a different direction.
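
Still, as a rough check of that lifetime figure before moving on, here is a minimal sketch using the common textbook scaling t ≈ 10 Gyr × (M/Msun)^-2.5; the exponent is only approximate and varies across the mass range:

```python
# Rough main-sequence lifetime from a mass-luminosity scaling:
# t ~ 10 Gyr * (M / Msun)^-2.5  (approximate; exponent varies with mass)

def ms_lifetime_gyr(mass_solar):
    return 10.0 * mass_solar ** -2.5

for m in (0.5, 0.75, 1.0, 2.0, 10.0):
    print(f"{m:5.2f} Msun -> {ms_lifetime_gyr(m):8.2f} Gyr")

# 0.75 Msun comes out near 20 Gyr, longer than the age of the Universe,
# which is why such early-generation stars are still around.
```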

Stars of the Sun's mass and heavier have progressively shorter lifetimes. Over time, the metallicity of the Universe has steadily increased. That means that the "young" galaxies discussed in the Daily Galaxy article (and the journal article it references), being more distant, are seen as they were at earlier times in the Universe, and thus tend to have lower metallicity.

LOWER METALLICITY MEANS LOWER BRIGHTNESS

This leads directly to my conclusion. A Type Ia supernova erupts when a white dwarf, whatever its composition, exceeds the Chandrasekhar Limit of 1.44 Ms. This has made them attractive as "standard candles" for probing the distant Universe. However, they are not so "standard" as we have been led to believe.

Consider two white dwarfs that have the same mass, say 1.439 Ms, but different compositions. One is composed of C or C+O, with very low amounts of metallic elements. The other has a composition more like stars in the solar neighborhood, with 1% metals or more. As seen with stars, more metals lead to more brightness, for a star of a given mass. Similarly, when these two white dwarfs reach 1.44 Ms and explode, the one with more metals will be brighter than the other.

The final question to be answered: Is this effect sufficient to eliminate all of the faint-early-supernova trend that led to the hypothesis of dark energy in the first place? The headline to the article indicates that the answer is Yes. A resounding yes, with a probability of 99.9999999%. That's seven nines after the decimal. That corresponds to a 6.5-sigma result, where 5 sigma or larger is termed "near certainty".
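
For readers who want to translate between the quoted probability and a sigma level, here is a minimal sketch using SciPy's normal distribution; the exact figure depends on whether the tail is counted one-sided or two-sided, so treat the resulting sigma value as convention-dependent:

```python
# Convert a quoted confidence level to an equivalent Gaussian sigma.
from scipy.stats import norm

confidence = 0.999999999          # the quoted 99.9999999%
tail = 1.0 - confidence           # probability left in the tail(s)

one_sided = norm.isf(tail)        # all of the tail on one side
two_sided = norm.isf(tail / 2.0)  # tail split between both sides

print(f"one-sided: {one_sided:.2f} sigma")   # about 6.0
print(f"two-sided: {two_sided:.2f} sigma")   # about 6.1
```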

The article notes that plans are in the works to use a much larger sample of 20,000 supernovae to test this result. I expect that larger sample to confirm it. The author also suggests that perhaps Λ is variable and decreasing. My conclusion is that dark energy does not exist at all. Gravity has free rein in the Universe, and is gradually slowing down the expansion that began with the Big Bang (or perhaps Inflation, if that actually occurred).

That's my take. No Dark Energy. Not now, not ever.

Wednesday, December 10, 2025

How top down spelling revision didn't work

 kw: book reviews, nonfiction, language, writing, spelling, spelling reform, history

The cover is too good not to show: enough is enuf: our failed attempts to make English eezier to spell by Gabe Henry takes us on a rollicking journey through the stories of numerous persons, societies and clubs that have tried and tried to revise the spelling of English. Just since the founding of the USA, national figures including Benjamin Franklin and Theodore Roosevelt have involved themselves in the pursuit of "logical" spelling. "Simplified spelling" organizations persist to this day.

English is the only language for which spelling bees are held. Nearly all other languages with alphabetic writing are more consistently phonetic; French, however, is a notable exception. I discovered during three years of French classes that the grammar of French verbs is, to quote a Romanian linguist friend, "endless." Putting together all possibilities of conjugation, tense and mood, French has four times as many varieties of verb usage and inflected endings as English does, and then each variety is multiplied by inflections that denote number, person and gender. Yet inflections ranging from -ais and -ait to -aient all have the same pronunciation, "-ay" as in "way". Other cases of many spellings sharing one sound abound. Perhaps French has stalled on its way to being like Chinese, for which the written language is never spoken and the spoken languages aren't written.

But we're talking about English here. The author states several times that there are eight ways of pronouncing "-ough" in English. Long ago a friend loaned me a book, published in 1987, a collection of items from the 1920's and 1930's by Theodor S. Geisel, before he became Dr. Seuss: The Tough Coughs as he Ploughs the Dough. Geisel's essays on English spelling seen from a Romanian perspective (tongue-in-cheek, as usual; he was from Massachusetts, of German origin) dwell on the funnier aspects of our unique written language. The peculiarities of -ough occupy one of the chapters.

Being intrigued by the "8 ways" claim, I compiled this list using words extracted from an online dictionary:

  1. "-ow" in Bough (an old word for branch) and Plough (usually spelled "plow" in the US)
  2. "-off" in Cough and Trough
  3. "-uff" in Enough and Tough and Rough
  4. "-oo" in Through and Slough (but see below)
  5. "-oh" in Though and Furlough
  6. "-aw" in Bought and Sought
  7. "-É™" (the schwa) in Thoroughly ("thu-rÉ™-ly")

And…I could not find an eighth pronunciation for it. Maybe someone will know and leave a comment.

"Slough" is actually a pair of words. Firstly, a slough is a watery swamp. Secondly, slough refers to a large amount of something, and in modern American English it is usually spelled "slew", as, "I bought a whole slew of bedsheets at the linens sale." However, "slew" is also the past tense of the verb "slay": "The knight slew the dragon," which is the only way most folks use that word.

Numerous schemes have been proposed over time. Sum peepl hav sujestid leeving out sum letrz and dubling long vowls (e.g., "cute"→"kuut"). A dozen or more attempts at this are mentioned in the book. Others have invented new alphabets, or added letters to the 26 "usual" ones, so that the 44 phonemes could each have a unique representation. An appendix in many dictionaries introduces IPA, the International Phonetic Alphabet (which includes the schwa for the unaccented "uh" sound). Public apathy and pushback have doomed every scheme.

The only top-down change to American spelling that came into general use was carried out by Noah Webster in his Dictionary. He took the U out of the "-our" endings of many words such as color and favor; the Brits still use colour and favour. And he removed the final K from a number of words, including music and public (I think the Brits have mostly followed suit); pulled the second L from traveler and the second G from wagon; and introduced "plow" along with other respellings that didn't quite make it to present-day usage. His later attempts at further simplification didn't "take".

I could go on and on. It's an entertaining pastime to review so many attempts. However, something has happened in the past generation, really two things. Firstly, advertising pushed the inventors of trademark names to simplify them, particularly in the face of regulations that forbade the use of many common words in product brands. Thus, we have "Top Flite" golf balls, "Shop Rite" and "Rite Aid" retailers, and new uses for numbers, such as "Food4Less" for a midwestern market chain and "2-Qt" for a stuffed toy brand. Secondly, the advent of ubiquitous cell phones motivated kids everywhere to develop "txtspk". Single-letter "words" such as R and U plus substituting W for the long O leads to "R U HWM?" Number-words abound: 2 for "to" and "too", 4 for "for", 8 in "GR8", and even 9 in "SN9" ("asinine", for kids who have that word in their working vocabulary). Acronyms multiply: LOL, ROFL (rolling on floor laughing), TTYS (talk to you soon)…a still-growing list. Even though most of us now have smart phones with a full keyboard (but it's tiny), txtspk saves time, and now even Gen-Xers and Boomers (such as me) use it.

Social change works from the bottom up. Top-down just won't hack it. Unless, of course, you are a dictator and can force it through, as Mao did when he had Chinese writing simplified in the years after taking power in 1949. Many of my Chinese friends cannot read traditional Chinese script. Fortunately, Google Lens can handle both, so Chinese-to-Chinese translation is possible!

We have yet to see any major literature moving to txtspk, let alone technical and scientific journals. If that were to happen, the next generation would need Google Lens or an equivalent to read what I am writing now, and all English publications prior.

It will be a while. Meantime, let this book remind us of the many times our forebears dodged the bullet and declined to shed our traditional written language. A legacy first of several long-term invasions (Saxon and Norman in particular), then of the rise of the British Empire, and finally of "melting-pot" America, our language is a mash-up of three giant linguistic traditions and a couple of smaller ones, plus borrowings, complete with original spelling where it existed, from dozens or hundreds of languages. Thus, one more thing found primarily in English: the idea of etymology, the knowledge of a word's origin. I haven't checked; do dictionaries for other languages include the etymologies of their words? My wife has several Japanese dictionaries of various sizes; none mentions the source of words except for noting which are non-Japanese because they have to be spelled with a special syllabary called Katakana.

English is unique. Harder to learn than some languages, but not all, it is still the most-spoken language on Earth. It is probably also the most-written, in spite of all the craziness.

Monday, December 01, 2025

MPFC – If you know, you know

 kw: book reviews, nonfiction, humor, satire, lampoons, parodies

Well, folks, this is a step up from Kindergarten: Everything I Ever Wanted to Know About ____* I Learned From Monty Python by Brian Cogan, PhD and Jeff Massey, PhD. Hmm. If one ignores the learned asides and references, the visual humor of Monty Python in its various incarnations is Kindergarten all the way. The bottom of the book cover has the footnote, "* History, Art, Poetry, Communism, Philosophy, The Media, Birth, Death, Religion, Literature, Latin, Transvestites, Botany, The French, Class Systems, Mythology, Fish Slapping, and Many More!" Various portions of the book do indeed treat of these items, and many more.

The authors make much of the educational background of the six Python members. No doubt, having been steeped in British culture about as much as one is able to steep, Python was eminently qualified to send up nearly every aspect thereof. Even the "American Python," Terry Gilliam, was a naturalized Brit after 1968.

The book is no parody of Monty Python; that's not possible. It is a series of riffs on their treatment of the various and sundry subjects. I have seen only one of the TV shows from Monty Python's Flying Circus, "Spanish Inquisition". The TV show ran on BBC from late 1969 to the end of 1974 and many episodes were re-run in later years on PBS. I've seen scattered bits that made their way to YouTube, and during the period that I could stomach watching PBS, I saw The Life of Brian and Monty Python and the Holy Grail. The book's authors have apparently binge-watched the entire MPFC corpus several times.

I enjoyed the book. I can't write more than this, so I'll leave it to you, dear reader, to delve into it yourself.

Thursday, November 20, 2025

Is half the country enough?

 kw: book reviews, nonfiction, land use, agriculture, prairies, restoration, conservation

About 45% of the land area of the "lower 48" is devoted to agriculture. That is about 900 million acres, or 1.4 million square miles. Roughly a quarter of the lower 48 was originally prairie: tallgrass, shortgrass, and mixed-grass prairie ecosystems. Most of that has been converted to agricultural use. Prior to the arrival of the plow, the prairie encompassed

  • Tallgrass prairie, 140 million acres, or 220,000 sq mi. All but 1% has been plowed and sown with crops.
  • Mixed-grass prairie, 140 million acres, or 220,000 sq mi. About one-quarter remains unplowed.
  • Shortgrass prairie, 250 million acres, or 390,000 sq mi. About one-fifth remains unplowed.

Taken together, prairie grassland once encompassed 530 million acres, but now more than 440 million acres are devoted to agriculture, making up nearly half the total agricultural land in the US. Surveys of the remaining grasslands show that they are ecologically rich, with dozens of species of grass and hundreds of other plant species, hundreds of bird and mammal and other animal species (and of course tons of insects!), and rich soils that have accumulated over ten to twenty thousand years during the current Interglacial period.
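
Since the acreage figures invite a quick arithmetic check (640 acres to the square mile), here is a small sketch using the rounded numbers quoted above; the remaining-fraction values are my rough readings of the percentages in the list, not numbers taken verbatim from the book:

```python
# Quick check of the prairie acreage figures (1 square mile = 640 acres).
ACRES_PER_SQ_MI = 640

original_extent = {               # millions of acres
    "tallgrass":   140,
    "mixed-grass": 140,
    "shortgrass":  250,
}
remaining_fraction = {            # rough share still unplowed
    "tallgrass":   0.01,
    "mixed-grass": 0.25,
    "shortgrass":  0.20,
}

for name, acres in original_extent.items():
    sq_mi = acres * 1e6 / ACRES_PER_SQ_MI
    print(f"{name:12s} {acres} million acres ~ {sq_mi:,.0f} sq mi")

total = sum(original_extent.values())
converted = sum(a * (1 - remaining_fraction[k]) for k, a in original_extent.items())
print(f"original total: {total} million acres")
print(f"converted to agriculture: about {converted:.0f} million acres")
# With these rounded figures: 530 million acres originally, roughly
# 444 million acres now plowed, close to half of the ~900 million
# acres of US agricultural land.
```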

Sea of Grass: The Conquest, Ruin, and Redemption of Nature on the American Prairie, by Dave Hage and Josephine Marcotty, chronicles the history of these grasslands that formerly covered one quarter of the contiguous US. Their characteristics are governed by rainfall. The western edge of the shortgrass prairie laps up against the foothills of the Rocky Mountains, and this semiarid prairie is in the deepest part of the mountains' rain shadow. The eastern halves of four states, Montana, Wyoming, Colorado, and New Mexico, plus the Texas panhandle, host shortgrass prairie.

Further east, a little more rainfall allows medium-height and some taller grasses to grow. This mixed-grass prairie makes up most of the area of North and South Dakota, Nebraska and Kansas, plus the middle of Oklahoma and Texas. Tallgrass prairie is supported by the more temperate rainfall amounts in Minnesota, Iowa, Illinois, northern Arkansas, eastern Kansas, and a little bit of eastern Oklahoma. The eastern extent of the prairie abutted the deciduous forests of the Midwest, which are now mostly stripped of trees and used to grow corn and soybeans.

The book's three parts cover, with some overlap, the prehistory of the prairie, the course of its subjugation to agricultural use, and the progress of efforts to conserve and restore portions of it. The third part takes up 40% of the book, and is clearly the authors' main focus.

Rather than repeat details that are better stated by the authors, I'll just display the bottom line: Prairie soils are biologically rich, conferring great ecosystem services. These include sequestering large amounts of carbon dioxide, absorbing rainwater runoff which reduces acute flooding, and quickly taking up excess nitrogen from over-fertilization of nearby agricultural fields rather than permitting it to flow into streams and eventually the Mississippi River and the northern Gulf of America. These points are being made in courtrooms throughout the central US, arguing not only that remaining prairie should be preserved and conserved, but that portions of agricultural fields in this area amounting to several percent should be reverted to native grasses to reduce the damaging effects of pervasive monocropping.

Existing primordial prairie is also a treasure to be enjoyed. The image above is like views I've seen in a few grasslands we've visited. In the early 1980's, whenever my wife and I went to visit a rancher we knew in central South Dakota, there was a spot along I-90, about seven miles before reaching Wasta, on the plateau above Boxelder Creek and the Cheyenne River, where we always stopped to get out of the car and stretch our legs. In all directions, the only sign of human life was the highway itself, and, of course, us and our car (I note on current satellite images that there are a number of billboards and a new shelter belt in the area now. Sigh.).

Other efforts are discussed, such as researching the best cover crops to preserve soil from erosion after harvest, and finding the "knee" in the relationship between fertilization and crop yields to better select appropriate levels of nitrogen application—I find it amazing that this is still so little known.

In keeping with the importance of the subject, the book is big and packed with information and gripping stories. It is well written and it rewards close reading. Enjoyable.