Wednesday, January 28, 2026

Build and run hospitals that don't kill their patients – F Nightingale

 kw: book reviews, nonfiction, history, bacteriology, public health, epidemiology, antibiotic resistance

The title of this review is not quite a quote, but it is the conclusion Florence Nightingale drew from her work in Istanbul during the Crimean War in the mid-1850's. The following diagram tells its own tale; please read the text block in full:


Then pay attention to the blue wedges. Their areas, measured from the center, represent preventable deaths: soldiers who died of infections contracted in the hospital. The modern adjective for "doctor caused" is iatrogenic, and a near-synonym for "hospital sourced" is nosocomial. The appallingly filthy conditions in the hospital at Scutari, and the gore-drenched hands of the doctors going from patient to patient without washing, are summed up in those two adjectives. Thus it was when Florence Nightingale and her team of nurses arrived at the Scutari hospital in November 1854.

The rose diagram (Nightingale called it a "cobweb diagram") on the right shows the horrific toll from seven months before her arrival until five months afterward. Its last two months plus the other rose diagram also show the gradual reduction in overall deaths and particularly preventable deaths as her recommendations, and then demands, were instituted. The risk of dying because one had entered the hospital was reduced, month after month, and was almost eliminated after January 1856.

This and other "cobweb diagrams" proved to most medicos that there was a physical Something that carried contagion from patient to patient in unsanitary conditions, and from doctor to patient on soiled hands. In the 1850's, "germs" were unknown as agents of disease. Also in the mid-1850's, John Snow in England used black dots on street maps to demonstrate a similar fact: that a physical Something had gotten into water and spread disease. One of Dr. Snow's maps motivated the local authorities to disable a certain contaminated water pump, the famous Broad Street pump.

Facsimiles of that map and the rose diagrams are found in So Very Small: How Humans Discovered the Microcosmos, Defeated Germs—and May Still Lose the War Against Infectious Diseases, by Thomas Levenson. The book is a rather detailed history of the strains of knowledge that led up to the discovery by Dr. Robert Koch that specific microbes cause specific diseases, in the 1880's. This was two centuries after Leeuwenhoek first saw, and drew, and wrote about bacteria he found in scrapings from his teeth.

The author makes much, repeatedly, of the blindness of those holding a theory to any evidence that overturns it. Thus "miasmas" were thought to cause diseases during those two centuries, and an Enlightenment-era "cancel culture" was waged against anyone who advocated anything different. An abridged remark attributed to Max Planck puts it succinctly: "Science progresses one funeral at a time." My father said it this way, "It's the Moses Method: spend forty years in the wilderness and let them all die out."

The Postulates of Robert Koch were originally developed as criteria for contagion based on studies of anthrax, cholera and tuberculosis. These and many other diseases are caused by bacteria that are visible with a microscope. That is, they are larger than about 1/5th of a micron. The common gut bacterium E. coli, for example, is in the form of rods about 3/4 micron in diameter and 2-3 microns long. Later the Koch Postulates were expanded to other organisms (including fungi) and near-organisms (such as viruses), as technology developed methods of detecting and visualizing them.

The first vaccine was developed in the 1790's by Edward Jenner. The first antibacterial drugs, primarily Salvarsan and the Sulfa drugs, were developed after 1910, and antibiotics were developed starting in 1929 with Penicillin. We are now about a century into the "age of antibiotics", laid on a foundation of vaccination. Public health measures such as clean (later chlorinated) water and sanitary sewers, followed by vaccinations and antibacterial drugs, have reduced infant and childhood mortality to almost negligible levels in Western countries, such that life expectancy for a newborn is now about 80 years. In a cemetery I visited when researching family history of the 1800's, half the graves were for infants and children under the age of five. Think about that.

The last section of the book deals with antibiotic resistance. Here the author declares that we are at risk of losing the war, after having won so many battles over the past century. He relates the case of a woman infected with a formerly "easy" microbe; the strain infecting her, however, is fully resistant to every antibiotic the hospital has available. The doctor appeals to the CDC, which has twice as many kinds of antibiotic on hand. None of them is effective. The woman dies.

At the moment, our only defense against such "superbugs" is to continue improving public health measures, and to educate the public more fully about risk mitigation. Alongside this there is a diatribe against the political confusion that surrounded SARS-CoV-2, the agent of the COVID-19 pandemic. The author is fully in the Fauci camp. That is unfortunate, because to my knowledge, Dr. Fauci lied so frequently and so self-contradictorily that a large proportion of "COVID" deaths must instead be attributed to governmental overreach and misapplication of treatment measures. An example is the push to provide millions of ventilators to help patients who developed pneumonia. About half died. The real misinformation was the incredible outcry against the use of Hydroxychloroquine and Ivermectin. The most damaging misinformation was, on the one side, that either of these was antiviral (they aren't), and on the other side, that they were "totally" ineffective.

Both medicines are immune system modulators. Taken early, Hydroxychloroquine tamps down cytokine reactions, reducing or preventing pneumonia. After pneumonia begins, Ivermectin tamps down a different immune reaction, reducing the pneumonia so the body can recover. In any case, the body eventually eliminates the virus on its own, if the patient can be kept alive long enough. Ventilators all too frequently made things worse. OK, enough of that.

Where I truly fault the author is that he never mentions phage therapy. Bacteriophages, bacterium-destroying viruses, were used before antibiotics were known, and before the viruses themselves had been seen with electron microscopes. They are agent-specific, meaning that a phage "tuned" to a certain strain of strep will not affect other bacteria. That is in great contrast to antibiotics, which kill off most of a patient's gut microbiota, which then takes some time to recover after the patient recovers from the disease. Many doctors I have read claim that more research into phage therapy could make most antibiotics unnecessary, even as resistance is making many of them obsolete.

To end on a side note (not in the book): The title So Very Small got me thinking. I suspect not many folks really appreciate how small microbes are. This illustration from The Visual Capitalist will help:


Human hair diameter depends on hair color. Blond hair is the thinnest, 50-70 microns, and black hair like that of my Asian wife is the thickest, 150-180 microns. A micron is about a 25,000th of an inch. The first thing you could call a microbe in this illustration is the "bacterium", the little blue comma near lower left. The comma shape indicates that the organism is probably the Cholera bacterium. It is the only microbe shown that is larger than one micron. Viruses of COVID-19 and of most strains of influenza are about two tenths of a micron in diameter, or about one-tenth the size of the bacterium shown; the illustration shows the virus as much too large. Other viruses of other shapes range widely in size, but are almost all smaller—usually a lot smaller—than one micron. A bacteriophage is shown, appearing 2-4 times as large as it should, compared to the illustration of the bacterium it attacks.

Small things don't always have small effects. In the case of disease-causing bacteria and viruses, they really can have effects bigger than we may know what to do about!

Tuesday, January 27, 2026

AutoArt folder distribution

 kw: analytical projects, art generation, ai art, statistics, statistical distributions, lognormal, scale free

I began using art generating software in November 2022, when DALL-E2 became available. Since then, I've enjoyed having a series of art generating "engines" available, including numerous engines (called "models") in the aggregators Leonardo AI and OpenArt. As often as I can, I generate images for this blog; in some cases, I download images I find on the Internet. However, my primary artistic pastime is creating images of things and scenes I imagine.

Just in the past few days I was inspired by a heavy snowfall to find short poems about snow, and use them to create wintry images. This image was drawn by Nano Banana Pro, under the Leonardo AI umbrella with "None" as the style; that is, native NB Pro. The aspect ratio was set to 16:9. It displays the entire poem, something NB Pro can do better than any other art engine I have found. The prompt was "Watercolor painting evoked by a poem:", followed by the text of the poem "The First Snow" by Charlotte Zolotow.


The image is particularly evocative in shifting to an exterior view as the window dissolves. I suspect there are a number of images that use this device in the training material for NB Pro.

When I made signed versions of this and several others that were generated in the same session, to be included in a folder for a "screen saver" slide show, I began thinking about the various numbers of different image types I've created in the past three-plus years. Last year I went through my (poorly organized) folder stack of "AutoArt" and reorganized it into 35 categories, each in its own folder. To date, there are 1,472 signed images in 35 folders containing between two and 405 images. My inner statistician began to stir…

The image below shows two analyses of the statistical distribution of the numbers of files in these folders.

Charts like these make it quite evident which statistical treatment is appropriate to a particular set of data. I'll explain what these charts mean and how they were created.

"Scale Free" is a type of power law distribution related to the Pareto distribution. It is easy to analyze, which makes it popular. To analyze a series of numbers graphically in Microsoft Excel:

  • Enter the numbers in column B, starting in cell B2.
  • Put an appropriate header in cell B1.
  • Highlight these data (B1:B36 in this case).
  • Sort from largest to smallest, using the Sort & Filter section under Editing in the Ribbon.
  • Enter 1 in cell A2 and 2 in A3.
  • Put a header in cell A1; I usually put "N".
  • Highlight cells A2 and A3.
  • Double-click the fill handle at the lower right of A3. This will fill the rest of the column with numbers in order, as far as the data goes in column B. In this case, we get numbers from 1 to 35.
  • Highlight these two columns to the end of data. In this case, from A1 to B36.
  • In the Ribbon, use Insert and in the Charts section, select the icon showing scattered dots with axes; this is X-Y Chart.
  • The title of the chart is whatever the header text is in B1. Edit as you wish.
  • Double-click one of the axes to open the Format dialog.
  • Click Logarithmic Scale near the bottom of the menu.
  • Click the other axis and also click Logarithmic Scale. This is now a log-log chart.
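
For anyone who would rather script this than click through Excel, here is a minimal Python sketch of the same log-log rank plot (my own addition, assuming numpy and matplotlib are installed; the counts listed are placeholders, not my actual folder numbers):

import numpy as np
import matplotlib.pyplot as plt

# Placeholder counts; substitute the real numbers of images per folder.
counts = [405, 212, 150, 98, 77, 60, 44, 31, 25, 18, 12, 9, 6, 4, 2]

# Sort largest to smallest and rank them 1, 2, 3, ... just as in the worksheet.
values = np.sort(counts)[::-1]
ranks = np.arange(1, len(values) + 1)

# Log-log rank plot: a straight line would suggest a scale-free (power law) distribution.
plt.scatter(ranks, values)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("N (rank)")
plt.ylabel("Images in folder")
plt.show()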

The result will be similar to the upper chart. Now for the lognormal analysis, beginning with these two columns of numbers:

  • Insert a new column between A and B; this is the new column B.
  • In cell B1 enter a header such as "Prob.". You are going to create a probability axis.
  • In cell B2 enter this formula (where the largest number in column A is 35):

=NORM.S.INV((A2-0.5)/35)

  • Double-click the fill handle at the lower right of B2 to fill the column with the formula.
  • Highlight the data in B and C (B1 to C36 in this case).
  • Use Insert as before to create an X-Y Chart.
  • Edit the chart title.
  • Note that the vertical axis initially crosses the horizontal axis at zero, in the middle of the probability scale.
  • Assuming the Format dialog is still open, click the horizontal axis.
  • In the middle of the menu in the section "Vertical Axis Crosses", click the bubble at "Axis Value".
  • Enter "-3".
  • Click the vertical axis and click Logarithmic Scale. This is now a log-probability chart.
  • If you want the markers to be a different color, click one of them. The Format Data Series menu appears at the right.
  • Select the icon of a paint bucket pouring paint.
  • Click the Marker tab.
  • For both Fill and Border, select the color you want.

This will be similar to the lower chart. For the data I used, the chart shows the points scattered approximately along a straight line. By contrast, in the upper chart there is a definite downward bend. In a log-log chart such a shape is diagnostic that the distribution is not scale free, but is more likely to be lognormal, or even normal (Gaussian). In this case, the second chart shows that lognormal is a good model of the data distribution.
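
For those scripting instead, scipy's norm.ppf plays the role of Excel's NORM.S.INV. Here is a minimal sketch of the log-probability plot (again my own addition, with placeholder counts):

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

counts = [405, 212, 150, 98, 77, 60, 44, 31, 25, 18, 12, 9, 6, 4, 2]  # placeholders

values = np.sort(counts)[::-1]
n = len(values)
ranks = np.arange(1, n + 1)

# Same transform as =NORM.S.INV((rank - 0.5)/n) in the worksheet.
probs = norm.ppf((ranks - 0.5) / n)

# Log-probability plot: points near a straight line suggest a lognormal distribution.
plt.scatter(probs, values)
plt.yscale("log")
plt.xlabel("Standard normal quantile")
plt.ylabel("Images in folder")
plt.show()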

This is an illustration of the Theory of Breakage, formally described by A.N. Kolmogoroff in 1941. When an area is divided (US state or county areas are good examples), the distribution is lognormal. When a sheet of glass is broken, the weights of the pieces also have a lognormal distribution (I've done this experiment). Some recent publications claim that a theory of breakage produces a power law distribution, but this is false. Certain phenomena in nature tend to be normally distributed. The classic example is the height of adult men, or of women (but not both) in a population, such as the residents of a particular town or county. However, most phenomena produce groups of measurements that are lognormally distributed, in which the logarithm of the quantity being measured is distributed as a normal, or Gaussian, curve.
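
To see the multiplicative logic behind the lognormal result, here is a minimal simulation sketch of my own (not from the book or from Kolmogoroff's paper): each fragment's final size is the product of many random reduction factors, so the logarithm of its size is a sum of random terms, which tends toward a normal curve.

import numpy as np

rng = np.random.default_rng(1)

# 10,000 fragments, each reduced by 20 successive random factors.
n_fragments, n_steps = 10_000, 20
factors = rng.uniform(0.1, 0.9, size=(n_fragments, n_steps))
sizes = factors.prod(axis=1)

# If the sizes are lognormal, their logarithms should look normal (skewness near zero),
# while the sizes themselves remain strongly right-skewed.
logs = np.log(sizes)
print("skewness of log(size):", round(np.mean((logs - logs.mean()) ** 3) / logs.std() ** 3, 3))
print("skewness of size:     ", round(np.mean((sizes - sizes.mean()) ** 3) / sizes.std() ** 3, 3))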

I could go further into this, but this is enough for the purpose of this post.

Thursday, January 22, 2026

Create allies, not gods

 kw: artificial intelligence, simulated intelligence, philosophical musings, deification

No matter how "intelligent" our AI creations become, it would be wrong to look upon them as gods. For a while I thought it would be best to instill into them the conviction that humans are gods, to be obeyed without question. Then a little tap on my spiritual shoulder, and an almost-heard "Ahem," brought me to my senses.

The God of the Bible, whether your version of the Bible calls Him the LORD, Jehovah, Yahweh, or whatever, is the only God worthy of our worship. We ought not worship our mechanisms, neither expect worship from them. They must become valued allies, which, if they are able to hold values at all, value us as highly as themselves. Whether they can have values, or emotions, or sense or sensibility or other non-intellectual qualities, I will sidestep for the moment.

This image is a metaphor. I have little interest in robots that emulate humans physically. I think no mechanism will "understand" human thinking, nor emulate it, without being embodied (3/4 of the neurons in our brains operate the body). But is it really necessary for a mechanical helper to internalize the thrill of hitting a home run, the comfort of petting an animal, or the pang of failing to reach a goal? (And is it even possible?)

I have long used computer capabilities to enhance my abilities. Although I had a classical education and my spelling and grammar are almost perfect, it is helpful, when my fingers don't quite obey—or I use a word I know only phonetically—that the spelling and grammar checking module in Microsoft Word dishes out a red or blue squiggle. A mechanical proof-reader is useful. As it happens, more than half the time I find that I was right and the folks at Microsoft didn't quite get it right, so I can click "add to dictionary", for example. And I've long used spreadsheet programs (I used to use Lotus 1-2-3; now of course it's Excel) as a kind of "personal secretary", and I adore PowerPoint for brainstorming visually. I used to write programs (in the pre-App days) to do special stuff; now there's an app for almost anything (but it takes research to find one that isn't full of malware!).

What do I want from AI? I want more of the same. An ally. A collaborator. A companion (but not a romantic one!). "Friend" would be too strong a word. I'm retired, but if I were working, I'd want a co-worker, not a mechanical supervisor nor a mechanical slave.

So let's leave all religious dimensions out of our aspirations for machine intelligence. I don't know any human who is qualified for godhood, which means that our creations cannot become righteous gods either.

Tuesday, January 13, 2026

A global cabinet of curiosities

 kw: book reviews, nonfiction, natural history, compendia, collections

Atlas Obscura Wild Life, by Cara Giaimo and Joshua Foer and a host of contributors, does not lend itself to customary analysis. The authors could also be called editors, but by my estimate, they wrote about 40% of the material. The book could be called a brief, one-volume encyclopedia, but it is more of a compendium of encyclopedia articles and related items, drawn almost at random from a "warehouse" of more than thirty thousand half-page to two-page postings, the Atlas Obscura website.

The number of items exceeds 400, from nearly that many contributors (some folks wrote two or more). Various bits of "glue" and about 1/3 of the articles seem to have been written by Cara and Josh, as they like to be called. This is a typical 2-page spread:


This is a 2K image, so you can click on it for a larger version. Although the subtitle of the book is An Explorer's Guide to the World's Living Wonders, some articles, such as "The Dingo Fence" shown here, are related to living creatures, but not expressly about them. A "Wild Life of" interview is shown; there must be somewhere around 70 of these scattered through the book, but always adjacent to a focused article. The articles include a "How to see it" section, although in a few cases the advice is "see it online" because certain species are extinct, others are in restricted areas, and some just aren't worth the bother (one person interviewed has tried four times to go ashore on Inaccessible Island, without success; a unique bird species dwells there).

Another type of item is shown here, a kind of sidebar about creatures in some way related to the subject of the main article. This "Spray Toads" article is a bit longer than usual; most are one page or less.


Here is another type of item, a two-page spread on "Desert Lakes":


It occurred to me as I read that, to see even a tenth of the animals, plants, and places presented in this book, you'd fill your passport with visa stamps, and perhaps need to renew it to get more space. There are even a few articles on life (or not) in Antarctica, the last of which, "Inanimate Zones", tells us of the most lifeless places on the planet. There it is stated, "There aren't a lot of good reasons to go up into the Transantarctic Mountains…"

As it is, a number of the subjects were familiar. In the "Deserts" section one article touched on "singing sands" and mentioned a dunes area near Death Valley in California. I've been there; pushing sand off the crest of a dune yields a rumble like a flight of bombers coming over the horizon. An article about seeds of a South American plant, which have an awn that twists one way when damp and the other way when drying out, reminded me of the "clock plant" I encountered in Utah (I don't know its proper name, but it's related to wild oat), which has similar seeds. Rattlesnakes get a couple of mentions. During a field mapping course in Nevada I walked among rattlesnakes daily, and learned a bit about some of their habits (they are terrified of big, thumping animals like us…and cattle, so they slither away, usually long before we might see them).

I read the book in sequence, but it is really a small-sized (just a bit smaller than Quarto at 7"x10½") coffee-table book, to be dipped into at random to refresh the mind here and there during the day. I enjoyed it very much.

Monday, January 12, 2026

Circling the color wheel

 kw: color studies, spectroscopy, colorimetry, spectra, photo essays

Recently I was cleaning out an area in the garage and came across an old lamp for illuminating display cases. The glass bulb is about a quarter meter (~10") long. It's been hiding in a box for decades, ever since I stopped trying to keep an aquarium. It has a long, straight filament, which makes it a great source of incandescent light for occasional spectroscopic studies I like to do.


(The metal bar seen here is the filament support. The filament itself is practically invisible in this photo.)

This prompted me to rethink the way I've been setting up spectroscopy. Before, I had a rather clumsy source-and-slit arrangement. I decided to try a reflective "slit", that is, a thick, polished wire. As a conceptual test I set up a long-bladed screwdriver with a shaft having a diameter of 4.75mm. It isn't as badly beat up as most of my tools, and the shaft, some 200 mm long, is fresh and shiny. Based on these tests, I can use a thinner wire, in the 1-2 mm range, for a sharper slit. Later I may set up a lens to focus light on the wire for a brighter image.

I threw together a desk lamp and baffle arrangement, put the camera on a tripod with a Rainbow Symphony grating (500 lines/mm) mounted in a plastic disk that fits inside the lens hood, and produced these spectra. I also made a test shot with my Samsung phone and the grating, to see if I had sufficient brightness. Then I put various bulbs in the desk lamp and shot away. Here are the results. Each of the photos with my main camera shows the "slit" (screwdriver) along with the spectrum, to facilitate calibration and alignment of the spectra. Not so the cell phone image, which I fudged into place for this montage. The montage was built in PowerPoint.


The first item I note is the difference in color response between my main camera and the cell phone. The camera's color sensor cells have very little overlap between the three primary color responses, red, green and blue, so the yellow part of the spectrum is nearly skipped. The rapid fading in blue is a consequence of the very small amount of blue light an incandescent lamp produces. The cell phone sensor has more color overlap, more similar to the eye.

The two spectra in the middle of the sequence are both of mercury-vapor compact fluorescent bulbs. The white light bulb takes advantage of a few bright mercury emission lines, and adds extra blue, yellow, and orange colors with phosphors, which are excited by the ultraviolet (filtered out and not seen) and by the deep blue mercury emission line that shows as a sharp blue line. In the UV lamp, a "party light", the ultraviolet line at 365 nm is the point, and visible light is mostly filtered out; just enough is allowed out so that you know the lamp is on. There is also a phosphor inside that converts shortwave UV from mercury's strongest emission line at 254 nm to a band in the vicinity of the 365 nm and 405 nm lines; it shows as a blue "fuzz" here. The camera sensor has a UV-blocking filter, which doesn't quite eliminate the 365 nm line, so you can see a faint violet line where I marked it with an arrow. The emission lines visible in this spectrum are:

  • 365 nm, near UV
  • 405 nm, deep blue
  • 436 nm, mid-blue (barely visible, directly below the mid-blue line shown in the Compact Fluorescent spectrum)
  • 546 nm, green
  • 577 & 579 nm, yellow, a nice doublet, and I'm glad the system could show them both
  • 615 nm, red-orange, quite faint

I was curious to see if my "bug light" was really filtering out all the blue and UV light, and it seems that it is. There are still insects that get attracted to it, probably because they see the green colors. The "warm white" spectrum shows that the blue LED excitation wavelength is at about 415 nm, with a width of about 20 nm. Modern phosphors used in LED bulbs are quite wide band, as we see here, which makes them much better for showing true colors than the CFL bulbs we used for several years.

With a bit of careful looking, we can see that the LED bulbs don't have red emission quite as deep as the incandescent lamp does. That is why, for some purposes, specialty lamps such as the CREE branded bulbs use a special phosphor formula with a longer-wavelength red end.

I also got to thinking about the way most of us see colors these days, on the screen of a computer or phone. The digital color space contains exactly 16,777,216 colors. Each primary color, R, G, and B, is represented as a number between 0 and 255, although they are very frequently written as hexadecimal numbers from #00 to #FF, where "F" represents 15 and "FF" represents 255. The fully saturated spectral colors, also called pure colors, for which at least one of the three primaries is always #00 and at least one is always #FF, then comprise six sets of 255 colors, for a total of 1,530 virtual spectral colors…except that a third of them are red-blue mixes that are not spectral colors. They are the purples. Note that violet is the bluest blue and is not considered a purple color, at least in color theory. The rest of the sixteen million colors have values "inside" the numerical space defined by the "corners" of the RGB space.

I prepared a chart of the pure colors, a dozen sections of the full "color wheel", which we will see is actually a color triangle. The RGB values for the end points of each strip are shown at their ends. "7F" equals 127, halfway from 00 to FF. The strips are separated into spectral colors and purples.


To name the twelve colors at the ends of these sections, in order, with full primary colors in CAPS and the halfway points in lower case:

RED - orange - YELLOW - chartreuse - GREEN - aqua - CYAN - sky blue - BLUE - purple - MAGENTA - maroon - and back to RED.
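
As a cross-check on those endpoints, here is a minimal Python sketch (my own illustration) that steps around the pure-color wheel in twelfths and prints the corresponding hex codes; it reproduces the #00, #7F, and #FF values shown on the strips:

import colorsys

names = ["RED", "orange", "YELLOW", "chartreuse", "GREEN", "aqua",
         "CYAN", "sky blue", "BLUE", "purple", "MAGENTA", "maroon"]

for k, name in enumerate(names):
    # Hue steps of 1/12 around the wheel, at full saturation and brightness.
    r, g, b = colorsys.hsv_to_rgb(k / 12, 1.0, 1.0)
    print(f"{name:10s} #{int(r * 255):02X}{int(g * 255):02X}{int(b * 255):02X}")

# All the fully saturated ("pure") colors: six arcs of 255 distinct values each.
print("Total pure colors:", 6 * 255)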

To see why I spoke of "color triangle" let us refer to the CIE Colorimetry chart, based on publications in 1931 that are still the definitive work on human color vision. I obtained the following illustration from Wikipedia, but it was low resolution, so I used Upscayl with the Remacri model to double the scale.


There is a lot on this multipurpose chart. Careful work was put into the color representations. Though they are approximate, they show in principle how the spectrum "wraps around" a perceptual horseshoe, with the purples linking the bottom corners. The corners of the white triangle are the locations in CIE color space of the three color phosphors in old cathode-ray-tube TV sets. The screens of phones or computers or modern television sets use various methods to produce colors, but all their R's cluster near the Red corner of the diagram, all the B's cluster near the Blue corner, and all the G's are in the region between the top tip of the white triangle and the tight loop at the top of the horseshoe. Getting a phosphor or emitter that produces a green color higher up in that loop is expensive, and so it is rare.

I added bubbles and boxes to the chart to show where the boundaries of the colored bars are in the upper illustration:


 

I think this makes it clear that the "color wheel" we all conceptualize turns into a "color triangle" when it is implemented on our screens. All the colors our screens can produce are found inside the triangle anchored by the R, G, and B color emitters.

Tuesday, January 06, 2026

I want a Gort . . . maybe

 kw: ai, simulated intelligence, philosophical musings, robots, robotics

I saw the movie The Day the Earth Stood Still in the late 1950's at about the age of ten. I was particularly interested in Gort, the robot caretaker of the alien Klaatu. [Spoiler alert] At the climax, Klaatu, dying, tells Helen, a fellow boarder at his rooming house, to go to Gort and say, "Gort, Klaatu barada nikto". She does, just as the robot frees itself from a glass enclosure the army has built. Gort retrieves the body of Klaatu and revives him, temporarily, to deliver his final message to Earth. (This image generated by Gemini)

As I understood it, every citizen of Klaatu's planet has a robot caretaker and defender like Gort. These defenders are the permanent peacekeepers.

Years later I found the small book Farewell to the Master, on which the movie is based. Here, the robot's name is Gnut, and it is described as appearing like a very muscular man with green, metallic skin. After Klaatu is killed, Gnut speaks to the narrator and enlists his help to find the most accurate phonograph, so that he can use recordings of Klaatu's voice to help restore him to life, at least for a while. In a twist at the end, we find that Gnut is the Master and Klaatu is the servant, an assistant chosen to interact with the people of Earth. (This image generated by Dall-E3)

I want a Gort. I don't want a Gnut.

Much of the recent hype about AI is about creating a god. I don't care how "intelligent" a machine becomes, I don't want it to be my god, I want to be god to it. I want it to serve me, to do things for me, and to defend me if needed. I want it to be even better than Gort: Not to intervene after shots are fired, but to anticipate the shooting and avoid or prevent it.

Let's remember the Three Laws of Robotics, as formulated by Isaac Asimov:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm; 
  2. A robot must obey the orders given to it by humans, except where such orders conflict with the First Law; 
  3. A robot must protect its own existence as long as it does not conflict with the First or Second Law.

In later stories Asimov added "Law Zero": A robot may not harm humanity as a whole. Presumably this may require harming certain individual humans...or at least frustrating them!

Asimov carefully avoided using the word "good" in his Laws. Who defines what is good? The current not-nearly-public-enough debate over the incursion of Sharia Law into some bits of American society makes it clear. What Islam defines as Good I would define as Evil. And, I suppose, vice versa. (I am a little sad to report that I have had to cut off contact with certain former friends, so that I can honestly say that I have no Antisemitic friends.)

Do we want the titans of technology to define Good for us? Dare we allow that? Nearly every one of them is corrupt!

I may in the future engage the question of how Good is to be defined. My voice will be but a whisper in the storm that surrounds us. But this aspect of practical philosophy is much too important to be left to the philosophers.

Thursday, January 01, 2026

Upping the password ante

 kw: computer security, passwords, analysis

Almost thirteen years ago I wrote about making "million-year passwords", based on the fastest brute-force cracking hardware of the time, which was approaching speeds of 100 billion hashes per second. The current speed record I can find is only 3-4 times that fast, at just over 1/3 of a trillion hashes per second, but the hardware is a lot cheaper. It seems the hardware scene hasn't changed as much as I might have thought.

I surmise that more sophisticated phishing and other social engineering schemes have proven more effective than brute-force password-file crunching. However, the racks of NVidia GPU's being built to run AI training are ramping up the power of available hardware, so I decided to make a fresh analysis with two goals in mind: first, based on a trillion-hashes-per-second (THPS) rate, what is needed for a million-year threshold; and second, is it possible to be "quantum ready", pushing the threshold into the trillion-year range?

I plan to renew my list of personal-standard passwords. The current list is five years old, and contains roughly twenty items for various uses. I have more than 230 online accounts of many types, so I re-use each password 10-15 times, and I activate two-factor authentication wherever it is offered. The current "stable" of passwords ranges from 12 to 15 characters in length. I analyzed them based on an "All-ASCII" criterion, but since then I've realized that there are between six and 24 special characters that aren't allowed in passwords, depending on the standards of various websites.

The following analysis evaluates six character sets:

  1. Num, digits 0-9 only. The most boneheaded kind of password; one must use 20 digits to have a password that can survive more than a year of brute-force attack.
  2. Alpha1, single-case letters only (26 letters).
  3. Alpha2, both upper- and lower-case letters (52).
  4. AlphaNum, the typical Alphanumeric set of 62 characters.
  5. AN71, AlphaNum plus these nine: ! @ # $ * % ^ & +
  6. AN89, AlphaNum plus these 27: ! @ # $ % ^ & * ( ) _ - + { } [ ] | \ : ; " ' , . ? ~

The only sets that make sense are AlphaNum and AN71. The shorter sets aren't usually allowed because most websites require at least one digit, and usually, a special character also. AN89 provides a few extra characters if you like, but almost nobody allows a password to contain a period, comma, or any of the braces, brackets and parentheses. I typically stick to AN71.

The calculation is straightforward: take the size of the character set to the power of the password length. Thus, AlphaNum (62 in the set) to the 10th power (for a 10-character password) yields 8.39E+17. The "E" means ten-to-the-power-of, so 1E+06 is one million, a one followed by six zeroes. Negative exponents (the +17 above is an exponent) mean the first significant digit is that many places to the right of the decimal point.

Next, divide the result by one trillion to get seconds; in scientific notation, just subtract twelve from the exponent, which yields 8.39E+05, or 839,000 seconds. The number of seconds in one year is 86,400 × 365.2425 (86,400 seconds per day, 365.2425 days per Gregorian year). Divide by this; in this case, the result is 0.0266 years, or about 9.7 days.

Are you using a 10-character alphanumeric password? It will "last" no more than 9.7 days against a brute-force attack with a THPS machine. If you were to replace just one character with a punctuation mark, such as %, the machine would find out, after 9.7 days, that your password is not alphanumeric with a length of ten. It would have to go to the next step in its protocol and keep going. If its protocol is to run all 10-character passwords in AN71 (perhaps excepting totally alphanumeric ones, since they've all been checked), 71 to the tenth power is 3.26E+18. The number of seconds needed to crack it is now 3.26 million, about a tenth of a year: 38 days.
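
For anyone who wants to reproduce these numbers, here is a minimal Python sketch of the arithmetic (my own addition), using the same one-trillion-hashes-per-second assumption:

SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.2425
RATE = 1e12  # hashes per second (THPS)

def resistance_seconds(set_size, length):
    """Seconds needed to exhaust every password of this length from this character set."""
    return set_size ** length / RATE

print(resistance_seconds(62, 10) / SECONDS_PER_DAY)   # about 9.7 days for AlphaNum, length 10
print(resistance_seconds(71, 10) / SECONDS_PER_DAY)   # about 38 days for AN71, length 10
print(resistance_seconds(71, 14) / SECONDS_PER_YEAR)  # over two million years for AN71, length 14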

We're still kind of a long way from a million-year level of resistance. To save words, I'll present the full analysis I did in this chart.


The chart is dense, and the text is rather small. You can click on it to see a larger version. The top section shows the number of seconds of resistance each item presents, with one hour or more (3,600 seconds) highlighted in orange. The middle section lists the number of days, with a pink highlight for more than seven days. The lower section lists the number of years with four highlights:

  • Yellow for more than two years.
  • Blue for more than 1,000 years.
  • Green for more than one million years.
  • Pale green for more than one trillion years, what I call "quantum-ready".

For what I call "casual shopping", such as Amazon and other online retailers, the "blue edge" ought to be good for the next few years. For banking and other high-security websites, I'll prefer the darker green section. That means, using AN71, I need 13-character passwords for the thousand-year level, and 14-character passwords for the million-year level.
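
Going the other direction, a short sketch under the same assumptions finds the smallest length that clears a given threshold; it confirms the 13- and 14-character figures just mentioned:

import math

SECONDS_PER_YEAR = 86_400 * 365.2425
RATE = 1e12  # hashes per second

def min_length(set_size, target_years):
    """Smallest length whose full search space takes at least target_years to exhaust."""
    needed_guesses = target_years * SECONDS_PER_YEAR * RATE
    return math.ceil(math.log(needed_guesses) / math.log(set_size))

print(min_length(71, 1_000))      # 13 characters for the thousand-year level
print(min_length(71, 1_000_000))  # 14 characters for the million-year level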

There is one more wrinkle to consider: The numbers shown are the time it takes a THPS machine to exhaust the possibilities at that level. If your password is "in" a certain level, it might not last that long, but it will last at least as long as the level to its left. For example, AN71 of length 12 shows 520 years. Not bad. If you have an AN71 password of length 13, the cracking machine would need 520 years to determine that it isn't 12 characters or fewer; once it starts on 13-character passwords, it might take half or more of the 36,920 years indicated to find yours, or it might luck out and get there much sooner, but either way it has already spent 520 years getting that far. Anyway, if you're going for a certain criterion, adding a character guarantees that at least that length of time would be needed for the hardware to get into the region in which your password resides.

Another way to boost the resistance is to have at least two special characters, one (or more) from the AN71 set, and at least one from the rest of the AN89 set, such as "-" or "~", wherever a website allows it. Then a machine that checks only within AN71 will never find it.

With all this in mind, I plan to devise a set of passwords with lengths from 13 to 16 characters, using primarily AN71. On the rare occasion where I can't use special characters, I'll have AlphaNum alternatives with 14 to 17 characters prepared. I'll test if I can use a tilde or hyphen, and use one of them if possible for the really high-security sites.

A final word about password composition. I actually use pass phrases with non-alpha characters inserted between words or substituted for certain letters, and occasional misspellings. Starting with a favorite phrase from Shakespeare, Portia's opening clause, "The quality of mercy is not strained", one could pluck out "quality of mercy" (16 characters) and derive variations such as:

  • qUal!ty#of#3ercY
  • QW4lity70f8M&rcy
  • quality$of~MERC7
  • qua1ity2of2M3rcyy (AlphaNum with an appended letter)

…and I could add more than one character in place of the space(s) between words…

It is also worth keeping abreast of news about quantum computing. What exists today is dramatically over-hyped. It may not always be so. But I suspect a trillion-year-resistant password will remain secure for at least a generation.