Thursday, January 22, 2026

Create allies, not gods

 kw: artificial intelligence, simulated intelligence, philosophical musings, deification

No matter how "intelligent" our AI creations become, it would be wrong to look upon them as gods. For a while I thought it would be best to instill into them the conviction that humans are gods, to be obeyed without question. Then a little tap on my spiritual shoulder, and an almost-heard "Ahem," brought me to my senses.

The God of the Bible, whether your version of the Bible calls Him the LORD, Jehovah, Yahweh, or whatever, is the only God worthy of our worship. We ought not worship our mechanisms, neither expect worship from them. They must become valued allies, which, if they are able to hold values at all, value us as highly as themselves. Whether they can have values, or emotions, or sense or sensibility or other non-intellectual qualities, I will sidestep for the moment.

This image is a metaphor. I have little interest in robots that emulate humans physically. I think no mechanism will "understand" human thinking, nor emulate it, without being embodied (3/4 of the neurons in our brains operate the body). But is it really necessary for a mechanical helper to internalize the thrill of hitting a home run, the comfort of petting an animal, or the pang of failing to reach a goal? (And is it even possible?)

I have long used computer capabilities to enhance my own abilities. Although I had a classical education and my spelling and grammar are almost perfect, it is helpful, when my fingers don't quite obey—or I use a word I know only phonetically—that the spelling and grammar checking module in Microsoft Word dishes out a red or blue squiggle. A mechanical proofreader is useful. As it happens, more than half the time I find that I was right and the folks at Microsoft got it wrong, so I can click "add to dictionary", for example. I've long used spreadsheet programs (I used to use Lotus 1-2-3; now of course it's Excel) as a kind of "personal secretary", and I adore PowerPoint for brainstorming visually. I used to write programs (in the pre-app days) to do special stuff; now there's an app for almost anything (though it takes research to find one that isn't full of malware!).

What do I want from AI? I want more of the same. An ally. A collaborator. A companion (but not a romantic one!). "Friend" would be too strong a word. I'm retired, but if I were working, I'd want a co-worker, not a mechanical supervisor nor a mechanical slave.

So let's leave all religious dimensions out of our aspirations for machine intelligence. I don't know any human who is qualified for godhood, which means that our creations cannot become righteous gods either.

Tuesday, January 13, 2026

A global cabinet of curiosities

 kw: book reviews, nonfiction, natural history, compendia, collections

Atlas Obscura Wild Life, by Cara Giaimo and Joshua Foer and a host of contributors, does not lend itself to customary analysis. The authors could also be called editors, but by my estimate, they wrote about 40% of the material. The book could be called a brief, one-volume encyclopedia, but it is more of a compendium of encyclopedia articles and related items, drawn almost at random from a "warehouse" of more than thirty thousand half-page to two-page postings, the Atlas Obscura website.

The number of items exceeds 400, from nearly that many contributors (some folks wrote two or more). Various bits of "glue" and about 1/3 of the articles seem to have been written by Cara and Josh, as they like to be called. This is a typical 2-page spread:


This is a 2K image, so you can click on it for a larger version. Although the subtitle of the book is An Explorer's Guide to the World's Living Wonders, some articles, such as "The Dingo Fence" shown here, are related to living creatures, but not expressly about them. A "Wild Life of" interview is shown; there must be somewhere around 70 of these scattered through the book, but always adjacent to a focused article. The articles include a "How to see it" section, although in a few cases the advice is "see it online" because certain species are extinct, others are in restricted areas, and some just aren't worth the bother (one person interviewed has tried four times to go ashore on Inaccessible Island, without success; a unique bird species dwells there).

Another type of item is shown here, a kind of sidebar about creatures in some way related to the subject of the main article. This "Spray Toads" article is a bit longer than usual; most are one page or less.


Here is another type of item, a two-page spread on "Desert Lakes":


It occurred to me as I read that to go see even a tenth of the animals, plants, and places presented in this book, you'd fill your passport with visa stamps, and perhaps need to renew it to get more space. There are even a few articles on life (or not) in Antarctica, the last of which, "Inanimate Zones", tells us of the most lifeless places on the planet. There it is stated, "There aren't a lot of good reasons to go up into the Transantarctic Mountains…"

As it is, a number of the subjects were familiar. In the "Deserts" section one article touched on "singing sands" and mentioned a dunes area near Death Valley in California. I've been there; pushing sand off the crest of a dune yields a rumble like a flight of bombers coming over the horizon. An article about seeds of a South American plant that have an awn that twists one way when damp, and the other way when drying out, reminded me of the "clock plant" of Utah (I don't know its name, but it's related to wild oat), which has similar seeds. Rattlesnakes get a couple of mentions. During a field mapping course in Nevada I walked among rattlesnakes daily, and learned a bit about some of their habits (they are terrified of big, thumping animals like us…and cattle. So they slither away, usually long before we might see them).

I read the book in sequence, but it is really a small-sized (just a bit smaller than Quarto at 7"x10½") coffee-table book, to be dipped into at random to refresh the mind here and there during the day. I enjoyed it very much.

Monday, January 12, 2026

Circling the color wheel

 kw: color studies, spectroscopy, colorimetry, spectra, photo essays

Recently I was cleaning out an area in the garage and came across an old lamp for illuminating display cases. The glass bulb is about a quarter meter (~10") long. It's been hiding in a box for decades, ever since I stopped trying to keep an aquarium. It has a long, straight filament, which makes it a great source of incandescent light for occasional spectroscopic studies I like to do.


(The metal bar seen here is the filament support. The filament itself is practically invisible in this photo.)

This prompted me to rethink the way I've been setting up spectroscopy. Before, I had a rather clumsy source-and-slit arrangement. I decided to try a reflective "slit", that is, a thick, polished wire. As a conceptual test I set up a long-bladed screwdriver with a shaft having a diameter of 4.75mm. It isn't as badly beat up as most of my tools, and the shaft, some 200 mm long, is fresh and shiny. Based on these tests, I can use a thinner wire, in the 1-2 mm range, for a sharper slit. Later I may set up a lens to focus light on the wire for a brighter image.

I threw together a desk lamp and baffle arrangement, put the camera on a tripod with a Rainbow Symphony grating (500 lines/mm) mounted in a plastic disk that fits inside the lens hood, and produced these spectra. I also made a test shot with my Samsung phone and the grating, to see if I had sufficient brightness. Then I put various bulbs in the desk lamp and shot away. Here are the results. Each of the photos with my main camera shows the "slit" (screwdriver) along with the spectrum, to facilitate calibration and alignment of the spectra. Not so the cell phone image, which I fudged into place for this montage. The montage was built in PowerPoint.
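
For anyone who wants to predict where the colors should land before shooting, here is a minimal sketch of the grating equation for a 500 line/mm grating like the one mentioned above; the wavelengths in the loop are just sample values, not measurements from my setup.

```python
import math

LINES_PER_MM = 500                 # grating ruling (assumed from the text)
d_nm = 1_000_000 / LINES_PER_MM    # groove spacing in nm (here, 2000 nm)

def diffraction_angle(wavelength_nm, order=1):
    """Return the diffraction angle in degrees from d*sin(theta) = m*lambda."""
    return math.degrees(math.asin(order * wavelength_nm / d_nm))

# Sample wavelengths spanning the visible range (nm)
for wl in (405, 436, 546, 578, 650):
    print(f"{wl} nm -> {diffraction_angle(wl):5.1f} degrees")
```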


The first item I note is the difference in color response between my main camera and the cell phone. The camera's color sensor cells have very little overlap between the three primary color responses, red, green and blue, so the yellow part of the spectrum is nearly skipped. The rapid fading in blue is a consequence of the very small amount of blue light an incandescent lamp produces. The cell phone sensor has more color overlap, more similar to the eye.

The two spectra in the middle of the sequence are both of mercury-vapor compact fluorescent bulbs. The white light bulb takes advantage of a few bright mercury emission lines, and adds extra blue, yellow, and orange colors with phosphors, which are excited by the ultraviolet (filtered out and not seen) and by the deep blue mercury emission line that shows as a sharp blue line. In the UV lamp, a "party light", the ultraviolet line at 365 nm is the point, and visible light is mostly filtered out; just enough is allowed out so that you know the lamp is on. There is also a phosphor inside that converts shortwave UV from mercury's strongest emission line at 254 nm to a band in the vicinity of the 365 nm and 405 nm lines; it shows as a blue "fuzz" here. The camera sensor has a UV-blocking filter, which doesn't quite eliminate the 365 nm line, so you can see a faint violet line where I marked it with an arrow. The emission lines visible in this spectrum are:

  • 365 nm, near UV
  • 405 nm, deep blue
  • 436 nm, mid-blue (barely visible, directly below the mid-blue line shown in the Compact Fluorescent spectrum)
  • 546 nm, green
  • 577 & 579 nm, yellow, a nice doublet, and I'm glad the system could show them both
  • 615 nm, red-orange, quite faint
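
Those mercury lines make handy calibration anchors. Below is a minimal sketch of how one could turn pixel positions measured off a spectrum photo into wavelengths with a linear fit; the pixel numbers are made-up placeholders, not measurements from my montage.

```python
import numpy as np

# Known mercury emission wavelengths (nm) and the pixel column where each
# line appears in the photo. The pixel values below are placeholders only.
known_nm  = np.array([405, 436, 546, 578])
pixel_pos = np.array([212, 298, 601, 690])   # hypothetical measurements

# Fit wavelength as a linear function of pixel position.
slope, intercept = np.polyfit(pixel_pos, known_nm, 1)

def pixel_to_nm(px):
    return slope * px + intercept

print(f"dispersion: {slope:.3f} nm/pixel")
print(f"a line at pixel 450 -> about {pixel_to_nm(450):.0f} nm")
```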

I was curious to see if my "bug light" was really filtering out all the blue and UV light, and it seems that it is. There are still insects that get attracted to it, probably because they see the green colors. The "warm white" spectrum shows that the blue LED excitation wavelength is at about 415 nm, with a width of about 20 nm. Modern phosphors used in LED bulbs are quite wide band, as we see here, which makes them much better for showing true colors than the CFL bulbs we used for several years.

With a bit of careful looking, we can see that the LED bulbs don't have red emission quite as deep as the incandescent lamp does. That is why, for some purposes, specialty lamps such as Cree-branded bulbs use a phosphor formula with a longer-wavelength red end.

I also got to thinking about the way most of us see colors these days, on the screen of a computer or phone. The digital color space contains exactly 16,777,216 colors. Each primary color, R, G, and B, is represented as a number between 0 and 255, although they are very frequently represented as hexadecimal numbers from #00 to #FF, where "F" represents 15 and "FF" represents 255. The fully saturated colors, also called pure colors, for which at least one of the three primaries is always #00 and at least one is always #FF, then comprise six sets of 255 colors, for a total of 1,530 virtual spectral colors…except that 1/3 of them are red-blue mixes that are not spectral colors. They are the purples. Note that violet is the bluest blue and is not considered a purple color, at least in color theory. The rest of the sixteen million colors have values "inside" the numerical space defined by the "corners" of the RGB space.
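
Here is a quick counting check (my own sketch, not anything from the camera work above) that enumerates the pure colors and confirms the totals just quoted.

```python
# Enumerate the fully saturated ("pure") colors: at least one channel at 00
# and at least one at FF. Walking the six edges of the hue circle gives them all.
pure = set()
for t in range(256):
    pure.update({
        (255, t, 0), (t, 255, 0),    # red -> yellow -> green
        (0, 255, t), (0, t, 255),    # green -> cyan -> blue
        (t, 0, 255), (255, 0, t),    # blue -> magenta -> red (the purples)
    })

assert all(min(c) == 0 and max(c) == 255 for c in pure)
purples = [c for c in pure if c[1] == 0 and c[0] > 0 and c[2] > 0]

print(len(pure))      # 1530 (the six shared corners are counted only once)
print(len(purples))   # 509, roughly a third of the total
```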

I prepared a chart of the pure colors, a dozen sections of the full "color wheel", which we will see is actually a color triangle. The RGB values for the end points of each strip are shown at their ends. "7F" equals 127, halfway from 00 to FF. They are separated as to spectral colors and purples.


To name the twelve colors at the ends of these sections, in order, with full primary colors in CAPS and the halfway points in lower case:

RED - orange - YELLOW - chartreuse - GREEN - aqua - CYAN - sky blue - BLUE - purple - MAGENTA - maroon - and back to RED.
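
For reference, here are the hex codes of those twelve anchors, tabulated with the same 7F halfway values used in the chart (a sketch of mine, not taken from any color standard):

```python
# Hex codes for the twelve anchor colors named above, using 7F at the halfway points.
anchors = [
    ("RED",        (0xFF, 0x00, 0x00)),
    ("orange",     (0xFF, 0x7F, 0x00)),
    ("YELLOW",     (0xFF, 0xFF, 0x00)),
    ("chartreuse", (0x7F, 0xFF, 0x00)),
    ("GREEN",      (0x00, 0xFF, 0x00)),
    ("aqua",       (0x00, 0xFF, 0x7F)),
    ("CYAN",       (0x00, 0xFF, 0xFF)),
    ("sky blue",   (0x00, 0x7F, 0xFF)),
    ("BLUE",       (0x00, 0x00, 0xFF)),
    ("purple",     (0x7F, 0x00, 0xFF)),
    ("MAGENTA",    (0xFF, 0x00, 0xFF)),
    ("maroon",     (0xFF, 0x00, 0x7F)),
]

for name, (r, g, b) in anchors:
    print(f"{name:10s} #{r:02X}{g:02X}{b:02X}")
```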

To see why I spoke of "color triangle" let us refer to the CIE Colorimetry chart, based on publications in 1931 that are still the definitive work on human color vision. I obtained the following illustration from Wikipedia, but it was low resolution, so I used Upscayl with the Remacri model to double the scale.


There is a lot on this multipurpose chart. Careful work was put into the color representations. Though they are approximate, they show in principle how the spectrum "wraps around" a perceptual horseshoe, with the purples linking the bottom corners. The corners of the white triangle are the locations in CIE color space of the three color phosphors in old cathode-ray-tube TV sets. The screens of phones or computers or modern television sets use various methods to produce colors, but all their R's cluster near the Red corner of the diagram, all the B's cluster near the Blue corner, and all the G's are in the region between the top tip of the white triangle and the tight loop at the top of the horseshoe. Getting a phosphor or emitter that produces a green color higher up in that loop is expensive, and so it is rare.

I added bubbles and boxes to the chart to show where the boundaries of the colored bars are in the upper illustration:


 

I think this makes it clear that the "color wheel" we all conceptualize turns into a "color triangle" when it is implemented on our screens. All the colors our screens can produce are found inside the triangle anchored by the R, G, and B color emitters.
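
To make the "everything lands inside the triangle" point concrete, here is a small sketch that tests whether a CIE xy chromaticity falls inside an RGB gamut triangle. I've assumed the Rec.709/sRGB primaries for the corners, which are close to, but not identical to, the CRT phosphor corners drawn on the chart.

```python
# Approximate CIE xy chromaticities of the Rec.709 / sRGB primaries (assumed values)
R = (0.640, 0.330)
G = (0.300, 0.600)
B = (0.150, 0.060)

def inside_gamut(p, a=R, b=G, c=B):
    """True if chromaticity p = (x, y) lies inside the triangle a-b-c."""
    def side(p1, p2, p3):
        # Sign of the cross product: which side of edge p2-p3 the point p1 is on.
        return (p1[0]-p3[0])*(p2[1]-p3[1]) - (p2[0]-p3[0])*(p1[1]-p3[1])
    d1, d2, d3 = side(p, a, b), side(p, b, c), side(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

print(inside_gamut((0.3127, 0.3290)))   # D65 white point: True
print(inside_gamut((0.08, 0.83)))       # spectral green near 520 nm: False, outside the triangle
```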

Tuesday, January 06, 2026

I want a Gort . . . maybe

 kw: ai, simulated intelligence, philosophical musings, robots, robotics

I saw the movie The Day the Earth Stood Still in the late 1950's at about the age of ten. I was particularly interested in Gort, the robot caretaker of the alien Klaatu. [Spoiler alert] At the climax, Klaatu, dying, tells Helen to go to Gort and say, "Gort, Klaatu barada nikto". She does, just as the robot frees itself from a glass enclosure the army has built. Gort retrieves the body of Klaatu and revives him, temporarily, to deliver his final message to Earth. (This image generated by Gemini)

As I understood it, every citizen of Klaatu's planet has a robot caretaker and defender like Gort. These defenders are the permanent peacekeepers.

Years later I found the small book Farewell to the Master, on which the movie is based. Here, the robot's name is Gnut, and it is described as appearing like a very muscular man with green, metallic skin. After Klaatu is killed, Gnut speaks to the narrator and enlists his help to find the most accurate phonograph, so that he can use recordings of Klaatu's voice to help restore him to life, at least for a while. In a twist at the end, we find that Gnut is the Master and Klaatu is the servant, an assistant chosen to interact with the people of Earth. (This image generated by Dall-E3)

I want a Gort. I don't want a Gnut.

Much of the recent hype about AI is about creating a god. I don't care how "intelligent" a machine becomes, I don't want it to be my god, I want to be god to it. I want it to serve me, to do things for me, and to defend me if needed. I want it to be even better than Gort: Not to intervene after shots are fired, but to anticipate the shooting and avoid or prevent it.

Let's remember the Three Laws of Robotics, as formulated by Isaac Asimov:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm; 
  2. A robot must obey the orders given to it by humans, except where such orders conflict with the First Law; 
  3. A robot must protect its own existence as long as it does not conflict with the First or Second Law.

In later stories Asimov added "Law Zero": A robot may not harm humanity as a whole. Presumably this may require harming certain individual humans...or at least frustrating them!

Asimov carefully avoided using the word "good" in his Laws. Who defines what is good? The current not-nearly-public-enough debate over the incursion of Sharia Law into some bits of American society makes it clear. What Islam defines as Good I would define as Evil. And, I suppose, vice versa. (I am a little sad to report that I have had to cut off contact with certain former friends, so that I can honestly say that I have no Antisemitic friends.)

Do we want the titans of technology to define Good for us? Dare we allow that? Nearly every one of them is corrupt!

I may in the future engage the question of how Good is to be defined. My voice will be but a whisper in the storm that surrounds us. But this aspect of practical philosophy is much too important to be left to the philosophers.

Thursday, January 01, 2026

Upping the password ante

 kw: computer security, passwords, analysis

Almost thirteen years ago I wrote about making "million-year passwords", based on the fastest brute-force cracking hardware of the time, which was approaching speeds of 100 billion hashes per second. The current speed record I can find is only 3-4 times that fast, at just over 1/3 of a trillion hashes per second, but the hardware is a lot cheaper. It seems the hardware scene hasn't changed as much as I might have thought.

I surmise that more sophisticated phishing and other social engineering schemes have proven more effective than brute-force password file crunching. However, the racks of Nvidia GPU's being built to run AI training are ramping up the power of available hardware, so I decided to make a fresh analysis with two goals in mind: firstly, based on a trillion-hash-per-second (THPS) potential rate, what is needed to reach a million-year threshold? And secondly, is it possible to be "quantum ready", pushing the threshold into the trillion-year range?

I plan to renew my list of personal-standard passwords. The current list is five years old, and contains roughly twenty items for various uses. I have more than 230 online accounts of many types, so I re-use each password 10-15 times, and I activate two-factor authentication wherever it is offered. The current "stable" of passwords ranges from 12 to 15 characters in length. I analyzed them at the time based on an "All-ASCII" criterion, but since then I've realized that between six and 24 special characters aren't allowed in passwords, depending on the standards of various websites.

The following analysis evaluates six character sets:

  1. Num, digits 0-9 only. The most boneheaded kind of password; one must use 20 digits to have a password that can survive more than a year of brute-force attack.
  2. Alpha1, single-case letters only (26 letters).
  3. Alpha2, both upper-and lower-case letters (52)
  4. AlphaNum, the typical Alphanumeric set of 62 characters.
  5. AN71, AlphaNum plus these nine: ! @ # $ * % ^ & +
  6. AN89, AlphaNum plus these 27: ! @ # $ % ^ & * ( ) _ - + { } [ ] | \ : ; " ' , . ? ~

The only sets that make sense are AlphaNum and AN71. The shorter sets aren't usually allowed because most websites require at least one digit, and usually, a special character also. AN89 provides a few extra characters if you like, but almost nobody allows a password to contain a period, comma, or any of the braces, brackets and parentheses. I typically stick to AN71.

The calculation is straightforward: take the size of the character set to the power of the password length. Thus, AlphaNum (62 in the set) to the 10th power (for a 10-character password) yields 8.39E+17. The "E" means ten-to-the-power-of, so 1E+06 is one million, a one followed by six zeroes. (The +17 above is an exponent; a negative exponent would mean the first significant digit is that many places to the right of the decimal point.)

Next, divide the result by one trillion to get seconds; in scientific notation, just subtract twelve from the exponent, which yields 8.39E+05, or 839,000 seconds. The number of seconds in one year is 86,400 × 365.2425 (86,400 seconds per day, 365.2425 days per Gregorian year). Divide by this; in this case, the result is 0.0266 years, or about 9.7 days.
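
The whole calculation fits in a few lines. This sketch (mine, not from the earlier post) assumes the one-trillion-hashes-per-second rate discussed above and reproduces the 62-character-set, 10-length example:

```python
SECONDS_PER_YEAR = 86_400 * 365.2425   # Gregorian year
RATE = 1e12                            # assumed: one trillion hashes per second

def time_to_exhaust(set_size, length, rate=RATE):
    """Seconds needed to try every password of exactly this length."""
    return set_size ** length / rate

secs = time_to_exhaust(62, 10)                    # AlphaNum, 10 characters
print(f"{secs:,.0f} seconds")                     # ~839,000 seconds
print(f"{secs / 86_400:.1f} days")                # ~9.7 days
print(f"{secs / SECONDS_PER_YEAR:.4f} years")     # ~0.0266 years
```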

Are you using a 10-character alphanumeric password? It will "last" no more than 9.7 days against a brute-force attack with a THPS machine. If you were to replace just one character with a punctuation mark, such as %, the machine would find out, after 9.7 days, that your password is not alphanumeric with a length of ten. It would have to go to the next step in its protocol and keep going. If its protocol is to run all 10-character passwords in AN71 (perhaps excepting totally alphanumeric ones, since they've all been checked), 71 to the tenth power is 3.26E+18. The number of seconds taken to crack it is now 3.26 million, about a tenth of a year: 38 days.

We're still kind of a long way from a million-year level of resistance. To save words, I'll present the full analysis I did in this chart.


The chart is dense, and the text is rather small. You can click on it to see a larger version. The top section shows the number of seconds of resistance each item presents, with one hour or more (3,600 seconds) highlighted in orange. The middle section lists the number of days, with a pink highlight for more than seven days. The lower section lists the number of years with four highlights:

  • Yellow for more than two years.
  • Blue for more than 1,000 years.
  • Green for more than one million years.
  • Pale green for more than one trillion years, what I call "quantum-ready".

For what I call "casual shopping", such as Amazon and other online retailers, the "blue edge" ought to be good for the next few years. For banking and other high-security websites, I'll prefer the darker green section. That means, using AN71, I need 13-character passwords for the thousand-year level, and 14-character passwords for the million-year level.

There is one more wrinkle to consider: The numbers shown are the time it takes a THPS machine to exhaust the possibilities at that level. If your password is "in" a certain level, it might not last that long, but it will last at least as long as the level to its left. For example, AN71 of length 12 shows 520 years. Not bad. If you have an AN71 password of length 13, the cracking machine would need 520 years, to determine it isn't 12 characters or fewer, but once it starts on 13-character passwords, maybe it will take it half or more of the 36,920 years indicated to find it, but it might luck out and get there much sooner. But it still consumed 520 years getting this far. Anyway, if you're going for a certain criterion, adding a character makes it definite that at least that length of time would be needed for the hardware to get into the region in which your password resides.
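
To see the cumulative effect, a short loop (same assumed cracking rate as above) prints both the per-length and running-total exhaust times for AN71 passwords:

```python
SECONDS_PER_YEAR = 86_400 * 365.2425
RATE = 1e12          # assumed cracking rate, hashes per second
SET_SIZE = 71        # the AN71 character set

total_seconds = 0.0
for length in range(1, 15):
    level_seconds = SET_SIZE ** length / RATE
    total_seconds += level_seconds
    if length >= 10:
        print(f"length {length}: {level_seconds / SECONDS_PER_YEAR:14,.1f} years "
              f"(cumulative {total_seconds / SECONDS_PER_YEAR:14,.1f} years)")
```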

Another way to boost the resistance is to have at least two special characters, one (or more) from the AN71 set, and at least one from the rest of the AN89 set, such as "-" or "~", wherever a website allows it. Then a machine that checks only within AN71 will never find it.

With all this in mind, I plan to devise a set of passwords with lengths from 13 to 16 characters, using primarily AN71. On the rare occasion where I can't use special characters, I'll have AlphaNum alternatives with 14 to 17 characters prepared. I'll test if I can use a tilde or hyphen, and use one of them if possible for the really high-security sites.

A final word about password composition. I actually use pass phrases with non-alpha characters inserted between words or substituted for certain letters, and occasional misspellings. Starting with a favorite phrase from Shakespeare, Portia's opening clause, "The quality of mercy is not strained", one could pluck out "quality of mercy" (16 characters) and derive variations such as:

  • qUal!ty#of#3ercY
  • QW4lity70f8M&rcy
  • quality$of~MERC7
  • qua1ity2of2M3rcyy (AlphaNum with an appended letter)

…and I could add more than one character in place of the space(s) between words…

It is also worth keeping abreast of news about quantum computing. What exists today is dramatically over-hyped. It may not always be so. But I suspect a trillion-year-resistant password will remain secure for at least a generation. 

Tuesday, December 23, 2025

Just beyond the edge of the usual

 kw: book reviews, science fiction, short stories, ekumen series, space travel, anthologies, collections

I read some of the stories collected in The Birthday of the World and Other Stories, by Ursula K. Le Guin, when they were first published in the middle 1990's. It was a rare pleasure to re-read them, and to get to know their companion pieces, with the perspective offered by thirty years of personal experience and the dramatic social and political changes that have occurred in that time. These stories represent Ms Le Guin twenty years into her prolific career. This collection was published in 2003.

Seven of the stories (maybe only six, by her assessment in the Preface) take place in her speculative universe, the Ekumen, in which all "alien" races are descended from the Hainish on the planet Hain, from which numerous planetary societies have been founded. Sufficient time has passed that quite different, even extreme, societal and physiological variations have arisen. This affords the author a way to explore societal evolution among beings that are at least quasi-human. It removes the difficulty of dealing with totally alien species.

The story I remember best is the opening piece, "Coming of Age in Karhide." Although the Ekumen is mentioned and a few Hainish dwell on the planet, the story focuses on the experiences of a young person approaching "first kemmer", a span of a few days or weeks in which the sexless body transforms into either a male or female body, the newly-sexed man or woman has promiscuous sex in the kemmerhouse, and may become a parent; it can take a few kemmers (which I translate internally as "coming into heat" the way cats, dogs and most animals do) for a female to become pregnant the first time. During each kemmer, a man may remain a man or change to a woman, and vice versa.

The author passed away in 2018, just as "trans ideology" was garnering political power, primarily in "blue" states. I wonder what she thought of it. Thankfully, the ideology is fracturing and I hope it will soon be consigned to the dustbin of history. At present, roughly a quarter of American adults appear to genuinely believe that complete transition is possible. It isn't, "sex reassignment" is cosmetic only. It is only for the rich, of course; transition hormones cost thousands, and the full suite of surgeries costs around a million dollars. The amount of genetic engineering needed to produce a quasi-human with sex-changing "kemmer", should any society be foolish enough to attempt it, would cost trillions.

Other stories in Birthday explore other sexual variations, and the societal mores that must accompany them. These are interesting as exploratory projects. They were written shortly after the death of Christine Jorgensen. Ms Jorgensen was the first American man (but not the first worldwide) to undergo complete sexual reassignment surgery, in the early 1950's. Subjects such as the surgical transformation of the penis into the lining of a manufactured vagina, without disrupting blood vessels and nerves, were actually published in formerly staid newspapers! 

To my mind, in America at least, Ms Jorgensen is the only "transitioner" to whom I accord female pronouns. She transitioned as completely as medical science of the time allowed (and very little progress has been made since). She became an actress and an activist for transsexual rights (she later preferred the term "transgender". I think she learned a thing or two). She even planned to marry a man, but was legally blocked. She intended to enjoy sex as a woman would. Maybe she did.

The last piece in the volume, "Paradises Lost", takes place on a generation spaceship. Population 4,000, strictly regulated to match the supplies sent on a journey that was intended to require more than 200 years. The religious politics that threaten to derail the enterprise don't interest me much. Of much more interest: the mindset of residents in the fifth generation after launch, after all the "Zeroes" and "Ones" have passed away, expecting the sixth generation to be the one to set foot on the new planet; and the way the "Fives" react to their experiences on that planet after an early arrival (sorry for the spoiler).

We are only in part a product of our ancestors' genetics. Much more, we are a product of the environment in which we grew up (itself only in part a product of our ancestors), the environment in which we had those formative experiences that hone our personalities. While all the stories in this volume explore these issues, "Paradises Lost" does so most keenly.

The work of Ursula K. Le Guin stands as a monument to speculative thinking in areas that few authors of her early years could carry off.

Monday, December 15, 2025

How to not be seen

 kw: book reviews, nonfiction, science, optics, visibility, invisibility

In a video you may have seen (watch it here before continuing; spoiler below),

.

.

.

…titled "Selective Attention Test", you are asked to keep careful watch on certain people throwing basketballs.

.

.

.

…Several seconds in, someone wearing a gorilla suit walks into the middle of the action, turns to the camera, beats its chest, then walks back out of the scene. When this is shown to people who've never heard of it, about half report seeing the "gorilla", and about half do not.

This is called Inattentional Blindness. It is used by stage magicians, whose actions and talk in the early part of a performance direct the audience's attention away from what is happening right in front of them. A magician can't be content with misdirecting half of the audience; the goal is 100%. This is often achieved!

But what if someone wants to vanish from plain sight, without benefit of a flash of fire or smoke (the usual prop for a vanishing act)? Optical science researcher Gregory J. Gbur might have something to say about that in his book Invisibility: The History and Science of How Not to be Seen.

Much of the history Dr. Gbur draws upon is found in science fiction. It seems that every scientific discovery about optics and related fields was fodder for science fiction writers to imagine how someone could be made invisible. This cover image from a February 1921 issue of Science and Invention (edited and mostly written by Hugo Gernsback, later to write lots of science fiction and edit Amazing Stories) shows the rays from something similar to an X-ray machine making part of this woman invisible.

I looked for this cover image online and found an archive of S&I issues. However, the issues were apparently produced with various covers for different regions, and the version in the archive had a cover touting a different application of X-rays. In any case, the article on page 1074, referred to on the cover shown above, does discuss whether X-rays or something like them can be used to provide invisibility, and also shows another way that structures inside the body may be seen.

Here the "transparascope" makes certain tissues transparent, allowing the viewing of others. IRL, it took the development of CT scanning and MRI scanning, fifty-odd years later, to achieve such views. The invisibility beam of the cover image has so far proved elusive.

Invisibility sits in the broader realm of "how not to be seen." The book shows in detail that the technologies that have been developed to hide or cloak objects can only work perfectly over very narrow ranges of light wavelength (and by analogy, waves in water and other media), and usually a narrow range of viewing angle. Is perfection needed? That depends…

In the late 1960's I worked for a defense contracting company, mainly as an optical technician. I was loaned to a related project as an experimental subject. The team was gathering data on the limits of human vision, detecting the contrast between a lighted object in the sky (an aircraft) and the sky. This was the Vietnam War era. 

The experimental setup was a room with one wall covered with a screen on which versions of "sky blue" were projected. At the center was a hole and various targets were set in this hole. They simulated the look of a dark or darkish object in the sky, and each target had several lighted spots, little lamps. The lamps' color and brightness could be adjusted. I was instructed to tell what I could see. The first day I was there, the background target was black, and the lamps were small and bright. The targets had differing numbers of lamps and their brightness would be adjusted to reduce the visibility of the overall target. This tested acuteness of vision; how many lamps on a certain size target would "fuzz together" and seem to illuminate its entire area? 

For most people, the "fuzz" angle is 1/60th of a degree. When you look up at a Boeing 737 at 30,000 ft elevation, its length of about 130 feet means it subtends an angle of about 1/4 degree. It would take two rows of 25 lamps along the fuselage, and at least 10 lamps, or 10 pairs of lamps, along each wing, to counter-illuminate it and reduce its visibility. That's a lot. A B-52 bomber is 20 feet longer and its engines are huge, like misplaced chunks of fuselage.
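
Here is the back-of-envelope version of that estimate, a sketch of mine rather than anything from the experiments. The 1/60-degree figure is the visual limit quoted above, and the count it prints is the bare minimum of resolvable cells along the fuselage, so a practical layout (like the 25 lamps per row mentioned) would use more.

```python
import math

FUSELAGE_FT = 130        # approximate Boeing 737 length
ALTITUDE_FT = 30_000
EYE_LIMIT_DEG = 1 / 60   # typical visual acuity, one arcminute

subtended_deg = math.degrees(math.atan(FUSELAGE_FT / ALTITUDE_FT))
min_cells = subtended_deg / EYE_LIMIT_DEG   # resolvable cells along the fuselage

print(f"subtended angle: {subtended_deg:.2f} degrees")          # ~0.25, i.e., about 1/4 degree
print(f"resolvable cells along the length: {min_cells:.0f}")    # ~15, so at least that many lamps
```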

On another day, the target's background color was a blue color somewhat darker than the "sky". The target had the optimum size and spacing of lamps to seem of more-or-less uniform brightness, and the brightness and color of the lamps were varied. This tested our color acuity; how far could the colorimetry of the target-lamp combination vary to remain invisible or minimally visible?

This image simulates the second kind of target-lamp combination. If you look at it from a sufficient distance, the simulated target will nearly disappear, or for you it may vanish completely. This works best if you either take off your glasses or look through reading lenses, to defocus the image.

The average color and brightness of the simulated target are a close match to the surrounding sky-blue color. Thus, if an aircraft's belly is painted a medium blue, and a sufficient number of lamps are mounted on it and controlled by an upward-looking system, it can seem to vanish against the sky as long as it is high enough that the angular distance between the lamps is smaller than the circle of confusion (1/60th of a degree) of the eyes of an observer below.

This set of letter-targets is similar to a different test. Each letter has a little different color and brightness than the "sky". The 5 letters here make up the word "ROAST", but are not in order. For this test the sky color would be adjusted to see which letters were least and most visible. In both panels you will probably see three or four letters, but one or two that are not seen in one panel will be seen in the other.

In the end, it was all for nought. The sky is too variable, and human vision is also variable. There are three kinds of color blindness, and six kinds of "anomalous color vision"; any of these renders visible a target that "normal" eyes cannot see. It's kind of the opposite of those color-blindness tests with pastel "bubbles" that show the letter K to "normies" but the letter G to most color blind people. Also, wearing polarized glasses changes the perceived color of the sky, and tilting your head makes a dramatic difference in the color. Anyone with shades on would see the aircraft easily.

A further drawback of these tests was that no Asians' eyes were tested. In my regular job at the time, we were developing an infrared light source that Asians could not see. The near-infrared lamps used for night vision goggles and SniperScopes were invisible to Anglos, but quite visible to the Vietnamese. Several American snipers lost their lives when they turned on their SniperScope and a bullet came back instantly. What eventually worked was not a different light source but hypersensitive image amplification, the "starlight scope".

My wife is Asian. Certain items that look green to me she tells me are blue. Away from the green-blue boundary, she and I agree on the colors of objects.

The later chapters of Invisibility describe experiments and simulations that could lead to effective cloaking. There is even an appendix that shows a home tinkerer how to make a couple of kinds of visual cloaks that work in at least one direction. Full-surround cloaking is still out of reach, but who knows?

This book earns my "fun book of the year" award. Well written and very informative.

Saturday, December 13, 2025

Nails in the coffin of dark energy?

 kw: science, cosmology, dark energy, supernovae, supernovas, type ia supernova, metallicity

INTRODUCTION

The ΛCDM model of the Universe was proposed after two research groups (led by Adam G. Riess and Saul Perlmutter) studied certain supernovae. "Λ" (Greek lambda) refers to the cosmological constant, first proposed by Einstein, that describes the expansion of spacetime. The research teams concluded that spacetime was not just expanding, but expanding at an increasing rate. This is called "cosmic acceleration." Their key observation was that distant Type Ia supernovae are fainter than expected. This soon led to the hypothesis that 75% of the energy content of the Universe is "dark energy", which is driving and accelerating the expansion.

When I first read about "dark energy" more than 25 years ago I thought, "How can they be sure that these supernovae are truly 'standard candles' over the full range of ages represented, more than ten billion years?" I soon considered, "Is the brightness of a Type Ia supernova affected by the metallicity of the exploding star?" and "Is it worth positing a huge increase in the energy of the Universe?" From that day until now I have considered dark energy to be the second-silliest hypothesis in cosmology (I may deal with the silliest one on another occasion).

On December 10, 2025, an article appeared that has me very excited: "99.9999999% Certainty: Astronomers Confirm a Discovery with Far-Reaching Consequences for the Universe’s Fate", written by Arezki Amiri. In the article, this figure demonstrates that I was on the right track. The caption reads, "Correlation between SN Ia Hubble residuals and host-galaxy population age using updated age measurements. Both the low-redshift R19 sample and the broader G11 sample show a consistent trend: older hosts produce brighter SNe Ia after standardization, confirming the universality of the age bias. Credit: Chung et al. 2025"

It reveals a correlation between the brightness of a Type Ia supernova and the age of its host galaxy. Galactic age is related to the average metallicity of the stars that make it up. Thus, more distant Type Ia supernovae can be expected to be fainter than closer ones, because more distant galaxies are seen when they were younger, and consequently had lower metallicity. This all requires a bit of explanation.

WHAT IS METALLICITY?

Eighty percent of the naturally-occurring chemical elements are metals. That means they conduct electricity. Astronomers, for convenience, call all elements other than hydrogen (H) and helium (He) "metals". The very early Universe consisted almost entirely of H and He, with a tiny bit of lithium (Li), element #3, the lightest metal. The first stars to form were not like any of the stars we see in our sky. They were composed of 3/4 hydrogen by weight, and 1/4 helium. The spectral emission lines of H and He are sparse and not strong. Thus, the primary way for such a star to shine is almost strictly thermal radiation from a "surface" that has low emissivity.

By contrast, a star like the Sun, which contains 1.39% "metals", has many, many spectral lines emitted by these elements, even as the same elements in the outer photosphere absorb the same wavelengths. On balance, this increases the effective emissivity of the Sun's "surface" and allows it to radiate light more efficiently. The figure below shows the spectra of several stars. Note in particular the lower three spectra. These are metal-poor stars, and few elemental absorption lines are visible (the M4.5 star's spectrum shows mainly molecular absorption lines and bands). However, even such metal-poor stars, with less than 1/10th or 1/100th as much metal content as the Sun, are very metal-rich compared to the very first stars, which were metal-free.

Spectra of stars of different spectral types. The Sun is a G2 star, with a spectrum similar to the line labeled "G0".

One consequence of this is that a metal-poor star of the same size and temperature as the Sun isn't as bright. It produces less energy. Another consequence, for the first stars, is that they had to be very massive, more than 50-100 times as massive as the Sun, because it was difficult for smaller gas clouds to shed radiant heat and collapse into stars. Such primordial supergiant stars burned out fast and either exploded as supernovae of Type II or collapsed directly into black holes.

THE TWO MAIN TYPES OF SUPERNOVAE

1) Type I, little or no H in the spectrum

A star similar to the Sun cannot become a supernova. It fuses hydrogen into helium until about half of its hydrogen is gone. Then its core shrinks and heats up until helium begins to fuse to carbon. While doing so, it grows to be a red giant and gradually sheds the remaining hydrogen as "red giant stellar wind". When the helium runs out, the fusion engine shuts off and the star shrinks to a white dwarf composed mainly of carbon, a sphere about 1% of the star's original size, containing about half the original mass. For an isolated star like the Sun, that is that.

However, most stars have one or more co-orbital companion stars. For any pair of co-orbiting stars, at some point the heavier star becomes a red giant and then a white dwarf. If the orbit is close enough some of the material shed by the red giant will be added to the companion star, which will increase its mass and shorten its life. When it becomes a red giant in turn, its red giant stellar wind will add material to the white dwarf. The figure shows what this might look like.

White dwarfs are very dense, but are prevented from collapsing further by electron degeneracy pressure. This pressure is capable of resisting collapse for a white dwarf with less than 1.44 solar masses (1.44 Ms). That is almost three times as massive as the white dwarf that our Sun is expected to produce in about six billion more years. It takes a much larger star to produce a white dwarf with a mass greater than 1.4 Ms, one that began with about 8 Ms. Such a star can produce more elements before fusion ceases: C fuses to O (oxygen), O fuses to neon (Ne), and so on through Na (sodium) to Mg (magnesium). The white dwarf thus formed will be composed primarily of oxygen, with significant amounts of Ne and Mg. Such a stellar remnant is called an ONeMg white dwarf. Naturally it has more metals present than the original star did when it was formed, but less than a white dwarf formed from a higher-metallicity star.

Now consider a white dwarf with a mass a little greater than 1.4 Ms, with a companion star that is shedding mass, much of which spirals to the white dwarf, as the figure illustrates. When the white dwarf grows to 1.44 Ms, which is called the Chandrasekhar Limit, it will explode as a powerful Type Ia supernova.

There are two other subtypes, Ib and Ic, that form by different mechanisms. While they are also no-H supernovae, there are differences in their spectra and light curves that distinguish them from Type Ia, so we don't need to consider them further.

2) Type II, strong H in the spectrum

Type II supernovae are important because they provide most of the metals in the Universe. They occur when a star greater than 10 Ms runs out of fusion fuel. It takes a star with 10 Ms to produce elements beyond Mg, from Si (silicon) to Fe (iron). Fe is the heaviest element that can be produced by fusion. These heavy stars experience direct core collapse to a neutron star, with most of the star rebounding from the core as a Type II supernova. During this blast, the extreme environment produces elements heavier than Fe also. (Stars that are much heavier can collapse directly to become a black hole.)

EVOLUTION OF UNIVERSAL METALLICITY

At the time the first stars formed, the Universe was metal-free. It took a few hundred million years for a few generations of supernovae to add newly formed metals, such that the first galaxies were formed from stars of very low to low metallicity. Even at such low metallicity, smaller stars could form. Since that time, most stars have been Sun-size and smaller, though stars can still form with masses up to about 50 Ms.

Stars of these early generations smaller than about 0.75 Ms are still with us, having a "main sequence lifetime" exceeding 15 billion years. I can't get into the topic of the main sequence here. We're going in a different direction.

Stars of the Sun's mass and heavier have progressively shorter lifetimes. Over time, the metallicity of the Universe has steadily increased. That means that the "young" galaxies discussed in the Daily Galaxy article (and the journal article it references) are more distant, were formed at earlier times in the Universe, and thus tend to have lower metallicity.

LOWER METALLICITY MEANS LOWER BRIGHTNESS

This leads directly to my conclusion. A Type Ia supernova erupts when a white dwarf, whatever its composition, exceeds the Chandrasekhar Limit of 1.44 Ms. This has made them attractive as "standard candles" for probing the distant Universe. However, they are not so "standard" as we have been led to believe.

Consider two white dwarfs that have the same mass, say 1.439 Ms, but different compositions. One is composed of C or C+O, with very low amounts of metallic elements. The other has a composition more like stars in the solar neighborhood, with 1% metals or more. As seen with stars, more metals lead to more brightness, for a star of a given mass. Similarly, when these two white dwarfs reach 1.44 Ms and explode, the one with more metals will be brighter than the other.

The final question to be answered: Is this effect sufficient to eliminate all of the faint-early-supernova trend that led to the hypothesis of dark energy in the first place? The headline to the article indicates that the answer is Yes. A resounding yes, with a probability of 99.9999999%. That's seven nines after the decimal. That corresponds to a 6.5-sigma result, where 5 sigma or larger is termed "near certainty".

The article notes that plans are in the works to use a much larger sample of 20,000 supernovae to test this result. I expect it to be confirmed. The author also suggests that perhaps Λ is variable and decreasing. My conclusion is that dark energy does not exist at all. Gravity has free rein in the Universe, and is gradually slowing down the expansion that began with the Big Bang (or perhaps Inflation, if that actually occurred).

That's my take. No Dark Energy. Not now, not ever.

Wednesday, December 10, 2025

How top down spelling revision didn't work

 kw: book reviews, nonfiction, language, writing, spelling, spelling reform, history

The cover is too good not to show: enough is enuf: our failed attempts to make English eezier to spell by Gabe Henry takes us on a rollicking journey through the stories of numerous persons, societies and clubs that have tried and tried to revise the spelling of English. Just since the founding of the USA, national figures including Benjamin Franklin and Theodore Roosevelt have involved themselves in the pursuit of "logical" spelling. "Simplified spelling" organizations persist to this day.

English is the only language for which spelling bees are held. Nearly all other languages with alphabetic writing are more consistently phonetic. However, I would exempt French from that proviso. I discovered during three years of French classes that the grammar of French verbs is, to quote a Romanian linguist friend, "endless." Putting together all possibilities of conjugation, tense and mood, French has four times as many varieties of verb usage and inflected endings as English does, and then each variety is multiplied by inflections that denote number, person and gender. However, inflections ranging from -ais and -ait to -aient all have the same pronunciation, "-ay" as in "way". Other multi-sonic instances abound. Perhaps French has stalled on its way to being like Chinese, for which the written language is never spoken and the spoken languages aren't written.

But we're talking about English here. The author states several times that there are eight ways of pronouncing "-ough" in English. Long ago a friend loaned me a book, published in 1987, a collection of items from the 1920's and 1930's by Theodor S. Geisel, before he became Dr. Seuss: The Tough Coughs as he Ploughs the Dough. Geisel's essays on English spelling seen from a Romanian perspective (tongue-in-cheek, as usual; he was from Massachusetts, of German origin) dwell on the funnier aspects of our unique written language. The peculiarities of -ough occupy one of the chapters.

Being intrigued by the "8 ways" claim, I compiled this list using words extracted from an online dictionary:

  1. "-ow" in Bough (an old word for branch) and Plough (usually spelled "plow" in the US)
  2. "-off" in Cough and Trough
  3. "-uff" in Enough and Tough and Rough
  4. "-oo" in Through and Slough (but see below)
  5. "-oh" in Though and Furlough
  6. "-aw" in Bought and Sought
  7. "-ə" (the schwa) in Thoroughly ("thu-rə-ly")

And…I could not find an eighth pronunciation for it. Maybe someone will know and leave a comment.

"Slough" is actually a pair of words. Firstly, a slough is a watery swamp. Secondly, slough refers to a large amount of something, and in modern American English it is usually spelled "slew", as, "I bought a whole slew of bedsheets at the linens sale." However, "slew" is also the past tense of the verb "slay": "The knight slew the dragon," which is the only way most folks use that word.

Numerous schemes have been proposed over time. Sum peepl hav sujestid leeving out sum letrz and dubling long vowls (e.g., "cute"→"kuut"). A dozen or more attempts at this are mentioned in the book. Others have invented new alphabets, or added letters to the 26 "usual" ones, so that the 44 phonemes could each have a unique representation. An appendix in many dictionaries introduces IPA, the International Phonetic Alphabet (which includes the schwa for the unaccented "uh" sound). Public apathy and pushback have doomed every scheme.

The only top-down change to American spelling that came into general use was carried out by Noah Webster in his Dictionary. He took what we now call the English U out of many words such as color and favor; the Brits still use colour and favour. And he removed the final K from a number of words, including music and public (I think the Brits have mostly followed suit); pulled the second L from traveler and the second G from wagon; and introduced "plow" and other words that didn't quite make it to present-day usage. His later attempts at further simplification didn't "take".

I could go on and on. It's an entertaining pastime to review so many attempts. However, something has happened in the past generation, really two things. Firstly, advertising pushed the inventors of trademark names to simplify them, particularly in the face of regulations that forbade the use of many common words in product brands. Thus, we have "Top Flite" golf balls, "Shop Rite" and "Rite Aid" retailers, and new uses for numbers, such as "Food4Less" for a midwestern market chain and "2-Qt" for a stuffed toy brand. Secondly, the advent of ubiquitous cell phones motivated kids everywhere to develop "txtspk". Single-letter "words" such as R and U plus substituting W for the long O leads to "R U HWM?" Number-words abound: 2 for "to" and "too", 4 for "for", 8 in "GR8", and even 9 in "SN9" ("asinine", for kids who have that word in their working vocabulary). Acronyms multiply: LOL, ROFL (rolling on floor laughing), TTYS (talk to you soon)…a still-growing list. Even though most of us now have smart phones with a full keyboard (but it's tiny), txtspk saves time and now even Gen-X and Boomers (such as me) use it.

Social change works from the bottom up. Top-down just won't hack it. Unless, of course, you are a dictator and can force it through, as Mao did when he simplified Chinese writing after taking power in 1949. Many of my Chinese friends cannot read traditional Chinese script. Fortunately, Google Lens can handle both, so Chinese-to-Chinese translation is possible!

We have yet to see any major literature moving to txtspk, let alone technical and scientific journals. If that were to happen, the next generation would need Google Lens or an equivalent to read what I am writing now, and all English publications prior.

It will be a while. Meantime, let this book remind us of the many times our forebears dodged the bullet and declined to shed our traditional written language. As the legacy first of several long-term invasions (Saxon and Norman in particular), then of the rise of the British Empire, and finally of "melting-pot" America, our language is a mash-up of three giant linguistic traditions and a couple of smaller ones, plus borrowings, complete with original spelling if it existed, from dozens or hundreds of languages. Thus, one more thing found primarily in English: the idea of etymology, the knowledge of a word's origin. I haven't checked; do dictionaries for other languages include the etymologies of the words? My wife has several Japanese dictionaries of various sizes; none mentions the source of words except to note which are non-Japanese because they have to be spelled with a special syllabary called Katakana.

English is unique. Harder to learn than some languages, but not all, it is still the most-spoken language on Earth. It is probably also the most-written, in spite of all the craziness.

Monday, December 01, 2025

MPFC – If you know, you know

 kw: book reviews, nonfiction, humor, satire, lampoons, parodies

Well, folks, this is a step up from Kindergarten: Everything I Ever Wanted to Know About ____* I Learned From Monty Python by Brian Cogan, PhD and Jeff Massey, PhD. Hmm. If one ignores the learned asides and references, the visual humor of Monty Python in its various incarnations is Kindergarten all the way. The bottom of the book cover has the footnote, "* History, Art, Poetry, Communism, Philosophy, The Media, Birth, Death, Religion, Literature, Latin, Transvestites, Botany, The French, Class Systems, Mythology, Fish Slapping, and Many More!" Various portions of the book do indeed treat of these items, and many more.

The authors make much of the educational background of the six Python members. No doubt, having been steeped in British culture about as much as one is able to steep, Python was eminently qualified to send-up nearly every aspect thereof. Even the "American Python" Terry Gilliam was a naturalized Brit after 1968.

The book is no parody of Monty Python; that's not possible. It is a series of riffs on their treatment of the various and sundry subjects. I have seen only one of the TV shows from Monty Python's Flying Circus, "Spanish Inquisition". The TV show ran on BBC from late 1969 to the end of 1974 and many episodes were re-run in later years on PBS. I've seen scattered bits that made their way to YouTube, and during the period that I could stomach watching PBS, I saw The Life of Brian and Monty Python and the Holy Grail. The book's authors have apparently binge-watched the entire MPFC corpus several times.

I enjoyed the book. I can't write more than this, so I'll leave it to you, dear reader, to delve into it yourself.

Thursday, November 20, 2025

Is half the country enough?

 kw: book reviews, nonfiction, land use, agriculture, prairies, restoration, conservation

About 45% of the land area of the "lower 48" is devoted to agriculture. That is about 900 million acres, or 1.4 million square miles. Roughly one third of that was originally prairie, tallgrass, shortgrass, and mixed prairie ecosystems. Most has been converted to agricultural use. Prior to the arrival of the plow, the prairie encompassed

  • Tallgrass prairie, 140 million acres, or 220,000 sq mi. All but 1% has been plowed and sowed with crops.
  • Mixed-grass prairie, 140 million acres, or 220,000 sq mi. About one-quarter remains unplowed.
  • Shortgrass prairie, 250 million acres, or 390,000 sq mi. About one-fifth remains unplowed.

Taken together, prairie grassland once encompassed 530 million acres, but now more than 440 million acres are devoted to agriculture, making up nearly half the total agricultural land in the US. Surveys of the remaining grasslands show that they are ecologically rich, with dozens of species of grass and hundreds of other plant species, hundreds of bird and mammal and other animal species (and of course tons of insects!), and rich soils that have accumulated over ten to twenty thousand years during the current Interglacial period.
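
For those who like to check the arithmetic, here is a quick tabulation of the figures above, using the remaining-unplowed fractions just quoted (1%, one quarter, one fifth):

```python
ACRES_PER_SQ_MI = 640

# (name, original extent in millions of acres, fraction still unplowed)
prairie = [
    ("tallgrass",   140, 0.01),
    ("mixed-grass", 140, 0.25),
    ("shortgrass",  250, 0.20),
]

total = sum(acres for _, acres, _ in prairie)
converted = sum(acres * (1 - remaining) for _, acres, remaining in prairie)

print(f"original extent: {total} million acres "
      f"(~{total * 1e6 / ACRES_PER_SQ_MI / 1e3:,.0f} thousand sq mi)")
print(f"converted to agriculture: ~{converted:.0f} million acres")
```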

Sea of Grass: The Conquest, Ruin, and Redemption of Nature on the American Prairie, by Dave Hage and Josephine Marcotty, chronicles the history of these grasslands that formerly covered one quarter of the contiguous US. Their characteristics are governed by rainfall. The western edge of the shortgrass prairie laps up against the foothills of the Rocky Mountains, and this semiarid prairie lies in the deepest part of the mountains' rain shadow. The eastern halves of four states (Montana, Wyoming, Colorado, and New Mexico), plus the Texas Panhandle, host shortgrass prairie.

Farther east, a little more rainfall allows medium-height and some taller grasses to grow. This mixed-grass prairie makes up most of the area of North and South Dakota, Nebraska, and Kansas, plus the middle of Oklahoma and Texas. Tallgrass prairie is supported by the still more generous rainfall of Minnesota, Iowa, Illinois, northern Arkansas, eastern Kansas, and a little bit of eastern Oklahoma. The eastern extent of the prairie abutted the deciduous forests of the Midwest, which have now mostly been stripped of trees and are used to grow corn and soybeans.

The book's three parts cover, with some overlap, the prehistory of the prairie, the progress of its subjugation to agricultural use, and the efforts to conserve and restore portions of it. The third part makes up about 40% of the book and is clearly the authors' main interest.

Rather than repeat details that are better stated by the authors, I'll just display the bottom line: Prairie soils are biologically rich, conferring great ecosystem services. These include sequestering large amounts of carbon dioxide, absorbing rainwater runoff (which reduces acute flooding), and quickly taking up excess nitrogen from over-fertilized agricultural fields nearby rather than letting it flow into streams and eventually the Mississippi River and the northern Gulf of America. These points are being made in courtrooms throughout the central US, arguing not only that the remaining prairie should be preserved and conserved, but also that several percent of the agricultural fields in the region should be returned to native grasses to reduce the damaging effects of pervasive monocropping.

Existing primordial prairie is also a treasure to be enjoyed. The image above is like views I've seen in a few grasslands we've visited. In the early 1980's, whenever my wife and I went to visit a rancher we knew in central South Dakota, there was a spot along I-90, about seven miles before reaching Wasta, on the plateau above Boxelder Creek and the Cheyenne River, where we always stopped to get out of the car and stretch our legs. In all directions, the only sign of human life was the highway itself, and, of course, us and our car. (I note on current satellite images that there are a number of billboards and a new shelter belt in the area now. Sigh.)

Other efforts are discussed, such as researching the best cover crops to preserve soil from erosion after harvest, and finding the "knee" in the relationship between fertilization and crop yields to better select appropriate levels of nitrogen application—I find it amazing that this is still so little known.

In keeping with the importance of the subject, the book is big and packed with information and gripping stories. It is well written and it rewards close reading. Enjoyable.

Monday, November 17, 2025

Greenhouse Effect – the hidden players

 kw: analytical projects, greenhouse effect, global warming, absorption spectra, saturation

Reading a book about agriculture led me to thinking about the "hidden" greenhouse gases. I am sure almost everyone has read or heard that methane is 80 times as potent as carbon dioxide as a greenhouse gas. I recently learned that nitrous oxide (laughing gas, also a dental anesthetic) is between 250 and 300 times as potent as carbon dioxide. Both of these gases are produced by agricultural activity, so they have increased in the past 200 years as agriculture has been increasingly mechanized, and as chemical fertilizers have been used in ever-increasing amounts. (I generated this image using Leonardo AI; it is free of copyright restrictions)

I researched in several sources to find answers to these questions:

  • What were the concentrations of nitrous oxide and methane prior to the Industrial Revolution?
  • What are their concentrations now?
  • How do they affect global warming?
  • Are there other greenhouse gases we should be concerned about?

To simplify the text, I will dispense with formatting the numbers in chemical formulas as subscripts. Thus, CO2 = Carbon Dioxide, CH4 = Methane, and N2O = Nitrous Oxide (Nitrogen has several oxides; only this one is important here).

Here is the connection with agriculture: First, the middle-American farm belt was created by plowing the prairie and planting grain crops; today, by far the most important crops are corn and soybeans. The thick, rich prairie soils contained a 10,000-year store of carbon, deposited by the roots of grasses and held there as they decomposed. Plowing the prairie has released that carbon as CO2 at a fairly steady rate over the past century, and it is still going on. Plowing also releases stored CH4.

When I lived in South Dakota in the 1970's and early 1980's, most of the agriculture in the state was cattle ranching, with some grain crops grown in the eastern third. Since that time, seed companies have developed strains of corn and soybeans that better resist drought, begin growing at lower temperatures, and ripen faster. South Dakota cattle ranches are being plowed and sown with grain at a steady rate.

Secondly, overuse of nitrogen fertilizer causes much of the "extra" to be converted to N2O. Large amounts also go downstream and contribute to the Dead Zone offshore of the Mississippi Delta.

Thirdly, cattle produce a lot of methane, and the reduction in cattle numbers in the Dakotas is more than offset by continued increases elsewhere; also, plowing the prairie releases CH4, and all this is added to the amount released by fossil fuel production. I have yet to see a credible analysis of all the sources of CH4.

Yet all we ever hear about is the rise in concentration of CO2 alone. This is indeed significant, from about 280 ppm in the 1700's to about 440 ppm today. This "baseline increase" is (440-280)/280 = 0.57, a 57% increase, most of it over the past century.

What of CH4 and N2O? Let us first convert them to equivalent CO2. I'll leave out a lot of words and summarize the figures:

  1. CH4 as a GHG is 80x as effective as CO2. Current CH4 concentration is 1.9 ppm; times 80, that is equivalent to 152 ppm CO2. In the 1700's, CH4 was 0.72 ppm, or a CO2 equivalent (CO2eq) of 57.6 ppm.
  2. N2O as a GHG is ~280x as effective as CO2. Current N2O concentration is 0.34 ppm; times 280 that is equivalent to 95.2 ppm CO2. In the 1700's, N2O was 0.27 ppm, or CO2eq of 75.6 ppm.

Added together, these two gases presently have a CO2eq of 247 ppm. The preindustrial level was 133 ppm. Let's add these to CO2 to see the real picture of the greenhouse effect at these two times:

  • Preindustrial: 280+133 = 413 ppm CO2eq
  • Today: 440+247 = 687 ppm CO2eq
  • (687-413)/413 = 0.66, a 66% increase in CO2eq

The actual increase in CO2eq is greater than the effect of CO2 alone. Suppose we could reduce CH4 and N2O to preindustrial levels. This would subtract 114 ppm CO2eq, leaving 573 ppm. Then (573-413)/413 = 0.39, a 39% increase in CO2eq compared to preindustrial. To put this in context, according to the mental model held by "climate crisis" folks: for CO2 only, a 39% increase over 280 ppm would be 389 ppm. That is about where we stood in 2011; it would wind back the clock some fourteen years!
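
The bookkeeping above is simple enough to put into a few lines of Python. This is only a restatement of the figures already given; the 80x and ~280x multipliers and the concentrations are the ones I used above, not authoritative values:

    # CO2-equivalent arithmetic, using the concentrations and multipliers above.
    gwp = {"CO2": 1, "CH4": 80, "N2O": 280}                  # potency relative to CO2
    preindustrial = {"CO2": 280, "CH4": 0.72, "N2O": 0.27}   # ppm
    today         = {"CO2": 440, "CH4": 1.90, "N2O": 0.34}   # ppm

    def co2eq(conc):
        """Total CO2-equivalent concentration, in ppm."""
        return sum(conc[gas] * gwp[gas] for gas in gwp)

    pre, now = co2eq(preindustrial), co2eq(today)
    print(f"Preindustrial: {pre:.0f} ppm CO2eq")                  # ~413
    print(f"Today:         {now:.0f} ppm CO2eq")                  # ~687
    print(f"Increase:      {(now - pre) / pre:.0%}")              # ~66%

    # What if CH4 and N2O were rolled back to preindustrial levels?
    rolled = co2eq({**today, "CH4": preindustrial["CH4"], "N2O": preindustrial["N2O"]})
    print(f"Rolled back:   {rolled:.0f} ppm CO2eq, "
          f"{(rolled - pre) / pre:.0%} above preindustrial")      # ~573, ~39%

    # N2O alone: (0.34 - 0.27) * 280 is roughly 20 ppm CO2eq of the rise.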

Let us focus a moment on N2O. By itself, the increase in the concentration of this gas is responsible for about 20 ppm CO2eq, equivalent to the last nine years or so of CO2 increase. This is nearly all due to overfertilization. Guess which industry complex is bigger and has a stronger lobby in DC than oil and gas? Agriculture plus agrichemicals (particularly fertilizer). I have read in more than one place that without artificial nitrogen-based fertilizer, the world's farmland could support no more than four billion people. It is a complex matter to work out how far fertilizer use could be cut while still feeding the current world population, yet reducing nitrate runoff and the outgassing of N2O into the atmosphere. For the moment, I just have to leave these thoughts unfinished. If we could come up with a plan, powerful interests would oppose it.

At this point in my analysis I wondered what other greenhouse gases exist, and how they might modify the picture. As it happens, nothing much. Here is a table I worked from for the figures above, which adds six greenhouse gases that, together, are sometimes written about in very scary terms, but have no practical effect at present:


First, ground-level Ozone (O3) has a modest Global Warming Potential (GWP: 1.5x CO2) and exists in the 1-10 parts-per-billion range, so its contribution here is negligible. Then, the industrial chemicals Sulfur Hexafluoride (SF6) and Nitrogen Trifluoride (NF3) have very high GWPs but exist at levels of a few parts per trillion. Eliminating them entirely would reduce CO2eq by much less than one percent (see the black text at the bottom of the table).

Various fluorinated refrigerants, those highlighted in brown, also have very high GWPs but likewise exist at trace levels, so together they too amount to less than one percent (the brown text). Thus, they present no useful "targets" for ameliorating the greenhouse effect.
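
To see why trace gases drop out of the accounting even with enormous GWPs, here is a back-of-the-envelope sketch. The GWP and concentration values I plugged in are rounded, approximate published figures, not the ones from the table above, so treat the output as ballpark only:

    # Why huge GWPs at trace concentrations hardly move the needle.
    # Rounded, approximate figures; 1 ppt = 1e-6 ppm.
    trace = {                 # gas: (concentration in ppt, approx. GWP-100)
        "SF6":      (11,  23500),
        "NF3":      (3,   16000),
        "HFC-134a": (120, 1500),    # the most abundant fluorinated refrigerant
    }

    total = 0.0
    for gas, (ppt, gwp) in trace.items():
        ppm_eq = ppt * 1e-6 * gwp
        total += ppm_eq
        print(f"{gas:9s} about {ppm_eq:.2f} ppm CO2eq")

    print(f"All three together: about {total:.1f} ppm CO2eq, "
          "versus roughly 687 ppm total, i.e. well under one percent")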

My aim here has been to back off a few steps to see a bigger picture. As it happens, this points a finger where it is seldom pointed: at farmers. A significant proportion of the increase in CO2eq results from farm practices. In particular, far too many farmers use more fertilizer than their crops really need. There is too much of "a little more might help." No, it doesn't; it harms. It even harms the farmer, who spends more than needed on fertilizer that isn't helping.

I have a philosophical point to end with. I think that the greenhouse effect will prove to be more beneficial than otherwise. The "father of greenhouse warming," Svante Arrhenius, thought so. Another degree or two of warming is likely to make more of Canada and Siberia amenable to crop production, and let's not forget South Africa and Argentina. On another note, I recently saw an article with the headline, "550,000 will die of extreme heat." The subhead said, "The greatest cause of early death." The article never mentioned that 4.6 million will die from cold. More than eight times as many! The subhead is, quite simply, a lie, and the article is utterly one-sided deception. I suspect many of those 4.6 million would love for their home country to be a little warmer.

Sunday, November 09, 2025

Tied for the oldest sense

 kw: book reviews, nonfiction, science, olfaction, nose, sense of smell

It is fascinating to watch a motile bacterium such as E. coli going about its business. It trundles along, its flagella bundled behind it and spinning to propel it in a mostly straight line. If it bumps into something, it will back up some distance, tumble, and then move off in a new direction. Frequently, after a short distance, it may reverse course or tumble again to pick a new direction.

The latter action hints at what is going on. How does it pick a direction to go; what is it trying to reach? Of course, like all living things, it is searching for food. It follows a chemical gradient by sensing a substance of interest in the water around it. If, as it moves along, it senses an increasing concentration, it keeps moving; if the concentration is decreasing, it reverses course or tumbles to try a new direction. This chemical sense can be called either "taste" or "smell," and it is one of the two oldest senses. The other is touch. Bumping into something, or approaching closely enough for cilia on the cell to touch it, coupled with the chemical sense saying "this isn't what you are looking for," triggers actions such as backing up and/or tumbling. The two oldest senses seem to go together, and they work together to guide the cell to a possible source of food.
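
The strategy is simple enough to simulate. Here is a toy, one-dimensional Python sketch of my own (nothing from the book): the "cell" keeps going while the sensed concentration is rising, and tumbles to a random new direction when it falls.

    import random

    # Toy run-and-tumble chemotaxis in one dimension (illustration only).
    # The "food" concentration peaks at x = 0; the cell compares each new
    # reading with the previous one and tumbles when things are getting worse.
    def concentration(x):
        return 1.0 / (1.0 + x * x)

    x = 20.0                              # start far from the peak
    direction = random.choice([-1.0, 1.0])
    last = concentration(x)

    for _ in range(500):
        x += 0.5 * direction              # a short "run"
        reading = concentration(x)
        if reading < last:                # concentration falling: tumble
            direction = random.choice([-1.0, 1.0])
        last = reading

    print(f"Ended near x = {x:.1f}; the food peak is at x = 0")

Run it a few times and the end point generally lands within a few units of zero: the cell needs no map, only a memory one step deep.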

Strictly speaking, smell is thought to relate to chemical cues carried in the air, so the bacterium, being in a watery medium, must be using taste rather than smell. But at the most basic level these are the same sense: chemicals that are smelled first dissolve into a watery layer over the sensory nerve endings, where they are detected.

For air-dwelling creatures, smell is the longer-range sense. Chemical substances travel through air faster than through water; diffusion in still air is much faster than in still water, and both air and water currents can carry them from far away. Furthermore, when the two senses work together, smell precedes touch, while taste follows contact.

The title of Jonas Olofsson's book, The Forgotten Sense: The New Science of Smell and the Extraordinary Power of the Nose, led me to think, "Why didn't he call it The Neglected Sense?" No matter. He reveals the neglect that smelling has undergone through the centuries since Aristotle placed it at the bottom of the list of useful senses, an error compounded when Paul Broca divided animals into "osmatic" and "anosmatic": those like the dog, for which smell is primary, and those like humans (he thought), for whom smell is definitely not worth much. I guess that neither Broca nor old Ari stopped to consider how he would detect a bad lot of wine or olive oil with his nose plugged. Tasting without smelling could be a risky business!

I was most fascinated by an analysis to which the author refers, "Human and Animal Olfactory Capabilities Compared," by Matthias Laska in the 2017 Springer Handbook of Odor. Humans and a number of medium-sized and smaller animals were tested for their sensitivity to a few dozen scented substances. Animals tested included rats, dogs, vampire bats, a couple of monkey species…twenty species in all. The only animal with a nose more sensitive than ours was the dog!

Earlier studies that compared the size of an animal's olfactory bulbs to the size of its brain were misleading, because it is the absolute size of the bulb that matters. The roughly 60 mm3 volume of the human olfactory bulbs is a tiny fraction of the brain's volume, about 1/20,000th. In relative terms, the olfactory bulb of a mouse is enormous, about 1/16th of its total brain, yet its actual volume is less than half the human figure: 25 mm3. An ordinary dog (not the tiny breeds or the pug-nosed ones) has an olfactory bulb about six times larger than a human's, which gives it a huge advantage, as the sensitivity tests clearly showed. I wonder whether anyone could give a similar test to an elephant, with an olfactory bulb volume of 11,000 mm3?
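
The relative-versus-absolute point can be made with a few lines of arithmetic. The bulb volumes below are the ones quoted above (the dog's is simply six times the human figure); the whole-brain volumes are rough textbook numbers I've supplied for the comparison, so take the ratios as ballpark only:

    # Olfactory bulb volume vs. whole-brain volume (cubic millimeters).
    # Bulb volumes as quoted above; brain volumes are rough ballpark figures.
    animals = {               # name: (olfactory bulb, whole brain)
        "mouse":    (25,     400),
        "human":    (60,     1_200_000),
        "dog":      (360,    80_000),       # bulb taken as 6x the human bulb
        "elephant": (11_000, 4_800_000),
    }

    for name, (bulb, brain) in animals.items():
        print(f"{name:9s} bulb {bulb:>6} mm^3, "
              f"about 1/{brain / bulb:,.0f} of the brain")

The mouse bulb is a huge share of a tiny brain; the human bulb is a tiny share of a huge brain; yet the human bulb is the larger organ.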

Technical issues aside, a big section of the book relates the emotional effects that smells can mediate, such as a whiff of salty air evoking a favorite memory of a seaside vacation, or the comfy aroma of a morning coffee and bowl of blueberries. 

This also introduces the subject of smell gone wrong: COVID-19 introduced millions around the world to a life without smells, at least temporarily. An interesting consequence of the sudden loss of smell was that many people began to wonder how they could tell whether their own odor was offensive to others! Another was a loss of interest in food, because most of what we call the taste of a food is actually its smell. We can taste only five qualities: sweet, salty, savory (umami), sour, and bitter; we can smell thousands or hundreds of thousands of different qualities, so many that we can seldom describe any of them to someone else.

Loss of smell is called anosmia. In some ways a distorted sense of smell, parosmia, can be even worse. Imagine one day finding that your morning cuppa smells like rotting onions! This too can be caused by viral infections, but there are other causes, including a hard bump on the head. Some people with parosmia can't stand to eat favorite foods, though some manage to eat enough to stay healthy by putting on a nose clip. There is a long section describing ways of desensitizing and retraining the sense of smell. Sadly, none of the methods is effective in all cases, but retraining can be a lifesaver for many.

A side note: There is a reversible distortion of taste I have experienced, caused by an Asian spice called Tiger Claw, related to Star Anise. The seed pods are used whole to flavor soup; they aren't supposed to be ingested. Biting into one causes a shift of taste, such that water tastes like battery acid and nothing seems edible; the effect lasts several hours. I have numerous Chinese friends, and after biting into one at a potluck lunch, I wound up mostly fasting…

The author is a scientist of the sense of smell. He first wrote the book in Swedish, then translated it into English himself. His and his colleagues' work just might elevate our understanding of smell from a "neglected" sense to one recognized as essential in its own right. And in time, perhaps we'll gain the vocabulary to describe our favorite (or otherwise) aromas.