Monday, September 30, 2024

Had a good chat with your houseplant today?

 kw: book reviews, nonfiction, science, botany, research, plant consciousness, communication, signaling, plant movement, plant intelligence

In the human realm, "talking to the animals" like Dr. Dolittle is fictional. In the plant realm, it may be commonplace. What constitutes "communication"? There is more philosophy than science wrapped up in any attempt to answer that question. Even more so for the words "consciousness" and "intelligence". We may not have definitive answers in the next few decades, and perhaps we never will. In The Light Eaters: How the Unseen World of Plant Intelligence Offers a New Understanding of Life on Earth, author Zoë Schlanger doesn't provide the answers, though some of those she interviewed offered nascent attempts at doing so.

The eleven chapters of Light Eaters delve into several strains of research that are on the verge of redefining what a "plant" really is and reshaping our understanding of the rôles plants play in the biosphere. As we find from numerous lines of research, plants have several routes of plant-to-plant signaling: chemical, electrical, acoustic, and possibly bacterio-genetic. Plants discriminate. They send differing signals to siblings versus non-siblings of the same species; plants of one species can also "eavesdrop" on signals of another species. Furthermore, plants send signals intended for animal species! An example of the latter: some plants, when a caterpillar begins chewing on their leaves, emit a pheromone that attracts a wasp species that parasitizes that caterpillar. It's rather like a youngster who gets attacked and calls on his older brother for help, but in this case the "older brother" is a different species. Plants getting chewed on also emit other volatile chemicals that alert nearby plants, which respond by altering the chemistry of their leaves to be distasteful or even toxic to the caterpillar.

These are examples of chemical signaling. Other stressors, such as drought, cause plants to make tiny clicking noises as low-pressure bubbles collapse, something like the knuckle-popping many people engage in. It's hard for me to determine what kind of research showed other plants responding to these barely audible sounds, but Chapter 5, "An Ear to the Ground," presents the evidence.

What about electrical signals? Within a plant, it has been found that cutting a leaf initiates a wave of electrical activity that sweeps through the plant. These images show a small plant leaf just before a scissor cut, one second after, and then seven seconds later. The plant had been grown from seed carrying engineered genes that cause the calcium channels (every cell has them) to trigger Green Fluorescent Protein (GFP) when they emit or pass an electrical signal. The electrochemical signal moves as a wave through the whole plant. These images were clipped from this video. If the video doesn't work (they can be ephemeral), search for "gfp plant signaling". This is just one of several.

As the narrator in the video explains, the signal moves through the whole plant in about a minute along the veins, and spreads from the veins throughout all the plant's tissues at a slower rate.

This got me thinking. We know that while an electrical signal in a metallic wire is very fast, roughly the speed of light, the electrochemical signals in animal nerves are much slower. I had the neural conduction speed measured in my arm one time, after an injury. It was 60 m/s, which is normal. The fastest neural conduction speed in mammals is about twice this, and some nerves, where speed is less critical, are as slow as 2-5 m/s. Plants don't have nerves, at least none that we can recognize, but their veins seem to serve a similar function, albeit slower, at roughly 1 mm/s. That is between 2,000 and 120,000 times slower than animal neurons.
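That ratio at the end is simple arithmetic; here is the check, using the rough speeds quoted above (round figures from the text, not measurements):

```python
# Back-of-the-envelope check of the speed ratio quoted above.
# Speeds in mm/s; these are the rough figures from the text.
plant_vein = 1          # ~1 mm/s, signal speed along plant veins
nerve_slow = 2_000      # 2 m/s, slowest animal nerves
nerve_fast = 120_000    # 120 m/s, fastest mammalian nerve conduction

ratio_low = nerve_slow // plant_vein
ratio_high = nerve_fast // plant_vein
print(f"{ratio_low:,}x to {ratio_high:,}x slower")   # 2,000x to 120,000x slower
```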

Put that together with a statement later in the book. The author had a hint of an idea (one I was toying with as I read): What if we think of the entire plant as a brain? She asked one scientist, who said, "I think you're right. I just don't talk about it." Let's speculate a bit. If you get jabbed in the leg with a pin, you'll react within about a quarter of a second. In the little plant shown in the video, which is about 10 cm across, the signal "I've been cut!" reaches the whole plant in less than two minutes. The "reaction" of the plant is to begin to synthesize noxious chemicals in the leaves, which takes a few hours. From this we can extract a couple of ratios:

  1. We can infer the signaling time between your leg and brain as about 1/30 second. If signaling through the plant took 100 seconds, the ratio is 3,000:1.
  2. Your physical flinch and "Ouch!" begin after about 1/4 second, while chemical synthesis in the plant gets underway in an hour (3,600 sec), for a ratio of 14,400:1.
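Those two ratios check out with a few lines (all the inputs are the round figures from the text):

```python
# Timing figures quoted above, all in seconds (round numbers from the text)
human_signal = 1 / 30    # inferred leg-to-brain signaling time
plant_signal = 100       # wave traversing the whole plant
human_react = 0.25       # flinch and "Ouch!"
plant_react = 3_600      # leaf chemistry change getting underway (an hour)

signal_ratio = round(plant_signal / human_signal)   # 3000
react_ratio = round(plant_react / human_react)      # 14400
```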

If, then, the whole plant is, or contains, a distributed brain, it runs several thousand times more slowly than an animal brain. This is in accord with the rate that twigs grow on many woody plants, compared with the rate of animal motions. Animals move at about "the speed of gravity", by which I mean that rapid animal motions, such as swatting at a fly, happen at speeds similar to that of an object dropped a meter or so. Time-lapse videos of plants either growing or "doing" various things, such as the "reaching" of bean tendrils for something to cling to, show their motions to be hundreds to thousands of times slower than animal motions. It seems plausible that, if plants "think", they do so correspondingly slowly. While we cannot consider plants to have a nervous system, perhaps a term such as "signal conduction system" or "signal transduction system" can be used.

Do they think? Plant "intelligence" has been a fiercely contentious issue for decades, and that doesn't seem to be slowing down. Focusing on just three things: speed of motion, speed of communication, and speed of reaction, I (and, I think, Ms Schlanger) consider plants to be doing most things animals do, but on a time scale around 10,000 times slower. If we learn to talk to plants, and to hearken and understand what they are saying, we'll need enormous patience. Perhaps a translating SI (simulated intelligence) application can craft a signal at a rate the plant can accept, patiently receive its reply, and signal a human (who is doing something else in the meantime, because it could be hours) to come "read" the response. Even if the human then requires several minutes to decide what to say next, to the plant, the signal coming back, through the app, seems to begin almost instantly.

Finally, do plants see? Plants that mimic neighboring species hint that this is so. How can the South American vine Boquila take on the appearance of at least a few dozen other plants, just by growing in the vicinity? More so, if part of this vine is near one kind of plant, and another part is near another, it mimics both! To a lesser extent, mistletoe plants do something similar. One researcher believes the "signal" received by a Boquila plant is not visual, but bacterio-genetic, some kind of genetic signal from the neighboring plant's cloud of symbiont bacteria. All animals and all plants are inhabited by and surrounded by their own microbiome. Each breath we exhale contains members of our microbiome. Your own bacterial "envelope" changes every time you make a new friend and begin spending lots of time with him or her. The author finds a visual hypothesis more parsimonious, and I agree. Plants do have photoreceptors; they are chloroplasts. There are also other colored bodies in plants, in colors other than green. They may receive light as well as reflect it, or they may provide color filters for chloroplasts to detect colored light. The author points out that this is similar to cuttlefish, which have color-blind eyes, yet can still mimic the patterns and colors of the surface they are sitting on, probably because their whole skin surface is covered with photoreceptors that must provide the color signal.

I suggest a "red hat" experiment. Start with a number of plants (at least 12) that are wired to detect stress. Once they have recovered from being wired, the experiment begins. Whenever the person who cares for the plants wears a red hat, that person also snips the end of one leaf on half of the plants, chosen by a prearranged formula; some of the plants are never snipped. Let the interval between snipping incidents be a few days. I conjecture that after a few weeks at most, the plants will all react whenever the caretaker enters wearing the hat, before any snipping is done. This would indicate that the plants detect something visual. The never-snipped plants will likely react differently from the others. However, it is always possible that the caretaker is in a different mental state on "snipping days," and that this produces an airborne chemical signal the plants can detect and react to. I am not sure how to control for that.
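A minimal sketch of the group assignment, assuming 12 plants with a third held back as never-snipped controls (the group sizes and the seeded shuffle are my own placeholders for the "prearranged formula"):

```python
import random

def assign_groups(n_plants=12, n_controls=4, seed=1):
    """Split the wired plants into a snip pool and a never-snipped control group."""
    rng = random.Random(seed)           # fixed seed = prearranged, reproducible
    plants = list(range(n_plants))
    rng.shuffle(plants)
    controls = set(plants[:n_controls])  # these see the red hat but never get snipped
    snip_pool = plants[n_controls:]      # half of these are snipped on each visit
    return controls, snip_pool

controls, pool = assign_groups()
# On each red-hat day, snip an alternating half of the pool:
half_a, half_b = pool[:len(pool) // 2], pool[len(pool) // 2:]
```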

Plants are fascinating, even more so now to me, after reading this book. What a great read!

-----------------------

A couple of quibbles and contentions:

  1. On p 39 the author coined the adjective "Descartian", referring to René Descartes. The adjective "Cartesian" already exists and is easier to say.
  2. On p 156 we read, "In the United States alone, as many as 11,000 farmworkers are fatally poisoned by pesticides each year, and another 385 million are severely poisoned…". 'Scuse me, but the entire US population is about 360 million, of whom two million are farmworkers. The CDC states 10,000-20,000 "poisonings" without saying how many are fatal. Sundry reports are all over the place. One appears to be the author's source for 11,000 fatalities yearly, while another states that 60,000 nonfatal incidents occurred in five years, or 12,000 per year. The author needs to dig a bit deeper.

Tuesday, September 24, 2024

American spiders now?

 kw: blogs, blogging, spider scanning

In recent weeks the number of daily views of this blog settled back to fewer than 100, typically 70-80. Today I noticed a big bump. Checking the 24-hour view, I see that whatever is going on is continuing:


It's about 10pm just now. Things started taking off at 4am. Here are the locations:


The top source is the US! How about that!! For the past year or so, when views skyrocketed, either Singapore or Hong Kong led the charge. Not this time. I wonder if the spider source is using a VPN…






Monday, September 23, 2024

Readable science writing

 kw: book reviews, nonfiction, science, nature, science writing, nature writing, anthologies

Scientific and technical writing is notorious for being unreadable. Fortunately, some articles are well written and readable. Some. For The Best American Science and Nature Writing 2023, edited by Carl Zimmer, the editor and his helpers managed to find twenty articles and essays that are readable and informative, in a range of technical disciplines.

I don't intend to survey the whole book. This is more of an accolade than a full review. I used DALL-E 3 to generate a number of images intended to evoke "science writer." I like this one best. For the sake of this volume, I chose a female writer because 13 of the 20 articles are by females, although one is amidst a transition to outward maleness.

In the Foreword the editor notes that more and more of modern science and nature writing has a political flavor. This is not surprising. Many of the articles in this volume are based in research related to climate change, which has been politicized to the point that few granting entities are willing to fund work that is not explicitly supportive of the "mainstream view," no matter that it is very far from being the "settled science" that certain loud voices claim. While I am at it, I must say that two of the articles are not about science at all but are personal testimonies regarding the consequences of the political climate, related primarily to issues of gender and morality. While they are voices that deserve to be heard, they have no place in this volume.

My second favorite generated image is this one, of a naturalist in the field. Boots on the ground are sorely needed to winkle out all the effects of the increase of both temperature and atmospheric volatility. Articles about the changing populations of certain butterfly or frog species evoke the scientific spirit: "Why has this changed? and how?" 

Science is conservative, in the sense that new things, new ideas, new hypotheses must prove themselves. It is unscientific to accept a new "explanation" that lacks solid evidence. Environmental stewardship is also a conservative value. Social conservatism is different from political conservatism. Let us remember that prior to the Twentieth Century the conservatives (for example, the Tories in England and the Democrats in the US) were pro-royalty, pro-aristocracy, pro-feudalism, and pro-slavery. The constitutional liberals (who these days are called "conservatives" by leftists), particularly the leading Christians, led the way to abolish slavery. Their contention that all the "races" were equally human was based on scientific studies, on scientific conservatism, as much as on religious grounds. We are amidst a struggle to determine where the boundaries lie between scientific study of climate and political proclamations based on sketchy understanding of science.

Scientists would do well to learn better writing. For the moment, those who write well deserve appreciation, and this volume provides a little of that.

Wednesday, September 18, 2024

Not exactly a poison primer

 kw: book reviews, nonfiction, poisons, poisoners, women, history, short biographies

After reading The League of Lady Poisoners: Illustrated True Stories of Dangerous Women, written and illustrated by Lisa Perrin, I've become a little leery of having tea with a new acquaintance. These days there are more toxins than ever before that are odorless and tasteless, and some have long-delayed effects.

This book is a Wild Card selection. Although I usually peruse the Science section and a few others, including Science Fiction and Short Stories, I'll often poke around other Nonfiction areas of the library to see what might be interesting. I couldn't pass this one up.

Poison is considered a woman's weapon, but more poisonings are actually committed by men. However, since by far most murders are committed by men, it stands to reason that even a less-common method men use would outnumber the primary method women use to be rid of an abusive spouse, terrifying neighbor, or inconvenient relative.

Most of the women presented here can be counted as serial killers. Having succeeded in one murder, a person finds that a line has been crossed, and follow-on killings result, often to cover up the first. One of the earliest was Cleopatra, who poisoned one of her brothers and tested poisons' effects on prisoners. She eventually poisoned herself and her two handmaids (the asp story is a fabrication: death by snakebite is agonizing; she more likely took a potion laced with plenty of opium, to die painlessly).

Much more recently, possibly the most prolific poisoner was "Jolly Jane" Toppan. She confessed to 34 killings, but it is likely the total number of her victims exceeds 100. She worked as a nurse, which gave her access to her favorite toxins. She was almost unique in her obsession with watching others die by poisoning. Others were prolific as providers of poisons to others, such as a midwife in Nagyrév, Hungary; in the early 1900's she supplied arsenic to abused women, who collectively were called the Angel-Makers in later news accounts. Forty men are known to have died, making this practically an epidemic in the village. There may have been more.

Modern forensic methods can detect almost all known toxins. Arsenic is easy to test for. Fortunately, so is Fentanyl, which is taking tens of thousands of lives yearly in the US, nearly all by overdose. It is not known (and hardly anyone is looking) how many Fentanyl deaths are in fact deliberate.

Although one section describes common poisons and common poisonous plants and animals, nobody will gain knowledge how to use poisons from this book. The author's aim is the stories of the women themselves, and the sociology of their surroundings. In many cases it is easy to conclude, "Yes, so-and-so deserved to be killed." All too frequently, however, a first killing led to others. Killing is the ultimate slippery slope.

The author is a gifted illustrator. I've included just one random bit of her art. The main color used throughout is a sickly green, the classical hue of poison. Most of the stories are illustrated by a full-page portrait. These are large and detailed (the pages are 7"x10"), so reproducing one here would probably violate the principle of fair use.

You can see much more of her art at her website.

As good as the writing is, and as fascinating as the stories are, with time I expect the lingering dread to fade. I don't want to go through life fearing new acquaintances.

Saturday, September 14, 2024

Will our children become cyborgs?

 kw: book reviews, nonfiction, futurism, artificial intelligence, the singularity

I started these book reviews just after The Singularity is Near, by Ray Kurzweil, was published. I am not sure I read the book; I think I read some reviews, and an article or two by Kurzweil about the subject. Nineteen years have passed, and Ray Kurzweil has doubled down on his forecasts with The Singularity is Nearer: When We Merge With AI.

As I recall, in 2005 The Singularity referred to a time when the author expected certain technological and sociological trends to force a merging of human and machine intelligences, and he forecast this to occur about the year 2045. The new book tinkers with the dates a bit, but not by much. One notable change: in 2005 he considered the compute capacity of the human brain to be 10¹⁶ calculations per second, with memory about 10¹³ bits (~1.25 TBytes: woefully small by modern estimates). His current estimate is 10¹⁴ calculations per second, and I don't recall that he mentioned memory capacity at all.

I am encouraged that Kurzweil doesn't see us at odds with AI, or AI at odds with us, but as eternal collaborators. I'll be 98 in 2045, and I am likely to be still alive. Time will tell.

I am an algorithmicist. I built much of my career on improving the efficiency of computer code to squeeze out maximum calculations-per-second from a piece of hardware. But a "calculation" is a pretty slippery item. In the 1960's and 1970's mainframe computers were rated in MIPS, Millions of Instructions Per Second. Various benchmark programs were used to "exercise" a machine to measure this, because it doesn't correlate cleanly with cycle time. Some instructions (e.g., "Move Word#348762 to Register 1") might consume one clock cycle, while others (e.g., "Add Register 1 to Register 2 and store the result in Register 3") might require six cycles; and the calculation wasn't really finished until another instruction put the result from the Register back in a memory location. The 1970's saw a changeover from MIPS to MFLOPS, or Millions of FLoating-point Operations Per Second, to measure machine power. Supercomputers of the day, such as the CDC 6600 and the Cray-1, could perform "math" operations such as addition in a single cycle, so a machine with a 1 MHz clock could approach a rate of 1 MFLOPS (Note: The Cray-1 and later Cray machines used "pipeline processors", a limited kind of parallel processing, to finesse a rate of 1-FLOP-per-cycle. The Cray-1 achieved 100 MFLOPS).

The middle of the book is filled with little charts explaining all the trends that Kurzweil sees coming together. This is the central chart:


This is from page 165. Note that the vertical scale is logarithmic; each scale division is 10x as great as the one below. The units are FLOPS/$, though spelled out in words, because before 1970 the FLOPS rate had to be estimated from MIPS ratings. Also, the last two points are for the Google TPU (Tensor Processing Unit), a takeoff of the GPU (Graphics Processing Unit), which is specialized for the extremely broad-scale, massively parallel learning needed to train programs such as ChatGPT or Gemini. One cannot own a TPU; they can only be leased, so some figuring had to be done to make the data points "sit" in a reasonable spot on the chart. The dollars are all normalized to 2023.

The trend I get from these points (an exponential line from the second point through the next-to-last) is 52.23 doublings in 80 years, or a doubling every 18.4 months. That is also a factor of ten every five years plus a month (61 months). Of course, the jitter of the charted line indicates that progress isn't all that smooth, but the idea is clear. Whatever is happening today can be done about ten times as fast in five years, or one can do ten times as much in the same time, five years from now.
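The doubling arithmetic is easy to verify; only the 52.23 doublings over 80 years comes from the fitted trend, and the rest follows:

```python
import math

doublings = 52.23                 # fitted trend: doublings over the 80-year span
months = 80 * 12                  # 960 months

doubling_time = months / doublings             # ~18.4 months per doubling
tenfold_time = doubling_time * math.log2(10)   # ~61 months per factor of ten

print(f"doubling every {doubling_time:.1f} months, 10x every {tenfold_time:.0f} months")
```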

When I was in graduate school, about 1980 (I was several years older than my classmates) we were asked to write an essay on how we would determine the progress of plate tectonics back in time "about 2 billion years", and whether computer modeling could help. I outlined the scale of simulation that would be needed, and stated that running the model to simulate Earth history for a couple billion years would take at least ten years of computer time on the best processors of the day. I suggested that it would be best to take our time to prepare a good piece of simulation software, but to "wait ten years, until a machine will be available that is able to run 100 times as fast for an economical cost". I didn't get a good grade. As it turned out, from 1978 to 1988 the trend of "fastest machine" is seen to be flat on the chart above! It took another five or six years for the trend to catch up after that period of doldrums. Now you can view a video of the motions of tectonic plates around the globe over the past 1.8 billion years, and the simulation can be run on most laptops.

So, I get Kurzweil's point. Machines are getting faster and cheaper and perhaps one day there will be a computer system that is smaller than the average Target store, which can hold, not just the simulation of one person's brain, but the whole human race. However, as I said, I am an algorithmicist. What is the algorithm of consciousness? Kurzweil says at least a couple of times that if 10¹⁴ calculations per second per brain turn out not to be enough, "soon" after that there will be enough compute capacity to simulate the protein activity in every neuron in a brain, and later on enough to simulate all of humanity, so that the whole human race could become a brain in a box.

Of course, that isn't the future he envisions for us. He prefers that we not be replaced by hardware, but augmented. Brain-to-machine interfaces are coming. He has no more clue than I do what scale of intervention is needed in a brain so it can handle the bandwidth of data transfer needed to, say, double the "native" compute capacity of a person, let alone increase it by a factor of 10, 100, ... or a billion. At what point does the presence of a human brain in the machine even matter? I suspect even a doubling of our compute capacity is not possible.

Let's step back a bit. In an early chapter we learn a little about the cerebellum, which contains 3/4 of our neurons and more than 3/4 of the neuron-to-neuron connectivity of the total brain, all in 10% of its volume. With hardly a by-your-leave, Kurzweil goes on to other things, but I think this is utterly critical. The cerebellum allows our brain to interact with the world. It runs the body and mediates all our senses. Not just the "classic 5", but all of the 20 or more senses that are needed to keep a human body running smoothly. Further, I see nothing about the limbic system; it's only 1% of the brain, but without it we cannot decide anything. It is the core of what it "feels like to be human," among other crucial functions. Everything we do and everything we experience has an emotional component.

Until we fully understand what the 10% and the 1% are doing, it makes little sense to model the other 89% of the brain's mass. Can AI help us understand consciousness? I claim, no, Hell no, never in a million years. It will take a lot of HUMAN work to crack that nut. AI is just a tool. It cannot transcend its training databank.

At this point, I'll end by saying, I find Ray Kurzweil's writing very engaging but not compelling. I enjoyed reading the book, and not just so I could pooh-pooh things. His ideas are worth considering, and taking note of. Some of his forecasts could be right on. But I suspect the ultimate one, of actually merging with AI, of all of us becoming effectively cyborgs?… No way.

Wednesday, September 04, 2024

If it is artificial, is it intelligence?

 kw: book reviews, nonfiction, computer science, artificial intelligence, simulated intelligence, surveys

Before "hacker" meant "computer-using criminal", it meant "enthusiast". Many early hackers spent their time obsessively doing one of two things: either writing a new operating system, or trying to write software to do "human stuff". I have been hearing about "AI", "artificial intelligence", since the term was coined by John McCarthy when I was nine years old. Two years later the third book in the Danny Dunn series was Danny Dunn and the Homework Machine (by Abrashkin and Williams). It featured a desk-sized computer with a novel design, created by a family friend, Professor Bullfinch. A decade later (1968) I had the chance to learn FORTRAN, which kicked off a lifelong hobby-turned-profession. The computer I learned on was the first desk-sized "minicomputer", the IBM 1130.

ENIAC and other early "elephants" were called "electronic brains" almost from the beginning. I learned how CPU's (central processing units) worked, and even though the operation of biological brains was not so well known yet, it was clear to me that computers worked in an utterly different way.

Fast-forward a few decades. Some 20 years ago a "last page" article in Scientific American described the newest supercomputer, claiming that it was equivalent to a human brain, in memory size, component count, and computing speed. Where it did not match the brain was the amount of power it needed: three million watts. Our brains use about 20 watts. It soon became evident that the metric of brain complexity is not the number of neurons, but the number of synapses, plus other connections between the neurons and the glia and other "support cells". In this, that supercomputer was woefully lacking. This is still true.

Tell me, does this illustration show one being or two?

My generation and all those following have been influenced by I, Robot, and by Forbidden Planet, and by other popular depictions of brainy machines. We think of a "robot" as a mechanical man, a humanoid mechanism that is self-contained.

In order to behave and respond the way one of the robots in I, Robot does, a humanoid mechanism would need an intimate connection with a room full of equipment like the supercomputer in the background of the image (that part of the image is real). When the Watson supercomputer played Jeopardy (and won) a few years ago, what the audience didn't see was the roomful of equipment offstage. And that was just what was running the trained "model" and its databases and the voice interface. The equipment used for training Watson was much larger, and kept a couple of hundred computer scientists, linguists, and other experts occupied for many months.

Assuming Moore's Law continues to double circuit complexity (per cubic cm) each two years, it will take sixteen doublings, or 32 years, to get the supercomputer shown into a unit that fits inside a robot of the size shown. And power requirements will have to drop from millions of watts to 100 watts or less. And this is still not a machine that has the brain power of a human. We don't know what that would take.
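The estimate above is just compounding (the "sixteen doublings" and two-year cadence are the assumptions stated in the text):

```python
doublings = 16          # doublings of circuit density needed (text's assumption)
years_per_doubling = 2  # Moore's Law cadence assumed in the text

shrink_factor = 2 ** doublings          # density gain: 2^16 = 65,536x
years_needed = doublings * years_per_doubling
print(shrink_factor, years_needed)      # 65536 32
```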

All this is to introduce a fascinating book, The Mind's Mirror: Risk and Reward in the Age of AI, by Daniela Rus and Gregory Mone. While Professor Rus is a strong proponent of AI and of its further development, she is more clear-headed than the authors of most books on the subject. In particular, she sees more clearly than most the risks, the dangers, of faddish over-promotion and of rushing blindly into an "AI Future".

At the outset, in Chapter 1, "Speed", she clearly emphasizes that products such as ChatGPT, DALL-E, and Gemini are tools, and particularly that their "expertise" is confined to the material that was used to train them. She writes that it might have been possible to get one of the LLM's (large language models) to write a chapter of the book, but it "would not really represent my ideas. It would be a carefully selected string of text modeled on trillions of words spread across the web. My goal is to share my knowledge, expertise, passion, and fears with regard to AI, not the global average of all such ideas. And if I want to do that, I cannot rely on an AI chat assistant." (p. 11) In a number of places she calls AI software "intelligent tools."

She continues the theme, writing of knowledge, insight, and creativity (Chapters 2 – 4), saying at one point, "They are masters of cliché." (p. 60) Critical analysis skills that we used to learn were based on following the progression from Data to Information to Knowledge, and then to Insight and Wisdom. (Does any school still teach this?) All of these together add up to comprehension. Does anyone have the slightest idea how to bring about Artificial Comprehension?

None of the software tools has shown the slightest ability to step outside the bounds of their training data. If ChatGPT "hallucinates", it is not rendering new knowledge, but remixing biased or deceptive content from its poorly curated training set, perhaps with a dollop of truthful "old news" in the mix. This illustration of a LinkedIn post I wrote last year shows the point.

The colors are significant:

  • Green = correct or true
  • Lighter orange = incomplete or outdated
  • Darker orange and red = false to malicious, even evil
  • Blue = AI training data, partly "good", partly "poor", partly "evil"—we hope not too evil

The three lavender blobs at right are varieties of human experience, including someone creating new "good stuff", poking out to the right and increasing the store of published knowledge. I kept those blobs far from the training data on purpose. Training is typically done with "old hat" material.

This book has a rare admission that "it's essential to remember that the nature and function of AI parameters and human synapses are vastly different." (p. 106) We don't know all that well the amount of processing going on within a neuron, nor even if a synapse is more than just a signal-passing "gate" or is something more capable. 

And though the matter of embodiment is touched upon, I was disappointed that there wasn't more on this. Perhaps you've heard that the human brain has "100 billion neurons". The actual number is 85-90 billion, and 80% of them are in the cerebellum, the "little brain" at the back, above the brain stem. We have a little inkling that the sorts of processing that cerebellar neurons perform are different from those in the cerebral neurons (the famous "gray matter"). Clearly, when 80% of the neurons make up only 10% of the brain's total volume, these neurons are smaller. The cerebellum "runs the body", and handles the traffic between the body and the cerebral cortex, the "upper brain", where thinking (most likely) occurs. Embodiment is clearly extremely important for a brain to function properly. It's being glossed over by most workers in AI.

A special chapter, inserted between Chapters 11 and 12, is "A Business Interlude: The AI Implementation Playbook". An entrepreneur or business leader who wants either to initiate a venture that strongly relies on these tools, or to add them to the company bag of tricks, would do well to extract these 19 pages from the book and dig into them. They include the key steps to take and the crucial questions to ask (including "Will AI be cost-effective compared to what I am doing now?"). A key component of any team tasked with making or transitioning a business to use AI is "bilinguals", people who are well versed in the business and also in computer science and AI in particular. This is analogous to a key period in my career (not with AI, though): Because I had studied all the sciences in college, and had a few degrees, and I also was a highly competent coder, I was a valued "bilingual", getting scientific software to work well for the scientists at the research facility where I worked. Bottom line: You need the right people to make appropriate use of AI tools in your company.

The book includes a major section on risks and the defenses we need. Whether some future AI system will "take over" or subjugate us is a far-off threat. It is not to be ignored, but the front-burner issues are what humans will do with AI tools that we need to be wary of. Something my mother said comes back to me. I was about to go into a new area of the desert for a solo hike. She said she was worried for my safety. I said, "Oh, I know how to avoid rattlesnakes." She replied, "I am worried about rattle-people!"

Let's keep that in mind. In my experience, rattle-people are a big risk whenever any new tool is created. What's one of the biggest uses of generative art-AI? Coupling it with Photoshop to produce deep fake pictures. Deep fake movies are a bit more difficult and costly just now, but just wait…and not very long! Soon, it will take a powerful AI tool to detect deep fake pix and vids, and how will we know that the AI detective tool is reliable and truthful?

A proverb from my coding days: "If we built houses the way most software is written, the next woodpecker to come along could destroy civilization." Most of us old-timers know a dozen ways to hack into a system, but the easiest is "social engineering," finding someone to trick into revealing login credentials or other information to help the hacker get into the system. Now social engineers are using AI tools to help them write convincing scripts to fool people, whether through scam phone calls, phishing emails, or smishing SMS (or WhatsApp or Line or FB, etc.) messages.

[You can take this to the voting booth: Effective right now, any TV or radio political ad, particularly the attack ads, will have AI-generated content. If you want to know a candidate, go to the candidate's web site and look for actual policy statements (NOT promises!).]

A final matter I wish Professor Rus had included: Human decision making requires emotion. Persons who have suffered the kind of brain damage that "disconnects" their emotions become unable to make a decision. Somehow, we have to like something in order to choose it. Where "liking" comes from, we haven't a clue. But it is essential!

There is much more I could go into, but this is enough, I hope, to whet your appetite to get the book and read it.

A final word that I didn't want to bring into the earlier discussions. I don't like the term Artificial Intelligence, nor AI. I much prefer Simulated Intelligence, abbreviated SI. It is unfortunate that, in the world of science, SI refers to the Système International, the system of units such as meter, kilogram, and second, used to define quantities in mathematical physics. Perhaps someone who reads this can come up with another moniker that makes it clear that machine intelligence isn't really intelligent yet.