Wednesday, September 18, 2024

Not exactly a poison primer

 kw: book reviews, nonfiction, poisons, poisoners, women, history, short biographies

After reading The League of Lady Poisoners: Illustrated True Stories of Dangerous Women, written and illustrated by Lisa Perrin, I've become a little leery of having tea with a new acquaintance. These days there are more odorless, tasteless toxins than ever before, and some have long-delayed effects.

This book is a Wild Card selection. Although I usually peruse the Science section and a few others, including Science Fiction and Short Stories, I'll often poke around other Nonfiction areas of the library to see what might be interesting. I couldn't pass this one up.

Poison is considered a woman's weapon, but more poisonings are actually committed by men. However, since men commit by far the most murders, it stands to reason that even a method men use less often would outnumber the primary method women use to be rid of an abusive spouse, a terrifying neighbor, or an inconvenient relative.

Most of the women presented here can be counted as serial killers. Having succeeded in one murder, a person finds that a line has been crossed, and follow-on killings result, often to cover up the first. One of the earliest was Cleopatra, who poisoned one of her brothers, and who tested poisons' effects on prisoners. She eventually poisoned herself and her two handmaids (the asp story is a fabrication; death by snakebite is agonizing, so she would have mixed a poison containing lots of opium, to die painlessly).

Much more recently, possibly the most prolific poisoner was "Jolly Jane" Toppan. She confessed to 34 killings, but it is likely the total number of her victims exceeds 100. She worked as a nurse, which gave her access to her favorite toxins. She was almost unique in her obsession with watching her victims die by poisoning. Other women were prolific as suppliers of poison, such as a midwife in Nagyrév, Hungary, who in the early 1900's supplied arsenic to abused women, collectively called the Angel-Makers in later news accounts. Forty men are known to have died, making this practically an epidemic in the village. There may have been more.

Modern forensic methods can detect almost all known toxins. Arsenic is easy to test for. Fortunately, so is fentanyl, which takes tens of thousands of lives yearly in the US, nearly all by overdose. It is not known (and hardly anyone is looking) how many fentanyl deaths are deliberate rather than accidental.

Although one section describes common poisons and common poisonous plants and animals, nobody will learn how to use poisons from this book. The author's aim is to tell the stories of the women themselves and the sociology of their surroundings. In many cases it is easy to conclude, "Yes, so-and-so deserved to be killed." All too frequently, however, a first killing led to others. Killing is the ultimate slippery slope.

The author is a gifted illustrator. I've included just one random bit of her art. The main color used throughout is a sickly green, the classical hue of poison. Most of the stories are illustrated with a full-page portrait. These are large and detailed (the pages are 7"x10"), so reproducing one here would probably violate the principle of fair use.

You can see much more of her art at her website.

As good as the writing is, and as fascinating as the stories are, with time I expect the lingering dread to fade. I don't want to go through life fearing new acquaintances.

Saturday, September 14, 2024

Will our children become cyborgs?

 kw: book reviews, nonfiction, futurism, artificial intelligence, the singularity

I started these book reviews just after The Singularity is Near, by Ray Kurzweil, was published. I am not sure I read the book; I think I read some reviews, and an article or two by Kurzweil about the subject. Nineteen years have passed, and Ray Kurzweil has doubled down on his forecasts with The Singularity is Nearer: When We Merge With AI.

As I recall, in 2005 The Singularity referred to a time when the author expected certain technological and sociological trends to force a merging of human and machine intelligences, and he forecast this to occur about the year 2045. The new book tinkers with the dates a bit, but not by much. One notable change: in 2005 he considered the compute capacity of the human brain to be 10^16 calculations per second, with memory about 10^13 bits (a bit over a terabyte: woefully small by modern estimates). His current estimate is 10^14 calculations per second, a hundredfold reduction, and I don't recall that he mentioned memory capacity at all.

I am encouraged that Kurzweil doesn't see us at odds with AI, or AI at odds with us; he sees us as eternal collaborators. I'll be 98 in 2045, and I am likely to still be alive. Time will tell.

I am an algorithmicist. I built much of my career on improving the efficiency of computer code to squeeze the maximum calculations-per-second out of a piece of hardware. But a "calculation" is a pretty slippery item. In the 1960's and 1970's mainframe computers were rated in MIPS, Millions of Instructions Per Second. Various benchmark programs were used to "exercise" a machine to measure this, because it doesn't correlate cleanly with cycle time. Some instructions (e.g., "Move Word#348762 to Register 1") might consume one clock cycle, while others (e.g., "Add Register 1 to Register 2 and store the result in Register 3") might require six cycles; and the calculation wasn't really finished until another instruction put the result from the register back in a memory location. The 1970's saw a changeover from MIPS to MFLOPS, or Millions of FLoating-point Operations Per Second, to measure machine power. Supercomputers of the day, such as the CDC 6600 and the Cray-1, could perform "math" operations such as addition in a single cycle, so a machine with a clock rate of 1 MHz could approach a rate of 1 MFLOPS. (Note: The Cray-1 and later Cray machines used "pipeline processors", a limited kind of parallel processing, to finesse a rate of one FLOP per cycle. The Cray-1 achieved 100 MFLOPS.)
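
To make that concrete, here is a minimal sketch in Python of how an effective MIPS rating falls out of a benchmark's instruction mix. The one-cycle move and six-cycle add come from my examples above; the two-cycle store and the mix percentages are invented for illustration, not figures from any real machine.

    clock_hz = 1_000_000  # a hypothetical 1 MHz machine

    # Cycles per instruction type: move and add are from the examples above;
    # the store figure is invented for illustration.
    cycles = {"move": 1, "add": 6, "store": 2}

    # Invented instruction mix for a hypothetical benchmark program.
    mix = {"move": 0.5, "add": 0.3, "store": 0.2}

    avg_cycles = sum(mix[op] * cycles[op] for op in mix)  # 0.5 + 1.8 + 0.4 = 2.7
    mips = clock_hz / avg_cycles / 1_000_000

    print(f"average cycles per instruction: {avg_cycles:.1f}")
    print(f"effective rate: {mips:.2f} MIPS")  # about 0.37 MIPS from a 1 MHz clock

Change the mix and the rating changes with it, which is why whole suites of benchmark programs, not clock speed, were used to rate machines.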

The middle of the book is filled with little charts explaining all the trends that Kurzweil sees coming together. This is the central chart:

[Chart from page 165: price-performance of computation, in computations per second per constant 2023 dollar, plotted on a logarithmic scale against time.]

This is from page 165. Note that the vertical scale is logarithmic; each scale division is 10x as great as the one below. The units are FLOPS/$, but spelled out in words on the chart, because before 1970 the FLOPS rate had to be estimated from MIPS ratings. Also, the last two points are for the Google TPU (Tensor Processing Unit), a derivative of the GPU (Graphics Processing Unit) that is specialized for the massively parallel learning needed to train programs such as ChatGPT or Gemini. One cannot buy a TPU, only lease time on one, so some estimation was needed to make those data points "sit" in a reasonable spot on the chart. The dollars are all normalized to 2023.

The trend I get from these points (an exponential line from the second point through the next-to-last) is 52.23 doublings in 80 years, or a doubling every 18.4 months. That is also a factor of ten every five years and a month (61 months). Of course, the jitter of the charted line indicates that progress isn't all that smooth, but the idea is clear. Whatever is happening today can be done about ten times as fast five years from now, or one can do ten times as much in the same time.
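
For anyone who wants to check that arithmetic, here is the same calculation as a quick Python sketch (the 52.23 doublings over 80 years are the figures I read off the chart):

    import math

    doublings = 52.23  # exponential fit, second point through next-to-last
    years = 80

    months_per_doubling = years * 12 / doublings          # ~18.4 months
    months_per_10x = months_per_doubling * math.log2(10)  # ~61 months

    print(f"one doubling every {months_per_doubling:.1f} months")
    print(f"a factor of ten every {months_per_10x:.0f} months (five years and a month)")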

When I was in graduate school, about 1980 (I was several years older than my classmates), we were asked to write an essay on how we would determine the progress of plate tectonics back in time "about 2 billion years", and whether computer modeling could help. I outlined the scale of simulation that would be needed, and stated that running the model to simulate Earth history for a couple billion years would take at least ten years of computer time on the best processors of the day. I suggested that it would be best to take our time preparing a good piece of simulation software, but to "wait ten years, until a machine will be available that is able to run 100 times as fast for an economical cost". I didn't get a good grade. As it turned out, from 1978 to 1988 the trend of "fastest machine" is flat on the chart above! It took another five or six years for the trend to catch up after that period of doldrums. Now you can view a video of the motions of tectonic plates around the globe over the past 1.8 billion years, and the simulation can be run on most laptops.

So, I get Kurzweil's point. Machines are getting faster and cheaper, and perhaps one day there will be a computer system smaller than the average Target store that can hold not just the simulation of one person's brain, but the whole human race. However, as I said, I am an algorithmicist. What is the algorithm of consciousness? Kurzweil says at least a couple of times that if 10^14 calculations per second per brain turn out not to be enough, "soon" after that there will be enough compute capacity to simulate the protein activity in every neuron in a brain, and later on enough to simulate all of humanity, so that the whole human race could become a brain in a box.

Of course, that isn't the future he envisions for us. He prefers that we not be replaced by hardware, but augmented. Brain-to-machine interfaces are coming. He has no more clue than I do what scale of intervention is needed in a brain so it can handle the bandwidth of data transfer needed to, say, double the "native" compute capacity of a person, let alone increase it by a factor of 10, 100, ... or a billion. At what point does the presence of a human brain in the machine even matter? I suspect even a doubling of our compute capacity is not possible.

Let's step back a bit. In an early chapter we learn a little about the cerebellum, which contains 3/4 of our neurons and more than 3/4 of the neuron-to-neuron connectivity of the total brain, all in 10% of its volume. With hardly a by-your-leave, Kurzweil goes on to other things, but I think this is utterly critical. The cerebellum allows our brain to interact with the world. It runs the body and mediates all our senses: not just the "classic 5", but all of the 20 or more senses needed to keep a human body running smoothly. Further, I see nothing about the limbic system; it's only 1% of the brain, but without it we cannot decide anything. It is the core of what it "feels like to be human," among other crucial functions. Everything we do and everything we experience has an emotional component.

Until we fully understand what the 10% and the 1% are doing, it makes little sense to model the other 89% of the brain's mass. Can AI help us understand consciousness? I claim, no, Hell no, never in a million years. It will take a lot of HUMAN work to crack that nut. AI is just a tool. It cannot transcend its training databank.

At this point, I'll end by saying, I find Ray Kurzweil's writing very engaging but not compelling. I enjoyed reading the book, and not just so I could pooh-pooh things. His ideas are worth considering, and taking note of. Some of his forecasts could be right on. But I suspect the ultimate one, of actually merging with AI, of all of us becoming effectively cyborgs?… No way.

Wednesday, September 04, 2024

If it is artificial, is it intelligence?

 kw: book reviews, nonfiction, computer science, artificial intelligence, simulated intelligence, surveys

Before "hacker" meant "computer-using criminal", it meant "enthusiast". Many early hackers spent their time obsessively doing one of two things: either writing a new operating system, or trying to write software to do "human stuff". I have been hearing about "AI", "artificial intelligence", since the term was coined by John McCarthy (in the proposal, co-signed by Claude Shannon and others, for the 1956 Dartmouth workshop) when I was nine years old. Two years later the third book in the Danny Dunn series was Danny Dunn and the Homework Machine (by Abrashkin and Williams). It featured a desk-sized computer with a novel design, created by a family friend, Professor Bullfinch. A decade later (1968) I had the chance to learn FORTRAN, which kicked off a lifelong hobby-turned-profession. The computer I learned on was the first desk-sized "minicomputer", the IBM 1130.

ENIAC and other early "elephants" were called "electronic brains" almost from the beginning. I learned how CPU's (central processing units) worked, and even though the operation of biological brains was not so well known yet, it was clear to me that computers worked in an utterly different way.

Fast-forward a few decades. Some 20 years ago a "last page" article in Scientific American described the newest supercomputer, claiming that it was equivalent to a human brain in memory size, component count, and computing speed. Where it did not match the brain was the amount of power it needed: three million watts. Our brains use about 20 watts, a gap of 150,000 times. It soon became evident that the metric of brain complexity is not the number of neurons, but the number of synapses, plus other connections between the neurons and the glia and other "support cells". In this, that supercomputer was woefully lacking. This is still true.

Tell me, does this illustration show one being or two?

My generation and all those following have been influenced by I, Robot, and by Forbidden Planet, and by other popular depictions of brainy machines. We think of a "robot" as a mechanical man, a humanoid mechanism that is self-contained.

In order to behave and respond the way one of the robots in I, Robot does, a humanoid mechanism would need an intimate connection with a room full of equipment like the supercomputer in the background of the image (that part of the image is real). When the Watson supercomputer played Jeopardy! (and won) in 2011, what the audience didn't see was the roomful of equipment offstage. And that was just what was running the trained "model", its databases, and the voice interface. The equipment used for training Watson was much larger, and kept a couple of hundred computer scientists, linguists, and other experts occupied for many months.

Assuming Moore's Law continues to double circuit complexity (per cubic cm) every two years, it will take sixteen doublings (a factor of about 65,000), or 32 years, to shrink the supercomputer shown into a unit that fits inside a robot of the size shown. And power requirements will have to drop from millions of watts to 100 watts or less. And this is still not a machine with the brain power of a human. We don't know what that would take.
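
Sketched in Python, the arithmetic behind that estimate (the three-million-watt figure is from the supercomputer mentioned earlier; all of these are order-of-magnitude assumptions, not specifications):

    doublings = 16
    years = doublings * 2          # one doubling per two years -> 32 years
    density_gain = 2 ** doublings  # 65,536x more circuitry per cubic cm

    watts_now = 3_000_000     # a room-scale supercomputer (rough figure)
    watts_target = 100        # what a self-contained robot might carry
    power_gap = watts_now / watts_target  # a 30,000x reduction needed

    print(f"{density_gain:,}x density in {years} years; power must drop {power_gap:,.0f}x")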

All this is to introduce a fascinating book, The Mind's Mirror: Risk and Reward in the Age of AI, by Daniela Rus and Gregory Mone. While Professor Rus is a strong proponent of AI and of its further development, she is more clear-headed than the authors of most books on the subject. In particular, she sees more clearly than most the risks, the dangers, of faddish over-promotion and of rushing blindly into an "AI Future".

At the outset, in Chapter 1, "Speed", she clearly emphasizes that products such as ChatGPT, DALL-E, and Gemini are tools, and particularly that their "expertise" is confined to the material that was used to train them. She writes that it might have been possible to get one of the LLM's (large language models) to write a chapter of the book, but it "would not really represent my ideas. It would be a carefully selected string of text modeled on trillions of words spread across the web. My goal is to share my knowledge, expertise, passion, and fears with regard to AI, not the global average of all such ideas. And if I want to do that, I cannot rely on an AI chat assistant." (p. 11) In a number of places she calls AI software "intelligent tools."

She continues the theme, writing of knowledge, insight, and creativity (Chapters 2–4), saying at one point, "They are masters of cliché." (p. 60) Critical analysis skills that we used to learn were based on following the progression from Data to Information to Knowledge, and then to Insight and Wisdom (does any school still teach this?). All of these together add up to comprehension. Does anyone have the slightest idea how to bring about Artificial Comprehension?

None of these software tools has shown the slightest ability to step outside the bounds of its training data. If ChatGPT "hallucinates", it is not rendering new knowledge, but remixing biased or deceptive content from its poorly curated training set, perhaps with a dollop of truthful "old news" in the mix. This illustration, from a LinkedIn post I wrote last year, shows the point.

The colors are significant:

  • Green = correct or true
  • Lighter orange = incomplete or outdated
  • Darker orange and red = false to malicious, even evil
  • Blue = AI training data, partly "good", partly "poor", partly "evil"—we hope not too evil

The three lavender blobs at right are varieties of human experience, including someone creating new "good stuff", poking out to the right and increasing the store of published knowledge. I kept those blobs far from the training data on purpose: training is typically done with "old hat" material.

This book has a rare admission that "it's essential to remember that the nature and function of AI parameters and human synapses are vastly different." (p. 106) We don't know all that well how much processing goes on within a neuron, nor even whether a synapse is merely a signal-passing "gate" or something more capable.

And though the matter of embodiment is touched upon, I was disappointed that there wasn't more on it. Perhaps you've heard that the human brain has "100 billion neurons". The actual number is 85-90 billion, and 80% of them are in the cerebellum, the "little brain" at the back, above the brain stem. We have an inkling that the sorts of processing cerebellar neurons perform differ from those of cerebral neurons (the famous "gray matter"). Clearly, when 80% of the neurons make up only 10% of the brain's total volume, these neurons are smaller. The cerebellum "runs the body" and handles the traffic between the body and the cerebral cortex, the "upper brain", where thinking (most likely) occurs. Embodiment is clearly extremely important for a brain to function properly, and it's being glossed over by most workers in AI.

A special chapter between Chapters 11 and 12 is "A Business Interlude: The AI Implementation Playbook". An entrepreneur or business leader who wants either to launch a venture that relies heavily on these tools or to add them to the company's bag of tricks would do well to extract these 19 pages from the book and dig into them. They include the key steps to take and the crucial questions to ask (including "Will AI be cost-effective compared to what I am doing now?"). A key component of any team tasked with building around AI or transitioning a business to use it is "bilinguals": people who are well versed in the business and also in computer science, and in AI in particular. This is analogous to a key period in my career (not with AI, though): because I had studied all the sciences in college, held a few degrees, and was also a highly competent coder, I was a valued "bilingual", getting scientific software to work well for the scientists at the research facility where I worked. Bottom line: you need the right people to make appropriate use of AI tools in your company.

The book includes a major section on risks and the defenses we need. Whether some future AI system will "take over" or subjugate us is a far-off threat. It is not to be ignored, but the front-burner issue is what humans will do with AI tools; that is what we need to be wary of. Something my mother said comes back to me. I was about to go into a new area of the desert for a solo hike. She said she was worried for my safety. I said, "Oh, I know how to avoid rattlesnakes." She replied, "I am worried about rattle-people!"

Let's keep that in mind. In my experience, rattle-people are a big risk whenever any new tool is created. What's one of the biggest uses of generative art-AI? Coupling it with Photoshop to produce deep fake pictures. Deep fake movies are a bit more difficult and costly just now, but just wait…and not very long! Soon, it will take a powerful AI tool to detect deep fake pix and vids, and how will we know that the AI detective tool is reliable and truthful?

A proverb from my coding days: "If we built houses the way most software is written, the next woodpecker to come along could destroy civilization." Most of us old-timers know a dozen ways to hack into a system, but the easiest is "social engineering": finding someone to trick into revealing login credentials or other information that helps the hacker get into the system. Now social engineers are using AI tools to help them write convincing scripts for fooling people, whether through scam phone calls, phishing emails, or smishing SMS (or WhatsApp or Line or FB, etc.) messages.

[You can take this to the voting booth: Effective right now, any TV or radio political ad, particularly the attack ads, will have AI-generated content. If you want to know a candidate, go to the candidate's web site and look for actual policy statements (NOT promises!).]

A final matter I wish Professor Rus had included: Human decision making requires emotion. Persons who have suffered the kind of brain damage that "disconnects" their emotions become unable to make a decision. Somehow, we have to like something in order to choose it. Where "liking" comes from, we haven't a clue. But it is essential!

There is much more I could go into, but this is enough, I hope, to whet your appetite to get the book and read it.

A final word that I didn't want to bring into the earlier discussions: I don't like the term Artificial Intelligence, nor AI. I much prefer Simulated Intelligence, abbreviated SI. It is unfortunate that, in the world of science, SI already refers to the Système International, the system of units such as meter, kilogram, and second, used to define quantities in mathematical physics. Perhaps someone who reads this can come up with another moniker that makes it clear that machine intelligence isn't really intelligent yet.