Wednesday, September 04, 2024

If it is artificial, is it intelligence?

 kw: book reviews, nonfiction, computer science, artificial intelligence, simulated intelligence, surveys

Before "hacker" meant "computer-using criminal", it meant "enthusiast". Many early hackers spent their time obsessively doing one of two things: either writing a new operating system, or trying to write software to do "human stuff". I have been hearing about "AI", "artificial intelligence" since the term was coined by Claude Shannon when I was nine years old. Two years later the third book in the Danny Dunn series was Danny Dunn and the Homework Machine (by Abrashkin and Williams). It featured a desk-sized computer with a novel design, created by a family friend, Professor Bullfinch. A decade later (1968) I had the chance to learn FORTRAN, which kicked off a lifelong hobby-turned-profession. The computer I learned on was the first desk-sized "minicomputer", the IBM 1130.

ENIAC and the other early "elephants" were called "electronic brains" almost from the beginning. I learned how CPUs (central processing units) worked, and even though the operation of biological brains was not yet well understood, it was clear to me that computers worked in an utterly different way.

Fast-forward a few decades. Some 20 years ago a "last page" article in Scientific American described the newest supercomputer, claiming that it was equivalent to a human brain in memory size, component count, and computing speed. Where it did not match the brain was the amount of power it needed: three million watts. Our brains use about 20 watts, a factor of 150,000 less. It soon became evident that the right metric of brain complexity is not the number of neurons, but the number of synapses, plus the other connections between the neurons and the glia and other "support cells". In this, that supercomputer was woefully lacking. This is still true.

Tell me, does this illustration show one being or two?

My generation and all those following have been influenced by I, Robot, by Forbidden Planet, and by other popular depictions of brainy machines. We think of a "robot" as a mechanical man: a self-contained humanoid mechanism.

In order to behave and respond the way one of the robots in I, Robot does, a humanoid mechanism would need an intimate connection with a room full of equipment like the supercomputer in the background of the image (that part of the image is real). When the Watson supercomputer played Jeopardy! in 2011 (and won), what the audience didn't see was the roomful of equipment offstage. And that was just what ran the trained "model", its databases, and the voice interface. The equipment used to train Watson was much larger, and it kept a couple of hundred computer scientists, linguists, and other experts occupied for many months.

Assuming Moore's Law continues to double circuit density (per cubic centimeter) every two years, it will take sixteen doublings, or 32 years, to shrink the supercomputer shown into a unit that fits inside a robot of the size shown. Power requirements will also have to drop from millions of watts to 100 watts or less. And even then the result would not be a machine with the brain power of a human; we don't know what that would take.
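Here is the back-of-envelope arithmetic behind those numbers, as a small Python sketch. The 65,000-to-1 volume ratio is my own illustrative assumption, chosen to match the sixteen doublings; the two-year doubling cadence is the Moore's Law assumption just stated:

    import math

    # Back-of-envelope: how long until a room-sized supercomputer fits in a robot?
    # The volume figures are illustrative assumptions, not measurements.
    room_volume_liters = 65_000   # a machine room's worth of racks
    robot_volume_liters = 1       # the space available inside a humanoid frame

    shrink_factor = room_volume_liters / robot_volume_liters  # ~65,000x
    doublings = math.ceil(math.log2(shrink_factor))           # density doublings needed
    years = doublings * 2                                     # Moore's Law: ~2 years per doubling

    print(f"{shrink_factor:,.0f}x shrink -> {doublings} doublings -> ~{years} years")
    # 65,000x shrink -> 16 doublings -> ~32 years

The volume ratio is the only knob here: pick a different estimate and the timeline shifts by two years per factor of two. And the sketch says nothing about the power budget, which has to fall by a similar factor over the same period.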

All this is to introduce a fascinating book, The Mind's Mirror: Risk and Reward in the Age of AI, by Daniela Rus and Gregory Mone. While Professor Rus is a strong proponent of AI and of its further development, she is more clear-headed than the authors of most books on the subject. In particular, she sees more clearly than most the risks, the dangers, of faddish over-promotion and of rushing blindly into an "AI Future".

At the outset, in Chapter 1, "Speed", she clearly emphasizes that products such as ChatGPT, DALL-E, and Gemini are tools, and in particular that their "expertise" is confined to the material used to train them. She writes that it might have been possible to get one of the LLMs (large language models) to write a chapter of the book, but it "would not really represent my ideas. It would be a carefully selected string of text modeled on trillions of words spread across the web. My goal is to share my knowledge, expertise, passion, and fears with regard to AI, not the global average of all such ideas. And if I want to do that, I cannot rely on an AI chat assistant." (p. 11) In a number of places she calls AI software "intelligent tools."

She continues the theme, writing of knowledge, insight, and creativity (Chapters 2–4), saying at one point, "They are masters of cliché." (p. 60) The critical-analysis skills we used to learn were based on following the progression from Data to Information to Knowledge, and then to Insight and Wisdom (does any school still teach this?). All of these together add up to comprehension. Does anyone have the slightest idea how to bring about Artificial Comprehension?

None of these software tools has shown the slightest ability to step outside the bounds of its training data. When ChatGPT "hallucinates", it is not producing new knowledge; it is remixing biased or deceptive content from its poorly curated training set, perhaps with a dollop of truthful "old news" in the mix. This illustration, from a LinkedIn post I wrote last year, shows the point.

The colors are significant:

  • Green = correct or true
  • Lighter orange = incomplete or outdated
  • Darker orange and red = false to malicious, even evil
  • Blue = AI training data, partly "good", partly "poor", partly "evil"—we hope not too evil

The three lavender blobs at the right are varieties of human experience, including someone creating new "good stuff" that pokes out to the right, increasing the store of published knowledge. I kept those blobs far from the training data on purpose: training is typically done with "old hat" material.

This book makes a rare admission: "it's essential to remember that the nature and function of AI parameters and human synapses are vastly different." (p. 106) We still don't know how much processing goes on within a single neuron, nor whether a synapse is merely a signal-passing "gate" or something more capable.

And though the matter of embodiment is touched upon, I was disappointed that there wasn't more on it. Perhaps you've heard that the human brain has "100 billion neurons". The actual number is 85-90 billion, and about 80% of them are in the cerebellum, the "little brain" at the back, above the brain stem. We have an inkling that the sorts of processing cerebellar neurons perform differ from those of the cerebral neurons (the famous "gray matter"). And when 80% of the neurons make up only about 10% of the brain's total volume, those neurons must be much smaller. The cerebellum "runs the body" and handles the traffic between the body and the cerebral cortex, the "upper brain", where thinking (most likely) occurs. Embodiment is clearly essential for a brain to function properly, yet it is glossed over by most workers in AI.
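A rough density calculation shows how stark that size difference is. The 86-billion total and the 80%-of-neurons-in-10%-of-the-volume split come from the paragraph above; treating each region as internally uniform is my simplification:

    # Rough neuron-density comparison: cerebellum vs. the rest of the brain.
    # Inputs from the text: ~86 billion neurons total, ~80% of them in the
    # cerebellum, which occupies only ~10% of the brain's volume.
    total_neurons = 86e9

    cerebellum_neurons = 0.80 * total_neurons   # ~69 billion
    rest_neurons       = 0.20 * total_neurons   # ~17 billion

    cerebellum_volume_fraction = 0.10
    rest_volume_fraction       = 0.90

    cerebellum_density = cerebellum_neurons / cerebellum_volume_fraction
    rest_density       = rest_neurons / rest_volume_fraction

    print(f"Cerebellar packing is ~{cerebellum_density / rest_density:.0f}x denser")
    # -> ~36x denser, so cerebellar neurons must be far smaller on average

The conclusion survives any reasonable choice of inputs: packing most of the neurons into a tenth of the volume forces them to be small.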

A special chapter between chapters 11 and 12 is "A Business Interlude: The AI Implementation Playbook". An entrepreneur or business leader who wants either to launch a venture that relies heavily on these tools or to add them to the company's bag of tricks would do well to extract these 19 pages from the book and dig into them. They include the key steps to take and the crucial questions to ask (including "Will AI be cost-effective compared to what I am doing now?"). A key component of any team tasked with building AI into a business, or transitioning one to it, is "bilinguals": people who are well versed in the business and also in computer science, and in AI in particular. This is analogous to a key period in my own career (not with AI, though): because I had studied all the sciences in college, earned a few degrees, and was also a highly competent coder, I was a valued "bilingual", getting scientific software to work well for the scientists at the research facility where I worked. Bottom line: you need the right people to make appropriate use of AI tools in your company.

The book includes a major section on risks and the defenses we need. Whether some future AI system will "take over" or subjugate us is a far-off threat. It is not to be ignored, but the front-burner issue is what humans will do with AI tools; that is what we need to be wary of. Something my mother said comes back to me. I was about to go into a new area of the desert for a solo hike, and she said she was worried for my safety. I said, "Oh, I know how to avoid rattlesnakes." She replied, "I am worried about rattle-people!"

Let's keep that in mind. In my experience, rattle-people are a big risk whenever any new tool is created. What's one of the biggest uses of generative art-AI? Coupling it with Photoshop to produce deep fake pictures. Deep fake movies are a bit more difficult and costly just now, but just wait…and not very long! Soon it will take a powerful AI tool to detect deep fake pix and vids, and how will we know that the AI detection tool is itself reliable and truthful?

A proverb from my coding days: "If we built houses the way most software is written, the next woodpecker to come along could destroy civilization." Most of us old-timers know a dozen ways to hack into a system, but the easiest is "social engineering": finding someone to trick into revealing login credentials or other information that helps the hacker get into the system. Now social engineers are using AI tools to help them write convincing scripts for fooling people, whether through scam phone calls, phishing emails, or smishing SMS (or WhatsApp, Line, FB, etc.) messages.

[You can take this to the voting booth: effective right now, almost any TV or radio political ad, particularly the attack ads, is likely to contain AI-generated content. If you want to know a candidate, go to the candidate's website and look for actual policy statements (NOT promises!).]

A final matter I wish Professor Rus had included: Human decision making requires emotion. Persons who have suffered the kind of brain damage that "disconnects" their emotions become unable to make a decision. Somehow, we have to like something in order to choose it. Where "liking" comes from, we haven't a clue. But it is essential!

There is much more I could go into, but this is enough, I hope, to whet your appetite to get the book and read it.

A final word that I didn't want to bring into the earlier discussion: I don't like the term Artificial Intelligence, or AI. I much prefer Simulated Intelligence, abbreviated SI. It is unfortunate that, in the world of science, SI already refers to the Système International d'Unités, the system of units such as the meter, kilogram, and second used to define quantities in mathematical physics. Perhaps someone who reads this can come up with another moniker that makes it clear that machine intelligence isn't really intelligent yet.
