Tuesday, May 18, 2021

AI is misnamed

 kw: book reviews, nonfiction, artificial intelligence, polemics

Mechanical brains, they called them. Starting with ENIAC, dubbed the Giant Brain when it was unveiled to the public in 1946, computing machinery has been compared to our brains, and computers were often called "thinking machines" during the first few decades of their existence. That's ironic. The one thing they most definitely do NOT do is Think! They are not thinkers, they are computers. They Compute.

ENIAC was indeed a giant: a row of cabinets 3 feet deep, 8 feet high, and 98 feet long, wrapped around the walls of a big room. Running it took 150 kW. Much has been said about its speed: it performed ballistic calculations 2,400 times as fast as a human using pencil and paper, though I never saw it compared to a skilled operator of a Marchant mechanical calculator (my father's company had one of those; running a division problem made a wonderful noise!). Ever since, computer speeds have been compared with human speeds. Not once has anyone mentioned that arithmetical calculation is what the human brain is worst at! A very few people are "lightning calculators", able to use a bit of their brain to calculate rapidly; they are hundreds to thousands of times as fast as the rest of us. But even ENIAC was faster.

I began programming computers, initially in FORTRAN, in 1968. I've watched speeds increase by a factor of a million between then and now for single-processor machines, and by much more than that for machines that gang together stacks and stacks of processors, tens of thousands of them in the fastest weather-forecasting supercomputers. Before 1970 I was reading predictions that one day computers would be able to "program themselves". Nobody ever explained how a computer would know what to program, unless a person "gave it the idea". To this day, no computer has "had an idea."

These are just a few of the reasons I look at all claims of "artificial intelligence" with a jaundiced eye. For more than 50 years, "human scale" AI has been touted as "no more than 10-20 years away." I always say, "Nonsense."

Finally, someone with a finger on the pulse of the AI community has written a book exposing the hype: not only does the AI Emperor have no clothes, it has no mind either. Computer scientist Erik J. Larson's book The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do proves that case quite handily. His point is not that they "don't" think the way we do, but that they can't do so. Not now, and not ever.

Dr. Larson shows, in a handful of ways, that nearly all arguments in favor of AI becoming a superhuman superintelligence run the same way: this or that program can do deductive inference, and such-and-such a program can do inductive inference, and if we just put enough computing muscle behind some collection of such programs, they will "evolve" or "develop" into a conscious entity that can outthink any human. The few arguments that don't follow this schema are based on efforts like Blue Brain, which used a supercomputer to simulate the activity of the neurons, and all their connections, in a small segment of a rat brain. Such projects have uniformly failed, although their promoters have tried in vain to paper over the failure by waving their hands and talking about "making a pivot" (pick your buzzword for hyping a consolation prize) in some other direction.

The defining characteristic of both deductive and inductive inference (things we all do many times a day) is their reliance on a knowledge base. Whichever kind of inference is being performed, the entity making it, brain or machine, is limited to the domain of knowledge stated in the initial problem, plus any related knowledge the entity can access. More succinctly: both brands of inference can only look backward, never forward, because nothing in the process of inference anticipates the future.
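To make the two kinds of inference concrete, here is a toy sketch of my own (not from the book, and deliberately simple-minded), in Python. The deduction only restates what the rules and facts already contain, and the induction only extrapolates the cases it has already seen; neither produces anything that was not latent in its knowledge base.

```python
# Toy illustration: deduction and induction both stay inside their knowledge base.

# Deduction: apply a stated rule to stated facts (modus ponens).
facts = {"Socrates is a man"}
rules = [("X is a man", "X is mortal")]

def deduce(facts, rules):
    derived = set(facts)
    for premise, conclusion in rules:
        for fact in facts:
            subject = fact.split(" is ")[0]            # crude subject extraction
            if premise.replace("X", subject) == fact:
                derived.add(conclusion.replace("X", subject))
    return derived

print(deduce(facts, rules))     # adds "Socrates is mortal": nothing truly new

# Induction: generalize from observed cases.
observations = [("swan", "white")] * 100               # every swan seen so far
def induce(observations):
    colors = {color for _, color in observations}
    return f"All swans are {colors.pop()}" if len(colors) == 1 else "no clean rule"

print(induce(observations))     # "All swans are white", good only until a black swan shows up
```

Neither function has any machinery for proposing a hypothesis that goes beyond its inputs, which is exactly the gap Dr. Larson points at.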

A key word in this book: Abduction. Abductive inference is "what if" thinking. We call it "getting an idea." Also called "counterfactual" thinking, an abductive inference follows the exhortation Jack Kennedy borrowed from George Bernard Shaw: "Others look at what is and say, 'Why?' Let us consider what might be and say, 'Why not?' " (a paraphrase from memory; it's been a long time since I heard him say it).

Nobody knows how abduction works. Until someone does, there is no theory we can use to build a machine that does it. Abduction isn't some automatic offspring of deduction plus induction. It is something different. Only abduction can look forward. Without it, there is no progress, no invention, and literally "nothing new under the sun." (Solomon was wrong, and he'd be the first to admit it if he were to see a typewriter, an auto, or a smart phone. But he wasn't talking about technology anyway. Hi-tech for him was a spearhead made of iron instead of bronze.)

Here is a side thought of my own, on a different line from the book: a brain produces and consumes Meaning. Language (Dr. Larson's specialty) is based on what words Mean. A key concept is the difference between a word's Denotation (its dictionary definition under a specific usage) and its Connotation (the shade of meaning that depends on context, tone of voice, and other tacit matters). I connect this thought with another: computers excel at detecting Differences; brains excel at detecting Similarities. Based primarily on that latter principle, I built a 40-year career as a computer programmer on writing programs that did what brains do badly, while helping people use their brains to do what computers do badly.
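As a quick, admittedly crude illustration of that difference/similarity point (mine, not Dr. Larson's): Python's standard difflib pinpoints a one-character difference instantly and exactly, but its surface-level "similarity" score rates a sentence that differs by one letter (and means something else) far higher than a sentence worded differently but meaning nearly the same thing.

```python
# "Differences are easy, similarities are hard", demonstrated with the standard library.
import difflib

a = "The cat sat on the mat."
b = "The cat sat on the hat."        # one letter changed, meaning changed
c = "A feline rested on the rug."    # different words, similar meaning

# Detecting the difference is trivial and exact:
print("\n".join(difflib.unified_diff([a], [b], lineterm="")))

# Surface "similarity" scores miss the meaning entirely:
print(difflib.SequenceMatcher(None, a, b).ratio())   # very high, around 0.96
print(difflib.SequenceMatcher(None, a, c).ratio())   # much lower, despite the similar meaning
```

Judging that the third sentence Means roughly the same as the first is effortless for a brain and remains genuinely hard for a machine.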

I want to dig into this idea of simulating a brain's neurons. Many brain theorists use the "column" hypothesis to understand how our cerebral cortex works. The cortex apparently consists of 50-100 million "mini-columns" made up of 100-300 neurons each, arranged in larger columns that number around a million or so. What would it take to simulate the action of about 200 neurons, in real time?

The various synchronized rhythms we can detect with EEG range as high as 40 Hz, with a lot of the activity in the 10-20 Hz range. So whatever you simulate has to cycle at those rates or faster.

A neuron is not a transistor. A transistor has three connections and is basically an on-off switch. A neuron has between a few hundred and 10,000 inputs, and from a few thousand up to 100,000 outputs. The 200-neuron mini-column we want to simulate will thus have thousands of connections just within itself, and 2-5 million connections with other parts of the brain. Further, each neuron, under the influence of its inputs, emits a train of pulses to its thousands of outputs, at variable rates.
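The multiplication behind those connection counts is simple; here is a quick sketch using the per-neuron ranges just quoted (rough figures, not measurements):

```python
# Connection counts for a 200-neuron mini-column, from the per-neuron ranges above.
neurons = 200
inputs_per_neuron  = (300, 10_000)      # "a few hundred" to 10,000
outputs_per_neuron = (2_000, 100_000)   # "thousands" to 100,000

total_inputs  = tuple(neurons * n for n in inputs_per_neuron)    # 60,000 to 2,000,000
total_outputs = tuple(neurons * n for n in outputs_per_neuron)   # 400,000 to 20,000,000

print(f"inputs : {total_inputs[0]:,} to {total_inputs[1]:,}")
print(f"outputs: {total_outputs[0]:,} to {total_outputs[1]:,}")
# Only a small fraction of these stay inside the mini-column (the thousands of
# internal connections); the 2-5 million external links quoted above sit
# comfortably inside these ranges.
```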

A base-level simulation has to start by assuming the column is isolated, so we can ignore those few million external connections, except perhaps one, used to trigger the system so we can see what happens.

To simulate a single neuron requires all the processing power of a modest microprocessor such as a single-core Pentium CPU (in the 1990s these ran at clock speeds of about 50 MHz). These days it is no problem to string together CPU power equivalent to 200 Pentiums. Collecting the data from just the internal connections will generate something like 100 kbytes to a megabyte per cycle, that is, every 1/10th to 1/40th of a second. During "beta wave" heavy thinking, the top of that range is around 40 Mbytes/sec. That's not so enormous; it's about half the write speed of a magnetic disk. But now let's remember the 2-5 million outputs from, and an equal number of inputs to, these 200 neurons from other parts of the brain. You really have to consider the whole cortex, and a few other things, to begin to simulate a human brain. Each mini-column is apparently connected to at least 1-2% of the entire brain, a million or two other mini-columns.
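Here is that back-of-envelope data rate in a few lines; the per-cycle volume and the cycle rates are the rough assumptions above, nothing more:

```python
# Per-mini-column recording rate, using the rough figures quoted above.
cycle_rate_hz   = (10, 40)                 # EEG-scale rhythms we want to resolve
bytes_per_cycle = (100_000, 1_000_000)     # assumed internal-connection state per cycle

low_rate  = bytes_per_cycle[0] * cycle_rate_hz[0]    # 1 MB/s
high_rate = bytes_per_cycle[1] * cycle_rate_hz[1]    # 40 MB/s

print(f"one mini-column: {low_rate/1e6:.0f} to {high_rate/1e6:.0f} MB/s")
```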

Bottom line: to simulate the entire cerebral cortex of about 16 billion neurons, with the tens of trillions of connections between them, requires the computing power of 16 billion Pentium-class CPU chips, or by my estimation something like 300 million processor cores running at 4-5 GHz. They would need about 40 million hard disks to handle the throughput, or about 5 million solid-state drives (SSDs). At least during setup and testing you need to record it all to see what is happening; later, less of it needs to be kept.
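For what it's worth, here is the scaling arithmetic spelled out. The per-core and per-drive speeds are my own assumptions, chosen only to land in a plausible ballpark:

```python
# Scaling the mini-column estimate to the whole cortex. Every figure below is a
# rough assumption; the point is the order of magnitude, not precision.
cortical_neurons   = 16e9
neurons_per_column = 200
columns            = cortical_neurons / neurons_per_column    # ~80 million mini-columns

pentium_hz = 50e6      # one ~50 MHz Pentium-class CPU per neuron, as above
modern_hz  = 4.5e9     # one modern core at 4-5 GHz
raw_cores  = cortical_neurons * pentium_hz / modern_hz        # ~180 million, before overhead

column_rate_mb = 40    # MB/s per mini-column, the top of the range estimated above
total_rate_mb  = columns * column_rate_mb                     # ~3.2 billion MB/s

hdd_write_mb = 80      # assumed sequential write speed of one hard disk
ssd_write_mb = 640     # assumed write speed of one SSD

print(f"mini-columns: {columns:,.0f}")
print(f"cores (raw) : {raw_cores:,.0f}")                      # ~300 million once overhead is added
print(f"hard disks  : {total_rate_mb / hdd_write_mb:,.0f}")
print(f"SSDs        : {total_rate_mb / ssd_write_mb:,.0f}")
```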

Will processing speed and disk speed and capacity "save the bacon" in another decade or so? Not really. Do you remember Moore's Law? The number of transistors on a chip doubled roughly every two years, and amazingly, that held true for decades. A modern Intel Core i7 chip has several "cores" (separate CPUs) and a few billion transistors, but single-core performance has improved only modestly over the past 15 years: clock speeds plateaued at around 4 GHz in the mid-2000s and have crept up only a little since. Modern supercomputers just stack larger and larger numbers of multicore processors together. The electronic brain described above would need 300 million such cores, and that requirement isn't going to shrink much. And we are decades, perhaps centuries, away from "quantum computers" that can exceed the power of today's CPUs, if that ever happens.

That is just to simulate the cerebral cortex and its 16 billion neurons. What about the rest of the brain? You might say, "Doesn't the brain have 100 billion neurons?" The actual number is closer to 86 billion, but most of those are in the cerebellum! Add to that the limbic system, or mid-brain, which handles our emotional responses and mediates memory storage with about a billion neurons, and the brain stem below the cerebellum, which shuttles signals between the cerebellum and the body with about another billion. The cerebellum mainly runs the body, but that isn't all it does; a subject for later discussion.

A disembodied cortex, absent the rest, is not going to operate like a real brain. It would probably go insane so fast that it would stop running in a matter of seconds. It would die a-borning. Just one example: the visual cortex at the back of the brain takes up about a quarter of the whole. What kind of eyes will you give your brain-in-a-warehouse? Without them, what will happen? If you want a truly disembodied brain, you could simulate just the frontal lobes, but you still need a way to communicate with them. A big, big issue.

I have a different question: what's the big deal about duplicating the human brain? Millions of natural ones are created every year, and much of their training and education is accomplished in 10-20 years. Our present computer hardware works very differently from a neuron-based brain. Making a computer simulate a natural brain is incredibly difficult and costly, and it would probably suck down electrical power like a steel plant. Can we instead take advantage of computing machinery in "native mode", to work out new ways of thinking? Until we have a theory of abduction, we need humans to come up with the ideas. Single, thoughtful, bright brains are better at that than any kind of hive mind, committee, or brainstorming team. Ideas need development, however, and we must focus on better and better ways to automate at least some of that.

I like SYNERGY. I built a career on it. If I were to return to work in the field, it's what I would focus on. A final key point of Dr. Larson's: not only is there wasted effort in the falsely named AI field; effort is being diverted from things that matter, things that need doing, while natural (that is, genuine) intelligence is denigrated, to the detriment of us all. That's the tragedy of misdirected AI.
