Tuesday, May 07, 2019

The robots are coming...aren't they?

kw: book reviews, nonfiction, artificial intelligence, essay collections, essays

I almost titled this post "Artificial Intelligence Through a Human Lens". But that describes only a few of the essays in Possible Minds: 25 Ways of Looking at AI, edited by John Brockman.

When I was an active programmer (these days I'm called a "coder"), my colleagues and I often spoke of "putting the intelligence into the code". By "intelligence" we really meant the logic and formulas needed to "get the science done". During that 40-plus year career, and for the half-decade since, I've heard promises about the "electronic brains" we worked with. They were going to replace all of us; they would program themselves and outstrip mere humans; they would solve the world's ills; they would develop cures for all diseases and make us immortal. The spectrum of hopes, dreams and fears shared among professional programmers exactly matched the religious speculations of earlier generations! To many of my former colleagues, the computer was their god.

The term "artificial intelligence" may have first hit print soon after ENIAC became public, but it really took off after 1960, nearly sixty years ago. As far as I can tell, there has been about one article expressing fears for every two articles expressing hopes, over that entire time span. But in the popular press, and by that I mean primarily science fiction, the ratio has been about 4 or 5 dystopian visions for every utopian one…even with the popularity of Isaac Asimov's I, Robot books.

An aside: Asimov was famously neurotic. In his Robot fiction, the people are neurotic while the robots are deified. However, a great many of the short pieces explore all the ways a robot can go wrong, either through a misapplication of one of the famous "Three Laws" or through a dilemma that arises when two of the laws conflict. The scary old film Forbidden Planet has its own Asimovian moment: Robby the Robot is temporarily driven to catatonia by being ordered to shoot a person.

While the pros have been dreaming of the wonderful things "thinking machines" could do, the SciFi community has put equal thought into what they might do that we don't want them to do. Colossus is probably the best example (the book is much better, and infinitely more chilling, than the film).

While I thoroughly enjoyed reading Possible Minds, I finally realized that the essays tell us a lot about the writers, and very little about machine intelligences.

Some of the essays limn the history of the field, and the various "waves" (or fads!) that have swept through about once a decade. The current emphasis is "deep learning", which is really "neural nets" from about 30 years ago, made much deeper (many more layers of connectivity) and driven by hardware that runs several thousand times faster than the supercomputers of the 1980s. But a few authors do point out the difference between the simple on-off response of electronic "neurons" and the nonlinear response of biological neurons. And at least a couple point out that natural intelligence is embodied, while hardly anyone is giving much attention to producing truly embodied AI (that is, robots of the Asimovian ilk).
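To make that contrast concrete, here is a minimal sketch (my own illustration, not from the book) of a single artificial unit computed two ways: once with a hard on/off threshold, and once with the kind of smooth nonlinear activation that deep-learning layers use. The inputs and weights are made up.

    import numpy as np

    # A toy "neuron": a weighted sum of inputs passed through an activation function.
    def unit(inputs, weights, bias, activation):
        return activation(np.dot(weights, inputs) + bias)

    def step(x):            # hard threshold: the on/off, binary response
        return 1.0 if x > 0 else 0.0

    def sigmoid(x):         # smooth, graded, nonlinear response
        return 1.0 / (1.0 + np.exp(-x))

    x = np.array([0.2, 0.7, -0.4])   # made-up input signals
    w = np.array([0.5, -1.0, 2.0])   # made-up connection weights

    print(unit(x, w, 0.1, step))      # 0.0 -- nothing between "off" and "on"
    print(unit(x, w, 0.1, sigmoid))   # ~0.21 -- a graded value between 0 and 1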

Another aside: Before I learned the source of the name "Cray" for the Cray Computer Corporation, I surmised that it represented a contention that the machine had brain power equal to that of a crayfish. Only later did I learn of Seymour Cray, who had named the company for himself. But even today, I suspect that the hottest computers in existence still can't do things that the average crayfish does with ease.

What is a "deep learning" neural net? The biggest ones run on giant clusters of up to 100,000 Graphics Processing Units (GPU's), and simulate the on-off responses of billions of virtual "neurons". Each GPU models several thousand "neurons" and their connections. They are not universally connected, that would be a combinatorial explosion on steroids! But so far as I can discern, each virtual neuron has dozens to hundreds of connections. The setup needs many petabytes of active memory (not just disk). Such a machine, about the size of a Best Buy store, is touted as having the complexity of a human brain. Maybe (see below). The price tag runs in the billions, and the wattage in the millions.

More modest, and less costly, arrangements, perhaps 1/1000th the size, are pretty good at learning to recognize faces or animals; others perform optical character recognition (OCR). The Weather Channel app on my phone claims to be connected to Watson, the IBM system that won at Jeopardy! a few years back. The key fact about these systems is that nobody knows what they are really doing "in there", and nobody even knows how to design an oversight system to find out. If someone figures that out, it will probably slow the systems to uselessness anyway. At least the 40-year-old fad of "expert systems" could explain its own reasoning, though it proved incapable of much that might be useful.

To expand on the difference between virtual neurons in a deep learning system and the real neurons found in all animals (except maybe sponges): A biological neuron is a two-ended cell. One end includes the axon, a fiber that transmits a signal from the cell to several (or many, or very many) axon terminals, each connected to a different neuron. The other end has many dendrites, which reach toward cells sending signals to the neuron. An axon terminal connects to a dendrite, or directly to the cell body, of a neuron, through a synapse. That is one neuron. It receives signals from tens to hundreds to thousands of other neurons, and sends signals to a similar number of other neurons. Neurons in the cerebellum (which coordinates movement and other bodily functions) can have 100,000 synapses each! The "white matter" in the interior of the brain consists of a dense spiderweb of axons that link almost everything to everything else. The spinal cord consists largely of long axons running between the brain and the rest of the body.
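A crude way to picture that wiring (my own sketch, nowhere near a real simulation) is as a graph in which each neuron holds a list of incoming synapses and a list of downstream targets:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Synapse:
        source: "Neuron"        # the neuron whose axon terminal forms this synapse
        strength: float         # synaptic "weight"; in reality it varies per synapse

    @dataclass
    class Neuron:
        incoming: List[Synapse] = field(default_factory=list)  # synapses onto dendrites/soma
        targets: List["Neuron"] = field(default_factory=list)  # neurons its axon terminals reach

        def connect_to(self, other: "Neuron", strength: float) -> None:
            """Form a synapse from this neuron's axon terminal onto `other`."""
            other.incoming.append(Synapse(source=self, strength=strength))
            self.targets.append(other)

    a, b = Neuron(), Neuron()
    a.connect_to(b, strength=0.8)
    print(len(a.targets), len(b.incoming))   # 1 1 -- a busy cerebellar cell would show ~100,000

Even this toy structure hints at the bookkeeping problem: with tens of thousands of synapses per cell, the connection lists dwarf the cells themselves.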

A biological neuron responds (sends a pulse through the axon, or not) to time-coded pulses from one or more of the synapses that connect to it. The response is nonlinear, and can differ for different synapses. The activity of each neuron is very, very complex. It is very likely that, should we ever determine exactly how a particular type of neuron "works", its activity will be found to require one or more GPUs to simulate with any fidelity. Thus, a true deep learning system would need not thousands or even millions of GPUs, but at least 100 billion, simulating the traffic among somewhere between 100 trillion and 1,000 trillion synapses.
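Spelled out, the scaling argument is simple arithmetic (round numbers, with the one-GPU-per-neuron assumption made above):

    # The arithmetic behind the claim above, assuming roughly one GPU per
    # faithfully simulated neuron (all round numbers, not measurements).
    neurons = 100e9                    # ~100 billion neurons, the traditional figure
    gpus_needed = neurons              # one GPU per neuron
    biggest_cluster_today = 100_000    # the giant cluster described earlier

    synapses_low, synapses_high = 100e12, 1_000e12   # 100 to 1,000 trillion synapses

    print(f"GPUs needed: {gpus_needed:.0e}")                            # 1e+11
    print(f"Shortfall:   {gpus_needed / biggest_cluster_today:,.0f}x")  # 1,000,000x
    print(f"Synapses to simulate: {synapses_low:.0e} to {synapses_high:.0e}")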

Folks, that is what it takes to produce a "superintelligent" AI. As an absolute minimum.

But let's look at that 100 billion figure. It has long been said that a human brain has 100 billion neurons. People think of this as the population of the "gray matter" that makes up the cerebral cortex, the folded outer shell of the brain, the stuff we think with. Here is the actual breakdown:

  • Total average human brain: 86 billion neurons (and a similar number of "glia", that help the neurons in poorly-understood ways)
  • Cerebral Cortex (~80% of brain's mass): 16 billion neurons
  • Cerebellum (~10% of brain's mass): 69 billion neurons
  • Brain Stem and other structures (~10% of brain's mass): about 1 billion neurons

Also, on average, a cerebellar neuron has 10-100 times as many synaptic connections as a cerebral neuron, even though it is much smaller. It takes most of the brain's "horsepower" to run the body! Your much-vaunted IQ really runs on about 19% of the neurons. About 1/3 of that is used for processing vision. (Another aside: This is why I claim that the human visual system is by far the most acute and capable; the human visual cortex exceeds the entire cerebral cortex of any other primate).
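For what it's worth, the "about 19%" falls straight out of the counts in the list above:

    # The percentages above, worked out from the neuron counts in the list.
    # (The one-third-for-vision share is the rough figure quoted above, not a measurement.)
    total_neurons  = 86e9
    cortex_neurons = 16e9

    cortex_share = cortex_neurons / total_neurons   # the fraction we "think with"
    vision_share = cortex_share / 3                 # roughly a third of the cortex

    print(f"cortex: {cortex_share:.1%}  vision: {vision_share:.1%}")
    # cortex: 18.6%  vision: 6.2%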

I wish that the essayists who wrote in Possible Minds had said more about different possibilities for producing intelligent machinery. Simulating natural machinery is a pretty tall order. The hottest deep learning system is about one-millionth of what is needed to truly replicate human brain function, and by "about" I mean within a factor of 10 or 100. Considering that Moore's Law ended a decade ago, it may never be worthwhile. Even if Moore's Law restarted, a factor of one million in processing power requires about 40 years of doubling every two years. Getting the power consumption down to a few hundred watts, rather than several million watts (today) and possibly tens of billions of watts in the middle future, may not be achievable. I hold little hope for "quantum computers". Currently they don't even qualify as a fad, just a laboratory curiosity with coherence times in the range of milliseconds. Quantum coherence needs lots of shielding. Tons. We'll see.
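That 40-year figure is just doubling arithmetic, granting (generously) the classic Moore's-Law pace of one doubling every two years:

    import math

    # Where the "40 years" comes from, assuming a doubling in processing power
    # roughly every two years -- a generous assumption, since that cadence has
    # arguably already ended.
    shortfall = 1_000_000               # factor by which today's systems fall short
    doublings = math.log2(shortfall)    # ~19.9 doublings needed
    years_per_doubling = 2

    print(f"{doublings:.1f} doublings -> about {doublings * years_per_doubling:.0f} years")
    # 19.9 doublings -> about 40 years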

So, robot overlords? At the moment, I put them right next to faster-than-light starships, on the fantasy shelf.
