kw: book reviews, nonfiction, artificial intelligence, futuristics
Item: Moore's Law for many attributes of electronic devices, which shows up as a straight line on a semi-log chart, changed its slope in about 1980. Before then, the number of devices per cm² of chip doubled almost yearly; since then, it has doubled every 18 months, and the trend shows signs of slowing further.
Item: The fastest, densest microprocessor chips as of mid-2007 ran at nearly 5 GHz and had transistor-gate widths of about 50 nm. Feature size shrinks by half every 4 years, and maximum speed increases by about 1.5× every 4 years.
Put these together, and what do you get? The size of a silicon or other semimetal atom is about half a nanometer. "It takes two to tango", so we can deduce that device features can be no smaller than one nm. From 50 nm to 1 nm is about 5½ halvings; at 4 years each, that is 22 years. Moore's Law must end by 2029, or be modified in slope before then, but 1 nm is where it will end, at any speed. Also, 1.5^5.5 ≈ 9.3, so the fastest switching speed possible is about 9.3 × 5 GHz, or 45-50 GHz.
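A quick back-of-envelope script reproduces that arithmetic. This is only a sketch of the numbers quoted in the Items above (the 2007 starting values and the 4-year cadence); none of it comes from the book itself:

```python
import math

# Mid-2007 starting points quoted above
feature_nm = 50          # transistor-gate width
clock_ghz = 5            # fastest clock speed
atom_limit_nm = 1        # two ~0.5 nm atoms: smallest plausible feature
years_per_step = 4       # one halving of feature size per step
speedup_per_step = 1.5   # clock gain per step

halvings = math.log2(feature_nm / atom_limit_nm)   # ~5.6 (the text rounds to 5.5)
years_left = halvings * years_per_step             # ~22 years, landing around 2029-2030
max_clock_ghz = clock_ghz * speedup_per_step ** halvings   # ~45-50 GHz ceiling

print(f"halvings: {halvings:.1f}, trend ends around {2007 + years_left:.0f}")
print(f"clock-speed ceiling: ~{max_clock_ghz:.0f} GHz")
```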
Now, IBM, Intel, AMD and others are putting multiple processors on one chip, as many as sixteen at last count, and many recent PCs have dual-core processors. Assuming appropriate software can take full advantage of such parallelism (a huge assumption), sixteen cores running near that ~50 GHz ceiling give effective single-CPU speeds in the range of 0.5-1 THz. If 512-core CPUs become the norm in the 2020s, a device would have the power of a 25-THz processor. Not bad.
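The multi-core what-ifs follow the same pattern; in this sketch the core counts and the assumption of perfect parallel efficiency are mine, echoing the "huge assumption" above:

```python
ceiling_ghz = 50   # per-core clock ceiling from the estimate above

def effective_thz(cores, per_core_ghz=ceiling_ghz, efficiency=1.0):
    """Aggregate throughput if software exploited every core; efficiency < 1
    models the 'huge assumption' of perfect parallelism failing to hold."""
    return cores * per_core_ghz * efficiency / 1000.0

print(effective_thz(16))    # ~0.8 THz: sixteen cores at the ceiling
print(effective_thz(512))   # ~25.6 THz: the hypothetical 512-core part
```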
Consider the brain. The cortex of a mammalian brain is organized into "cortical columns" of about a thousand neurons each. On page 214 of Beyond AI: Creating the Conscience of the Machine, Dr. J. Storrs Hall estimates that a cortical column has processing power equivalent to 10 GHz and functional memory of around 1 GByte (a PC built with the "fastest 2007 chip" mentioned above would have about half that speed, but equal or greater data storage).
A human brain contains about ten million cortical columns. I think it safe to say that the reason Deep Blue could beat Garry Kasparov at chess was not processing power but memory, and the fact that its processing power was focused on a single task. It could remember "advantage scores" for 50 billion board positions, and he could not. The "ten percent" of his brain that folklore claims we use consciously has a million times the calculating power of Deep Blue. But even human "lightning calculators" are millions of times slower than current machinery, so Kasparov had no way to pre-calculate more than a few dozen board positions.
Dr. Hall optimistically outlines the likelihood that artificially intelligent machines will outstrip humans in every way by the mid-21st Century. While this just might be true, such a device will be a networked collection of processing centers rather than any single center. Let's see: 10 GHz times ten million is 100,000 THz... a 2030s-era 512-core processor could achieve 25 THz, so you need at least 4,000 of them, with very, very good communication amongst them.
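Chaining the numbers together, using Hall's per-column figure and my hypothetical 25.6-THz chip from the sketch above, lands on the same rough count:

```python
columns = 10_000_000      # cortical columns in a human brain (figure used above)
ghz_per_column = 10       # Hall's per-column processing estimate

brain_thz = columns * ghz_per_column / 1000   # 100,000 THz
per_chip_thz = 25.6                           # hypothetical 512-core processor

chips_needed = brain_thz / per_chip_thz       # ~3,900, i.e. "at least 4,000"
print(f"{brain_thz:,.0f} THz needed -> ~{chips_needed:,.0f} chips")
```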
That communication may be more feasible than it appears at first. The brain is only 15-20 cm across, but neuron signals travel only about 50 m/s. Between-device signals speed through copper wires at 2/3 the speed of light, or 200 million m/s; that is four million times as fast, so millions or billions of processors of almost any 21st Century technology could be coupled together, as long as they are within a 400-km sphere.
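The size budget comes from matching the brain's internal signal delay. In this sketch the 15 cm span is the low end of the range above, and reading the "400-km sphere" as roughly a radius is my own interpretation:

```python
brain_m = 0.15       # brain span, low end of the 15-20 cm range
neuron_mps = 50      # neuron signal speed
wire_mps = 2e8       # ~2/3 the speed of light in copper

ratio = wire_mps / neuron_mps                 # 4,000,000x faster
span_km = brain_m * ratio / 1000              # ~600 km edge to edge

print(f"speed ratio: {ratio:,.0f}x, span with brain-like delays: ~{span_km:.0f} km")
# a sphere several hundred km across (radius ~300 km and up), so the
# "400-km sphere" above is the right order of magnitude
```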
Getting that much hardware into a mobile robot is another story. Using Dr. Hall's terminology, an epihuman or hyperhuman intelligence would be pretty large: a quarter million processors of postage-stamp size, even if you can stack them two to the millimeter, occupy 3 liters, and you need room for communication interconnects, power-supply wiring, and cooling... say 10 liters of total volume, and possibly much, much more. A human brain, which includes all this stuff, is about 1 liter.
Though the bulk of Beyond AI is occupied with the history of AI and estimates/speculations about what is needed to approximate human brain power, the author's aim is ethics. In several places he likens an advanced AI to a modern corporation. Run well, a corporation can do much more than an independent businessman. Corporations are, in a sense, artificial intelligences built of natural intelligences. In particular, the analogy answers some of the questions about how an AI might feel about being owned by "lesser minds". He notes that "Corporations are owned, and no one thinks of a corporation as resenting that fact." (p 249)
Of course, in my experience, corporations become bureaucracies, and a bureaucracy is the prime example of a whole that is less, much less, than the sum of its parts. This I see as the real problem with collective-mind AI architecture. If we can't figure out how to prevent creeping bureaucratism in our institutions, how can we ever teach our "children of the mind" to do so?
Dr. Hall, of course, thinks that at some point they will gain the ability to do it for themselves, that they could become our moral teachers. This might be so if morals can be reduced to cost/benefit analyses in which the "cost" of harming another is set very high... but who would set it that way?
I don't want machines that do my thinking for me, not even my ethical thinking. I am a "power user" of computers, and have been for forty years. I have built a career on correctly discerning the appropriate division of tasks between human and computer.
I think it was Jerry Pournelle who wrote, at least fifteen years ago when Byte Magazine was still being published, that a computer is a difference detector, while a mind is a similarity detector. Neither is very good at the other's task.
Machine memory is so exact that a single-bit difference between two photographs is detected instantly, while a human would need days or weeks of unaided work to find it. This is why JPEG compression makes pictures that look so good, even though we are seeing a twentieth or less of what we think we see. The "recognition engine" in my brain is so good that I can recognize a familiar face when a person is so far off that their whole body "lands" on only a hundred retinal cone cells. It takes a ton of expensive machinery to do one-thousandth as well.
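To make the "difference detector" point concrete: a few lines of Python (toy byte strings, nothing from the book) find a single flipped bit between two megabyte-sized "photographs" essentially instantly:

```python
def first_bit_difference(a: bytes, b: bytes):
    """Return (byte_index, bit_index) of the first differing bit, or None."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            diff = x ^ y                       # XOR lights up only the differing bits
            return i, (diff & -diff).bit_length() - 1
    return None

# Two "photographs" that differ by exactly one bit
img_a = bytes(1_000_000)                       # a megabyte of zeros
img_b = bytearray(img_a)
img_b[123_456] ^= 0b0000_1000                  # flip one bit somewhere in the middle

print(first_bit_difference(img_a, bytes(img_b)))   # (123456, 3), found in milliseconds
```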
So I don't want a machine that recognizes my wife at a distance; I can do that for myself (maybe blind folks would feel differently, though). I don't need machines that tell me my conscience is bothered, either. I have a rather keen one. And a psychopath would disable the machine so it was as inactive as his own conscience, just as some folks kill police officers who "get in their way". I am not just being cute to say that; it is a fact of life in Philadelphia, for example.
I need machines that remember for me, and help me find what's in there. I need them to calculate rapidly and accurately. I need them to find stuff; I am a huge user of Google™.
If I become disabled, an inexpensive, reliable helper would be a big boon. It would have to be cheaper and more reliable than a trained monkey, which some people use; and if a monkey can do such tasks, a one-monkey-power AI, without the monkey's emotions to distract it, should do nicely. I reckon that means 5% of a monkey's total brain power is enough.
But there is one other thing about "personal service" machinery, a very important thing. Safeguards on their operation must be built into the hardware, not just programmed in with the rest of the logic. It was a single entry in a table, after all, that allowed an advanced X-ray machine to occasionally blast people with unfiltered electron beams 1,000 times too strong, and burn holes in them!
It was a logic state the programmer never thought of that led to the "sudden acceleration" problem some cars were having a few years back. When the chip was replaced with a corrected model, the problem did not recur; but in a properly engineered car, the problem isn't possible in the first place, no matter how bad the programming.
We need computing machinery to do the things we don't do well, and to leave us the work we are better at and enjoy. I have no problem with robot arms doing the welding and riveting of cars and trucks; the work is stultifying. Just read Rivethead by Ben Hamper, about the damage such work does to its human victims. I do have a problem with machinery that replaces every useful function I might engage in.
Well, this has been long on rant, short on actual review. Dr. Hall may be over-optimistic, but I can't fault him for that. I'm a perennial black hat. There is a lot to fear when we consider, as he does, that large corporations and the military will be the first to develop really advanced AI. Neither entity is inclined to produce a "gentlemanly" machine. The army wants efficient killers, and a corporation wants efficient competition-killers.
What keeps corporations and armies from making Earth a living hell for everyone except a tiny oligarchy? Beginning with Teddy Roosevelt, we've had 100 years of "trust busting" and other anti-monopoly action in the US, and a much longer history thereof in Europe, particularly England. To Dr. Hall, this is a hopeful indicator, and I tend to agree.
My motto for the purveyors of AI devices in the future: Never build a computer that controls its own power supply.