kw: computers, artificial intelligence, predictions
Some thirty years ago, when I was an OS analyst for large mainframes and supercomputers, I visited my favorite aunt and uncle. At one point, my uncle asked, "Do you think computers will ever take over the Earth?" I replied, "They already have," and went on to explain how society would fall apart without them. But I also said there is no intentionality; the computers were not agents, but tools, very, very fast and powerful tools indeed, but tools.
A few days ago, the June 2010 issue of Scientific American arrived, and I have just read the feature article, "12 Events That Will Change Everything." One of the twelve is titled "Machine Self-Awareness," with the subtitle, "What happens when robots start calling the shots?" The writer of this piece is Larry Greenemeier. He quotes Hod Lipson of Cornell University: as machines "get better at learning how to learn" (Greenemeier's phrasing), "I think that leads down the path to consciousness and self-awareness" (Lipson's words).
To be short about it, I don't think so. Artificial Intelligence (AI) based on computers has been preached for more than fifty years, and it seems no closer now than it was when ENIAC was called an Electronic Brain. As it happens, heuristic programming remains as difficult as it ever was, and what passes for machine learning is very simple indeed. Remember HAL from 2001: A Space Odyssey? Nearly nothing predicted by Arthur C. Clarke in that screenplay has emerged, here nine years past that date. A few of the simpler goals of the Japanese Fifth Generation project were achieved, but the widespread adoption of Inference Engines never happened, because such engines were never brought to fruition.
There is a critical difference between the hardware/software combination we call a computer, and the wetware, the brain/body system, we call an animal mind. The following is true not just of humans but of animals in general: a Mind is really, really good at finding similarities and recognizing familiar things, and really, really bad at numerical calculations and at finding subtle differences and distinctions. And the converse is true of all computational devices: a Computer is really, really good at numerical calculations and manipulations, and at finding subtle differences and distinctions, and really, really bad at finding similarity and at recognition. I have built a career upon this distinction, upon taking advantage of the synergy between a Mind and a Machine (or Computer). A slogan in my profession is "Let the singers sing and the dancers dance."
Self-awareness is hard. It requires a recognition task of the highest order. So far as we know, self-awareness is found only in humans, the great apes, and a handful of other species such as dolphins, elephants, and certain birds. All other animals show no signs of a self-concept. Yet it takes only a circuit of three or four neurons in an animal brain to perform recognition tasks that require very sophisticated software in computers. For example, the face-recognition software in Picasa (one of my favorite Google tools) does a workmanlike job, but makes some spectacular blunders. A small fly, with a brain of perhaps a hundred thousand neurons, is as fast and more accurate. Of course, it is so far not possible to couple a fly's brain to an installation of Picasa.
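For the curious, here is roughly what machine face-finding looks like in code. This is only a sketch, and it is not Picasa's recognizer (Google never published that); it uses OpenCV's stock Haar-cascade face detector, and the image filename is a placeholder. It grinds through an enormous amount of arithmetic just to draw boxes around faces, and it will still miss faces and flag lampshades in ways no animal eye would.

```python
# Minimal face-detection sketch using OpenCV's bundled Haar cascade.
# Illustrative only -- not Picasa's recognizer; "family_photo.jpg" is a
# placeholder path.
import cv2

# Load the pre-trained frontal-face cascade shipped with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("family_photo.jpg")
if image is None:
    raise SystemExit("Put a test photo at family_photo.jpg first.")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # detector works on grayscale

# Slide the detector over the image at many scales; each candidate window
# must pass a long cascade of feature-threshold tests before it counts.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_marked.jpg", image)
print(f"Found {len(faces)} face(s)")
```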
It turns out that the behavior of just three neurons in a circuit is so complex that it takes a large, multicore piece of hardware to run a program that accurately mimics it. Artificial neurons have proven very hard to produce. Let's suppose a true learning machine is one day developed. Upon what will it be based? Probably on some kind of artificial neuron. It will be a kind of artificial animal. What other life-systems must be provided for it to operate correctly?
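To give a feel for what "accurately mimics" entails, here is a minimal sketch of three coupled spiking neurons using the well-known Izhikevich model, which is itself a drastic simplification of real neuron biophysics. The coupling weights, drive current, and time step are invented for illustration; a faithful biophysical simulation would multiply the arithmetic by orders of magnitude.

```python
# Sketch: three coupled spiking neurons (Izhikevich model) in discrete time.
# Parameters a, b, c, d are the standard "regular spiking" values; the
# synaptic weights and input current are illustrative guesses.
import numpy as np

N = 3                        # three neurons
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt = 0.25                    # time step in ms (small, for numerical stability)
T = 1000.0                   # simulate one second of activity

# Hypothetical wiring: neuron 0 excites 1, 1 excites 2, 2 inhibits 0.
W = np.array([[0.0, 0.0, -2.0],
              [5.0, 0.0,  0.0],
              [0.0, 5.0,  0.0]])

v = np.full(N, -65.0)        # membrane potentials (mV)
u = b * v                    # recovery variables
spikes = []

for step in range(int(T / dt)):
    I = np.zeros(N)
    I[0] = 10.0                          # constant drive into neuron 0 (made up)
    fired = v >= 30.0                    # neurons that spiked last step
    I += W @ fired.astype(float)         # deliver synaptic input

    v[fired] = c                         # reset the neurons that fired
    u[fired] += d

    # Euler integration of the Izhikevich equations.
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * (a * (b * v - u))

    for n in np.where(fired)[0]:
        spikes.append((step * dt, n))

print(f"{len(spikes)} spikes in {T:.0f} ms across {N} neurons")
```

Even this toy performs tens of thousands of floating-point updates per neuron to reproduce a single second of activity, and it is nowhere near an accurate mimic of the real cells.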
I suspect that it will need sensory input to keep it sane. Any animal kept for too long in a sensory-deprivation environment becomes unbalanced, sometimes permanently. So, provide senses. Now it needs filters, so it is not overloaded by its senses, but can tune its awareness of them. Finally, you have, perhaps, an artificial cockroach, except it is the size of a lapdog. I hope advances in battery technology give it more than a half hour of operation before recharging is needed.
At that point you have an interesting laboratory curiosity, but cockroaches are easy to breed. Training animals is cheaper than replacing them with such contraptions. So my final contention is that it will never be economical to build a machine capable of self-awareness and keep it running long enough to attain a useful amount of education. I simply don't believe silicon (or any other technology) will replace carbon-based life. Ever. Sorry, Berserkers; sorry, HAL; sorry, Colossus; even sorry, Friday (Heinlein's cyborg girl) and all the Borgs out there. You're fun fiction. Fiction you will stay.
1 comment:
So if I understand you correctly, you're not saying it isn't possible to build self-aware machines; you're just saying it will never be cost-effective?
Forgive me, but that seems like a horrible argument for why self-aware machines will never proliferate. Paying people to do work is incredibly expensive, hence the explosion in automation. I find it hard to believe the automators won't just march right into self-aware machines if they are actually possible to build.
Even though I disagree with your reasoning, I still really liked your post. I wish more people would discuss this subject. If someone could come up with a formal proof as to why GAI is not possible, it would save many people lots of time, just as physics proved that perpetual motion machines aren't possible. Educated people don't bother with them anymore because they know they are impossible.