kw: book reviews, science fiction, space travel, space aliens, artificial intelligence
Considering recent titles by Ben Bova, I figure that his new novel Survival is one of a series, probably the second or third. Very generally, humans have been contacted by a starfaring machine culture that they call the Predecessors, an altruistic one that is attempting to contact as many planet-bound cultures as it can, to help them prepare to survive a "Death Wave" spreading from the galactic core. Periodic cataclysms there emit enormous waves of gamma radiation that scour every planet in the galaxy of organic life, along with any "machine life" that is insufficiently hardened against the radiation.
Much of the dramatic tension in the book, aside from a couple of love stories, comes from the contrast between the Predecessors and another machine culture, dubbed the Survivors, on a planet about 2,000 light-years from Earth toward the galactic center. The Survivors are not altruistic. They are motivated only by a drive to survive that was built into them by the organic beings who first created them and then perished in an earlier death wave. A human ship sent to warn the Survivors is held captive by them. Having already lived through prior death waves, they do not wish to receive such help, and they are supremely indifferent to the fate of the organic life on their planet, even though they had a hand in preserving remnants of earlier biospheres and reinstating them.
It would give away too much to describe more of the plot. Of course, this would not be a Ben Bova novel if the humans didn't find a way to influence the Survivors.
I am interested in decision making by men and machines. A number of psychological and sociological studies in the past couple of decades have demonstrated that an emotionless, Spock-like, hyper-logical being would actually be crippled when it came to deciding between competing alternatives. People who have suffered injuries that disconnect their intellect from their emotions, or that destroy their emotional centers, become incapable of making decisions. It seems that we need the ability to like, or even love, one thing more than another. Emotion is not contrary to logic; in some way it amplifies it.
We were, or at least my generation was, taught that our brains evolved in layers: an entirely reactive "reptile brain" or even "fish brain" overlain by an emotional "rat brain", all wrapped in a cerebral cortex, the "higher brain" in which free will, creativity and intelligence reside. Further, if we learned a smidgen of endocrinology, we found that emotion is largely a bodily function, mediated by hormones and other small molecules that motivate or demotivate us in response to many kinds of stimuli. The reality is not that simple. Disconnect the cortex from the inner layers, and reasoning spins along without any "hooks" to pick one alternative over another.
The "modern" (50-70 years) efforts to create Artificial Intelligence (AI) comprise two branches. The engineering branch includes various attempts to replicate human reason by finding all the rules we follow. The "poster child" for this branch is the Expert System, much touted 20-30 years ago. Its best-known example was the Caduceus System for medical diagnosis. It was supposed to be able to ask a doctor or patient a series of questions and then produce a differential diagnosis of any condition. It could eventually diagnose about 1,000 conditions, a small fraction of the total. As is typical of the field of AI in general, it had a few dramatic successes amidst a great number of equivocal results and a large mass of failures.
The heuristic branch of AI includes neural nets and machine learning systems. A neural net is a framework thought to (very roughly) represent natural networks of neurons; while it can be implemented in hardware, it is most often emulated by software running on a multi-core processor. A machine learning system can be based on a neural net, or on other technologies that are being tried on a rather empirical, almost trial-and-error basis. The Watson system, which won a round of Jeopardy against two very talented men, is the current exemplar. It doesn't make the news much these days. It cost billions to develop, and a number of applications (besides winning game shows) are still being developed. It has been 7 years since the Jeopardy event. So far as I can find out, its main strength is parsing and answering questions posed in natural language, presumably orally, and it is mainly being used for decision support in the treatment of lung cancer.
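For contrast with the rule-based sketch above, here is an equally toy neural net, emulated in software as just described: a two-layer network that learns XOR by gradient descent. The layer sizes, random seed, and learning rate are arbitrary illustrative choices, and have nothing to do with Watson's internals.

```python
import numpy as np

# A toy two-layer neural network, emulated in software. Each "neuron" sums
# its weighted inputs and squashes the total through a sigmoid; training
# nudges every weight a little way down the error gradient.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: error deltas for each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```

Note what is missing: nobody wrote a rule for XOR. The behavior emerges from the weights, which is both the appeal of this branch and the reason its results are so hard to explain.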
Many of us now encounter AI in the form of Siri, Alexa, Cortana or a few other voice avatars of our smart phones and "thing in the den" systems. They can indeed parse our vocalizations into text, and then the "big computer in the cloud" can come up with an answer for us. A few years ago a short article in the endpapers of Scientific American touted a machine system that supposedly exceeded the human brain in processing speed and memory capacity. Just a bit of miniaturization is needed before such a "machine brain" can be installed in Robby the Robot, however: the system requires a 9-megawatt power plant to support the electronics and the cooling systems, and fills a warehouse. Can we expect technology to advance until such a system is lunch-pail size and runs off a lithium battery? Can Moore's Law (very roughly: capacity tends to double about every 2 years) deal with this?
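Some back-of-the-envelope arithmetic on that question (my own numbers, not the article's; the 100-watt budget for a lunch-pail device is an assumption):

```python
import math

# How many 2-year efficiency doublings would shrink a 9-megawatt "machine
# brain" to battery scale? The 100 W target is an illustrative assumption.
power_now = 9_000_000   # watts
power_target = 100      # watts

doublings = math.log2(power_now / power_target)
print(f"{doublings:.1f} doublings, about {2 * doublings:.0f} years")
# ~16.5 doublings, i.e. roughly 33 years *if* the doubling trend held.
```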
Wouldn't you know it, Moore's Law, at least as it shows up in clock speed, stalled about 15 years ago with the development of CPU cores (so called because more than one is now put on a single chip) that run in the 3-4 GHz range. That has been the limit ever since. I built the PC I am using about 8 years ago, to be a middle-of-the-road desktop system. It is still a middle-of-the-road system. In the 1980's a machine with this level of compute power was called a supercomputer. Today's supercomputers are enormous arrays of 4 GHz multicore processors running special software so they can communicate effectively and break a complex process into numerous subprocesses. Had the doubling continued from 2005 until today, single-core processors running at clock speeds in the hundreds of gigahertz, approaching 1 THz, would be the basis of most systems (and phones, etc.), and a large chip holding up to 1,024 cores would form the basis of supercomputers that would fit in a filing-cabinet-size unit, consuming no more than a few hundred watts.
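For the record, that extrapolation works out like this (a hypothetical projection, taking 2005 as the ~4 GHz starting point and 2018 as the year of writing):

```python
# Hypothetical projection: 2-year clock-speed doubling from 2005 to 2018.
base_ghz, start_year, end_year = 4, 2005, 2018
doublings = (end_year - start_year) / 2           # 6.5 doublings
print(f"{base_ghz * 2 ** doublings:.0f} GHz")     # ~362 GHz; 1 THz by ~2021
```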
Will quantum computing restart Moore's Law where it left off? So far, it has proved a great deal harder to take advantage of quantum coherence and decoherence than anybody predicted in the 1980's and 1990's, when the term "quantum computing" was coined. Without such a breakthrough, we have little hope of developing a "machine brain" to rival the mammalian brain for its combination of size and capacity.
And we still have no clue how to even define consciousness, let alone reproduce it. Every SciFi story about artificial consciousness either finesses it with "it just happened when we put together enough circuits", or doesn't try to describe its workings at all. Based on the studies mentioned earlier, a big component of consciousness is our emotional response system(s). We really don't understand that, and until we do, it may be impossible to develop a Watson-like "decision support system" into an effective "decision system".
Still, a well-told tale of how we might interact with any machine civilizations out there is a welcome diversion from the "organic" aliens we usually encounter. It gets the brain cells working in a new direction or two... a good thing.