The title is a quote from Jeffrey Eugenides, and succinctly expresses my understanding of the mind. A longer exposition on the mind and its possible futures is found in The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind by Michio Kaku. Dr. Kaku, a physicist whose specialty is string theory, is well known to those who watch the Science and Discovery cable channels. He is always ready with provocative, quotable sound bites on scientific subjects.
In The Future of the Mind he first explores what the mind is, particularly the conscious mind, and defines consciousness in his own unique way. I like his approach:
Human consciousness … creates a model of the world and then simulates it in time, by evaluating the past to simulate the future. This requires mediating and evaluating many feedback loops in order to make a decision to achieve a goal. (p 46)

I would only add: goals can be both innate (hunger or reproduction) and derived (the engineering steps needed to construct a bridge, even though the bridge is part of a larger, innate goal). The reference to feedback loops harks back to an earlier discussion of levels of consciousness.
- Level 0: Stationary organisms or mechanisms that react through one or a very few feedback loops over a few parameters. The lowest possible consciousness is that of a thermostat, which he rates as Level 0:1 because it reacts to a single parameter, temperature. Plants react to light, gravity, temperature, moisture, and perhaps a few mineral concentrations, and could be characterized as Level 0:n, where n is about 10. (A minimal sketch of such a single feedback loop follows this list.)
- Level 1: Motile creatures (and perhaps some mechanisms), which can therefore react to changes in space and location; particularly animals with a central nervous system, such as fish and reptiles.
- Level 2: Social animals, particularly those that express a theory of mind and are thus reacting to the possible or probable intentions of their fellows and other animals such as predators or their prey. The number of feedback loops that Dr. Kaku might enumerate here grows into the hundreds or thousands.
- Level 3: Future consciousness, which may or may not be among the capabilities of some nonhuman animals, but is a characteristic of human consciousness. Planning for the future, particularly with multiple contingencies, and not as an instinctual reaction, is the hallmark of this Level.
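To make the feedback-loop counting concrete, here is a minimal sketch of a Level 0:1 "consciousness": a thermostat with exactly one feedback loop over one parameter. This is my illustration, not Dr. Kaku's; the setpoint and hysteresis values are arbitrary.

```python
# A Level 0:1 "consciousness": one feedback loop over one parameter.
# Setpoint and hysteresis values are arbitrary, for illustration only.

class Thermostat:
    def __init__(self, setpoint=20.0, hysteresis=0.5):
        self.setpoint = setpoint      # target temperature, degrees C
        self.hysteresis = hysteresis  # dead band to avoid rapid cycling
        self.heating = False

    def step(self, temperature):
        """One pass through the single feedback loop: sense, compare, act."""
        if temperature < self.setpoint - self.hysteresis:
            self.heating = True
        elif temperature > self.setpoint + self.hysteresis:
            self.heating = False
        return self.heating

# A plant, in this scheme, would be Level 0:n -- roughly ten such loops
# (light, gravity, temperature, moisture, minerals...), each this simple.
t = Thermostat()
for temp in (18.0, 19.6, 20.6, 21.0):
    print(temp, t.step(temp))
```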
At this point I must note a puzzling item, an apparent error. In his student years, the author experimented with Sodium-22 (Na-22), an isotope that emits positrons. He then mentions, in two places (pp 5, 26), that Na-22 is used for taking PET (positron emission tomography) scans of brain activity. Not really. Wafers containing a tiny amount of either Na-22 or Ge-68 are used as "spot markers", stuck on the outside of the body to provide orientation markers, typically for organs other than the brain, which has such a distinctive shape that markers are usually not used. Brain scanning in particular uses Fluorine-18 in a glucose analog (fluorodeoxyglucose or FDG); glucose concentrates in active areas of the brain, and FDG with it. The positrons detected in the scanner "light up" these active areas on the scans.
F-18 has the virtue of a very short half-life of 110 minutes, and must be generated in a cyclotron at or near the imaging facility just before use. Na-22 and Ge-68 have half-lives of 2.6 years and 8.9 months, respectively. Also, neither can be used to produce a glucose analog. Even if they could, achieving a similar level of positron emission would require much larger amounts, which would continue to emit at that level for months or years. Thus F-18 is thousands of times safer in the body than the other two.
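A back-of-envelope check of that "thousands of times" claim: a sample's activity is proportional to the number of atoms divided by the half-life, so matching the emission rate of an F-18 dose with Na-22 would require more atoms by the ratio of the two half-lives, and those atoms would keep emitting for years. A minimal sketch:

```python
# Activity A = lambda * N, with lambda = ln(2) / half_life, so for equal
# activity the number of atoms N must scale with the half-life.

F18_HALF_LIFE_MIN  = 110                     # minutes
NA22_HALF_LIFE_MIN = 2.6 * 365.25 * 24 * 60  # 2.6 years, in minutes

ratio = NA22_HALF_LIFE_MIN / F18_HALF_LIFE_MIN
print(f"Na-22 atoms needed per F-18 atom: {ratio:,.0f}")
# ~12,400 -- and unlike F-18, which is essentially gone within a day,
# those atoms are still emitting at nearly the same rate a year later.
```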
Onward. Leading up to the multilevel model of consciousness, I find this statement:
Self-awareness is creating a model of the world and simulating the future in which you appear. (p 36)

This leads later to a discussion of whether machine consciousness can become self-aware. A recent article in Wired by Kevin Kelly discusses Artificial Intelligence as an emerging "cloud service", a scalable, on-demand service already in use, for example, by the face-recognition modules in programs such as Picasa and Photo Gallery. Kelly particularly notes that consciousness seems to need an element of chance to make it work. If so, conscious intelligence is inherently less than 100% reliable, and future AI offerings may need to be certified as "Non-Conscious". Thus he views machine intelligence as something supplementary to the "natural" consciousness we experience, and best kept unaware.
Dr. Kaku believes just the opposite, and discusses at length the possibility of machine self-awareness, and the possibility that we will be replaced by machines. The word "robot" is bandied about, with little acknowledgement that the word has two very distinct, very different uses in science fiction versus industry.
Industrial robots are actually better described either as Waldoes—based on "Waldo" by Robert A. Heinlein in 1942—if they are directly human-controlled (this includes drones), or as programmable actuators when they are controlled by a program running in a connected computer. Thus they are a logical extension of NC (numerically controlled) machining.
Autonomous robots as described by Isaac Asimov in I, Robot and all his later "Robot" books and stories, whether subject to his "Three Laws of Robotics" or not, are still decades in the future, if indeed they can be realized as self-contained entities at all. Current state-of-the-art autonomous robotic mechanisms, such as the Stanford car that finally won the DARPA self-driving competition in 2005, are barely at the threshold of Level 1 consciousness. Their "planning" capabilities are pre-programmed, an analog of animal instinct, and limited to finding a way to specific GPS coordinates.
Moore's Law states that the number of devices on a computer chip tends to double about every 18 months. It is a trend Dr. Gordon Moore observed, but it has become a self-fulfilling prophecy driven by the profit motive. A related trend concerns the power required for a given amount of processing speed: watts per GFLOPS (a gigaFLOPS is a billion FLoating-point Operations Per Second) seem to fall by about half every two years. This allows us to make a prediction, on the assumption that Moore's Law continues to hold long enough. Today's fastest computer system has processing speed and memory capacity very similar to the human brain, but consumes 9,000,000 watts, including air conditioning. The brain maxes out at 20-25 watts. Nine million divided by 25 is 360,000, or about 2 to the 18.5 power. At one halving every two years, that implies at least 37 years before human-level AI can be run on 25 watts.
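The arithmetic behind that 37-year figure, as a minimal sketch (the 9 MW and 25 W figures are the book's; the two-year halving is the trend noted above):

```python
import math

# Power gap between today's brain-scale supercomputer and the brain itself.
machine_watts = 9_000_000   # the book's figure, including air conditioning
brain_watts   = 25          # upper end of the brain's 20-25 W range

ratio     = machine_watts / brain_watts   # 360,000
doublings = math.log2(ratio)              # ~18.5 halvings needed
years     = 2 * doublings                 # one halving per two years
print(f"ratio {ratio:,.0f} = 2^{doublings:.1f} -> about {years:.0f} years")
```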
Moore's Law is already in trouble, however. The fastest computer chips today run at about the same speed as those of about 10 years ago. Greater total power in a "CPU chip" for your PC is achieved by putting multiple processors on the chip. That is why they are now called "multicore" CPU chips. The computer I am using has a 4-core CPU. Commercial chips top out at 16 cores (as of late 2014), and the Watson supercomputer has thousands of these wired together.
I don't hold out much hope for "quantum computers" (QCs). The hype about these devices is beyond incredible. Their operation requires maintaining coherence among some number of quanta, typically electrons or ions held in some kind of magnetic trap, and being able to decohere them in sequence for readout into ordinary electronic devices. Holding coherence longer than a small fraction of a second is comparable to balancing a pencil on its point. I suspect that the ancillary machinery needed to maintain coherence and, worse, to manipulate it quantum-by-quantum for readout will grow exponentially with both the length of time coherence is needed and the number of quanta in use. I don't expect a QC to crack AES-256 encryption anytime this century, if ever.
I find the middle of the book most useful. Dr. Kaku discusses mechanically enhancing our smarts. This is actually what we do all the time with academic technologies, beginning with the emergence of writing a few thousand years ago. While we still ought to teach times tables to our youngsters (gigantic groan from the grandkids), calculators in our phones and watches ensure that we make fewer arithmetical blunders. Asimov's "The Feeling of Power", published in 1958, imagines mental arithmetic being rediscovered after decades during which all calculation was done using small devices (in 1958 the "desk calculator" was a bit bigger than a portable typewriter). These days we use Google or Bing or DuckDuckGo to find stuff we're not quite sure we remember, or don't know in the first place. Siri and other voice apps on our phones make this process simpler than ever. This enhances our useful smarts.
I am not sure most of us will ever need the invasive devices he describes, such as nanowire hookups to the hippocampus and other areas that mediate memory. The mind is tough to tinker with mechanically. TMS (transcranial magnetic stimulation), using a magnetic coil outside the skull, can briefly inhibit certain functions. It has been used to make a person a temporary psychopath, by zapping the brain area where caring resides, and to briefly release savant capabilities, by shutting down an area of the brain that is inactive in autistic savants. But TMS does not add capabilities; it only releases inhibitions placed upon some functions in ordinary brains. Why would you want to be a psychopath, anyway? Consider Neil Armstrong, who needed totally uncaring, steely resolve to land the Lunar Module in 1969. Not all psychopaths are criminals. Maybe future lunar missions (or even commercial airliners) will include a TMS device to shut down distracting anxiety in a pilot during landing.
Suppose we learn to read out and implant memories, even to create or erase them at will. Sometimes this could be a very good thing. I define neuroses as "out-of-date defense mechanisms": the person or situation that hurt someone is gone forever, but they still react to certain stimuli in embarrassing or disabling ways. When a neurosis is based on a well defined, focal experience, psychologists call it an engram, and erasing engrams might be a very useful future application of mind technology. Other than that, leave my memories alone!
But memory is slippery, and specific incidents don't just make a kind of diary record in some spot in the brain. Dr. Kaku describes well how shortcut/thumbnail images go one place, emotional memories another, smells elsewhere and so forth. Recalling a memory means gathering all these bits back together for replay through some part of the frontal lobe (and relevant spots throughout the brain) so you can relive the incident. But we edit our memories, emphasizing certain items at the expense of others that we gradually forget entirely. This makes "truth serums" unreliable, as discussed in a mind control chapter.
Dr. Kaku discusses the possibility that we might merge with our electronic offspring, once it is to our benefit to do so. This simply expands the notion of "prosthesis" to the brain. Certain modern "artificial legs" actually perform better than the original for specific tasks. Just ask the "blade runner" (and it is unfortunate that he is now a felon; I don't think it likely he knowingly killed the girl but he couldn't convince a jury of that). He wasn't nearly such a fast runner before he got springy metal feet. But he'd need differently designed prostheses to play football (soccer in America).
As I have mentioned many times in earlier posts, I made a 40-year career out of writing software that worked with people, taking advantage of what people do well and leaving to the machine the tasks that people do poorly. A mechanical brain excels at detecting differences. There are amusing puzzles such as "find 10 things that are different between these two pictures". Sometimes one of the pictures is a mirror image, which to me actually makes it easier. Something that takes experienced puzzle solvers 5-10 minutes would be solved by a computer with a webcam in a second or less. It might also highlight several hundred or thousand tiny discrepancies that arise from printing ink interacting with the fibers in the paper, something few humans would notice without a microscope. A "wetware" brain excels at detecting similarities. That is why we can see camels or fish in a cloudy sky, or recognize someone from seeing only the edge of a face turned mostly away.
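A minimal sketch of why difference-spotting is trivial for a machine, assuming the two pictures are already aligned and the same size (NumPy and Pillow; the threshold of 30 gray levels is a hypothetical choice):

```python
# Spot-the-difference, machine style: pixel-wise comparison of two
# aligned images. Assumes equal size and alignment; threshold is arbitrary.
import numpy as np
from PIL import Image

def find_differences(path_a, path_b, threshold=30):
    a = np.asarray(Image.open(path_a).convert("L"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=np.int16)
    diff = np.abs(a - b) > threshold   # True wherever pixels disagree
    ys, xs = np.nonzero(diff)
    return list(zip(xs, ys))           # every differing pixel location

# This flags the ten intended differences -- and also every speck of
# ink-on-fiber noise, which is why a human still refines the result.
```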
Only in the past week, I noticed that Picasa is picking out faces that are in profile, something it couldn't do before. But it is still flagging a percent or so of things that are clearly not a human face. However, its ability to find 90+% of the faces in my photos really speeds up face tagging. If I give it time after loading a new batch of pix, it gathers suggestions for many of the faces from my library of identifications of about 700 friends in multiple images. This is an example of useful AI: it isn't as good as I am, and doesn't need to be. It just needs to do most of the work and leave it to me for refinement. But I would not want to leave it to the Picasa face-recognizer to guide a drone on a kill mission. Not when it mistakes so many other Asian women for my wife!
A minor error seen in passing on p 255: the fastest supercomputer at the time of writing could perform about 20 PFLOPS (P = Peta), which is explained as 20 trillion; it is actually 20 quadrillion. A trillion FLOPS is a TFLOPS (T = Tera).
And, oh dear, another: comets in the Oort cloud are described on p 289 as lying "motionless in empty space". Even at distances up to a light year, these comets move at speeds on the order of 100 m/s in orbit about the Sun. Compared to the Earth zipping along at nearly 30 km/s, or even Pluto, averaging about 5 km/s, that is quite slow, but far from "motionless". For scale, Autobahn speeds top out near 90 m/s.
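That 100 m/s figure is easy to check from the circular-orbit formula v = sqrt(GM/r); a minimal sketch for a comet one light year out:

```python
import math

# Circular orbital speed v = sqrt(GM/r) for a comet orbiting the Sun.
GM_SUN     = 1.327e20   # gravitational parameter of the Sun, m^3/s^2
LIGHT_YEAR = 9.461e15   # meters

v = math.sqrt(GM_SUN / LIGHT_YEAR)
print(f"orbital speed at 1 light year: {v:.0f} m/s")   # ~118 m/s
```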
Dr. Kaku excels in speculation, which means he is frequently mistaken as his expectations are overtaken by actual events. However, only those who have the courage to predict have the chance of sometimes being right. While the single-processor version of Moore's Law played out by the mid-2000s, multicore chips and continuing experiments with vertical-transistor designs sustain a somewhat more modest trend. Will we ever achieve the 20 PFLOPS-at-20-watt processor needed to equal a brain in speed and power draw, and also in volume (2 L or less)? Moore's Law might suggest the 37-year timeline I figured above, but we can't really know until we try.
And I don't think duplicating human consciousness is a worthy goal anyway. Much better is producing machinery with sufficient computing power to enrich everyone's lives at affordable cost. This matches the old Japanese supercomputer project that had as one goal achieving Cray-1 capability (100 MFLOPS) in a $2,000 PC by 1995. That goal was achieved. The computer I am using now, which I built, is 100 times that fast, and the parts cost $800. I want machines to continue what they do best: complement and supplement our abilities. I think Dr. Kaku would agree, in spite of his excited, blue-sky forecasts.
The Appendix is titled "Quantum Consciousness?". It is likely that "free will" and full consciousness require quantum uncertainty; here I am in full agreement. There is quite a discussion of the "cat in a box" proposed by Schrödinger: a carefully set-up radioactive detector has exactly a 50% chance, within the hour, of triggering the release of poison gas that kills the cat, and at precisely the one-hour point we are asked, "Do you think the cat is dead or alive?" Much is made of the meaning of the Observer, in the Copenhagen Interpretation favored by Niels Bohr and most physicists, and in other interpretations. No mention is made of the fact that the cat is also an observer! In fact, the results of many experiments intended to "prove" these things show that photographic emulsions, CCD detectors and other devices are also observers: they record the "collapsed wave function" phenomena whether or not a human is present. Nobody suggests that the image on a piece of film, developed in automatic machinery, does not truly appear until a human actually turns on a light and looks at it.
I think I am repeating something I wrote elsewhere: a beam of light passing through a vacuum is affected by everything it passes, at any distance whatever. Of course, if you pass it through a small hole you'll get a diffraction pattern; the edge of the hole is the "observer" that scatters the photons into a more divergent beam. But even a 1 mm diameter laser beam, if it passes through a 1 m aperture, will make a different pattern on a distant film than it would if the aperture were 2 m across. It will also differ if the aperture is square rather than round. The existence of "things" in the universe provides an infinite number of "observers", contributing to the collapse of the wave function—if indeed that is what actually happens—for every quantum event everywhere.
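The scales involved can be estimated from the basic diffraction relation θ ≈ λ/d; a minimal sketch, assuming a 633 nm helium-neon laser:

```python
# Angular scale of diffraction: theta ~ lambda / d.
WAVELENGTH = 633e-9   # meters, assuming a helium-neon laser

for d in (1e-3, 1.0, 2.0):    # beam waist, then the two apertures
    theta = WAVELENGTH / d    # radians
    print(f"d = {d:g} m -> theta ~ {theta:.1e} rad")

# The 1 mm beam itself diverges at ~6e-4 rad; the 1 m and 2 m apertures
# add edge-wave ripples at ~6e-7 and ~3e-7 rad respectively -- tiny,
# but different, which is the point: the aperture is an "observer".
```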
Thus, the author's conclusion is apt. We must know ourselves better, not only to enhance or even duplicate our abilities, but to develop tools that work with us in better and better ways, in more and more useful realms of experience.