Short Definitions:
Intelligence – the ability to acquire and apply knowledge and skills
Intellect – the faculty of reasoning and understanding objectively

The first tension that presents itself in Morphing Intelligence: From IQ Measurement to Artificial Brains, by Catherine Malabou, is the choice, roughly a century ago, to use "intelligence" where most philosophers would have preferred "intellect" in discussions of reasoning ability and its measurement. I suspect that the definitions she would use for these terms differ from those shown above. Personally, I care little which term is used.
It took me a good while to get used to the language of the book, which was translated from French by Carolyn Shread. I was not always sure whether this or that infelicity of English usage arose from the original French idiom or from word choices made by the translator, choices I might have made differently. I suspect the former; philosophers, particularly European philosophers, think in ways quite foreign to us hoi polloi. Such is the legacy of a century of Linguistic Analysis (both Ideal and Colloquial) and its sister, Logical Positivism: too much musing on "the meaning of meaning." Once I had my mind in gear, I found the book quite engaging.
The core of the book, which explains the title, centers on three "metamorphoses" of "intelligence." I would perhaps have used "manifestation" or even "aspect" rather than "metamorphosis". The three are, in brief:
- The view of Intelligence as a genetic endowment of learning and reasoning ability that can be measured. It is what IQ tests purported to measure, and in the literature it is called both IQ and g, g being the "general factor," as opposed to the seven-factor (later eight-factor) model promoted by Howard Gardner. "Mechanical brains" based on digital computers were the early attempts to simulate mental activity in mechanisms.
- The more nuanced understanding of epigenetic effects, both those which affect DNA expression and those which affect the "wiring" of neurons and synapses. Recent developments of "artificial neurons," such as the "neuro-synaptic processor" or TrueNorth chip (see Cassidy et al., 2016), make significant advances over traditional digital processing.
- Future developments are expected to result from "removal of the rigid frontiers between nature and artifice." In other words, the "power of automatism" is expected to yield a constructed system that is a brain in every meaningful sense of the term.
The TrueNorth chip, with upwards of 5 billion transistors simulating the action of a million neurons connected by 256 million synapses, is intended to functionally simulate 250 cortical columns, of which the neocortex of a human brain has about four million. This rests on measurements, made since 2009, showing that the neocortex contains about 16 billion neurons, while the cerebellar cortex (which runs the automatic systems of the body) contains about 80 billion. Clearly, the cerebellar neurons are much more locally connected; much of the greater mass and size of the neocortex results from the great number of longer-distance connections via larger-diameter axons.
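Just to satisfy myself that these round numbers hang together, here is a quick back-of-the-envelope check (a sketch in Python; the per-column neuron count is simply what the quoted figures imply):

```python
# Back-of-the-envelope check that the quoted figures are mutually consistent.
neurons_per_chip = 1_000_000          # TrueNorth simulates ~1 million neurons
columns_per_chip = 250                # cortical columns it is meant to emulate
neocortex_neurons = 16_000_000_000    # ~16 billion neurons in the human neocortex
neocortex_columns = 4_000_000         # ~4 million cortical columns

# Neurons per column implied by the chip vs. by the brain-wide figures
print(neurons_per_chip / columns_per_chip)        # 4000.0
print(neocortex_neurons / neocortex_columns)      # 4000.0 -- the figures agree
```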
Assuming that the constructed neurons in the TrueNorth chip can indeed replicate the full complexity of biological neurons, a full brain simulation would require 16,000 chips for its neocortex, quite a nest of wiring for the "white matter" that ties the neocortex together, and 80,000 chips for its cerebellum…connecting the cortex to what kind of body, I can't imagine just now. So let's look only at the 16,000 neocortical chips. According to the 2016 article, each chip consumes a mere 65 mW. Times 16,000, that comes to just over a kilowatt (1,040 watts). Not bad. That compares well with a computer system reported in a short Scientific American article a few years ago, which was thought to have capabilities similar to a human brain and required a 9,000,000-watt power plant.
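Spelling out that arithmetic (again a sketch, using the chip and neuron figures above; the 65 mW per-chip figure is the one quoted from the 2016 article):

```python
# Chip counts and power draw for a hypothetical full-brain assembly of TrueNorth chips.
neurons_per_chip = 1_000_000
neocortex_neurons = 16_000_000_000
cerebellum_neurons = 80_000_000_000
watts_per_chip = 0.065                 # 65 mW per chip, per the 2016 article

neocortex_chips = neocortex_neurons // neurons_per_chip    # 16,000 chips
cerebellum_chips = cerebellum_neurons // neurons_per_chip  # 80,000 chips
neocortex_power = neocortex_chips * watts_per_chip         # 1,040 watts

print(neocortex_chips, cerebellum_chips, neocortex_power)  # 16000 80000 1040.0
```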
I suppose we can at least estimate that something similar to Moore's Law applies to these systems, with a doubling of efficiency every two years. Your brain and mine each consume about 15 watts of chemical energy (transformed partly into electrical signals). The neocortex requires a third of this, about 5 watts. 1,040/5 = 208, which comes to 7.7 powers of two (doublings), or 15.4 years at two years per doubling. Perhaps the contention of Ms. Malabou is correct, and automatism will prevail. Will it do so in that span, beginning in 2016? That would be some time in the middle of 2031. Maybe I'll find out, because I'll be 84 that year.
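And the doubling estimate, worked out the same way (a sketch; the two-year doubling period is the Moore's-Law-style assumption I am making, not anything from the book):

```python
import math

# How many efficiency doublings separate 1,040 W from the neocortex's ~5 W budget?
simulated_neocortex_watts = 1040
biological_neocortex_watts = 5
doubling_period_years = 2              # assumed Moore's-Law-like pace
start_year = 2016

ratio = simulated_neocortex_watts / biological_neocortex_watts   # 208.0
doublings = math.log2(ratio)                                     # ~7.70
years_needed = doublings * doubling_period_years                 # ~15.4 years

print(round(doublings, 1), round(start_year + years_needed, 1))  # 7.7 2031.4
```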
All this I push aside in favor of another thought, one not found in Morphing Intelligence: Of what use is a perfect simulation of the human brain? I am reminded of a story by Isaac Asimov, probably found in his collection The Rest of the Robots. Using science as it was known in about 1960, he describes the endeavors of researchers at U.S. Robots and Mechanical Men to make a robot "more and more perfect", as directed by the company president. One day the researchers bring him a robot that he cannot distinguish from a "natural" man. Shortly after this, an alien spaceship lands, and in due time, a delegation visits U.S. Robots. They are shown the new "perfect human" robot. The company president gushes about the huge amount of research and cost required to develop it. An alien responds, "So, what's the point?"
There is indeed a point. Once we understand brain activity and function sufficiently well that we can simulate it perfectly, we have the basis for producing a true AI that is equal to our NI (Natural Intelligence) in power, but different, in ways we can determine. We can produce variations on a theme, perhaps developing new ways of thinking, a mechanism (or a bunch of them) from which we can learn new ways of perceiving and learning and interacting with the universe. Now, that is interesting!