Saturday, September 14, 2024

Will our children become cyborgs?

kw: book reviews, nonfiction, futurism, artificial intelligence, the singularity

I started these book reviews just after The Singularity is Near, by Ray Kurzweil, was published. I am not sure I read the book; I think I read some reviews, and an article or two by Kurzweil about the subject. Nineteen years have passed, and Ray Kurzweil has doubled down on his forecasts with The Singularity is Nearer: When We Merge With AI.

As I recall, in 2005 The Singularity referred to a time when the author expected certain technological and sociological trends to force a merging of human and machine intelligences, and he forecast this to occur about the year 2045. The new book tinkers with the dates a bit, but not by much. One notable change: in 2005 he considered the compute capacity of the human brain to be 10^16 calculations per second, with memory about 10^13 bits (roughly a terabyte: woefully small by modern estimates). His current estimate is 10^14 calculations per second, and I don't recall that he mentioned memory capacity at all.

I am encouraged that Kurzweil doesn't see us at odds with AI, or AI at odds with us, but as eternal collaborators. I'll be 98 in 2045, and I am likely to be still alive. Time will tell.

I am an algorithmicist. I built much of my career on improving the efficiency of computer code to squeeze the maximum calculations-per-second out of a piece of hardware. But a "calculation" is a pretty slippery item. In the 1960's and 1970's mainframe computers were rated in MIPS, Millions of Instructions Per Second. Various benchmark programs were used to "exercise" a machine to measure this, because it doesn't correlate cleanly with cycle time. Some instructions (e.g., "Move Word#348762 to Register 1") might consume one clock cycle, while others (e.g., "Add Register 1 to Register 2 and store the result in Register 3") might require six cycles; and the calculation wasn't really finished until another instruction put the result from the register back into a memory location. The 1970's saw a changeover from MIPS to MFLOPS, or Millions of FLoating-point Operations Per Second, to measure machine power. Supercomputers of the day, such as the CDC 6600 and the Cray-1, could perform "math" operations such as addition in a single cycle, so a machine with a clock rate of 1 MHz could approach a rate of 1 MFLOPS. (Note: The Cray-1 and later Cray machines used "pipeline processors", a limited kind of parallel processing, to finesse a rate of one FLOP per cycle; the Cray-1 achieved about 100 MFLOPS.)
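
For the flavor of what a MIPS or MFLOPS rating measures, here is a minimal sketch in Python. It is my own toy illustration, not any of the historical benchmarks such as Whetstone or LINPACK, and the name estimate_mflops is made up for the example; it simply times a long loop of floating-point additions and reports operations per second.

```python
# Toy MFLOPS estimate: time a long loop of floating-point additions.
# Interpreter overhead dominates in Python, so the printed figure says more
# about the language than the hardware -- it only illustrates the idea of
# "operations completed per second" behind MIPS/MFLOPS ratings.
import time

def estimate_mflops(n_ops: int = 10_000_000) -> float:
    x = 0.0
    start = time.perf_counter()
    for _ in range(n_ops):
        x += 1.000001               # one floating-point add per iteration
    elapsed = time.perf_counter() - start
    return n_ops / elapsed / 1e6    # millions of floating-point ops per second

if __name__ == "__main__":
    print(f"Roughly {estimate_mflops():.1f} MFLOPS (interpreted Python)")
```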

The middle of the book is filled with little charts explaining all the trends that Kurzweil sees coming together. This is the central chart:

[Chart from page 165: price-performance of computation over time, in FLOPS per constant 2023 dollar, on a logarithmic scale]

Note that the vertical scale is logarithmic; each scale division is 10x as great as the one below. The units are FLOPS/$, though spelled out at greater length on the chart, because before 1970 the FLOPS rate had to be estimated from MIPS ratings. Also, the last two points are for the Google TPU (Tensor Processing Unit), an offshoot of the GPU (Graphics Processing Unit) specialized for the massively parallel computation needed to train programs such as ChatGPT or Gemini. One cannot buy a TPU, only lease time on one, so some figuring had to be done to make those data points "sit" in a reasonable spot on the chart. The dollars are all normalized to 2023.

The trend I get from these points (an exponential line from the second point through the next-to-last) is 52.23 doublings in 80 years, or a doubling every 18.4 months. That is also a factor of ten every five years and a month (61 months). Of course, the jitter of the charted line indicates that progress isn't all that smooth, but the idea is clear: whatever is happening today can be done about ten times as fast five years from now, or one can do ten times as much in the same amount of time.
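
To make the arithmetic explicit, here is that conversion as a short Python snippet. The 52.23 doublings and the 80-year span are just the figures quoted above, read off the chart; nothing else is assumed.

```python
# Back-of-envelope trend arithmetic from the chart:
# 52.23 doublings over the 80 years between the chosen endpoints.
import math

doublings = 52.23
years = 80.0

months_per_doubling = years * 12 / doublings           # ~18.4 months
months_per_10x = months_per_doubling * math.log2(10)   # ~61 months

print(f"Doubling time: {months_per_doubling:.1f} months")
print(f"Factor of ten every {months_per_10x / 12:.1f} years "
      f"({months_per_10x:.0f} months)")
```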

When I was in graduate school, about 1980 (I was several years older than my classmates), we were asked to write an essay on how we would trace the history of plate tectonics back in time "about 2 billion years", and whether computer modeling could help. I outlined the scale of simulation that would be needed and stated that running the model through a couple billion years of Earth history would take at least ten years of computer time on the best processors of the day. I suggested that it would be best to take our time preparing a good piece of simulation software, but to "wait ten years, until a machine will be available that is able to run 100 times as fast for an economical cost". I didn't get a good grade. As it turned out, from 1978 to 1988 the trend of "fastest machine" is essentially flat on the chart above! It took another five or six years for the trend to catch up after that period of doldrums. Now you can view a video of the motions of tectonic plates around the globe over the past 1.8 billion years, and the simulation can be run on most laptops.

So, I get Kurzweil's point. Machines are getting faster and cheaper, and perhaps one day there will be a computer system smaller than the average Target store that can hold not just the simulation of one person's brain, but that of the whole human race. However, as I said, I am an algorithmicist. What is the algorithm of consciousness? Kurzweil says at least a couple of times that if 10^14 calculations per second per brain turn out not to be enough, "soon" after that there will be enough compute capacity to simulate the protein activity in every neuron of a brain, and later on enough to simulate all of humanity, so that the whole human race could become a brain in a box.
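
As a rough sanity check on that "whole human race in a box" idea, one can ask how many more doublings of price-performance separate one simulated brain from roughly eight billion of them. This is only a sketch: the world-population figure is my own round number, the 18.4-month doubling time is carried over from the chart above, and none of it addresses whether raw calculations per second are even the right measure.

```python
# Hypothetical scale-up from one simulated brain to all of humanity.
import math

flops_per_brain = 1e14      # Kurzweil's revised per-brain estimate
population = 8e9            # rough world population (assumed round number)
doubling_months = 18.4      # historical doubling time from the chart above

total_flops = flops_per_brain * population               # ~8e23 FLOPS
extra_doublings = math.log2(population)                  # ~33 doublings
years_at_trend = extra_doublings * doubling_months / 12  # ~50 years

print(f"All of humanity: about {total_flops:.1e} FLOPS")
print(f"That is {extra_doublings:.0f} doublings beyond one brain,")
print(f"or roughly {years_at_trend:.0f} years at the chart's historical rate.")
```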

Of course, that isn't the future he envisions for us. He prefers that we not be replaced by hardware, but augmented. Brain-to-machine interfaces are coming. He has no more clue than I do what scale of intervention is needed in a brain so it can handle the data-transfer bandwidth required to, say, double a person's "native" compute capacity, let alone increase it by a factor of 10, 100, ... or a billion. At what point does the presence of a human brain in the machine even matter? I suspect even a doubling of our compute capacity is not possible.

Let's step back a bit. In an early chapter we learn a little about the cerebellum, which contains 3/4 of our neurons and more than 3/4 of the neuron-to-neuron connectivity of the total brain, all in 10% of its volume. With hardly a by-your-leave, Kurzweil goes on to other things, but I think this is utterly critical. The cerebellum allows our brain to interact with the world. It runs the body and mediates all our senses: not just the "classic 5", but all of the 20 or more senses that are needed to keep a human body running smoothly. Further, I see nothing about the limbic system; it's only 1% of the brain, but without it we cannot decide anything. It is the core of what it "feels like to be human," among other crucial functions. Everything we do and everything we experience has an emotional component.

Until we fully understand what the 10% and the 1% are doing, it makes little sense to model the other 89% of the brain's mass. Can AI help us understand consciousness? I claim, no, Hell no, never in a million years. It will take a lot of HUMAN work to crack that nut. AI is just a tool. It cannot transcend its training databank.

At this point I'll end by saying that I find Ray Kurzweil's writing very engaging but not compelling. I enjoyed reading the book, and not just so I could pooh-pooh things. His ideas are worth considering and taking note of. Some of his forecasts could be right on. But the ultimate one, of actually merging with AI, of all of us becoming effectively cyborgs?… No way.
