kw: computers, supercomputers, speed records, trends
In June, the twice-yearly "speed contest" to crown the world's fastest supercomputer gave the honors to "Sequoia", made by IBM for the U.S. Department of Energy. The benchmark? 16.32 petaflops, or over 16 quadrillion numerical calculations per second. See this article for details. I thought back over the history of supercomputers, and dug out enough data for this chart:
The first machine to be called a supercomputer, a CDC 6600 installed in 1964, had a speed of 1 megaflop (Mflop), or 1 million calculations per second.
On this logarithmic scale, it takes a moment to realize that maximum computing speed has risen meteorically. This year's benchmark is more than 16 billion times the speed of the 6600, 48 years ago. That is a doubling of speed about every 17 months, or a factor of more than 130 every ten years. At that rate, we can expect a machine to exceed 1 exaflop (1 quintillion, or 10^18, calculations per second) in less than 8.5 years.
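Here is a quick back-of-envelope check of those figures, as a rough Python sketch; the only inputs are the numbers quoted above:

```python
import math

# Figures quoted above (approximate)
cdc_6600_flops = 1e6        # 1 megaflop, 1964
sequoia_flops = 16.32e15    # 16.32 petaflops, 2012
years = 2012 - 1964         # 48 years

speedup = sequoia_flops / cdc_6600_flops               # ~1.6e10, "more than 16 billion"
doublings = math.log2(speedup)                         # ~34 doublings
months_per_doubling = years * 12 / doublings           # ~17 months
factor_per_decade = 2 ** (120 / months_per_doubling)   # ~130

# Extrapolate to 1 exaflop (10**18 flops) at the same rate
doublings_to_exaflop = math.log2(1e18 / sequoia_flops)
years_to_exaflop = doublings_to_exaflop * months_per_doubling / 12  # ~8.4 years

print(f"doubling every {months_per_doubling:.1f} months")
print(f"x{factor_per_decade:.0f} per decade")
print(f"~{years_to_exaflop:.1f} years to 1 exaflop")
```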
How far can the trend go? Pretty soon it will have to depend on new materials, or perhaps on quantum computing. Silicon seems to be tapped out at about 10 Gflops per core, with clock rates of 5-10 GHz and subtle interleaving schemes. These huge speeds are attained by running a million or more CPU cores in parallel. Calculations that do not parallelize wind up running in a single core, at a millionth the speed of the total array. Of course, you can run millions of such problems at one time, but you can't speed up any one of them.
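That penalty for the non-parallel part of a problem is essentially Amdahl's law. A minimal sketch, using a core count on the order of Sequoia's; the serial fractions are made-up illustration values, not measurements of any real workload:

```python
def amdahl_speedup(serial_fraction, cores):
    """Speedup over one core when only the parallel part scales (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

cores = 1_600_000  # roughly Sequoia's core count

# Hypothetical serial fractions, just to show how quickly the benefit collapses
for serial in (0.0, 1e-6, 1e-4, 0.01):
    print(f"serial fraction {serial:g}: speedup ~{amdahl_speedup(serial, cores):,.0f}x")
```

Even a serial fraction of one part in ten thousand cuts the effective speedup from 1.6 million down to about ten thousand.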
Semiconductors of the III-V variety, GaAs for example, can run 1,000 times as fast as silicon, but are incredibly costly. Only about 200 tons of gallium are produced yearly, and it is already getting harder to find. By contrast, we use 400,000 tons of silicon yearly of at least "solar panel" purity. A few thousand tons of that is further refined to "chip grade" for making computer chips and associated circuitry.
I am pessimistic about quantum computing. The promises made by proponents are as wild as those of the artificial intelligence crowd, who have so far been consistently wrong for sixty years.
So can we continue to make millions, or billions, or trillions of CPUs run in parallel? For perspective, Sequoia takes up 4,500 square feet and consumes 8 megawatts of electrical power, running 1.6 million CPU cores. It has roughly the computing capacity of three or four human brains, and perhaps 100 times the memory capacity. A brain runs on the caloric equivalent of 20 watts. The brain is not a quantum computer, but it is massively parallel: a neuron is roughly equivalent to the original IBM PC running at 4 MHz (though it took many cycles to make one calculation), and there are 10 billion neurons packed into our little skulls!
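If those rough estimates hold, the power-efficiency gap is striking. A quick calculation from the numbers above (the "three or four brains" equivalence is this paragraph's estimate, not a measurement):

```python
# Back-of-envelope comparison from the figures above
sequoia_flops = 16.32e15   # 16.32 petaflops
sequoia_watts = 8e6        # 8 megawatts
brain_watts = 20           # caloric equivalent of ~20 W
brains_per_sequoia = 3.5   # "three or four human brains" (rough estimate from the text)

sequoia_flops_per_watt = sequoia_flops / sequoia_watts             # ~2 Gflops/W
implied_brain_flops = sequoia_flops / brains_per_sequoia           # ~4.7 Pflops
implied_brain_flops_per_watt = implied_brain_flops / brain_watts   # ~230 Tflops/W

print(f"Sequoia: {sequoia_flops_per_watt / 1e9:.1f} Gflops per watt")
print(f"Implied brain: {implied_brain_flops_per_watt / 1e12:.0f} Tflops per watt")
```

By that reckoning, the brain is on the order of 100,000 times more power-efficient than Sequoia.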
My realistic expectation is that supercomputer benchmarks based on silicon technology will top out in the 1-10 exaflop range. But a lot of very bright people are trying all kinds of new materials and technologies all the time. There is a good chance of a surprise in a decade or two.