kw: book reviews, nonfiction, computer programming, coding, memoirs, philosophy
David Auerbach coded for some twenty years, twelve of them professionally, including a few years each at Microsoft and Google. Now he is apparently more of a globetrotting professional speaker. His memoir Bitwise: A Life in Code is less about what coding is—though he doesn't skimp on that—and much more about the ways we are being changed to interact better with computers. It seems that inducing computers to interact as we do has proven just too hard. I agree; I've been hearing promises of "Artificial Intelligence" and "Mechanical Brains" for my entire life. Immense breakthroughs are always "just another ten years" away. They probably always will be!
I almost never include a book's cover in a review, but this time I just couldn't resist showing you this:
I wonder how many readers or even reviewers of this book noticed that the green letters are phosphorescent? Panel 1: white light; Panel 2: UV light turned on; Panel 3: Room lights and UV light off (a little light scattered in from the next room). I noticed this when I had been reading the book in bed, and saw the green glow after turning off the bedside lamp.
In the title, the special characters come from a scripting language I never learned; it has been so long since I saw them that I don't recall which one. The author tells us he first coded in Logo, a programming language designed to be easy for children to learn; he began using it at age seven. By the time he was being paid to write code, his language of choice was C++.
When he worked for Microsoft, he spent a period in a behind-the-scenes battle with AOL, modifying code that took advantage of AOL's instant-messaging service so that Microsoft IM users could send and receive messages on both platforms, expanding their contact base without everyone needing multiple accounts. Later, working for Google, he had various projects, including some of the ongoing tweaks to the PageRank algorithms. Search Engine Optimization is an attempt to get a page a higher ranking than it might ordinarily merit by taking advantage of the way Google ranks pages. Some techniques are rather innocuous, but others are predatory, so Google has a number of coders who continually revise the ranking code to level the playing field. It is an arms race, and it always will be. As a result, the original PageRank method is dramatically out of date.
The author pretty much disposes of the nuts and bolts of his career in the first part of the book. Part II digs around in the differences between the symbols, labels, and names used by humans, and the way labels are assigned in computer code. Every such "tag" we use is invested with meaning to us and by us. Words such as red, blue, orange, and magenta have meaning for us. Usually the meanings of colors don't carry much emotional freight; in data, they are just numbers. (The most common color coding, often called just "RGB" for red-green-blue, assigns the triplet 255,0,0 to the brightest red and 0,0,255 to the brightest blue. Magenta, being an equal mix of red and blue, is stored as 255,0,255.) Other words carry much more meaning.
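The RGB triplets above can be sketched in a few lines of code. This is a minimal illustration of the color coding the paragraph describes; the dictionary and the mixing step are mine, not something from the book.

```python
# Colors stored as bare RGB triplets, exactly the numbers given above.
colors = {
    "red":     (255, 0, 0),
    "blue":    (0, 0, 255),
    "magenta": (255, 0, 255),  # an equal mix of red and blue
}

# "Mixing" red and blue channel by channel (taking the maximum of each
# channel) reproduces the triplet stored for magenta. To the machine, that
# is all magenta is: three integers.
mixed = tuple(max(a, b) for a, b in zip(colors["red"], colors["blue"]))
print(mixed)  # (255, 0, 255)
```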
Suppose we have a group of words such as Baha'i, Zoroastrianism, Christianity, and Islam. Since there are a few hundred religious labels (and let's ignore sub-labels such as Baptist or Sunni), in a database, these might just be referred to by numbers such as 12, 497, 44, and 112. To a computer, these are no more meaningful than the color numbers. But to us, they can carry meaning that people fight and die for. Can you imagine a computer giving its "life" in favor of a practice such as water baptism, or the holiness of a shrine in Kyoto? Such things have occurred among humans!
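The database view described above can be sketched the same way. The integer IDs here are the review's illustrative numbers, not real codes from any actual database.

```python
# Religion labels reduced to arbitrary integer IDs, as a database might
# store them. The numbers are illustrative only.
religion_ids = {
    12: "Baha'i",
    497: "Zoroastrianism",
    44: "Christianity",
    112: "Islam",
}

# To the machine these keys are just integers. Sorting them is a purely
# numeric operation that says nothing about the meanings humans attach
# to the names they point at.
print(sorted(religion_ids))  # [12, 44, 112, 497]
```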
Part III explores further the implications of about 40 years of "home computing" (and, I suppose, 80 years of programmable machinery starting with the Mark I). Twenty years ago I remember talking with a representative of a disk drive manufacturer. He joked that if his company invented a disk with infinite capacity, perhaps called the "god drive", the government would order two of them. I responded, "So would Sears and J.C. Penney". Now that we can buy multi-trillion-byte disk drives for under $100, and both Google and the NSA (and who knows how many others) have warehouses full of disk servers holding billions of times that much, the term "big data" seems rather pale. We are already in an era of "enormous data", with every likelihood that data storage capacities will continue to explode. Not just the government tracks us. So do a multitude of corporations, Google just being the most visible.
In such an environment, what are we, with our little three-pound brain and its hundred billion neurons (connected through a thousand trillion synapses)? Parts II and III of the book show clearly that the dream of genuine human-level AI remains a dream, and a distant one. Each of our neurons exhibits behavior that requires a very fast multicore processor to emulate, and such a processor may contain a few billion transistors, because a neuron is no mere transistor. So, to duplicate one human brain's activity in real time would require a network of 100 billion computers, attached to one of Google's data warehouses.
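The scaling argument above works out as a back-of-envelope calculation. These are the review's round, order-of-magnitude numbers, not measured figures.

```python
# Back-of-envelope: one fast multicore processor per neuron, in real time.
neurons = 100e9            # ~100 billion neurons in a human brain
transistors_per_cpu = 3e9  # "a few billion" transistors per processor (assumed)

computers_needed = neurons  # a 100-billion-computer network
total_transistors = computers_needed * transistors_per_cpu

print(f"{total_transistors:.1e} transistors")  # 3.0e+20
```

Even before counting the thousand trillion synapses, the transistor budget lands around 3×10²⁰, which is why the emulation-in-real-time scenario stays firmly hypothetical.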
Not only that (the author touches on this, but doesn't dwell on it): our brain is embodied, attached to millions of sensors of many kinds. Incidentally, not long ago I figured out that the human visual cortex weighs as much as the entire brain of a chimpanzee or gorilla. Among mammals, humans have the most powerful visual engine, by far. Thus, we recognize millions of kinds of items, particularly faces, with ease. The facial recognition software Google and others use still regularly mistakes certain hubcaps and clocks for human faces, and entirely misses most faces that are turned more than 45 degrees from face-on.
What has been the result of a couple of generations of ubiquitous computing, and a generation of "social media" interaction? Simply put, we are getting better at doing things that make it easier for computers to "understand" us. This is not entirely by our choice. When Facebook expanded the "Like" button, it added just five more responses. I am still waiting for a Dislike option; "Angry" doesn't usually suffice, because it is a different feeling. However, I realize that they are constraining our choices so they won't have, say, 120 possible responses (yes, we do have more than a hundred kinds of emotional responses). Six will do, thank you so much, said the Facebook guru. Six is easier to analyze (and to sell to advertisers). So what do I do? I comment at least as much as I "Like" or "Love". Comments are for people, and as far as I am concerned, my FB friends are still people…the last I checked, anyway.
The upshot is that the latter two-thirds of this book provide the best argument I have seen against the prospect of human-level AI for at least a generation, or two or three, if ever. We also need to consider whether it is really what we want. I built a 40-year coding career on producing computer products that did things that are hard for people but easy for computers, and on making the interface work well enough that the people could do the things people do better, sharing the work appropriately with the machine.
I like the philosophical attitude that David Auerbach brings to the subject. He made me think about a number of things in ways I hadn't thought before. I got a lot more out of this book than I might have had it been more of a chronicle of all the coding projects and languages he'd been involved with. A book to be savored!