kw: book reviews, nonfiction, artificial intelligence, decision support, skeptical views
I pondered several ways to start this review. A quote seems best:
"Computers are good at optimizing a system, but it takes humans to sort out what we actually want from that system." (Emphasis author's, p 246)
The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, by Jacob Ward, reveals the penetration of so-called "decision support systems" into every area of society. They are the latest manifestation of "artificial intelligence" (AI). It is quite remarkable that true AI has never been satisfactorily defined, yet it has been touted as "just around the corner" since about the time I was born (the 1940s).
Contrary to the contention of a rosy-eyed Google scientist, no AI system has yet achieved sentience. And that is a big part of the problem. Firstly, sentience also defies definition, although somewhere, in some dictionary, no doubt we can find, "Sentience is equal to consciousness," and then when we look up "consciousness" we find, "A synonym for sentience." Many would say, "I know it when I see it," yet most of those same people would deny that "animals" are sentient (ignoring that humans are animals)…unless they have a beloved pet, in which case they would say, "Oh, cats/dogs are people, too!" But they'd probably draw the line at a mouse or rat or goldfish.
Secondly, sentience must include a moral sense. Without it, a thinking being is a monster, in the sense of horror literature such as Frankenstein (although the creature in that novel had a better moral sense than his creator, which was the point of the book). And, a-a-and, morality is even harder to define than sentience or consciousness!
Well, a bit about the book before I embark on further ranting. Jacob Ward writes of three loops, nested within one another. To quote him further (pp 8-9):
"The innermost loop is human behavior as we inherited it, the natural tendencies and autopilot functions that evolution gave us." In the terminology of some, this is System 1, also called 'instinct' or 'gut feelings'. System 2 will be addressed shortly.
"The second loop is the way that modern forces—consumer technology, capitalism, marketing, politics—have sampled the innermost loop of behavior and reflected those patterns back at us, resulting in everything from our addiction to cigarettes and gambling to systemic racism in real estate and machine learning."
"The outermost loop…[presages] a future in which our ancient and modern tendencies have been studied, sampled, fed into automated pattern-recognition systems, and sold back to us in servings we will be doubly conditioned to consume without a second thought."
Social media are the most visible expression of The Loop. It's pretty well known that Facebook seldom displays posts in your News Feed (an incredible misnomer!) from more than about 25 of your "friends", no matter how many you have. To get it to show you someone else, you need to go into your home page's Friends tab, find someone whose posts you haven't seen lately, and click the link to see whether they've been posting. Usually they have. So go along and "Like" or "Love" (or whatever) the past five or more posts by that person. Most likely, the next item that person posts will show up in your News Feed.
By feeding you only what you have responded to in the past, FB tightens The Loop around you. You need to take deliberate steps to widen it.
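To make the feedback effect concrete, here is a toy sketch of an engagement-driven feed. I want to be clear that this is not Facebook's actual ranking algorithm, which is opaque; the friend names, weights, and numbers are invented purely to show how a ranking that feeds on its own output narrows over time.

```python
# A toy illustration of an engagement-driven feed loop. NOT Facebook's real
# ranking; the names, weights, and probabilities are invented for demonstration.
import random

friends = [f"friend_{i:02d}" for i in range(100)]
engagement = {f: 1.0 for f in friends}    # everyone starts on an equal footing
FEED_SIZE = 25                            # only the top slice is ever shown

def build_feed():
    """Rank friends by past engagement and show only the top FEED_SIZE."""
    return sorted(friends, key=lambda f: engagement[f], reverse=True)[:FEED_SIZE]

random.seed(1)
shown_ever = set()
for week in range(20):
    feed = build_feed()
    shown_ever.update(feed)
    for f in feed:
        # Only friends you are shown can earn engagement: that is the loop.
        if random.random() < 0.5:         # you "Like" about half of what you see
            engagement[f] += 1.0
    for f in friends:
        engagement[f] *= 0.95             # everyone's score decays a little

print(f"Out of {len(friends)} friends, only {len(shown_ever)} ever appeared in the feed.")
```

The numbers are arbitrary, but the shape of the outcome is not: a ranking that is trained only on what it already showed you converges on a narrow set, and only a deliberate act, like the Friends-tab workaround above, widens it again.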
Can our System 2 help? System 2 is our reasoning faculty, our deliberate, thinking response to something. It is slower than System 1, and it is harder work. Very few people use System 2 for more than a few minutes per week. We use it to learn something new, but as soon as we can, we push the "new stuff" onto System 1 and let it run on autopilot.
Only System 2 can get us out of some aspects of The Loop, the personal ones such as our FB News Feed (and that only to a limited extent). When we use the loyalty card at the grocery or pharmacy, we're feeding data to a Loop system, which determines what coupons to print onto our receipt (or to send to the coupon printer next to the receipt printer). Our credit card company knows which restaurants we frequent, which grocery and department stores we shop at, and where we vacation.
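The loyalty-card version of the same idea fits in a few lines of code. This is a hypothetical toy, not any real retailer's system; the categories, coupons, and the "rule" are all made up to show how past purchases get reflected back as targeted offers.

```python
# A hypothetical sketch of loyalty-card coupon targeting. Not any real
# retailer's system; categories, coupons, and the rule are invented.
from collections import Counter

purchase_history = ["coffee", "coffee", "diapers", "coffee", "cereal", "diapers"]

coupon_catalog = {
    "coffee":  "50 cents off any 12 oz ground coffee",
    "diapers": "$2 off a jumbo pack of diapers",
    "cereal":  "Buy one, get one free cereal",
    "yogurt":  "30 cents off a yogurt multipack",
}

def coupons_for(history, how_many=2):
    """Offer coupons only for what you already buy most, reflecting your own
    past behavior back at you: the second loop in miniature."""
    top_categories = [cat for cat, _ in Counter(history).most_common(how_many)]
    return [coupon_catalog[cat] for cat in top_categories if cat in coupon_catalog]

print(coupons_for(purchase_history))
# ['50 cents off any 12 oz ground coffee', '$2 off a jumbo pack of diapers']
```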
I have a System 2 method for finding stuff online. I use a browser in Private Mode (Incognito in Chrome) and the DuckDuckGo search engine. (Be careful about searching videos and images, though; DDG feeds those searches to Google, and during the session Google can look at cookies to find out who you are. Why do you think they are called "tracking cookies"?) If I look for something any other way, I can expect to find ads for it popping up for the following month.
That's creepy enough, but it's not dangerous. Surveillance is dangerous. Police use decision support software with databases to check faces seen in a camera against nearly any face ever photographed (yes, such holistic databases exist!). This has been going on for a long time in China and other totalitarian nations; it is in its infancy in the U.S. Don't expect the trend to reverse course, because trying to get politicians and officials to use System 2 thinking to write laws to limit such behavior…it just ain't happenin'.
AI isn't moral. It is no more moral than a hammer. A hammer is very useful. In the hands of Thor, or anyone else with blunt-force homicide in mind, it's an awesome weapon. A screwdriver is very useful. Long ago I learned to throw knives: how to test for balance, and how to hold one so it is poised to penetrate at a particular distance. I found out that a typical screwdriver is almost perfectly balanced, and is very easy to throw so that it sticks right through a 1x4. I never tried it on a living being; I do have some measure of self-control.
AI has no self-control. Sadly, the people involved with so-called AI systems have no self-control either. Those who produce the systems are in it for the money. Those who buy the systems have a goal in mind and pay no attention to side effects, such as identifying someone who merely looks like a perpetrator but is not. Even without AI's "help", mistaken convictions occur: every year in the U.S., 100-200 people are found to be innocent of a crime for which they were convicted in criminal court; we don't know how many others might be exonerated if there were more workers in exoneration projects.
I have a lot of experience with the face recognition engine in Google Picasa: tens of thousands of IDs of friends, relatives, and acquaintances. About every second or third time I check the face IDs of new pix in Picasa (I first give it a half hour to noodle around with the database), one or two of the "faces" it finds are non-faces such as hubcaps or odd shadows on a wall. I find it 80-90% accurate in finding matches to faces I have already tagged, but for faces that are entirely new to it, it attaches an ID as a "nearest match" rather than just asking, "Is this someone new?"
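Picasa's internals are opaque to me, so the following is only a sketch of the distinction I wish it made: a nearest-match lookup that also checks a distance threshold before assuming a face belongs to someone already tagged. The "embeddings" are made-up numbers standing in for whatever features a real engine extracts.

```python
# A sketch of "nearest match" versus "someone new". NOT how Picasa actually
# works (that is opaque); the embeddings below are invented placeholder numbers.
import math

# Pretend each known person is summarized by one averaged face "embedding".
known_faces = {
    "Alice": (0.10, 0.80, 0.30),
    "Bob":   (0.70, 0.20, 0.50),
    "Carol": (0.40, 0.40, 0.90),
}

def distance(a, b):
    """Euclidean distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(new_face, threshold=0.35):
    """Return the nearest known person, but only if they are close enough;
    otherwise admit the face may belong to someone new."""
    name, dist = min(((n, distance(new_face, e)) for n, e in known_faces.items()),
                     key=lambda pair: pair[1])
    if dist <= threshold:
        return f"Probably {name} (distance {dist:.2f})"
    return f"Nearest is {name} (distance {dist:.2f}). Is this someone new?"

print(identify((0.12, 0.78, 0.33)))   # close to Alice: a confident match
print(identify((0.95, 0.95, 0.05)))   # far from everyone: ask the user
```

The whole complaint in the paragraph above amounts to the second branch being missing: the engine always returns the first kind of answer, never the second.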
Just by the bye, Google has a race problem. I have numerous friends who are non-white. If I had a way to ask Picasa why it so frequently confuses them with one another, I expect the answer would echo a common canard of the 1950s, "They all look the same to me." I'd like to hear from anyone who uses Google Photos and its face tagging whether the same effect is found there. So far, I've avoided moving my 70,000 photos to GP (for one thing, I'd have to pay for the disk space).
One of the big problems with "big data" and "machine learning" and "neural net systems" is that they are opaque. Once they have been "trained" with a few thousand (or a few million) examples and have developed criteria for deciding something, there is no way to know what those criteria are. Humans tend to develop criteria that number five or fewer (although some of our "gut feelings" may be based on many more than that; System 1 is equally opaque!), while "deep learning" systems may winkle out hundreds or thousands of correlations, each of which contributes only a tiny fraction of a percent to the final decision.
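To make that contrast concrete, here is a sketch of what "thousands of correlations, each contributing a tiny fraction" looks like. The weights below are random stand-ins, not a real trained model, but the shape of the problem is the same: asking "what were the criteria?" has no short answer.

```python
# Illustration only: random weights stand in for a trained, opaque model.
import random

random.seed(0)
N_FEATURES = 5000
weights = [random.gauss(0, 0.01) for _ in range(N_FEATURES)]   # "learned", opaque
inputs  = [random.random() for _ in range(N_FEATURES)]          # one applicant / photo / case

score = sum(w * x for w, x in zip(weights, inputs))
decision = "approve" if score > 0 else "deny"

# A human rule of thumb might rest on three to five criteria; here even the
# single largest contribution is a sliver of the whole.
contributions = sorted((abs(w * x) for w, x in zip(weights, inputs)), reverse=True)
total = sum(contributions)
print(f"decision: {decision}, score: {score:.3f}")
print(f"largest single feature accounts for {100 * contributions[0] / total:.2f}% of the decision")
```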
While it would be arduous to delve through the printout of such a system's decision tree, it should be mandatory that such a printout can be produced. I am in favor of legislation making it a crime to produce such a system that cannot explain its decisions. For those systems that "support" life-and-death decisions, it should be a capital offense.
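For simple models, at least, such a printout is already possible. This is my own small illustration using scikit-learn, not anything the book proposes: a decision tree that can dump every rule it will ever apply, and trace the rules used for a single case.

```python
# One way a "show your work" printout can be produced today, at least for a
# simple model: scikit-learn decision trees can export every branch as text.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Every rule the model will ever apply, in plain text, auditable by a human.
print(export_text(clf, feature_names=list(iris.feature_names)))

# For any single case, you can also see which leaf it landed in and why.
sample = iris.data[:1]
print("leaf node reached:", clf.apply(sample)[0])
print("prediction:", iris.target_names[clf.predict(sample)[0]])
```

Deep neural nets cannot be dumped this neatly, which is exactly the author's point; but the example shows that "explain your decision" is not a technically impossible demand.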
Of course it is harder and more expensive to build a transparent decision support system. So what? I say, kill off a few such offenders, and perhaps the rest will figure out how to write them economically! Naturally, no politician will propose such a law.
The real danger of AI isn't from AI itself, but from people who put too much trust in it. That's why The Loop is really a Noose.