Thursday, November 26, 2015

Mental structures that lead us astray

kw: book reviews, nonfiction, psychology, economics, errors, systems 1 and 2

On a recent episode of Star Talk, Neil deGrasse Tyson's latest offering on the National Geographic Channel, Tyson discussed his interview with Penn Jillette of Penn and Teller. At one point Penn presented this scenario:
Suppose you're a 3-foot hominid such as Lucy, and you hear a rustling in the grass. If you think about it, there are two likely causes: the wind, or an approaching predator. If you assume it is a predator and run, but it really was the wind, you have lost little more than some energy and sweat. But if you assume it is the wind and it really is a predator, you're lunch. So it is safer to assume it is a predator, because even if you are wrong, you are still alive.
I would add, if you take the time to think about it and weigh the "wind or predator" question, you are probably lunch also. That is why our defensive mechanisms work so fast, moving us out of harm's way before we have thought about it.

So we have two ways of thinking, fast and slow. The fast, reactive system is tuned to keeping us alive. The slow, contemplative system is tuned to revising our model of the world and to informing the faster system how to work more accurately, in a way that keeps us up-to-date more efficiently than waiting for evolution to re-tune our reactions. Daniel Kahneman calls these System 1 and System 2 in his new book, Thinking, Fast and Slow. Though Dr. Kahneman is a psychologist, his Nobel Prize is in Economics.

This is a good spot to emphasize that economics is about how people make choices, not just about how we use money. At several points in the book, the author mentions how psychology and economics, as disciplines, can each inform the other, though historically they are "stovepiped", with too little cross-communication.

The handful of folks who actually "follow" this blog may have wondered where I've been for more than two weeks. I have been reading this book with more than usual care. It is a big book, with the main text totaling 418 pages, but two large appendices (reprints of the seminal articles he and Amos Tversky wrote) and extensive end notes stretch that to 481. A book this big will naturally take me a while to finish. A book this good takes even longer! It has more fine ideas per pound than any other I've read in the past few years. The book is structured around three big ideas, with a host of subsidiary ideas flowing from them. I really have space only to summarize the Big Three.

Idea 1: System 1 and System 2. These are our Reactive System and our Contemplative (or Calculative) System. System 1 in action: during our courtship, my future bride and I were walking in a park and strolled over to sit on some monkey bars. Such seating is none too secure, and I said, "Don't push me." She did push me, before I finished the sentence, and I grabbed a couple of handholds during the word "me". Her S1 was operating to focus my attention, and my S1 instantly kept me from falling. This was followed by some S2 activity: She grinned to emphasize her playful mood, and I took that in and boosted my "how lovable she is" score a notch. Human courtship, operating as it has for millennia.

  • Our S1 does the things no computer does well: recognize who or what is around us, evaluate each on a hazard/help axis, and frequently prompt us to action, all within one or two tenths of a second. 
  • Our S2 struggles to do things a computer does well: put together the puzzle of our existence and map the world around us, carry out calculations (Quick! What is 27×17?), and feed new insights back to S1. If you could do that "simple" multiplication (partial products: 20×17 = 340 and 7×17 = 119; add to get 459) in less than five seconds, your "horseback arithmetic" skills are at expert level (the little sketch below spells out the shortcut).
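For the curious, here is that mental shortcut written out as a throwaway sketch of my own, not anything from the book:

```python
# The mental shortcut spelled out: split 27 into 20 + 7, multiply each piece
# by 17, then add the two partial products.
partial_products = [20 * 17, 7 * 17]   # 340 and 119
print(sum(partial_products))           # 459
```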

I had a conversation a few years ago with a professor of philosophy. He talked a bit about his work on "formal errors of logic", such as broken syllogisms (look it up; it'll save time). I said at one point that I was quite interested in errors of informal logic. He snapped, "That isn't real philosophy," rather pettishly, I thought. His expertise was threatened by the chance the conversation would turn to an unfamiliar area, and his S1 snapped to attention to maintain dominance over me. I retained my integrity by walking off to find a more congenial conversationalist. My S2 intervened just quickly enough to prevent my S1 from answering rashly.

I was fascinated by Dr. Kahneman's reports of ways our body and mind work together. In one experiment, students were asked to work a page of simple arithmetic problems while holding a pencil in their teeth so that it stuck out both sides of the mouth. Another group did the same problems, but held the pencil by its eraser with their lips, so it pointed straight out of the mouth. The first group, forced by the pencil to smile, worked the problem sheet much faster and more accurately than the second group, forced by the pencil to frown. This gives credence to the adage, "Fake it 'til you make it." We smile not only to reassure those around us of our good intentions, but to reassure ourselves that all is, or soon will be, well. This is just one example of many.

Idea 2: Econs and Humans. Much of the theory of Economics is based on the notion of a Rational Actor. People are expected to make choices rationally, unaffected by emotional considerations. The work of Dr. Kahneman and others demonstrates that this is probably the unlikeliest foundation on which to build a theory. Our grandmothers knew better: we act without thinking, and we think—with some modicum of rationality—only when forced, even backed into a corner. Most of us pass most of the day without a single rational thought crossing our minds. That is how you get to work, or back home, "on autopilot", particularly on the day you intended to run an errand on the way home but arrived at your door wondering what it was you forgot.

Behavioral economist Richard Thaler calls the mythical Rational Actor an Econ, in contrast to the real agent that each of us is, a Human. Econs do automatically what Humans typically cannot. It reminds me of an analogy I make to distinguish faith from religion: a Religion is a checklist that you can hang on your wall. A robot could perform it all perfectly; you cannot. A Jewish friend told me of his study, in his youth, of the 613 laws of the Torah, and how he sorted them into "No problem", "Oh, maybe this is a bit sketchy", and "Who in his right mind would think this is possible?!?" In the wholly secular world, we are often told to "Count to ten first," but we find we've done something we can't undo before getting from one to two. A certain policeman is in the news these days for shooting a youngster 16 times in 15 seconds, while six or seven of his colleagues were content to watch the young fellow from a step or two away and try to persuade him to put his little knife down and have a nice chat. Guess who belongs in quite a different line of work? And guess whose emotions take over some 10 to 100 times as quickly as those of more ordinary folk?

It seems every time someone designs an experiment to ferret out our rational and emotional responses to a situation, the rational mind is pretty hard to find. Later in the book, the author tells of ways to set up a situation so that we are more likely to give ourselves time for rationality, but he acknowledges that they are far from perfect. And he's been studying these things for 30+ years! What hope have we of any bit of rational behavior? Well, some hope, anyway, for we are Human after all, not Econ, and it is in our nature to hope, to try again, and sometimes to succeed a little.

Idea 3: Two Selves. Our memory, called here our Remembering Self, draws different conclusions from our experiences than our Experiencing Self does. The experiment here could not be clearer. Water colder than about 60°F hurts a little; below 50°F it can hurt a lot, and quickly. A basin of water held at 57°F was provided, and students (nearly all experiments are done on college students! They come cheap) were asked to hold a hand in the water for 60 seconds, then report how painful it was on the familiar ten-point scale. After time to warm up, they were asked to repeat the experiment, but to hold the hand in the water for 90 seconds; after the first 60 seconds, water a few degrees warmer was let into the basin. Rather than report on the 1-to-10 scale, they were then asked which experience was more painful. Nearly all reported that the first, shorter experience was more painful, even though they had endured a longer period of "torture" in the second trial (it takes a few seconds for the warmer water to "take over"). The Experiencing Self can be queried during an experience and give an accurate read on what things feel like "right now", but after the fact the Remembering Self weights the last part of the experience far more heavily than all the rest. It is why we are advised to "go out with a bang"…as long as it is a favorable "bang"!

I suspect that if the experiment were run in reverse, there would be a different outcome. I'd try this: Session 1, 60 seconds at 55°F. Session 2, 45 seconds at 60°F followed by 15 seconds as 52°F water is added. Maybe the exact temperatures would have to be tuned a little, but I am confident the Remembering Self would report the second session as more painful, even though the sum-total torture was less. By the way, some trials for the Mythbusters TV program used a basin with ice floating in it, holding the temperature at 32°F, and the duration was "as long as you can hold it", with a maximum of three minutes (180 seconds). Some participants held out the full three minutes and lifted out a hand with ice sticking to the skin. So you can see that temperatures in the 50–60°F range will do no damage.
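To make the arithmetic of the two selves concrete, here is a toy calculation. It assumes the "peak-end" heuristic Kahneman proposes for the Remembering Self (remembered pain is roughly the average of the worst moment and the last moment, with duration nearly ignored). The per-second pain scores are numbers I made up for illustration, not data from either experiment:

```python
# Toy illustration of the peak-end idea: remembered pain is roughly the average
# of the worst moment and the final moment, while total duration counts for
# very little. All pain scores below are invented, not data from the experiments.

def remembered_pain(pain_per_second):
    """Crude peak-end score: average of the worst moment and the last moment."""
    return (max(pain_per_second) + pain_per_second[-1]) / 2

def total_pain(pain_per_second):
    """What the Experiencing Self actually endured, summed second by second."""
    return sum(pain_per_second)

# Kahneman's setup: 60 s of cold water vs. the same 60 s plus 30 s slightly warmer.
short_trial = [6] * 60
long_trial = [6] * 60 + [4] * 30

# My proposed reversal: a milder session that ends with the coldest water.
plain_session = [7] * 60
reversed_session = [5] * 45 + [8] * 15

for name, trial in [("short", short_trial), ("long", long_trial),
                    ("plain", plain_session), ("reversed", reversed_session)]:
    print(f"{name:9s} remembered={remembered_pain(trial):.1f} total={total_pain(trial)}")
```

On these invented numbers, the longer original trial scores as milder in memory despite more total pain, and my reversed session scores as worse despite less total pain, which is exactly the asymmetry described above.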

We think we are better at planning than we really are. All three of these things come together when we commit the Planning Fallacy. Chapter 23 of the book is entirely devoted to it. It is most evident in corporations that are having trouble. A new CEO will call together a team to "plan", and perhaps the plan will even be carried out to some extent. Do profits rise? Wonderful. The CEO gets a bonus. Does business remain "flat"? What a pity, the employees are defective and didn't carry out the plan as intended. Does business go down? Oh, my, "external factors" such as shifting currency ratios or a new and unexpected competitor must be to blame. Does the company tank? The CEO's "golden parachute" is activated, (s)he is booted out with a $10 million handshake, and a new CEO is brought in to repeat the process. As Yogi Berra said, "Predicting is hard, especially about the future." And Donald Rumsfeld warned us of the "Unknown unknowns", for which he was reviled, and then forgotten. Do you know anybody anywhere who strives to ferret out what "unknown unknowns" might become a factor, so as to deal with them?

One clear message of the book is that System 2 is lazy, pathologically lazy. It (we) typically accepts whatever "explanation" or "solution" is offered up in the instant that System 1 takes to perform its heuristic evaluation. Thinking is work, and we'd rather do almost anything else. And we typically do. My, it is a wonder that anything gets done!
