kw: book reviews, nonfiction, philosophy, history of philosophy, philosophers
The Wikipedia article "List of Philosophies" has nearly 450 entries. But this list is more of a grab bag of philosophical terminology: it includes Descartes' summation of Rationalism "Cogito ergo sum" and the Empiricist's shibboleth "Occam's Razor"; there are 37 collective terms such as "German Philosophy"; and it includes the major category terms Metaphysics, Epistemology and Ethics, plus Meta-philosophy (AKA Meta-metaphysics). Checking just terms ending in "-ism", I find 257. That is a more appropriate list of actual philosophies. Of course, it is in no way complete, but we'll return to that.
Long ago I learned that the major categories of philosophy are Metaphysics (concerning the causes and nature of things), Epistemology (concerning knowledge and knowing), and Ethics (concerning moral choices). To these some would now add Meta-metaphysics or Meta-philosophy (concerning philosophy itself, particularly the boundaries of Metaphysics). Having just read The Philosophy Book, I find another 11 categories added by the six authors/contributors, and a total of 59 "isms", all of which are on the Wikipedia list. The contributors of the 107 short articles in the book are Will Buckingham, Douglas Burnham, Clive Hill, Peter J. King, John Marenbon, and Marcus Weeks. In addition, another 58 very brief items are listed in a Directory assembled by Stephanie Chilman. These include the Objectivism of Ayn Rand, not covered in the main text.
The 11 extra categories are Chinese Philosophy, Eastern Philosophy, Philosophy of Religion, Philosophy of Science, Islamic Philosophy, Political Philosophy, Philosophy of Mind, Philosophy of Language, Ontology, Philosophy of History, and Aesthetics. The only one of these that I would include as a major category is Aesthetics. The others are in-between categories, as they incorporate elements of the major categories, or, indeed, cross with them in a mathematical sense. Methinks the taxonomy of philosophic terms is in need of cleaning up!
The articles, ranging in length from one to six pages, are gathered into six historical eras, showing the progress of philosophical thought over the centuries in the major cultures. A seventh era could have been included, a set of entries under the heading Egyptian Philosophy. Several of the earliest philosophers of "ancient" Greece are said to have studied in Egypt, long a hotbed of pre-European philosophy.
This leads to a side thought. I wonder what philosophical traditions existed in Africa south and west of Egypt, prior to being mainly eliminated first by the slave trade, then by misguided missionaries from Europe. I do recognize that there were a few truly godly apostles to Africa, but most missionaries were actually agents of colonial powers and destroyed the cultures of those they were trying to "save".
I considered discussing some of the articles, but I realized my motive was mainly to take cheap shots at those I don't like. In every category, the spectrum of thought is broader than any of us could comprehend. Ethical thought, for example, ranges from a few kinds of Absolutism to a Relativism that denies Ethics exists. This book is like a flea market. It has a wide array of "products" (as wide an array as possible, according to the contributors' goals), and I am free to "buy" what I like and ignore the rest. On one hand, I don't consider myself a philosopher, but on the other, everyone is a philosopher, to whatever extent we think about why we do things and what we know and how we know it.
Any discussion of "reality", particularly if religion is involved, leads to someone saying, "Well, I have my own philosophy." My typical reaction has been to say (or at least to think), "That just means you don't know what philosophy is." But in more recent years I have realized that human thought exhibits such incredible variety, it is very likely that every one of the seven billion of us does indeed have a unique, personal philosophy.
Tuesday, October 29, 2013
Monday, October 21, 2013
Build me a memory
kw: book reviews, nonfiction, memory, memory studies, psychology
We describe memories by relating them to the familiar, and these days, that means a computer is involved. But the way a computer functions is about as different from brain work as it can be. The most enduring metaphor for memory is the storehouse. We imagine opening drawers and cupboards to find a memory, and pulling it out whole, to be viewed or even re-experienced. With more thought, we might consider that some part of our brain has many cubbyholes where memories go. People with "good memories" then have a better index than the rest of us, or are quicker sorting through all the cubbies.
Think instead of a warehouse full of spare parts: boxes of different kinds of sunsets, bins with collections of similar sounds or smells of people and places we've experienced, piles of "the feel of walking in narrow lanes" or holding hands in various ways, and albums of the look and feel of loved ones and friends and acquaintances. At one end of the warehouse, a catalog index directs us to the various bits, so we can relive or review that walk, hand in hand with a lover, speaking together, turning down an alley to stop at the verge of a hill and seeing just that sunset together. Many memories start with a smell, and you are suddenly in that bakery with, say, your long-deceased grandmother on shopping day. I imagine the warehouse might resemble the workshops in Mythbusters.
Charles Fernyhough set out to write a book about the science of memory, and has delivered a book of stories, with the science to explain them: Pieces of Light: How the New Science of Memory Illuminates the Stories We Tell About Our Pasts. I confess I didn't fathom the interplay of the Medial this and the Posterior that. I have a vague idea of the Amygdala and Hippocampus, but as a diagram in the book shows, some dozen named parts of the brain are involved in "Autobiographical Memory". I suppose I ought to have studied a bit as I read along, but I was so taken with the stories (Charles, that is a compliment) that I did not.
But I think the view from 10,000 feet is enough for now. The "Pieces of Light" of the title aptly summarize the way a memory surfaces: bits of various kinds are brought together at some switching center—which is probably the Hippocampus—and reviewed. Such a process affords us much more space for storing memories than if each memory were a five-sense videotape record. Thus, those parts of an experience that we pay attention to, or that thrust themselves into our attention, are picked apart and stored in some sorted manner, and indexed for retrieval. Repeated experiences of the same place or kind of event are in part blended together, and in part kept separate when there are singular experiences on certain occasions.
Quick: try to remember every one of your own birthday parties. If you have a family like mine, there were at least 15 or so that occurred while you were in your parents' home, and a few others since you began to live on your own. Though I have been the "target" of at least thirty birthday parties, the only one I remember is when my beard caught on fire as I was blowing out the candles (so I was 23 that year); my dad clapped both my cheeks with his hands to put it out, growled, "I have been waiting a long time to do something like that", and gave me a grin. Otherwise, the parties are a mishmash of the kids I knew and a vague feeling of too much cake and ice cream and soda. I don't recall any of the gifts!
Another notion: try to remember the second time you did something, such as driving by yourself or making love or hang-gliding. Aren't a lot of our memories all about the novelties? This makes sense from a biological and evolutionary perspective, as the author explains. When we are tiny, everything is new, and we struggle to make sense of it all. We are automated categorization machines, and work at increasing efficiency as we gather memories with which to compare new events.
While studies of people with various kinds of brain damage or under the spell of certain drugs may indicate the function of various parts of the brain, we find that the workings of the intact brains that most of us have are not really geared toward faithfully recording our life. We don't really have a camera crew tucked inside, laying down tracks of videotape (or SD card MPEG files). Later experiences influence the way we remember those "first times". And, because we store the imaginings we have about others' stories in the same way we store our own, the record may be quite faithful for certain details, but rather sloppy about others.
Fernyhough's focus is autobiographical memory, the memories that tell our own story. They are different, in quality if not in kind, from memories such as the times table or the way we make apple fritters. We could say that our memory is not so much a textbook of "My History" as it is a historical novel: many genuine events (or portions of them) strung together with fabrications and borrowings to make a coherent whole. Coherence matters to us more than exactitude. And we tend to remember what happened a lot better than when it happened. In my story of the beard fire, I had to think to recall my age, and had I worn a beard for more than just that year, it might not have been possible.
It makes sense that our memory serves us according to the needs our ancestors had. It is usually less important to recall the exact year of, say, each time the family camp was flooded, and more important to remember what was done to rescue this or that person or to restore the damage. I think of it as akin to managing by exception: we remember the first flood, and recall the others by how they differed from it.
Shocking events that lead to "flashbulb" memories illustrate the extreme edge of memory use. Something in our brain realizes, "This is so unusual it must be VERY IMPORTANT. Record it faithfully!" At least the first time. Thus, people my age have clear memories of the assassination of JFK—where we were and who we were with and how we heard it—but are less clear when recalling the assassination of Bobby Kennedy or Martin Luther King. The events were just as shocking but were no longer novel. Similarly, the Challenger explosion produced sharper memories for us than the incineration of Columbia.
The "first time effect" is critical, and goes a long way to explain the juvenile bump in our collection of important stories. The way our brains develop allows only a few very early memories, so the "formative years" are from about age 10 to 20 or so. In a late chapter we read of the author's grandmother, recalling many events in the 1920s and 1930s, when she was a girl, and the world was in crisis, particularly for a Lithuanian family in the midst of emigration and assimilation into a new culture. He tries bringing a Yiddish-speaking acquaintance to meet her, thinking she may remember some things better if asked in the language of her childhood. A few new memories do surface, but she'd been fluent in English from such an early age that she no longer understands Yiddish very well. A visit from a woman she'd been in school with is another story (or a lot of stories!). They hadn't met in 80 years (one was 93 the other 94), so once they got the small talk out of the way, they had a lot to talk about as they shared those childhood and pre-teen memories.
So here is a clue, if you have been gathering stories from an aged relative. At some point, round up a childhood friend or favorite cousin, and get them reminiscing. I had some hopes of doing this for my father a few years ago, when I visited his childhood home town. He had given me a list of people whom he thought might still live there, and I hoped to find one and call Dad for them to have a chat. At the end of the trip, I reported to him that I'd found them all…in the cemetery. By age 88 he had outlived his entire home town. Now he is 91½, and still pretty sharp, at least for old stuff. I'll be with him for the next few days (I have to cross a continent to see him), and I'll see what I can gather.
Meantime, we all need to realize that it is hard to keep most memories "pure". Later experiences of an event or place, or with similar import, can influence the way we recall just this experience in that place. This is important. State justice departments are only now changing how they make proper use of "eyewitness" testimony. The Biblical requirement that two witnesses had to agree was very wise.
We are like the 7 blind men encountering an elephant. To one it seems like a wall, to another it is like a snake. We each remember different stuff about a shared experience. Nobody gets it all right. We even edit our stories about ourselves, and when we forget the editing, whose story are we telling? Ah, that's the fun part, for we are who we remember we are.
Sunday, October 13, 2013
It is right in front of your eyes
kw: book reviews, nonfiction, observation
Do you remember getting your first car? Maybe you shopped and dithered for a while, before settling on this model of that make, at a price you could (barely) afford. Maybe it was a nice, affordable Honda Civic. After that, for weeks, it seems every second car you see on the road is a Civic, "just like mine!" You got eyes for it.
I remember another experience of getting eyes for something. On a field trip, some classmates enticed me to take a side trip to collect trilobites. I rode along, imagining the iconic "oversize pillbugs" that seem to define the notion of "fossil." We stopped at a road cut, and everyone got out. I looked at an expanse of light gray rock with a peppery texture, asking, "Where are the trilobites?" Someone said, "Right here," pointing at a dark double-speck. I looked closely, seeing two dots with a couple of lines between them. It was a trilobite, all right, no more than a centimeter in length.
It was similar to this Peronopsis (cropped from a photo found at fossilguy.com). Stepping back, I saw that the texture of the rock face was peppered with thousands of them! I now had eyes for them.
Sometimes we look and look, and finally see. Sometimes we just need someone to point out what was there all along. Alexandra Horowitz sought about a dozen someones to help her see what she had been missing on the 3-times-daily walks she took with her dog, around her block in New York City. Now she kindly brings us along on those walks in her new book On Looking: Eleven Walks With Expert Eyes.
Not all the walks were around her own block. Sometimes she went to the block where her expert lived or worked. The first and last of the book's 13 chapters are walks she took alone. First, she walked on her own, observing her very familiar block, and finally, on another solo walk, remarked on all the things she now has eyes (and ears and nose) for.
Her first expert was her toddler son, Ogden. To a tiny child, all is new, all nearly equally absorbing. A block you can walk in 5 minutes can take a couple of hours with a toddler…I almost wrote "with a toddler in tow," but of course, Ms Horowitz did her best to be "in tow" to her son. After all, he was the expert on this occasion. The infant brain seems to hoover up everything, struggling to make sense, to discern what is important. Young Ogden had a wide array of interests. Triangles at one point (the bracing in a railing), at others dump trucks (particularly their unique sounds), an insect, or a weed growing in a crack. All could stop him in his tracks. It seems little ones either haven't learned to make quick observations while moving along, or just prefer to stop still for long enough to fully appreciate what has just caught their attention. There's this thing about toddlers, though. They aren't specialists yet.
Geologist Sidney Horenstein is a specialist, and a walk with him shows the author that there is a great lot of geology to specialize in. From the crinoids and brachiopods that decorate the foundation stones of her apartment building, and that she'd never noticed, to the various colors and textures of the limestone, marble, sandstone, granite and so forth that formed their structures. Urban geology is a history of the distances people were willing to go to get building stone. Then a walk with type designer Paul Shaw opened her eyes, not to the signage with which a city is festooned, but to the shapes and forms of the letters used. Crinoids in the stones, and Garamond or Helvetica on the signs. Did the signmaker create text that faithfully evokes the character of a business? To Shaw, many were too slapdash to do a proper job. And artist Maira Kalman opens a world of observation, to expand what we think of as aesthetic. To some people, most things are ugly or at best commonplace, and it takes an uncommonly lovely scene to evoke any positive feeling in them. Not Ms Kalman. Maybe she doesn't quite find everything lovely, but she does her best to come close. A discarded couch on the sidewalk catches her attention, and prompts a painting (found before page 87).
Entomologist Charley Eiseman ought to be named "Wiseman", for he makes Ms Horowitz wise in the ways of the little animals that outnumber us, even in our most crowded cities. Probably, just the sidewalk ants outnumber human residents. The next time you sweep for cobwebs, think that there are probably a dozen nearly invisible spiders (and maybe some that are all too visible!) sharing your rooms. Outside, there are even more. Funnel webs abound in a city, tucked into inside corners or at the roots of bushes, or where railings attach to walls. And every spider needs many insect "clients". I learned something I'd never thought of. City trees come in two varieties: those with many signs of insect damage, and those that appear pristine and unchewed. The unchewed are the imports. Insects (and mites and other eaters) in the local environment are not specialized to eat them the way they are specialized to eat the native plants. This is why "invasive" imports invade so well. They are free of enemies to slow down their spread. Larger animals are the province of naturalist John Hadidian, with whom the author walked in Washington, D.C. In a city, they may not outnumber us (well, maybe the squirrels and pigeons do), but they are surprisingly ubiquitous. (I don't know as much about cities, but my suburban yard houses squirrels, rabbits, mice, voles, shrews, frogs, toads and a dozen or more kinds of songbirds, and we've seen visiting foxes, raccoons, vultures, hawks, crows and deer.)
Fred Kent of Project for Public Spaces is an urban planner of another sort. He and his colleagues study the way people interact in cities. They discern how an apparent obstacle can facilitate foot traffic, and which kinds of spaces make a place more or less friendly-seeming. He is a fan of food vendors' carts and restaurants with outdoor tables, for their ability to foster interaction. He is in favor of social streets.
Take a walk with a doctor, and you get eyes for something else entirely, particularly with a classical diagnostician like Dr. Bennett Lorber. He, like my uncle's father, diagnoses first based on what he sees, hears and smells when he first meets a patient; then he listens closely to the patient's story. On a walk, taking someone's story isn't possible, but by seeing how people walk, Dr. Lorber can tell that this man will soon need a hip replaced (or suffer badly if he doesn't) and that young woman is pregnant and probably doesn't know it yet. Ridges on fingernails can indicate a number of conditions (when I was on chemotherapy, ridges on my thumbnails chronicled every treatment). This chapter takes a long digression with Dr. Evan Johnson, the author's back surgeon. Sometimes, your walk reveals problems in your back, and sometimes, problems in your walk cause problems in your back.
To really learn about seeing, take a walk with a blind person. Arlene Gordon, sighted for about 40 years, then blind for another 40 or more, was the perfect companion for a different kind of walk. Our visual cortex occupies 1/4 to 1/3 of our brain's capacity. When the eyes aren't keeping it busy, it finds a way to help out other senses. By helping out hearing, for example, it enables many blind people to echolocate, seeing the way bats do. I've seen a TV program about a blind man who can ride a bicycle, all the while clicking with his tongue. Ms Gordon uses a cane and her ears. The cane has two functions. It is a feeler, but it also clicks when it touches down, and the blind wielder learns to build up a visual image from the return echoes, even if born blind. The ambient sound changes with our surroundings also. Ms Gordon knew when the walk took them under an awning or past the edge of a building at a corner. Walking with a sound engineer exposed the author to another dimension of sound. Scott Lehrer helps us understand the different sounds of auto tires on pavement that is wet or dry, or, more subtly, macadam or concrete. He finds charming a much wider array of city noises. After a walk with him, the author is less offended by "noise", having become attuned to a greater variety of aesthetic qualities. She also learned more about protecting her hearing. Don't be shy about putting your fingers in your ears when a shrieking motorcycle screams by. It may just save you from partial deafness or tinnitus later.
The last expert is a dog, Finnegan. His world is a world of smells, though dogs also see quite well (if less sharply and less richly colored than we do). It is a pity to see someone dragging a dog along on a brisk walk, when the dog would much rather first check the pee-decorated fireplug and curb corner, and then briskly trot to the next signpost. The author's walk with Finn almost wasn't a walk at all. Rather than prompt him to go right or left, she stopped on the stoop to see where he would go. He was content to sit there and take in the smells as the passersby passed by. Dogs expect us to take the lead when they are on a leash, so she had to lead out. Once on the move, the dog had plenty of opinions about where to go and when to stop or start. Would it surprise you to learn that the way most of us make a visual map in our brain is mirrored in the brains of bats and blind people by an auditory map, and in the brains of dogs and many other animals by a scent map?
The map is the thing. The more richly we learn to experience the world, the more rich and detailed our mental map will be, and the more ways we can continue to build it. These walks were, for Ms Horowitz, an education you cannot obtain in any classroom or from any lecture. To learn how to observe as you walk, you need to get out and walk.
Sunday, October 06, 2013
Looking too hard, and not looking
kw: book reviews, nonfiction, forecasting, prediction, statistics
We are remarkably good at cutting through the clutter in many situations. For example, we can talk to someone at a crowded party and pick out what they are saying in spite of the noise all around, and we can often spot a familiar face in a crowd. However, we sometimes see (or hear, etc.) things that are not there. When I was a child we would look for faces or other shapes in clouds. In a few minutes of looking, something suggestive is bound to appear. And there is a painting by my father of waves breaking on a rocky seashore. One of the big rocks looks like a leopard's head, and once I'd seen it, I have seen that leopard's head every time I glance at the painting.
My father had no intention of hiding faces in his paintings. Seeing the leopard's head is an example of a Type 1 error. If my father did actually hide faces in all his paintings, and I have noticed only this one (I have several others), then missing the faces that are there would be Type 2 errors. If I become so rapt in searching clouds for faces that I don't notice a friend approaching until he taps me on the shoulder, I have fallen victim to both kinds of error! We lazy, sedentary Westerners tend to make both kinds frequently. Not so someone living hand-to-mouth in the woods.
For nearly everyone, through all the one or two million years of our evolution as brainy apes, hyper-alertness was required. Where it matters most, a Type 1 error does no harm, but a Type 2 error might be fatal. Running from a rock that looks like a leopard can make you look silly, but not running from a leopard that looks like a rock will probably get you eaten. Strangely, though we have kept our strong propensity to make Type 1 errors, as the risk of not noticing a real leopard has fallen, we are more and more likely to make Type 2 errors. In our modern world, in which we increasingly rely on forecasts and predictions, this leads to trouble.
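A toy expected-cost comparison makes that evolutionary asymmetry concrete. The probabilities and costs below are invented purely for illustration; the point is only that when a miss is catastrophic and a false alarm is cheap, the "jumpy" strategy wins on average.

```python
# Toy expected-cost model of Type 1 (false alarm) vs Type 2 (miss) errors.
# All numbers are made up for illustration only.
p_leopard = 0.01          # chance that a leopard-shaped blob really is a leopard
cost_false_alarm = 1      # running from a harmless rock: a little wasted energy
cost_miss = 10_000        # ignoring a real leopard: possibly fatal

# Strategy A: always run from anything leopard-shaped (accepts Type 1 errors).
cost_jumpy = (1 - p_leopard) * cost_false_alarm   # about 0.99 per encounter

# Strategy B: never run unless certain (risks Type 2 errors).
cost_calm = p_leopard * cost_miss                 # about 100 per encounter

print(f"jumpy strategy: {cost_jumpy:.2f}, calm strategy: {cost_calm:.2f}")
```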
Nate Silver, in his new book The Signal and the Noise: Why So Many Predictions Fail – But Some Don't, presents a number of similar examples that display our modern tendency to pick faces out of clouds while ignoring the approaching friend (or foe). I'll simplify matters and mention that he finds successful forecasting in only two areas: weather and baseball. Politics and stock picking and a number of other areas come in for a drubbing.
This simple diagram tells me all I need to know about "technical analysis" of stock prices. The data are the day-to-day percent change in the price of DuPont stock, from 1962 to mid September of this year. That's just over 13,000 data points. The X axis is the change on any particular day, and the Y axis is the change on the following day. This diagram shows perfect non-correlation! It is a 2-D bell curve, though with thicker tails than a Gaussian bell curve.
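The "non-correlation" claim is easy to check for any stock. The sketch below assumes a date-ordered CSV of adjusted daily closes (the file name and column are hypothetical, not from the text); it computes day-to-day percent changes and the correlation between each day's change and the next day's.

```python
# Minimal sketch: lag-1 correlation of daily percent changes.
# Assumes a CSV with a date-ordered "Adj Close" column (file name is hypothetical).
import numpy as np
import pandas as pd

prices = pd.read_csv("dd_daily.csv")["Adj Close"].to_numpy()
pct_change = np.diff(prices) / prices[:-1] * 100      # daily percent change

today, tomorrow = pct_change[:-1], pct_change[1:]
r = np.corrcoef(today, tomorrow)[0, 1]
print(f"lag-1 correlation: {r:.4f}")   # near zero for a random-walk-like series
```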
During those 51 years, the stock rose nearly 4,200%. That averages out to 7.7% per year but only 0.032% daily. Someone who bought $1,000 of DD stock in early January 1962 would have $43,000 today. Now, there's been a lot of inflation. That $1,000 in 1962 had the buying power of $7,740 today. So a half-century of waiting produced an effective multiplier of 5.5. That's 3% yearly after adjusting for inflation. Better than the bank.
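The annualized figures follow from simple compounding. A quick check, treating the span as 51 years and using the dollar figures quoted above, roughly reproduces them (the inflation-adjusted return comes out a bit above 3% per year):

```python
# Back-of-envelope check of the buy-and-hold figures quoted in the text.
years = 51
growth = 43_000 / 1_000              # nominal multiplier, about 43x
cpi_factor = 7_740 / 1_000           # 1962 dollars to 2013 dollars, from the text

nominal_annual = growth ** (1 / years) - 1          # about 7.7% per year
real_multiplier = growth / cpi_factor               # about 5.5x in constant dollars
real_annual = real_multiplier ** (1 / years) - 1    # a bit over 3% per year

print(f"nominal: {nominal_annual:.1%}, real multiplier: {real_multiplier:.1f}x, "
      f"real: {real_annual:.1%}")
```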
The most extreme daily jumps are -20% and +10%. Stock speculators, particularly day traders, dream of taking advantage of the many days that a stock's price changes more than a percent or two. And such days are more common than if the distribution were strictly Gaussian. DuPont stock moves up at least 2.5% in a day about 5% of the time, and downward with similar frequency. That means, if you could pick just those up days, about 12 days each year, you could earn at least a 20% return yearly. That's 2-3 times what a buy-and-hold strategy will earn. Then, look at this:
The chart shows the historical record of DuPont stock, adjusted for splits. Focus on late 1974, late 1987, and late 2008 to early 2009. These show DD following the herd during market crashes, and represent downturns of 50%, 41% and 65%, respectively. If you could have avoided them, by selling just at the peak and buying back in at the bottom, your final return would be 9.69 times greater, for a total value of $416,000! Adjusted for inflation, that's over 8% return yearly (12.5% dollar-for-dollar yearly return).
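The 9.69x figure is just the reciprocal of those three drawdowns compounded together, as a quick check with the percentages quoted above shows:

```python
# Effect of sidestepping the three big drawdowns quoted in the text.
drawdowns = [0.50, 0.41, 0.65]        # 1974, 1987, 2008-09 peak-to-trough losses

surviving_fraction = 1.0
for d in drawdowns:
    surviving_fraction *= (1 - d)     # value kept by a buy-and-hold investor

boost = 1 / surviving_fraction        # about 9.7x if all three were avoided
final_value = 43_000 * boost          # roughly $416,000

print(f"multiplier: {boost:.2f}x, final value: ${final_value:,.0f}")
```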
Such figures stoke the dreams of day traders. But the first chart, showing no day-to-day correlation, dashes those dreams. Day traders work very hard for little return, and most lose. Some lose, big time, and some gain, but it is by accident either way. There are millions of day traders and other stock speculators. As Churchill wrote, "Even a fool is right once in a while."
Now we must differentiate prediction from forecasting. A prediction is a flat statement that a specific happening will or will not occur at some time or in some time horizon. For example, "There will be a magnitude 7 earthquake in Fremont within the coming year." A proper forecast includes the forecaster's uncertainty and is stated in probabilistic terms, as, "Projecting the trend of earthquakes in Fremont indicates that an earthquake of magnitude 7 or greater occurs about 3 times every 200 years." [Fremont was the imaginary State in the novel Space by James A. Michener]. One might add to such a forecast a hybrid statement such as, "Fremont has not experienced an earthquake of magnitude greater than 6 in the past 100 years," which implies that "the big one" may be overdue. Or it may indicate that conditions deep down are changing.
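To see what a probabilistic forecast buys you, a long-run rate like "about 3 magnitude-7 quakes every 200 years" can be converted into the chance of at least one such quake in any given interval. Treating quakes as independent events (a Poisson process, which real seismicity only roughly follows) gives a sketch like this:

```python
# Turning a long-run rate into interval probabilities, assuming a Poisson process.
import math

rate_per_year = 3 / 200                          # about 0.015 M7+ quakes per year
p_at_least_one = 1 - math.exp(-rate_per_year)    # ~1.5% chance in any given year
p_none_in_30 = math.exp(-rate_per_year * 30)     # ~64% chance of none in 30 years

print(f"P(one or more this year) = {p_at_least_one:.1%}")
print(f"P(none in 30 years)      = {p_none_in_30:.0%}")
```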
Earthquake prediction is the poster child of unpredictable phenomena. Intense study and research over decades, even centuries, have failed to yield a single valid prediction. Sports betting is close behind, except in the arena of baseball. Nate Silver created a system called PECOTA, which rates the strength of teams against one another according to the past statistics of their players and a well-known "aging curve" of the way performance changes over a player's career. Because baseball has such a rich data set, going back a century, and the principles needed to make useful forecasts are also well known, PECOTA and similar systems can evaluate players and teams at a level nearly equal to the best scouts. The computer can't quite replicate the humans, but it does give 'em a run for the money!
Why are forecasting and prediction so hard? Even though we have randomness at the deepest level of atomic phenomena, that randomness is averaged out by the statistics of large numbers, and physics works very accurately to predict many systems, such as planetary orbits. Thus, though the path of an electron after passing through a hole may be uncertain, the center of the distribution of paths of trillions of electrons (say, a millionth of an ampere for 0.1 second or so) will be very sharply defined and can be accurately measured, and the shape of the distribution tells you additional facts: the hole's size and shape. The much larger "distribution" consisting of the atoms making up a baseball means that its flight, once thrown or batted, is easily predicted.
The geological setting of an earthquake is not as simple as an electron. Perhaps this year an earthquake will occur, large enough that the two sides of a fault slip past each other by half a meter. That may be enough to put two kinds of rock in contact that were not in contact before, which changes the likelihood of the next earthquake.
What about the weather? Air is in constant motion; its humidity and temperature, and thus its density, change constantly. How can anyone make a useful weather forecast? In some ways, we are still dependent on the "signs in the sky" that Jesus mentioned. In modern (18th Century) terms, "Red sky at morning, sailor take warning. Red sky at night, sailor's delight." Lore such as this is a compilation of patterns that happen over and over, so that generations of our ancestors took note and remembered. Yet now we can get a forecast up to a week or two ahead, complete with expected high and low, precipitation chances and intensity, and wind strength.
It's all done in a computer. Air may have complex behavior, but the physics of air motion and how it changes with temperature, pressure and humidity are well known. The 3D-gridded-cell models that run in supercomputers use surprisingly simple physics to determine how a cell is influenced by the 6 cells that share its faces and the 8 cells at its corners. The reason supercomputers are used is that Earth is big. The surface area of the planet is 4πr², where r is 6,370 km: about 510 million km². Cells half a km on a side and 0.1 km thick (up to 12 km altitude) result in a General Circulation Model (you'll see the acronym GCM on some weather web sites) with about 1/4 trillion cells. It takes a lot of calculation to determine what will happen in the next quarter hour. There are 96 quarter hours in a day, and 672 in a week. To do all those trillions and quadrillions of calculations in only an hour or two requires today's largest computers. And the forecasters' computer gurus don't run the model once; they run it several times with very small variations (the formal practice of selecting the variations is called Design of Experiments), to test the stability and sensitivity of the forecast to perturbations.
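The cell count is simple geometry. Re-deriving it from the numbers in the paragraph (a sphere of radius 6,370 km, half-kilometer columns, 100-meter layers up to 12 km, quarter-hour time steps):

```python
# Rough count of grid cells and time steps for the model described in the text.
import math

r_km = 6_370
surface_km2 = 4 * math.pi * r_km ** 2            # about 510 million km^2

cell_area_km2 = 0.5 * 0.5                        # half-km squares
layers = int(12 / 0.1)                           # 100 m layers up to 12 km

cells = surface_km2 / cell_area_km2 * layers     # roughly 0.25 trillion cells
steps_per_week = 7 * 24 * 4                      # quarter-hour steps: 672

print(f"cells: {cells:.2e}, time steps per week: {steps_per_week}")
```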
Weather forecasters have an incentive to get it right that others don't have. The reality is going to arrive tomorrow or the next day, it is visible to all, and it is no fun getting a call such as, "I have ten inches of 'partly cloudy' that I need to shovel off my driveway. Want to come over and help?" They also get a ton of research money from the Dept. of Defense, because good forecasts are crucial to military activities. Earth dynamic studies are different. Students of earthquakes can't observe the day-to-day conditions of a fault line. Its active zone is typically 8-15 km deep, and we can't yet drill a well that deep. Earthquakes are also rare. Sure, there are thousands of little ones, at the bottom of "measurable", every day, but there are trillions of weather events around the globe, every few minutes.
Mr. Silver entertains us with many, many stories of the vagaries of forecasts of all types. In the end, most phenomena are too difficult to forecast appropriately. Some involve living things. The cardinal rule of animal studies is, "Given any particular set of temperature, lighting, food availability and ambient noise, the rat will do whatever the rat wants to do." And this is in spite of lab rats being so inbred that their genetics are practically identical. The statistics of playing poker yield a few big winners, who work hard for the kind of edge they need to beat their fellow experts. But they love to be in a game that is well supplied with "fish": overconfident amateurs. A well-written computer package might tell a poker player the optimum betting strategy, but only if it is betting against other computers. The social aspects of the game, bluffing and speed or slowness of a bet for example, often provide a lot more of an edge than the math does. Carefully crafted intimidation works wonders. I don't expect a computer to master these aspects of the game for a number of decades (that's my forecast!).
The book's final example is the climate, particularly "global warming" or "climate change" or "greenhouse effect" or whatever the next buzzword will be. Climate is not weather. It is the setting in which weather happens. Climate changes unfold over multiple decades or centuries or millennia. Weather changes take seconds. In numerical analysis, this is the stiffness problem. When something changes suddenly, it takes time for the effects to either move elsewhere or to die down. If you are interested in something with a roughly 5-year cycle, such as El Niño (the warm phase of ENSO), the exact location and timing of today's sudden thundershower will not matter one tiny bit. If your interest is in human-induced greenhouse warming that began in the late 1700s, ENSO is at most an irritation. In fact, weather and medium-scale cycles such as ENSO are "noise" in the context of this book's thesis. Another researcher, later on, made clear a different view: noise is really signal too, just signal about things you aren't interested in at the moment.
This is like the crystal radio I made as a kid. It initially consisted of a long wire running to a treetop, a piece of germanium crystal, a "whisker" (a wire that formed a diode with the germanium), and earphones attached to the whisker and to the ground connection on the back of the germanium crystal. The diode "detected" the audio signal by separating it out of the radio frequency "hash". There was just one strong station nearby, so I could hear it pretty clearly. But later, as more stations came on the air (this was the 1950s), I could hear all of them at once. So, following a diagram in Mechanix Illustrated, I made a coil and paid a dime for a small capacitor and a piece of copper, to make a rough tuner. It could be tuned to resonate with one AM station at a time, so I could "tune out" the "noise" of the other stations. They were actually signals, just signals I didn't want right then.
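The tuner works by LC resonance: the coil and capacitor together favor one frequency, f = 1/(2π√(LC)), and reject the rest. The component values below are typical for a homemade crystal set, not the ones actually used here.

```python
# Resonant frequency of a simple LC tuner (component values are illustrative only).
import math

L = 240e-6     # henries: a typical hand-wound crystal-set coil
C = 100e-12    # farads: a small capacitor near mid-range

f = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency: {f / 1e3:.0f} kHz")   # about 1,000 kHz, mid AM band
```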
The global greenhouse has warmed about 0.5°C (0.9°F) in a century, and perhaps 1°C (1.8°F) since 1750. Some of that may be warming since the Little Ice Age, which some consider a regional phenomenon, not a global one. But the current "ForecastFox for Mozilla" forecast for the next 24 hours indicates we'll have a 20°F swing tomorrow, from 75 in midafternoon to 55 overnight. You have to average out a lot of daily temperatures to see a change of a degree over 250 years. When you want weather, that is your signal. When you want climate, weather is noise, and lots of it.
The science of greenhouse warming is partly very well known, and partly not so well known. I learned to replicate the Arrhenius calculations, which date from the 1890s, when I was a pre-teen. Actual warming since his day has been about twice what he expected, because there seem to be amplifying factors. These are very poorly known. Does more cloud cover cool the atmosphere by reflecting more sunlight, or warm it by acting as a further thermal blanket? Or does it do one thing at a certain latitude and another elsewhere? If we do have a further warming of 2 to 4°C, will it shift the Hadley Cell north, or south, or not at all? (The northern edge of the Hadley Cell is a range of latitudes characterized by dry, descending air, where all the world's great deserts form.) I've thought of buying land in central Canada that is currently too cold to farm. Perhaps in 20 years it will be arable…unless the Hadley Cell shifts north and dries out Canada. Then maybe the Mojave would become a tropical paradise!
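For the well-known part, a commonly used modern simplification (much simpler than Arrhenius's own tables, and not necessarily the calculation referred to above) relates CO2 concentration to radiative forcing logarithmically, with the poorly known feedbacks bundled into a single sensitivity factor:

```python
# Simplified greenhouse arithmetic: logarithmic forcing plus an assumed sensitivity.
# The 5.35 coefficient is a standard approximation; the sensitivity is an assumption.
import math

c0 = 280.0        # pre-industrial CO2, ppm
c = 560.0         # a doubling
forcing = 5.35 * math.log(c / c0)     # W/m^2, about 3.7 for a doubling

sensitivity = 0.8                     # K per (W/m^2), including assumed feedbacks
warming = sensitivity * forcing       # about 3 K for a doubling

print(f"forcing: {forcing:.1f} W/m^2, warming: {warming:.1f} K")
```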
Y'know how to make a complex system into a positively unsolvable mess? Make it political. Both sides of the Climate debate are so politicized that they can only talk past each other. The tiniest proposal to set any policy is vigorously fought by every vested interest, even those who might benefit (the devil you know…). Heaven help us if weather forecasting ever gets politicized! It is already true that most forecasters err on the wet side: a 20% chance of rain is reported as a 40% or even 50% chance, because the ones rained on are less likely to complain, and those that aren't will feel they dodged a bullet. What if some "weather outcomes" become more politically correct than others?
By the way, I take issue with Silver's definition of statistical rain forecasts. He writes that if 40% of the computer models indicate rain in Chicago, and the rest don't, it is reported as a 40% chance of rain. Sounds logical, but it is quite different than that. The "chance of rain" has different meanings in spring (plus summer) and autumn (plus winter). Spring and summer squall lines pass through areas that are well predicted by most GCM programs. But a squall line is not a solid front of rain. It is a line of thunderstorms. A light squall line may have storms half a mile wide, spaced 2-3 miles apart, giving 20% of the area a 100% chance of rain. The forecasters just don't know which 20%, so the whole area is given a 20% chance of rain. A heavy squall line will have larger storms with closer spacing, and maxes out at about 80% coverage (though this will probably be reported as "near certain"). Fall and early winter storms tend to be solid and widespread, but subject to ripples several miles wide in the upper atmosphere. As a system rides up a ripple, it drops rain along a solid band dozens of hundreds of miles long but only about a mile wide or so. As it rides down, it dries out. The height of the ripples determines whether the overall chance of rain is 30% or 70% or somewhere between. The ripples drift along as system after system rides through, so it is very hard to tell exactly where the rain will fall. Timing is everything. Then, a lower-level storm that just dumps (ignoring the ripples) leads to those 100% forecasts, which are generally accurate.
In most arenas, Silver advocates using Bayesian analysis rather than "frequentist" simulations or estimations. These allow individualized forecasts for particular cases. An example is the probability of breast cancer in a woman in her 40s, who has just had the unwelcome news that a mammogram is "positive". The factors of a Bayesian calculation are:
In this case, the woman may wish for a needle biopsy, but a bit of blood chemistry may be in order first. Enzymes in the blood can indicate whether a new cancer is likely to be slow growing, or faster. Is it slower (the most likely case)? She can wait a year for another mammogram. If the next mammogram is positive, re-do the analysis, replacing the 1.4% with 9.6%. Now the "new x" is just over 44%, and at the very least a biopsy is indicated. Most other forecasting methods don't use multi-step refinement. And by the way, if the next mammogram is negative (and no palpation can detect a lump, or any growth in an earlier lump), running the analysis with 9.6%, 10% and 75%, in that order, reverts to 1.4% as the "new x".
Those who follow this blog may wonder why it took me 3 weeks to read such a fascinating book. The writing is good and the examples are interesting, so that didn't slow me down. We have a lot going on, however, so I have had much less time for reading than usual. Retirement has been good to me so far, but I have to be careful not to take on too many projects at once. I completed a Real Estate course and passed the test in July. However, I will probably not seek a license or become a Realtor®, because there are simply too many other things I'd prefer to do. The change of style and reduced frequency with which I post is a similar effect. I used to post almost every lunch hour, doing research in off hours. I think I am working longer days than when I worked! Better busy than bored. Since retiring in February, I have put 24 items in my "job jar" file. Half of them, mostly the bigger ones, have been completed. One major item is awaiting an event that is at least a year in the future, but the preparations are nearly all completed. Others are smaller so I can take an odd half day to perform one. All things in their own time. In the meantime, I read when I can, and report what I read.
We are remarkably good at cutting through the clutter in many situations. For example, we can talk to someone at a crowded party and pick out what they are saying in spite of the noise all around, and we can often spot a familiar face in a crowd. However, we sometimes see (or hear) things that are not there. When I was a child, we would look for faces or other shapes in clouds. In a few minutes of looking, something suggestive is bound to appear. And there is a painting by my father of waves breaking on a rocky seashore. One of the big rocks looks like a leopard's head, and once I'd seen it, I have seen that leopard's head every time I glance at the painting.
My father had no intention of hiding faces in his paintings. Seeing the leopard's head is an example of a Type 1 error, a false positive. If my father actually had hidden faces in all his paintings, and I have noticed only this one (I have several others), then missing the faces that are there would be Type 2 errors, false negatives. If I become so rapt in searching clouds for faces that I don't notice a friend approaching until he taps me on the shoulder, I have fallen victim to both kinds of error! We lazy, sedentary Westerners tend to do this frequently. Not so someone living hand-to-mouth in the woods.
For nearly everyone, through all the one or two million years of our evolution as brainy apes, hyper-alertness was required. Where it matters most, a Type 1 error does no harm, but a Type 2 error might be fatal. Running from a rock that looks like a leopard can make you look silly, but not running from a leopard that looks like a rock will probably get you eaten. Strangely, though we have kept our strong propensity to make Type 1 errors, as the risk of not noticing a real leopard has fallen, we are more and more likely to make Type 2 errors. In our modern world, in which we increasingly rely on forecasts and predictions, this leads to trouble.
Nate Silver, in his new book The Signal and the Noise: Why So Many Predictions Fail – But Some Don't, presents a number of similar examples that display our modern tendency to pick faces out of clouds while ignoring the approaching friend (or foe). I'll simplify matters and mention that he finds successful forecasting in only two areas: weather and baseball. Politics and stock picking and a number of other areas come in for a drubbing.
This simple diagram tells me all I need to know about "technical analysis" of stock prices. The data are the day-to-day percent change in the price of DuPont stock, from 1962 to mid September of this year. That's just over 13,000 data points. The X axis is the change on any particular day, and the Y axis is the change on the following day. This diagram shows perfect non-correlation! It is a 2-D bell curve, though with thicker tails than a Gaussian bell curve.
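For the curious, this sort of chart is easy to reproduce. Here is a minimal Python sketch; the file name and column names are placeholders for whatever source of daily closing prices you have, not anything from the book.

```python
# Minimal sketch: is today's move related to tomorrow's?
# Assumes a CSV of daily closes with "Date" and "Close" columns (placeholders).
import pandas as pd

prices = pd.read_csv("dd_daily_closes.csv", parse_dates=["Date"], index_col="Date")
pct = prices["Close"].pct_change().dropna() * 100   # day-to-day percent change

today = pct.iloc[:-1].reset_index(drop=True)    # change on day N   (X axis)
tomorrow = pct.iloc[1:].reset_index(drop=True)  # change on day N+1 (Y axis)

# A correlation near zero is the numerical version of that shapeless blob:
# knowing today's move tells you essentially nothing about tomorrow's.
print(f"lag-1 correlation of daily changes: {today.corr(tomorrow):.4f}")
```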
During those 51 years, the stock rose nearly 4,200%. That averages out to 7.7% per year but only about 0.03% daily. Someone who bought $1,000 of DD stock in early January 1962 would have $43,000 today. Now, there's been a lot of inflation. That $1,000 in 1962 had the buying power of $7,740 today. So a half-century of waiting produced an effective multiplier of 5.5. That's about 3.4% yearly after adjusting for inflation. Better than the bank.
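The annualized figures are easy to check; here is the arithmetic, using the rounded numbers quoted above.

```python
# Checking the buy-and-hold arithmetic quoted above.
years = 51
nominal_multiple = 43.0        # $1,000 grew to about $43,000
inflation_multiple = 7.74      # $1,000 of 1962 money is about $7,740 today

nominal_cagr = nominal_multiple ** (1 / years) - 1      # about 7.7% per year
real_multiple = nominal_multiple / inflation_multiple   # about 5.5x
real_cagr = real_multiple ** (1 / years) - 1            # about 3.4% per year

print(f"nominal: {nominal_cagr:.1%} per year, real: {real_cagr:.1%} per year")
```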
The most extreme daily jumps are -20% and +10%. Stock speculators, particularly day traders, dream of taking advantage of the many days that a stock's price changes more than a percent or two. And such days are more common than if the distribution were strictly Gaussian. DuPont stock moves up at least 2.5% in a day about 5% of the time, and downward with similar frequency. That means, if you could pick just those up days, about 12 days each year, you could earn at least a 20% return yearly. That's 2-3 times what a buy-and-hold strategy will earn. Then, look at this:
The chart shows the historical record of DuPont stock, adjusted for splits. Focus on late 1974, late 1987, and late 2008 to early 2009. These show DD following the herd during market crashes, and represent downturns of 50%, 41% and 65%, respectively. If you could have avoided them, by selling just at the peak and buying back in at the bottom, your final return would be 9.69 times greater, for a total value of $416,000! Adjusted for inflation, that's over 8% return yearly (12.5% dollar-for-dollar yearly return).
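The 9.69 figure is just the product of what you avoid losing; a quick sketch of that arithmetic, again using the rounded numbers above.

```python
# What sidestepping the three big drawdowns would have been worth.
drawdowns = [0.50, 0.41, 0.65]     # the 1974, 1987 and 2008-09 declines
buy_and_hold = 43_000              # the $1,000-to-$43,000 result above

multiplier = 1.0
for d in drawdowns:
    multiplier *= 1 / (1 - d)      # avoiding a drop of d multiplies wealth by 1/(1-d)

final = buy_and_hold * multiplier              # about $416,000
nominal = (final / 1_000) ** (1 / 51) - 1      # about 12.5% per year
real = (final / 7_740) ** (1 / 51) - 1         # about 8% per year after inflation

print(f"multiplier: {multiplier:.2f}, final: ${final:,.0f}, "
      f"nominal {nominal:.1%}, real {real:.1%}")
```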
Such figures stoke the dreams of day traders. But the first chart, showing no day-to-day correlation, dashes those dreams. Day traders work very hard for little return, and most lose. Some lose, big time, and some gain, but it is by accident either way. There are millions of day traders and other stock speculators. As Churchill wrote, "Even a fool is right once in a while."
Now we must differentiate prediction from forecasting. A prediction is a flat statement that a specific happening will or will not occur at some time or within some time horizon. For example, "There will be a magnitude 7 earthquake in Fremont within the coming year." A proper forecast includes the forecaster's uncertainty and is stated in probabilistic terms, as, "Projecting the trend of earthquakes in Fremont indicates that an earthquake of magnitude 7 or greater occurs about 3 times every 200 years." [Fremont was the imaginary state in the novel Space by James A. Michener.] One might add to such a forecast a hybrid statement such as, "Fremont has not experienced an earthquake of magnitude greater than 6 in the past 100 years," which implies that "the big one" may be overdue. Or it may indicate that conditions deep underground are changing.
Earthquake prediction is the poster child of unpredictable phenomena. Intense study and research over decades, even centuries, have failed to yield a single valid prediction. Sports betting is close behind, except in the arena of baseball. Nate Silver created a system called PECOTA, which rates players and teams against one another according to their past statistics and a well-known "aging curve" describing the way performance changes over a player's career. Because baseball has such a rich data set, going back a century, and the principles needed to make useful forecasts are also well known, PECOTA and similar systems can evaluate players and teams at a level nearly equal to the best scouts. The computer can't quite replicate the humans, but it does give 'em a run for the money!
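To be clear about what such a system does, here is a toy sketch of a projection using a crude aging adjustment. It is not Silver's actual PECOTA; the weights, aging factors and the sample player are all made up for illustration.

```python
# Toy projection (NOT PECOTA): weight recent seasons, then apply an aging factor.
AGING_FACTOR = {27: 1.00, 28: 0.995, 29: 0.99, 30: 0.985, 31: 0.98}  # made-up values

def project_batting_avg(recent_avgs, age_next_season):
    """Most recent season counts most; older seasons count less."""
    weights = [3, 2, 1][: len(recent_avgs)]
    weighted = sum(w * a for w, a in zip(weights, recent_avgs)) / sum(weights)
    return weighted * AGING_FACTOR.get(age_next_season, 0.98)

# Hypothetical hitter: .285, .270, .260 over the last three seasons, turning 30.
print(f"projected average: {project_batting_avg([0.285, 0.270, 0.260], 30):.3f}")
```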
Why are forecasting and prediction so hard? Even though we have randomness at the deepest level of atomic phenomena, that randomness is constrained by the statistics of large numbers, and physics predicts many systems, such as planetary orbits, very accurately. Thus, though the path of an electron after passing through a hole may be uncertain, the center of the distribution of paths of hundreds of billions of electrons (say, a millionth of an ampere for 0.1 second or so) will be very sharply defined and can be accurately measured, and the shape of the distribution tells you additional facts: the hole's size and shape. The vastly larger "distribution" consisting of the atoms making up a baseball means that its flight, once thrown or batted, is easily predicted.
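That parenthetical current is easy to turn into a head count; a quick check, using the standard charge of the electron.

```python
# How many electrons does a microampere deliver in a tenth of a second?
ELECTRON_CHARGE = 1.602e-19           # coulombs
current, duration = 1e-6, 0.1         # 1 microampere for 0.1 second

electrons = current * duration / ELECTRON_CHARGE
print(f"{electrons:.2e} electrons")   # ~6.2e11, hundreds of billions of them
```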
The geological setting of an earthquake is not as simple as an electron. Perhaps this year an earthquake occurs that is large enough for the two sides of a fault to slip past each other by half a meter. That may be enough to put two kinds of rock in contact that were not in contact before, which changes the likelihood of the next earthquake.
What about the weather? Air is in constant motion; its humidity and temperature, and thus its density, change constantly. How can anyone make a useful weather forecast? In some ways, we are still dependent on the "signs in the sky" that Jesus mentioned. In modern (18th Century) terms, "Red sky at morning, sailor take warning. Red sky at night, sailor's delight." Lore such as this is a compilation of patterns that happen over and over, so that generations of our ancestors took note and remembered. Yet now we can get a forecast up to a week or two ahead, complete with expected high and low, precipitation chances and intensity, and wind strength.
It's all done in a computer. Air may have complex behavior, but the physics of air motion and how it changes with temperature, pressure and humidity are well known. The 3D-gridded-cell models that run on supercomputers use surprisingly simple physics to determine how a cell is influenced by the 6 cells it shares a face with and the 8 cells at its corners. The reason supercomputers are needed is that Earth is big. The surface area of the planet is 4πr², where r is 6,370 km: about 510 million km². Cells half a kilometer on a side, in layers 0.1 km thick up to 12 km altitude, result in a General Circulation Model (you'll see the acronym GCM on some weather web sites) with about a quarter of a trillion cells. It takes a lot of calculation to determine what will happen in the next quarter hour, and there are 96 quarter hours in a day and 672 in a week. To do all those trillions and quadrillions of calculations in only an hour or two requires today's largest computers. And the forecasters' computer gurus don't run the model once; they run it several times with very small variations (the formal practice of selecting the variations is called Design of Experiments) to test the stability and sensitivity of the forecast to perturbations.
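The cell count is straightforward to reproduce from the numbers in that paragraph; here is the arithmetic for the half-kilometer grid described above.

```python
# Reproducing the grid arithmetic for the hypothetical half-km global model above.
import math

r_km = 6370.0
surface_km2 = 4 * math.pi * r_km**2          # about 510 million km^2

columns = surface_km2 / 0.5**2               # half-km cells: ~2 billion columns
layers = 12 / 0.1                            # 100 m thick layers up to 12 km: 120
cells = columns * layers                     # about a quarter of a trillion cells

steps_per_week = 24 * 4 * 7                  # quarter-hour time steps: 672

print(f"{surface_km2:.3g} km^2, {cells:.3g} cells, {steps_per_week} steps per week")
```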
Weather forecasters have an incentive to get it right that others don't have. The reality is going to arrive tomorrow or the next day, it is visible to all, and it is no fun getting a call such as, "I have ten inches of 'partly cloudy' that I need to shovel off my driveway. Want to come over and help?" They also get a ton of research money from the Dept. of Defense, because good forecasts are crucial to military activities. Earth-dynamics studies are different. Students of earthquakes can't observe the day-to-day conditions of a fault line. Its active zone is typically 8-15 km deep, and we cannot routinely drill wells that deep. Earthquakes are also rare. Sure, there are thousands of little ones, at the bottom of "measurable", every day, but there are countless weather events around the globe every few minutes.
Mr. Silver entertains us with many, many stories of the vagaries of forecasts of all types. In the end, most phenomena are too difficult to forecast appropriately. Some involve living things. The cardinal rule of animal studies is, "Given any particular set of temperature, lighting, food availability and ambient noise, the rat will do whatever the rat wants to do." And this is in spite of lab rats being so inbred that their genetics are practically identical. The statistics of playing poker yield a few big winners, who work hard for the kind of edge they need to beat their fellow experts. But they love to be in a game that is well supplied with "fish": overconfident amateurs. A well-written computer package might tell a poker player the optimum betting strategy, but only if it is betting against other computers. The social aspects of the game, bluffing and speed or slowness of a bet for example, often provide a lot more of an edge than the math does. Carefully crafted intimidation works wonders. I don't expect a computer to master these aspects of the game for a number of decades (that's my forecast!).
The book's final example is the climate, particularly "global warming" or "climate change" or "greenhouse effect" or whatever the next buzzword will be. Climate is not weather; it is the setting in which weather happens. Climate changes unfold over decades, centuries or millennia, while weather changes take seconds. In numerical analysis, this separation of time scales is called stiffness. When something changes suddenly, it takes time for the effects to either move elsewhere or die down. If you are interested in something with a roughly 5-year cycle, such as El Niño (part of the ENSO cycle), the exact location and timing of today's sudden thundershower will not matter one tiny bit. If your interest is in human-induced greenhouse warming that began in the late 1700s, ENSO is an irritation at best. In fact, weather and medium-scale cycles such as ENSO are "noise" in the context of this book's thesis. Another researcher, mentioned later, offers a different view: noise is really signal too, just signal about things you aren't interested in at the moment.
This is like the crystal radio I made as a kid. It initially consisted of a long wire running to a treetop, a piece of germanium crystal, a "whisker" (a fine wire that formed a diode with the germanium), and earphones attached between the whisker and the ground connection on the back of the crystal. The diode "detected" the audio signal by separating it out of the radio-frequency "hash". There was just one strong station nearby, so I could hear it pretty clearly. But later, as more stations came on the air (this was the 1950s), I could hear all of them at once. So, following a diagram in Mechanix Illustrated, I made a coil and paid a dime for a small capacitor and a piece of copper, to make a rough tuner. It could be tuned to resonate with one AM station at a time, so I could "tune out" the "noise" of the other stations. They were actually signals, just signals I didn't want right then.
The global greenhouse has warmed about 0.5°C (0.9°F) in a century, and perhaps 1°C (1.8°F) since 1750. Some of that may be warming since the Little Ice Age, which some consider a regional phenomenon, not a global one. But the current "ForecastFox for Mozilla" forecast for the next 24 hours indicates we'll have a 20°F swing tomorrow, from 75 in midafternoon to 55 overnight. You have to average out a lot of daily temperatures to see a change of a degree over 250 years. When you want weather, that is your signal. When you want climate, weather is noise, and lots of it.
The science of greenhouse warming is partly very well known, and partly not so well known. I learned to replicate the Arrhenius calculations, now more than a century old, when I was a pre-teen. Actual warming since his day has been about twice what he expected, because there seem to be amplifying factors. These are very poorly known. Does more cloud cover cool the atmosphere by reflecting more sunlight, or warm it by acting as a further thermal blanket? Or does it do one thing at a certain latitude and another elsewhere? If we do have a further warming of 2 to 4°C, will it shift the Hadley Cell north, or south, or not at all? (The poleward edge of the Hadley Cell is a range of latitudes characterized by dry, descending air that forms most of the world's great deserts.) I've thought of buying land in central Canada that is currently too cold to farm. Perhaps in 20 years it will be arable…unless the Hadley Cell shifts north and dries out Canada. Then maybe the Mojave would become a tropical paradise!
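For anyone who wants to play with the well-known part, here is a back-of-the-envelope version of that kind of calculation. It is not Arrhenius's original method; it uses the commonly cited logarithmic CO2-forcing approximation, and the sensitivity values are round numbers spanning the no-feedback and with-feedback cases, not figures from the book.

```python
# Back-of-the-envelope greenhouse estimate (not Arrhenius's original method).
# Radiative forcing from extra CO2 is roughly logarithmic in concentration.
import math

def co2_forcing_w_per_m2(c_new_ppm, c_old_ppm):
    return 5.35 * math.log(c_new_ppm / c_old_ppm)   # widely used approximation

forcing = co2_forcing_w_per_m2(400, 280)   # pre-industrial ~280 ppm to ~400 ppm

# The hard part is the sensitivity: degrees of warming per W/m^2 of forcing.
# Roughly 0.3 with no feedbacks; amplifying feedbacks push estimates toward 0.8.
for sensitivity in (0.3, 0.8):
    print(f"sensitivity {sensitivity} K per W/m^2 -> {forcing * sensitivity:.1f} °C")
```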
Y'know how to make a complex system into a positively unsolvable mess? Make it political. Both sides of the Climate debate are so politicized that they can only talk past each other. The tiniest proposal to set any policy is vigorously fought by every vested interest, even those who might benefit (the devil you know…). Heaven help us if weather forecasting ever gets politicized! It is already true that most forecasters err on the wet side: a 20% chance of rain is reported as a 40% or even 50% chance, because the ones rained on are less likely to complain, and those that aren't will feel they dodged a bullet. What if some "weather outcomes" become more politically correct than others?
By the way, I take issue with Silver's description of statistical rain forecasts. He writes that if 40% of the computer models indicate rain in Chicago, and the rest don't, it is reported as a 40% chance of rain. Sounds logical, but it is quite different from that. The "chance of rain" has different meanings in spring (plus summer) and autumn (plus winter). Spring and summer squall lines pass through areas that are well predicted by most GCM programs. But a squall line is not a solid front of rain; it is a line of thunderstorms. A light squall line may have storms half a mile wide, spaced 2-3 miles apart, giving 20% of the area a 100% chance of rain. The forecasters just don't know which 20%, so the whole area is given a 20% chance of rain. A heavy squall line will have larger storms with closer spacing, and maxes out at about 80% coverage (though this will probably be reported as "near certain"). Fall and early winter storms tend to be solid and widespread, but subject to ripples several miles wide in the upper atmosphere. As a system rides up a ripple, it drops rain along a solid band dozens or hundreds of miles long but only about a mile wide. As it rides down, it dries out. The height of the ripples determines whether the overall chance of rain is 30% or 70% or somewhere between. The ripples drift along as system after system rides through, so it is very hard to tell exactly where the rain will fall. Timing is everything. Then, a lower-level storm that just dumps (ignoring the ripples) leads to those 100% forecasts, which are generally accurate.
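Put another way, a "chance of rain" bundles together two things: how sure the forecaster is that rain will occur somewhere in the area, and how much of the area will get wet. A tiny sketch of that decomposition (this is essentially how the U.S. National Weather Service defines its probability of precipitation, as I understand it):

```python
# "Chance of rain" = (confidence rain occurs somewhere) x (fraction of area hit).
def chance_of_rain(confidence, areal_coverage):
    return confidence * areal_coverage

# The light squall line above: certain to arrive, ~20% of the area gets wet.
print(f"light squall line:   {chance_of_rain(1.0, 0.20):.0%}")
# A heavy squall line: certain to arrive, ~80% coverage.
print(f"heavy squall line:   {chance_of_rain(1.0, 0.80):.0%}")
# A marginal fall system: maybe a coin flip that it comes, but solid if it does.
print(f"marginal fall storm: {chance_of_rain(0.5, 1.0):.0%}")
```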
In most arenas, Silver advocates Bayesian analysis rather than "frequentist" simulations or estimations, because Bayesian methods allow individualized forecasts for particular cases. An example is the probability of breast cancer in a woman in her 40s who has just had the unwelcome news that a mammogram is "positive". The factors of a Bayesian calculation (a short worked sketch follows the list) are:
- x - the prior estimate: the chance that the proposition is true before the new evidence arrives (here, about 1.4% for breast cancer in a woman in her 40s).
- y - the chance of seeing the new evidence if the proposition is true (a mammogram correctly reading positive when cancer is present, about 75%).
- z - the chance of seeing the new evidence if the proposition is false (a false-positive mammogram, about 10%).
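Here is a minimal sketch of the calculation with those numbers plugged in; it reproduces the 9.6% and 44% figures discussed below.

```python
# Bayes' rule with the mammogram numbers above:
# x = prior, y = P(positive | cancer), z = P(positive | no cancer).
def posterior(x, y, z):
    return (x * y) / (x * y + (1 - x) * z)

x, y, z = 0.014, 0.75, 0.10
first = posterior(x, y, z)        # about 9.6% after one positive mammogram
second = posterior(first, y, z)   # about 44% if a second mammogram is also positive

print(f"after first positive: {first:.1%}; after second positive: {second:.1%}")
```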
In this case, the woman may wish for a needle biopsy, but a bit of blood chemistry may be in order first. Enzymes in the blood can indicate whether a new cancer is likely to be slow growing or faster. Is it slower (the most likely case)? She can wait a year for another mammogram. If the next mammogram is positive, re-do the analysis, replacing the 1.4% prior with the 9.6% result of the first calculation. Now the "new x" is just over 44%, and at the very least a biopsy is indicated. Most other forecasting methods don't use this kind of multi-step refinement. And by the way, if the next mammogram is negative (and no palpation can detect a lump, or any growth in an earlier lump), updating the 9.6% on a negative result, whose chances are 25% if cancer is present and 90% if it is not, brings the "new x" back down to about 2.9%, much closer to where she started.
Those who follow this blog may wonder why it took me three weeks to read such a fascinating book. The writing is good and the examples are interesting, so that didn't slow me down. We have a lot going on, however, so I have had much less time for reading than usual. Retirement has been good to me so far, but I have to be careful not to take on too many projects at once. I completed a Real Estate course and passed the test in July. However, I will probably not seek a license or become a Realtor®, because there are simply too many other things I'd prefer to do. The changed style and reduced frequency of my posts are a similar effect. I used to post almost every lunch hour, doing the research in off hours. I think I am working longer days than when I worked! Better busy than bored. Since retiring in February, I have put 24 items in my "job jar" file. Half of them, mostly the bigger ones, have been completed. One major item is awaiting an event that is at least a year in the future, but the preparations are nearly all complete. Others are smaller, so I can take an odd half day to do one. All things in their own time. In the meantime, I read when I can, and report what I read.