kw: book reviews, nonfiction, forecasting, disciplines
Forecasting means different things in different realms. "Prediction is hard," said Yogi Berra, "especially about the future." If you're on a high hillside, looking down at a bend in a river, and a canoeist is paddling industriously downstream, you can see pretty well what he has in store, at least for the next few hundred yards. If there's a waterfall around the bend, it isn't hard to predict that he'll be in trouble if he doesn't soon pull to shore. But there's another factor: you don't know his plan. He may know the waterfall is there, and plan to go ashore and portage around it, or he may plan to go over, either because he's really good at riding the rapids or because he is suicidal. He may not know it is there, and is just exploring, in which case you can hope he has good hearing. Perhaps you can think of other possibilities.

Suppose instead you were asked, "How likely is it that Iran will produce a nuclear weapon in the coming 12 months?" Assuming you can find people without a political investment in that question, and you ask ten of them, will you get similar answers, or will they range from "No way, nohow!" to "I am certain they will"? You may get at least one person who says, "Maybe." I think you are most likely to hear, "How should I know?"…but there I am, making a sort of forecast myself!
Then there is the weather, and there is the climate. When a weather forecaster says, "70% chance of rain tomorrow," what does it mean? Is she talking about your city, your neighborhood, or the whole state (easier to contemplate for Rhode Island than for Texas!)? Let's assume it is a local forecast. Decades ago a weatherman explained on a radio program, "70% chance of rain means that 70% of the area will get rained on." I wonder if that is still true. In these days of supercomputers producing a fresh forecast for the whole earth about every hour, it may mean something else. For example, it may mean that when they run the forecasting software over and over again with tiny adjustments to the initial conditions, 70% of the runs predict rain in "your area". Or it may mean that, of the several hundred supercomputers being used with numerous versions of the modeling software, 70% of them predict rain. Or it may still mean "it will rain over 70% of the several square miles surrounding such-and-such a place."
That last version, akin to the version of half a century ago, is probably still what they mean. The prediction is often based on a squall line passing through. A loose squall line will produce a string of small storms that drop rain for a few miles between formation and dissipation. That might mean 30% to 50% of the area will get rained on. A tighter line will have larger storms, closer together, and we're in the 70% range. A heavy squall line will bring rain almost everywhere, and the forecast is 100% rain.
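Either way, the single number condenses many yes/no answers into one probability. Here is a minimal Python sketch of the two readings; the rain flags and area figures below are invented for illustration, not real model output.

# Hypothetical illustration: two readings of "70% chance of rain".
# All numbers are invented, not real forecast data.

# Ensemble reading: fraction of model runs that predict rain in "your area".
ensemble_runs = [True, True, False, True, True, True, False, True, False, True]
prob_from_ensemble = sum(ensemble_runs) / len(ensemble_runs)
print(f"Ensemble reading: {prob_from_ensemble:.0%} of runs predict rain")  # 70%

# Areal reading: fraction of the forecast area expected to get rained on.
area_total_sq_mi = 10.0
area_rained_on_sq_mi = 7.0
print(f"Areal reading: rain over {area_rained_on_sq_mi / area_total_sq_mi:.0%} of the area")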
What about climate? It is the average of "weather" over a period of decades. You can't tell from one year to the next whether changing weather patterns mean the climate is getting warmer or cooler, drier or wetter. I recently saw an article about cherry blossoms in Kyoto, Japan. They are blooming early this year, during the first week of April (today, the 7th, is the expected peak). The article stated that the "usual" peak is between April 10 and 17. Of course, "global warming" was mentioned. That's funny. Kyoto is south of Tokyo and warmer, and the cherry trees there bloom a few days earlier. We visited the Tokyo area from March 28 to April 11, 1992. The peak of the cherry blossoms was March 31. I presume the peak in Kyoto was closer to March 25. With that in mind, if this year's peak in Kyoto is more than a week later than the peak thirty years ago, neither datum means much about a changing climate.
Back to Iran. There is a reason for stating the question as I did, particularly "in the coming 12 months". The political winds are shifty. If the time frame were 10 years, the answers would, I hope, be different (maybe not!), and much less certain. Too many things can happen in a decade. Lots happens in just one year! But the study and research one would need to do to answer that question are, just barely, possible. Having studied and formed an answer, though, how good is it?
"How good is it?" is the subject of the work of Professor Philip E. Tetlock. He has conducted experiments with numerous forecasters for the past few decades, and has discovered factors that make some people better than others at it. Some are "superforecasters", and with Dan Gardner, he was written Superforecasting: The Art and Science of Prediction.
It so happens that superforecasters have certain characteristics. Super-intelligence is not one of them. Of course, they are intelligent, but the important factors include the humility to question their own assumptions and premises, the willingness to break out of false dichotomies, and the diligence to do lots of study and research. The book has a couple of lists of these factors, which in themselves constitute a good introduction to learning to make better predictions…if you're willing to do the work.
We have to forecast to be able to plan. I have two friends I should mention here. One expects the country to descend into anarchy, and soon. It's not just about President Biden, because he expected the same thing when Donald Trump was President. This conviction colors his thinking. His long-range plans include getting rich enough, quickly enough, to afford land in some out-of-the-way place.
The other friend is older and has a longer view. Remembering the decades of the Vietnam War, and the times that preceded it, and those that came after, he is more optimistic. He also likes suburban living. His long-range plans are very different, including his expectation of a long retirement filled with volunteering and service. If you were to place these two men in almost any situation you choose, then unless you knew their backgrounds and attitudes rather well, it would be of little use to make forecasts about the situation itself. Their own internal landscape would matter more than the external one. I dare say, if you were told just one thing about these two men, that one is a paranoid prepper and the other a contented suburbanite, it could greatly improve the accuracy of your forecast. Maybe.
That is a big maybe. Dr. Tetlock found that for most people, once they had researched a situation and made a prediction, new information was unlikely to have much effect on it. Superforecasters, by contrast, would frequently incorporate new information and revise a forecast; some would do so daily, or even more often.
It is helpful to remember that forecasting is like other skills: practice is needed. To learn to forecast, make lots of forecasts. BUT make them in a particular way: the result needs to be quantifiable. Don't say, "There is a significant chance of X happening." To you, "significant" may mean "a 75% or better likelihood"; to someone else, 15%; and to others, as little as 5% or even less. If 75% is what you mean, state it that way. Then keep score. One result isn't too useful. It takes three to see the inkling of a trend, and twelve to begin to do decent statistics. The book contains basic instructions on calculating a Brier score, a measure of a forecaster's accuracy. It is also useful to learn Bayesian methods of revising a prediction. Superforecasters who revised their estimates didn't indulge in large adjustments; they were more likely to change a 40% likelihood to 42% or 37%.
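To make both ideas concrete, here is a minimal Python sketch. The forecasts, outcomes, and likelihood ratio are invented for illustration, and the Brier score shown is the simple squared-error form (0 is perfect; always saying 50% scores 0.25), which may differ in detail from the variant the book describes.

# Scoring a run of quantified forecasts, plus a small Bayesian revision.
# All numbers below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared difference between stated probability and what happened.
    forecasts: probabilities (0.0-1.0) that the event would occur
    outcomes:  1 if it occurred, 0 if it did not
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Twelve forecasts -- enough, as noted above, to begin doing decent statistics.
forecasts = [0.75, 0.40, 0.90, 0.15, 0.60, 0.80, 0.30, 0.55, 0.70, 0.20, 0.85, 0.50]
outcomes  = [1,    0,    1,    0,    1,    1,    0,    1,    1,    0,    1,    0]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # lower is better

def bayes_update(prior, likelihood_ratio):
    """Revise a probability given new evidence, via the odds form of Bayes' rule."""
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Weak evidence (likelihood ratio just above 1) nudges 40% to about 42% --
# the small, frequent revisions superforecasters favor.
print(f"Revised forecast: {bayes_update(0.40, 1.10):.0%}")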
It is also helpful to know that these methods are all about the "what", and sometimes the "how", of something. The "why" is outside the realm of science or technology; it is frequently more of a theological matter.
Dr. Tetlock is noble enough to throw a monkey wrench into this whole matter by discussing his relationship with Nassim Taleb, author of The Black Swan, a book I reviewed in 2007. A Black Swan is an event nobody could have predicted, but one that changes everything. Fourteen years ago I was quite taken with the notion. Now I have a more nuanced view. Dr. Taleb contends that because Black Swans can't be predicted, one cannot account for them, and because they matter more than other events, nothing that can be predicted is going to matter enough.
One of the conceptual ills we encounter is either-or thinking. The idea that Black Swans make forecasting useless is either-or thinking. A superforecaster will weigh the possible influence of something unprecedented throwing everything else out the window, without needing to know just what that might be. This is the realm of the "unknown unknowns" of Donald Rumsfeld. This is also why, as Helmuth von Moltke observed (I paraphrase), "No battle plan survives contact with the enemy." Yet a good battle plan puts you in a position to re-evaluate and succeed anyway. I find it fascinating that neither Dr. Tetlock nor Dr. Taleb points out that as historical Black Swan examples accumulate, they allow us to re-adjust our knowledge of the expected range of possibilities, to take a better view of the distribution of possible events, and so to make better forecasts and more robust plans. We just need to have, or to gain, the wisdom to discern whether a particular Black Swan is truly unprecedented, or whether it is an extreme event that happens more frequently than we'd been led to believe.
For example, the original Black Swans were discovered in Australia in 1697. They were not new to Australians, of course. They were new to Europeans, who knew only white swans. The Western white swan, Cygnus olor, and the Australian black swan, Cygnus atratus, are related species in the genus Cygnus, but have been separated by half a world for tens of thousands of years. There are four other species in the genus Cygnus, including the black-necked swan, C. melancoryphus. Its existence could have clued Europeans in to the possibility that swans somewhere else on Earth could be black, but it did not.
Other examples used to illustrate Black Swans include stock market booms and busts, and extreme flooding events. Just last year I discussed a series of mega-floods that moved large boulders across a flood plain south of Sturgis, SD; the context, oddly enough, was a new way to analyze daily (or weekly or even monthly) changes in stock prices. Considering the floods, I realize that the extreme flooding events in the Black Hills may not be extreme examples of the usual flood frequency regime. They are just as likely to be symptoms of extreme weather events that "work differently" from the ones that occur yearly and decade to decade. It wasn't just a thousand-year flood that moved 20-ton boulders half a mile or more. It may have been a 5,000-year flood, except that floods like it happened twenty times in 5,000 years. So something different was happening, and I don't really know what it was. However, we now know that floods of that magnitude occurred about every 250 years in the past, so it's best to take that into account if you want to build anything on the outwash plain east of the Black Hills.
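The arithmetic behind that advice is worth a moment. Here is a minimal sketch using the figures above (twenty floods in 5,000 years) and an invented 50-year planning horizon; it assumes such floods arrive independently from year to year, which may well be wrong for events that cluster.

# Recurrence arithmetic for the Black Hills floods described above.
# The 20-floods-in-5,000-years figure is from the text; the 50-year
# horizon is an invented example. Year-to-year independence is assumed.

floods = 20
span_years = 5000
recurrence_years = span_years / floods      # 250 years between floods, on average
annual_prob = 1 / recurrence_years          # 0.4% chance in any given year

horizon_years = 50
prob_in_horizon = 1 - (1 - annual_prob) ** horizon_years
print(f"Recurrence interval: {recurrence_years:.0f} years")
print(f"Chance of at least one such flood in {horizon_years} years: {prob_in_horizon:.0%}")
# -> about 18%: small in any single year, but not something to ignore when building.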
How could you become a superforecaster? An appendix in the book describes the basic skills, and another invites any reader to join the Good Judgment Project, at www.goodjudgment.com. Enjoy!