Indie rock band Kaiser Chiefs famously predicted mass unrest. The Sun’s resident astrologer Mystic Meg regularly foretells unexpected bounties with Mercury rising. And every economist under the sun is trying to predict what will happen after Brexit.
But, in an era of seemingly unending economic and political uncertainty, who should we listen to and why?
One option is to forget the professionals and rely on our own judgement, Home Alone-style. To Michael Gove’s point, who needs experts anyway? But while it’s arguably the easiest option, research shows that we could be making some dud predictions if we go with gut instinct alone.
Firstly, we often have insufficient knowledge to make an accurate judgement. While some decisions are relatively simple, building an accurate picture of something as complex as potential economic growth and business performance is very difficult.
Secondly, even if we switch on the brain and engage in deep, deliberative thought about the likely future, we’re susceptible to a whole series of behavioural biases, perhaps the biggest of which is our natural tendency to be overly optimistic. Seeing the world through rose-tinted glasses means that our own forecasts are rarely reliable.
And thirdly, when we do make considered forecasts, we very rarely evaluate how accurate they turn out to be. By failing to consistently compare our predictions with what actually happens, we don’t learn from experience.
So, if few of us are good at forecasting, should we rely on experts and well-known pundits? And, if so, which ones?
Philip Tetlock has spent decades looking at the science of forecasting. His early research explored how good professional forecasters are across a broad range of domains.
Tetlock found that when the accuracy of forecasts is properly evaluated against what actually happens, professional forecasters perform little better than chance. As he quips, their likely accuracy is little better than that of a chimp throwing darts at a dartboard (in the hope of hitting a specific number).
Sadly, the trend largely applies to forecasts and predictions made by big businesses. Organisations, which are often focused on short-term impacts on the bottom line, are particularly bad at learning – ironically, they rarely profit from their own experience.
In subsequent research, Tetlock and his team held tournaments to see who is good at forecasting. Interestingly, they found a group of genuine superforecasters, but these made up only 5% of the people tested.
The superforecasters share a number of common characteristics. Unsurprisingly, they are knowledgeable, intelligent and inquisitive. But this is only a small factor in their predictive prowess. Much more important is the thinking process that they follow, which gives them a much greater ability to make accurate predictions.
Specifically, superforecasters are highly adept at following a process that breaks questions or problems into sensible, and manageable, smaller parts.
Tetlock lauds the example set by the Italian physicist Enrico Fermi. When asked to estimate how many piano tuners there are in London, rather than plucking a number out of the air, he considered:
- How many people in his street have pianos?
- How many streets are there in London?
- Given the first two estimates, how many pianos are there in London?
- Approximately how often do pianos need to be tuned?
- How long does it take to tune a piano?
- How many hours does a piano tuner typically work?
An example that may not be music to everyone’s ears, but a process that can be followed for other questions – from likely ROI on new marketing initiatives, to expected performance from colleagues.
Essentially, it is slightly easier to estimate a series of micro issues and build them into a global picture than the other way around.
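The piano-tuner decomposition above can be sketched as simple arithmetic. Every input number below is an illustrative assumption for the sake of the example, not a real London statistic:

```python
# Fermi-style decomposition of "How many piano tuners are there in London?"
# All inputs are illustrative assumptions, not real data.

households = 3_500_000          # assumed number of households in London
pianos_per_household = 1 / 50   # assume 1 in 50 households owns a piano
tunings_per_year = 1            # assume each piano is tuned once a year
hours_per_tuning = 2            # assume a tuning (plus travel) takes ~2 hours
hours_per_tuner_year = 1_600    # assume a tuner works ~1,600 hours a year

pianos = households * pianos_per_household
tuning_hours_needed = pianos * tunings_per_year * hours_per_tuning
tuners = tuning_hours_needed / hours_per_tuner_year

print(round(tuners))  # an order-of-magnitude estimate, not an exact answer
```

The point is not the final number but the method: each small estimate can be sanity-checked on its own, so errors tend to cancel rather than compound.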
A second important characteristic of superforecasters is that they regularly record and evaluate their own predictions. They test the accuracy of their calculations, regularly determining how far over or under they were. This helps to expose flaws in their thinking and address optimism and related biases. And it also improves the likely accuracy of their future predictions.
Most people and organisations forget to assess the accuracy of their estimate, or just assume they got it right.
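The habit of recording and scoring predictions can be sketched with the Brier score, a standard measure for probabilistic forecasts (the forecasts and outcomes below are made up for illustration):

```python
# Brier score: mean squared difference between the forecast probability
# and the actual outcome. Lower is better; always saying "50%" scores 0.25.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if it happened, else 0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A made-up record of past predictions and what actually happened.
forecasts = [0.9, 0.7, 0.2, 0.6]
outcomes = [1, 1, 0, 0]

print(brier_score(forecasts, outcomes))  # lower means better-calibrated forecasts
```

Keeping a running score like this makes over- or under-confidence visible, which is exactly the feedback loop most people and organisations skip.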
In an attempt to improve, many organisations are increasingly reliant on statistical modelling and machine learning. However, the research suggests that the best judgements are often a combination of intuition and technology.
One thing is for certain, there will be no shortage of pundits making predictions over the next few weeks.
An assessment of the science of prediction suggests we should all look at the process and evidence behind the forecast to determine who to listen to. Is the forecaster consistently accurate? Is the big prediction made up of a series of small assessments that make the whole forecast more likely?
As for Brexit? I predict a riot.