Kevin Yin is a contributing columnist for The Globe and Mail and an economics doctoral student at the University of California, Berkeley.
We’re so used to being bombarded with forecasts and predictions from economic analysts that it’s easy to forget just how often these corporate influencers are wrong.
At the beginning of August, the panic sell-off in equity markets was prompted in part by weak U.S. employment data that caught analysts significantly off guard.
Before that, in March, forecasters had expected 25,000 new jobs in Canada in February. Instead, we lost 2,200. And a month earlier, the number of jobs created in January came in at more than double what was predicted. A particularly comical case was October, 2022, when Canada created 10 times more jobs than analysts expected.
Making sense of the present is difficult enough. Knowing the future would be a lucrative skill indeed. That’s why banks, hedge funds, consultancies and think tanks hire scores of people in their research departments to do exactly this, with questionable results.
These analysts use a combination of statistical models, academic theory and personal judgment to churn out numbers and predictions for their clients to consume. Many, like me, are economists. And while many of our jobs require poring over pages and pages of such analysis, few of us would be willing to bet our own hard-earned cash on their views.
In defence of these forecasters, part of the issue is just that the problem is very hard. There are simply too many unknown unknowns in the economy and political environment to make reliable sense of it all. Often analysts get the mechanism right but the magnitude wrong. And sometimes new aspects of the problem become relevant without anyone having the courtesy of letting them know beforehand.
Even a diligent labour market analyst cannot know the inner thoughts of union leaders, political actors and foreign oil markets all at once. Yet any one of these confounding factors might intervene to throw off what was otherwise a principled forecast. Before 2008, the housing market hadn’t crashed since the Great Depression, and no one outside of a trading floor had heard of a reverse repurchase agreement, the key financial instrument of the bank run. The future is not always like the past.
But these challenges alone do not explain why many forecasts are systematically incorrect – that is, why they tend to clump so tightly together when they are wrong. Statistical models provide probabilities and error bands covering many likely scenarios, suggesting that purely quantitative methods would result in a whole array of predictions that could be right at least on average.
This herding effect is due to the flexibility that analysts have to ignore their models, and the incentive structure of the market for predictions. Being the only wrong one in the room might leave you without a job, but being wrong with everyone else can be forgiven. The future may not be like the past, but if no one else is pointing that out, it’s often safer to act as if it will be.
Thus, the discretion that analysts keep to adjust their forecasts, which was meant to allow them to account for confounding factors, also gives them the freedom to conform and play it safe. Your time series analysis, theoretical model or line of reasoning might lead you to a very different conclusion from your peers, but overstate these differences and it’s your neck on the line.
Yet while accusing the forecaster of snake-oil salesmanship will never go out of style, it’s worth reflecting on what the alternative would be. Demand for an informed view of the future would persist and, in the absence of methodical professionals, charlatans and dilettantes would rush in to fill the void. Politicians could make fanciful projections about the impacts of their policy platforms, and CEOs would be free to exaggerate their expectations about future earnings and market share, all unchecked by third-party rigour. Educated guesses would simply become guesses.
Part of the reason these predictions are still valuable is that when we consume them, we take in not only the predictions themselves but the narratives and mechanisms they allude to. Such analysis gives us an interlocutor of sorts, against whom we can argue and test our business plans, investment theses and political strategies. If they are correct, fantastic. If not, the falsification process is educational in its own right.
Thus, an expert forecast is less like snake oil and more like chemotherapy; it might not save you, but it’s certainly better than nothing. Wrong or not, it doesn’t seem like these analysts will be going anywhere any time soon.