Kazakhstan and the limits of risk prediction

The January 1st edition of The Economist has an interesting graphic entitled "The year in probabilities: the implied chance of happening in 2022". There's no mention of Kazakhstan. I turned to the Europe and Asia news sections, just in case, but it doesn't seem to get a mention there either. And it isn't just The Economist that missed this one: Kazakhstan doesn't feature prominently in most of the global risk maps and special reports issued by risk consultancies.

Is it a failure of analysis, or a failure to understand the limits of prediction?

Plenty of companies offer platforms that promise reliable analytics, expert human analysis and more. Yet none of them predicted political unrest in a key Eurasian nation-state.

So what’s going on? 

One of the biggest problems in risk management is a tendency to assume that complex systems are predictable. And a lot of the products and services that risk consultancies promote are based on this assumption. If a product uses new and exciting analytics, then it must produce better predictions, right? 

Maybe not. What makes a system complex? In human affairs, it is usually that the inputs to the system are the product of human decisions, and humans are not consistently rational in the face of uncertainty. Complexity of this kind makes a system more accident-prone, more dynamic, and therefore less predictable than any analytical model can handle. To make matters worse, different types of situations carry different types of uncertainty: how does your analytical model quantify all of this unquantifiable stuff? How do you present structural and fundamental problems on a screen?

These limits to prediction are well known in the investment world. The best traders admit to calling it right about 55 to 65 percent of the time, but “the winners tend to make three or four times what the losers cost”. They also recognise the “naivety of black box applications” and pay at least as much attention to focused qualitative analysis as they do to the quantitative models. In this context, quantitative models are useful for tracking and monitoring trends but they aren’t reliable prediction engines.
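The arithmetic behind that trading observation is worth making explicit. A minimal sketch, using the hit rates and payoff ratios quoted above (the one-unit loss size is an illustrative assumption):

```python
# Why a modest 55-65% hit rate is profitable when winners pay 3-4x losers.
# Hit rates and payoff multiples are the figures quoted in the text;
# the one-unit loss is an illustrative assumption.

def expected_value(p_win: float, win_multiple: float, loss: float = 1.0) -> float:
    """Expected profit per unit risked, given a hit rate and payoff ratio."""
    return p_win * win_multiple * loss - (1 - p_win) * loss

# Lower bound of the quoted range: 55% hit rate, winners make 3x losers.
low = expected_value(0.55, 3.0)   # 0.55*3 - 0.45 = 1.20
# Upper bound: 65% hit rate, winners make 4x losers.
high = expected_value(0.65, 4.0)  # 0.65*4 - 0.35 = 2.25

print(f"Expected value per unit risked: {low:.2f} to {high:.2f}")
```

The point is that the edge comes from the asymmetry of outcomes, not from predictive accuracy: even being wrong nearly half the time leaves a strongly positive expectation.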

We take quantitative analytical models seriously, but we don’t promote using them alone in situations where the information is unstable and dynamic. One qualitative model we’ve found very helpful is a constraints-based framework called the Net Assessment. The Net Assessment relies on diligent research of discrete problems to organise various possible scenarios into a decision tree. This gives decision makers a visual representation of plausible outcomes, and helps to formalise the choices available to deal with them. A great strength of this approach is that it allows decision makers to ‘see’ the fulcrum constraint, which is the material constraint that inhibits their opportunities and choices of action in any scenario. 
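To make the idea of a scenario tree with a fulcrum constraint concrete, here is a minimal sketch. The scenario names, constraints, and confidence figures are entirely hypothetical, and the "most-weighted constraint" heuristic is a crude stand-in for analyst judgement, not the actual Net Assessment method:

```python
# Hypothetical sketch of a net-assessment-style scenario tree.
# All names, constraints, and confidence values below are invented
# for illustration; they are not drawn from a real assessment.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    confidence: float              # conditional confidence, not a prediction
    constraints: list[str]         # material constraints active in this branch
    children: list["Scenario"] = field(default_factory=list)

def fulcrum_constraint(root: Scenario) -> str:
    """Return the constraint carrying the most confidence-weighted branches --
    a crude proxy for the fulcrum constraint an analyst would identify."""
    counts: dict[str, float] = {}
    def walk(node: Scenario) -> None:
        for c in node.constraints:
            counts[c] = counts.get(c, 0.0) + node.confidence
        for child in node.children:
            walk(child)
    walk(root)
    return max(counts, key=lambda c: counts[c])

tree = Scenario("Unrest continues", 1.0, ["security-force loyalty"], [
    Scenario("Government concessions", 0.5, ["fiscal capacity"]),
    Scenario("External intervention", 0.3,
             ["security-force loyalty", "alliance commitments"]),
    Scenario("Crackdown", 0.2, ["security-force loyalty"]),
])

print(fulcrum_constraint(tree))  # "security-force loyalty"
```

Even a toy structure like this shows the value of the visual form: the same constraint recurring across the most plausible branches is exactly what a decision maker needs to 'see'.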

As Marko Papic puts it in his brilliant book on the subject, preferences are optional and subject to constraints, whereas constraints are neither optional nor subject to preferences. It is therefore essential that analysis identifies the constraints as a first step towards choosing preferred courses of action in decision making.

The approach uses probabilities, but these are better understood as conditional statements of confidence, not predictions. A net assessment is meant to be revised as the situation unfolds through time, so treat it as the starting point of a forecast rather than the forecast itself.
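One simple way to make "conditional confidence revised over time" concrete is a Bayesian update. A minimal sketch, in which the prior and the likelihoods attached to the new evidence are illustrative assumptions, not figures from any real assessment:

```python
# Sketch of revising conditional confidence as new information arrives,
# via a single Bayes update. Prior and likelihoods are illustrative
# assumptions, not real analytical inputs.

def update(prior: float, p_evidence_if_true: float,
           p_evidence_if_false: float) -> float:
    """Posterior confidence in a scenario after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

confidence = 0.30  # hypothetical initial confidence that unrest escalates
# Hypothetical new report arrives, judged much more likely under escalation.
confidence = update(confidence, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(round(confidence, 2))  # 0.63
```

The number that comes out is not a prediction; it is a statement of how much the new information should shift confidence, conditional on the assumptions fed in, which is exactly the spirit in which a net assessment's probabilities should be read.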
