A better way of doing analysis

When was the last time you enjoyed reading a long security risk assessment or political risk report? Or even found one particularly useful and used it as a basis for making big decisions about project feasibility? In my past life as the head of consulting for a political risk firm, I spent too much of my time trying to rescue draft reports written by analysts before they landed in a client’s inbox. Even though clients told us they wanted reports that were short and to the point, they tended to be long-winded and anything but readable. To paraphrase the Duke of Wellington, ‘I don’t know what effect these reports will have upon the client, but by God, they bore me.’

There is another way of doing things, as Marko Papic lays out in Geopolitical Alpha. Papic isn’t scared of taking an out-of-consensus view, and he’s justified in doing so because he’s taken the time to work out a coherent framework for thinking about complex problems that are rooted in uncertainty.

Papic is writing for the investment community, but his ideas are equally relevant to other disciplines including security, ESG and more operationally focused country-political risk analysis. His framework is a viable alternative to compliance-driven approaches in ESG, or the persistent but flawed risk matrix approach to security risk assessments in the security field, for example. All of these approaches have their uses in a routine risk environment, but they fail in the face of open-ended complex risks, where the lack of information demands real qualitative judgement. 

The quality of the data matters more than its quantity, especially in circumstances where a complete data set is impossible to obtain. The problem with security and societal risks in particular is that the main variable is human behaviour. And despite what conventional economic theory says, humans don’t tend to make rational choices in the face of uncertainty. This makes your risk scenarios inherently unpredictable.

For example, take a nearshore natural gas exploration project in Mozambique, or a copper mining project in a remote area of Pakistan, and add the complications of societal issues and the early signs of insurgency. Now try to leverage data in innovative ways to produce a dependable prediction about risk. It isn’t possible.

Papic is correct in pointing out that good analysis of complex scenarios thrives on uncertainty, the paucity of high-quality information, and concept-driven (as opposed to data-driven) analysis. Instead of shying away from these challenges – or trying to fit the qualitative square peg into the quantitative round hole – his approach admits that good analysis can be subjective and qualitative.

It’s about forecasting, not prediction. George Soros got this right years ago with his explanation of reflexivity in stock markets. Likewise, Papic does a good job of explaining the difference between conditional and subjective probabilities, and how to use them to good effect in producing realistic and actionable risk scenarios. Probability (or likelihood, same thing) is the least understood concept in security and other types of risk assessment, and I’ve seen it lead to all sorts of serious problems. One of the reasons this happens so often is that ‘probabilities’ generated by tick-box risk assessments are taken as hard predictions, when they’re better understood as statements of confidence in forecasting the occurrence and consequences of plausible scenarios. As Papic explains, in Bayesian reasoning your initial ‘prediction’ isn’t meant to be correct, and you’re supposed to change it as new evidence emerges; but generally this doesn’t happen in security risk assessment, in particular. The differences between forecasting and prediction are subtle, and this section of the book is worth careful consideration.
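To make the Bayesian point concrete, here is a minimal sketch of a single update step. The scenario, the prior and the likelihood figures are all hypothetical, invented for illustration; nothing here comes from Papic’s book. The point is only that the ‘probability’ attached to a scenario is a degree of confidence that should move as evidence arrives, not a fixed prediction.

```python
# Illustrative only: one Bayesian update of a subjective probability.
# All numbers are hypothetical, chosen to show the mechanics.

def bayes_update(prior: float, p_evidence_given_h: float,
                 p_evidence_given_not_h: float) -> float:
    """Return P(H | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / denominator

# Initial forecast: 20% confidence that an insurgency escalates.
prior = 0.20
# New evidence (say, an attack on infrastructure) is judged three times
# more likely if the scenario is unfolding (0.6) than if it isn't (0.2).
posterior = bayes_update(prior, 0.6, 0.2)
print(round(posterior, 2))  # 0.43
```

A tick-box assessment would have left the original 20% standing; the Bayesian habit is to revise it, here to roughly 43%, and to keep revising as further evidence comes in.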

Scenarios are the key to making qualitative analysis work. This is another area where standard approaches to security risk assessment get it badly wrong. A useful scenario isn’t a sentence or two squeezed into a risk assessment template, followed by scores for ‘likelihood’ and ‘impact’ with scant explanation. Nor is it useful to fill a template with any number of such scenarios in an effort to appear thorough. Instead, there’s a good case for focusing hard on a small number of plausible scenarios that have a realistic potential to threaten feasibility and viability, and using these as the basis for understanding fulcrum constraints and opportunities.

Papic goes into good depth on this subject, which he frames within the concept of the Net Assessment and competing hypothesis analysis. This sounds complicated but isn’t, and the basic ideas should be familiar to military planners, for example. However, Papic does a great job of explaining how to operationalize the framework in his field of geopolitical forecasting.

Papic brings it all together by offering the decision tree as a simple and effective tool for visualizing the results of analysis. Analysts and security practitioners do seem to like writing long reports, but I wonder who enjoys reading them. On the other hand, risk owners and decision makers might prefer shorter reports that present a few plausible scenarios along with an easy visualization of how they line up. This shifts the emphasis away from tiresome reports to a more direct engagement between analyst and client, offering a clearer channel for explanation and challenge. 
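As a rough illustration of the kind of visualization a decision tree offers, the sketch below renders a small scenario tree as indented text. The scenarios, branch labels and probabilities are entirely hypothetical, invented here to show the shape of the output; this is not a reconstruction of any example from the book.

```python
# Hypothetical scenario tree for illustration; labels and probabilities invented.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    prob: float = 1.0                      # subjective branch probability
    children: list = field(default_factory=list)

def render(node: Node, indent: int = 0) -> str:
    """Render the tree as indented text, one branch per line."""
    lines = [f"{'  ' * indent}{node.label} (p={node.prob:.2f})"]
    for child in node.children:
        lines.append(render(child, indent + 1))
    return "\n".join(lines)

tree = Node("Project proceeds", 1.0, [
    Node("Insurgency contained", 0.6, [
        Node("Operations continue", 0.8),
        Node("Costlier security posture", 0.2),
    ]),
    Node("Insurgency escalates", 0.4, [
        Node("Suspend operations", 0.7),
        Node("Exit country", 0.3),
    ]),
])
print(render(tree))
```

Even a plain-text tree like this puts the handful of plausible scenarios, and the analyst’s confidence in each branch, on a single page that a decision maker can challenge directly.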
