What if…it had been worse?

Lloyd’s of London recently published a useful paper (https://www.lloyds.com/news-and-insight/risk-insight/library/understanding-risk/reimagining-history) on counterfactual risk analysis, a topic that will appeal to viewers of “The Man in the High Castle”, or readers of the novel Fatherland.

Counterfactuals, or “what ifs?”, are interesting because they require an individual to consider how even a small change can have a significant impact. After all, the “Russian Revolution” succeeded almost in spite of its primary actors; but what if it had not?

Turning to the more prosaic world of (re)insurance, the world of NatCAT is full of “what ifs?”. For example, what if the track of Hurricane Irma had been 20 or so kilometres further east, such that it hit Miami directly?

As the Lloyd’s paper quite properly (and importantly) points out, there are both “upward” and “downward” counterfactuals: the former asking what would have happened if things had turned out better; the latter, if they had been worse. One does not really want to operate on the basis of the “upward” approach!

Thinking about and modelling different potential outcomes, while now carried out on an industrial scale, both inside and outside (re)insurers, is still prone to bias and the lure of “commonality”. Paradoxically, if a regulator requires those it supervises to model the outcome of a series of standardized scenarios, it may create for itself a useful comparison of relative vulnerabilities or weaknesses across its charges, yet at the same time run the risk of causing them (and itself) to be constrained in thinking about risks that fall outside the dataset used.

Therefore, it becomes important to think about risk with less constraint, because “out of experience” or “unmodelled” events have the unfortunate habit of occurring rather more often than they “should”. Modelling the probability of defined risks to a 1-in-100-, 1-in-250-, or even 1-in-10,000-year standard is all very well, but what if it is the “wrong” risk, or the estimate of probability or consequence is seriously flawed? What if it is “not in model”?
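As a purely illustrative aside (the figures below are arithmetic, not a model of any real peril), a “1-in-N-year” standard translates into an annual exceedance probability of 1/N, and even a 1-in-100-year event becomes surprisingly likely to occur at least once over a multi-year horizon:

```python
# Illustrative arithmetic only: convert "1-in-N-year" return periods into
# annual exceedance probabilities, and show the chance of at least one
# exceedance over a 25-year horizon (assuming independent years).
return_periods = [100, 250, 10_000]
horizon_years = 25

for n in return_periods:
    p_annual = 1 / n                                   # annual exceedance probability
    p_horizon = 1 - (1 - p_annual) ** horizon_years    # P(>=1 exceedance in horizon)
    print(f"1-in-{n:>6}: annual p = {p_annual:.4%}, "
          f"p over {horizon_years} years = {p_horizon:.2%}")
```

A “1-in-100-year” event, for instance, has roughly a 22% chance of occurring at least once in 25 years under these assumptions, which is rather less comforting than the label suggests.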

Of course, some risks are inherently constrained or obviously bounded. For example, if one is an unleveraged equity or debt investor, one can only lose 100% of one’s investment; whereas the risk of loss from, say, a rapid series of sequential and inter-connected defaults by large, highly-leveraged financial institutions can wreak damage far beyond anyone’s, or any then-extant model’s, expectations.

In the realm of (re)insurance, managements will also argue that they have controls over aggregation of risks; set careful limits based upon rigorous technical underwriting; and, naturally, have a carefully-crafted programme of reinsurance and/or retro in place. However, given that each (re)insurer is an autonomous actor, which tries to protect the proprietary nature of its risk management protocols, what if all the assumptions about network linkages and effects are wrong and the “Big One” (whatever it is) occurs? Having focused on first-order effects, they potentially miss the second- or third-order ones.

Interestingly, the issue with counterfactual analysis is not that something could not have happened, but more often that it was not conceived of as capable of happening; or a necessary connection was not made; or an event forgotten. Intriguingly, the Lloyd’s paper also makes the point that many European languages do not have any expression equivalent to English’s “counterfactual history”. As Wittgenstein said: “The limits of my language mean the limits of my world”, which raises the question of vocabulary as a constraint upon conceptualization.

In the world of NatCAT, post-event analysis often tends to be limited to understanding what happened and why, but gives little, if any, consideration to what might have happened, which is unfortunate because, yet again, it means that thinking and modelling become self-limited. One has to believe that, in an era where the resources available to model and develop scenarios are becoming ever more powerful, it should be possible to generate counterfactual scenarios and outcomes with more frequency than appears currently to be the case.
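To make the idea concrete, one simple way to generate “downward counterfactuals” is to perturb a key parameter of a historical event and re-evaluate losses under each variant. The sketch below is entirely hypothetical: the loss function, the parameter values, and the names are stand-ins for a real catastrophe model, not any actual one.

```python
# A minimal sketch (hypothetical names, values, and loss function) of
# downward-counterfactual generation: perturb a recorded event's key
# parameter (here, landfall offset in km from a dense exposure zone)
# and re-evaluate modelled losses under each perturbed variant.
import random

def modelled_loss(landfall_offset_km: float) -> float:
    # Stand-in for a real catastrophe model: losses (in $bn, illustrative)
    # rise steeply as the track approaches the exposure zone at offset 0.
    return 50.0 / (1.0 + abs(landfall_offset_km) / 10.0)

random.seed(42)
actual_offset = 20.0                      # e.g. the track passed ~20 km east
actual = modelled_loss(actual_offset)

# Draw 1,000 perturbed tracks and count how many would have been worse.
perturbed = [modelled_loss(actual_offset + random.gauss(0, 15))
             for _ in range(1_000)]
share_worse = sum(loss > actual for loss in perturbed) / len(perturbed)

print(f"Actual modelled loss: ${actual:.1f}bn; "
      f"{share_worse:.0%} of perturbed tracks would have been worse")
```

Under these toy assumptions, roughly half of the plausible nearby tracks produce a larger loss than the one that actually occurred — precisely the kind of result that post-event analysis limited to “what happened” never surfaces.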

At Awbury, we do not pretend to be infallible. However, we do believe that our thinking should be as unconstrained as possible, so that we minimize the risk of downward counterfactuals.

The Awbury Team
