What if…it had been worse?

Lloyd’s of London recently published a useful paper (https://www.lloyds.com/news-and-insight/risk-insight/library/understanding-risk/reimagining-history) on counterfactual risk analysis- a topic that will appeal to viewers of “The Man in the High Castle”, or readers of the novel Fatherland.

Counterfactuals, or “what ifs?”, are interesting because they require one to consider how even a small change can have a significant impact. After all, the Russian Revolution succeeded almost in spite of its primary actors; but what if it had not?

Turning to the more prosaic world of (re)insurance, the world of NatCAT is full of “what ifs?”- for example, what if the track of Hurricane Irma had been 20 or so kilometres further east, such that it hit Miami directly?

As the Lloyd’s paper quite properly (and importantly) points out, there are both “upward” and “downward” counterfactuals- the former asking what would have happened if things had turned out better; and the latter if they had been worse. One does not really want to operate on the basis of the “upward” approach!

Thinking about and modelling different potential outcomes, while now carried out on an industrial scale both inside and outside (re)insurers, is still prone to bias and the lure of “commonality”. Paradoxically, if a regulator requires those it supervises to model the outcome of a series of standardized scenarios, it may create for itself a useful comparison of relative vulnerabilities or weaknesses across its charges, yet at the same time run the risk of causing them (and itself) to be constrained in thinking about risks that fall outside the dataset used.

Therefore, it becomes important to think about risk with less constraint, because “out of experience” or “unmodelled” events have the unfortunate habit of occurring rather more often than they “should”. Modelling the probability of defined risks to a 1-in-100-, 1-in-250-, or even 1-in-10,000-year standard is all very well, but what if it is the “wrong” risk, or the estimate of probability or consequence is seriously flawed? What if it is “not in model”?
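Even taking a return-period model at face value, such events are far from rare over a realistic planning horizon. As a toy illustration (a back-of-envelope sketch, not any (re)insurer’s actual model), and assuming independent years, the chance of at least one exceedance of a “1-in-N-year” threshold over a multi-year horizon is:

```python
# Toy illustration: probability of at least one "1-in-N-year" event
# over a multi-year horizon, assuming independent years
# (itself a strong, and often questionable, assumption).

def exceedance_prob(return_period_years: float, horizon_years: int) -> float:
    """P(at least one exceedance) = 1 - (1 - 1/N)^T."""
    annual_prob = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_prob) ** horizon_years

if __name__ == "__main__":
    for n in (100, 250, 10_000):
        p = exceedance_prob(n, 30)
        print(f"1-in-{n}-year event over 30 years: {p:.1%}")
```

Over a 30-year horizon, a 1-in-100-year event has roughly a 26% chance of occurring at least once, and a 1-in-250-year event roughly an 11% chance- and that is before allowing for any of the model risk described above.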

Of course, some risks are inherently constrained or obviously bounded. For example, if one is an unleveraged equity or debt investor, one can only lose 100% of one’s investment; whereas the risk of loss from, say, a rapid series of sequential and inter-connected defaults by large, highly-leveraged financial institutions can wreak damage far beyond anyone’s expectations, or those of any then-extant model.

In the realm of (re)insurance, managements will also argue that they have controls over aggregation of risks; set careful limits based upon rigorous technical underwriting; and, naturally, have a carefully-crafted programme of reinsurance and/or retro in place. However, given that each (re)insurer is an autonomous actor, which tries to protect the proprietary nature of its risk management protocols, what if all the assumptions about network linkages and effects are wrong and the “Big One” (whatever it is) occurs? Having focused on first-order effects, they potentially miss the second- or third-order ones.

Interestingly, the issue with counterfactual analysis is not that something could not have happened, but more often that it was not conceived of as capable of happening; or a necessary connection was not made; or an event forgotten. Intriguingly, the Lloyd’s paper also makes the point that many European languages do not have any expression equivalent to English’s “counterfactual history”. As Wittgenstein said: “The limits of my language mean the limits of my world”, which raises the question of vocabulary as a constraint upon conceptualization.

In the world of NatCAT, post-event analysis often tends to be limited to understanding what happened and why; but gives little, if any, consideration to what might have happened, which is unfortunate because, yet again, it means that thinking and modelling become self-limited. One has to believe that, in an era where the resources available to model and develop scenarios are becoming ever more powerful, it should be possible to generate counterfactual scenarios and outcomes with more frequency than appears currently to be the case.

At Awbury, we do not pretend to be infallible. However, we do believe that our thinking should be as unconstrained as possible, so that we minimize the risk of downward counterfactuals.

The Awbury Team


Innovate effectively, or become a Zero…

The news of the creation by Google’s DeepMind unit of an enhanced and self-taught AlphaGo Zero, which trounced its own progenitor AlphaGo, and appears to play not only differently, but at a different level from human experts, set the Awbury Team thinking about the topic of innovation and change.

FinTech and InsureTech are all the rage, with more and more (re)insurers announcing “venture” units that seek access to new ideas and technologies externally; while predictions are made constantly about the industry being yet another one to be “disrupted” (another newly-fashionable term). Executives may wish to ponder the meaning and consequences of the word, and the fact that it implies that they are in danger of losing control over the destiny of their own businesses to others- disrupt, or be disrupted, because business as usual is not an option for most.

Rightly or wrongly, there is also a perception that the (re)insurance industry itself is “out of ideas” and at the mercy of the latest “fad”, as it seeks desperately to find revenues that will augment its current “zero-sum”, commoditized business lines, and tries to justify losing tens of billions of dollars as a result of the recent spate of NatCATs, wiping out years of profit accumulation.

One can see similar patterns in other industries, where the incumbents, having become complacent, find that they are no longer able to generate significant ideas themselves, and have to rely on third parties to do the basic research that they themselves used to do- “Big Pharma” now being a classic example. The glory days of Bell Labs and Xerox PARC are long gone, while many governments seem to regard funding basic research (other than for “defence” and “national security”) with disdain, because it does not serve the particular vested interests to which they are in thrall. Clearly, there are pockets of excellence such as DeepMind and various university research departments, but too much of what passes for innovation is merely disruption and a “re-hash” rather than original- such as Uber, or WeWork. The so-called “gig economy” is hardly a step forward in human progress.

This matters; because, while the “D” in “R&D” is also essential, without the basic research on which it builds, there can be no progress. Incrementalism is all very well, but Humanity’s welfare has historically improved because of step-changes resulting from different modes of thinking.

So, the (re)insurance industry needs to focus on how it can itself generate truly fresh ideas that will enhance its offering and margins. If it does not, others will consume its premia, and it will become increasingly obsolescent, as investors lose patience with moribund returns that do not even meet the cost of capital. There is absolutely no reason why a Google, Amazon, or Apple cannot create (re)insurance businesses built upon new business models, and almost certainly using artificial intelligence (AI).

The ability to generate executable and scalable new intellectual property is a fundamental requirement for any business that wishes to survive and prosper. Note the use of the word “executable”. Paradoxically, while basic research and new ideas underpin progress, if one cannot then execute on them, the enterprise is pointless. One’s lunch will still be eaten!

At Awbury, while we regard ourselves as an integral part of the industry, our franchise depends upon both innovation and execution; and we have absolutely no intention of becoming “zeros” as we build for the long term.

The Awbury Team
