Every complex entity, whether a business, a government, or any other organization, tracks (or certainly should!) the things that go badly wrong and have an immediate impact: bankruptcies, air crashes, deaths in hospital, crime levels, and so on. Almost by definition, the (re)insurance industry obsessively tracks many “bad things” and tries to forecast their probability, as they may become claims and so losses.
However, the tracking of so-called “near misses” appears somewhat less prevalent and systematic. This is partly a function of human nature. We are obviously happy when something bad does not happen, and so tend to dismiss it as unimportant.
Unfortunately, actual events form only part of the overall dataset for identifying and understanding what might happen. How an outcome was avoided provides a better idea of the scale of potential risks, their level of randomness, and whether avoidance was “accidental”, or the result of a conscious decision or protective protocol.
Airlines have long tried to instill processes that encourage pilots and others to report “near misses” or “close calls”. For example, the US FAA has a confidential reporting database called the Aviation Safety Reporting System (ASRS) for precisely that purpose. A similar approach is increasingly used in the medical profession to try to curb deadly and expensive medical errors.
In 2018, EIOPA published its own report on the topic (https://www.eiopa.europa.eu/sites/default/files/publications/pdfs/eiopa_failures_and_near_misses_final_1_0.pdf) focused on insurance failures and near failures. Perhaps not surprisingly, one of the key findings was the impact of inadequate or failed systems of corporate governance and overall controls. If one does not have clear and effective processes in place to identify, manage and mitigate risks, the potential for failure increases significantly. One would think that would be blindingly obvious for any (re)insurance business! Yet, while large-scale P&C company failures are quite rare, one does wonder how well the “rampant positivism” of most public pronouncements at the company level squares with industry-level angst about matters such as climate risk, and about the extent to which correlation across portfolios means that “management” of aggregations masks the true level of tail risk.
A persistent problem is that reporting a “near miss” can, in itself, be seen as somehow disloyal or disruptive, even if (or especially if) the person making the report was neither involved nor responsible. As a result, it is highly probable that many issues are under-reported, and so valuable data are lost.
In the realm of underwriting credit and related risks, a “near miss” would be analogous to being approached by a potential client to consider exposure to a particular Obligor generally regarded as “popular” and a sound risk by other markets (Wirecard and WeWork come to mind), but declining to do so because one could not construct a defensible thesis that the risk/reward would ever be acceptable, or because one sensed that something did not quite “add up”. Arguably, that is exactly what credit underwriters are supposed to do; but sometimes, being only human, they can be swept up into situations in which critical thinking and judgement are somehow suspended. And sometimes one is just lucky, because one is “off risk” when something bad happens. Nevertheless, one should aim to learn from what actually happened.
Of course, we all like to think that we avoided an event because of our foresight; and, if one is always alert to anomalies and to facts or patterns that simply do not make sense or “fit”, that may be so. Yet sometimes it is simply luck: a reality which we should admit, and which should keep all of us humble.
The Awbury Team