It’s not magic, but it’s not simple either…

By definition, anyone who underwrites credit risk is engaged in understanding and analyzing data, and then assigning probabilities which (should!) lead to a rational decision, based upon what is known or foreseeable, and focused on risk versus reward within the terms of a business's risk appetite and capacity.

Seems obvious, does it not?

Well, the concept may be, but the process is not.

Objectively, the future is unknowable; yet, as credit underwriters, we still have to make decisions, and try to avoid, or at least control for, negative outcomes.

While the universe may, for practical purposes, be infinite, our world is finite, albeit complex.

We have access to ever-expanding quantities of data, and new tools in the form of generative AI. However, we are constrained by the quality of our decision-making, which is a topic and discipline that still seems to receive too little attention, even though (re)insurers constantly tout the quality and (supposed) accuracy of their underwriting and pricing models.

In Nassim Nicholas Taleb's book "Antifragile", he makes the following point: "…risk management professionals look in the past for information on the so-called worst-case scenario and use it to estimate future risk - this method is called 'stress testing'".

Quite so. Think of the concept and tool of Value at Risk (VaR), which is one cornerstone of risk assessment for managing derivative and similar transactions that are "marked-to-market". Everyone likes to talk about "confidence intervals", whether at 95%, 97.5%, 99%, 99.5% or 99.9%, which is simply a mathematical expression of how far into the tail of the observed or modelled loss distribution one chooses to look (often stated in standard deviations under an assumption of normality). Of course, (re)insurance regulators have built whole capital models on similar concepts.
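To make the mechanics concrete, the short Python sketch below shows how a historical-simulation VaR figure at those confidence levels might be produced. The P&L series and its normality are assumptions invented purely for illustration, not a description of any actual model:

```python
import numpy as np

# A minimal, illustrative sketch of historical-simulation VaR.
# The daily P&L series is synthetic (normally distributed) - an assumption
# made for the example; real desks use actual mark-to-market data.
rng = np.random.default_rng(42)
pnl = rng.normal(loc=0.0, scale=1_000_000, size=2_500)  # ~10 years of daily P&L, in USD

def historical_var(pnl: np.ndarray, confidence: float) -> float:
    """Loss threshold exceeded on only (1 - confidence) of the observed days."""
    return -np.percentile(pnl, 100 * (1 - confidence))

for confidence in (0.95, 0.975, 0.99, 0.995, 0.999):
    print(f"{confidence:.1%} VaR: {historical_var(pnl, confidence):>12,.0f}")

# Note: the estimate can only ever be as good as the window of history
# it looks back on - which is precisely the Lucretius problem described below.
```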

This does not mean that any of them are accurate!

In the same section of his book, Taleb refers to this approach to stress-testing as the Lucretius problem (and an example of a mental defect), after the Roman poet and author of the influential poem "De Rerum Natura" (On the Nature of Things), who made the point that fools believe the tallest mountain they have seen must be the largest that exists.

The Great Financial Crisis (GFC) forcefully demonstrated how misleading VaR models could be. We see the same muddle now in discussions of climate or cyber risks. It is not that the risks do not exist, but that we assume they are easily measurable. As the Greek orator Demosthenes said: "What a man wishes, he will also believe".

All this leads to the point that, in underwriting credit risk, we are always grappling not only with whether we are asking the right questions, but also with whether we can actually arrive at truly useful answers and avoid trying to "fit the facts to the desire". Avoiding wishful thinking and self-delusion is essential.

In reality, the key is distinguishing between risks which are clearly "bounded" and those which, even if we choose to believe otherwise, are potentially unbounded. One can price for the former; for the latter, one has to find some means of mitigating and containing the risk exposure in order to avoid the potential for ruin, failing which one should simply refuse to accept it. It should also be borne in mind that sponsors and corporate executives, or politicians and bureaucrats, will be at some pains to deflect an underwriter from asking the questions that they cannot or do not wish to answer - the "shiny bauble" and "righteous indignation" approaches.
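To illustrate the bounded/unbounded distinction, here is a brief sketch, again with purely hypothetical figures and distributions: a capped exposure whose expected loss converges to something one could price, versus a very heavy-tailed loss whose sample average never settles:

```python
import numpy as np

rng = np.random.default_rng(7)

# Bounded exposure: losses are capped at a policy limit, so the expected
# loss (and hence a technical premium) settles down as the sample grows.
limit = 10_000_000
bounded = np.minimum(rng.exponential(scale=2_000_000, size=100_000), limit)
print(f"Bounded expected loss: {bounded.mean():,.0f}")

# Effectively unbounded exposure: a very heavy-tailed Pareto loss with
# tail index alpha < 1 has an infinite theoretical mean, so the sample
# average never stabilises and no premium can credibly cover it.
alpha = 0.9
unbounded = 1_000_000 * (rng.pareto(alpha, size=100_000) + 1.0)
print(f"'Mean' of unbounded sample: {unbounded.mean():,.0f}  (unstable across runs)")
```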

At Awbury, we try to harness our institutional paranoia when it comes to risk selection and acceptance, so as to avoid falling for the delusion that we can always fully understand a risk. If we cannot create and test a valid thesis, then the risk should not be accepted.

The Awbury Team
