Loss Creep or Mission Creep…?

Until recently, the term “loss creep” was not one much heard publicly in (re)insurance circles. Reserve releases were generally the order of the day, and useful for primping Combined Ratios for public consumption.

Now, however, the phrase has become almost a trope, as increasing estimates for the overall claims cost of a greater number of major CATs (think of 2018’s Typhoon Jebi as the current “poster child”) mean that (re)insurers not only have to worry about increasing their reserves, but also about the risk of blowing through their retro covers. And just think what specialized writers of retro CAT must be feeling!

Such events are a further sign that, in the commoditized world of CAT, the “old certainties” need re-thinking. Hitherto “conservative” assumptions are now revealed as no longer fit for purpose. All this means that, rather than being merely as wrong as everyone else, underwriters (and their CAT-modelling colleagues) are going to need to re-think their assumptions about what a “1-in-x” year event might look like, in terms of both frequency and scale. Relying upon prior “industry standard” models or estimates could become rather damaging to (re)insurers’ crucial reserve management methodologies, and ultimately to solvency levels.
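As a purely illustrative sketch of the mechanics (made-up lognormal loss severities and a hypothetical 40% adverse development factor, not any real portfolio or vendor model), one can see how loss creep on only the largest events drags an empirical “1-in-x” estimate upward:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual CAT losses (USD bn) - illustrative lognormal, not real data
losses = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)

# "1-in-250" annual loss implied by the original loss estimates
p_before = np.quantile(losses, 0.996)

# Loss creep: suppose the largest 1% of events develop 40% worse than first reported
creeped = losses.copy()
creeped[creeped >= np.quantile(creeped, 0.99)] *= 1.4

p_after = np.quantile(creeped, 0.996)
print(f"1-in-250 loss before creep: {p_before:.1f}bn")
print(f"1-in-250 loss after creep:  {p_after:.1f}bn")
```

Because the revision touches only the tail, the body of the distribution (and hence any “attritional” view of the book) barely moves, while the extreme quantiles on which retro attachment points depend shift materially.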

Of course, this is likely to lead to executive managements wondering where they can look to assuage the pain of regularly missed “target” or “normalized” Combined Ratios. And what exactly is an appropriate “attritional” CAT Loss ratio or “budget” anymore?

Another phrase that is also becoming more prevalent is “closing the gap” when speaking about the availability of insurance coverage for natural disasters, particularly in so-called emerging markets. Given the demonstrated difference between economic and insured losses depending upon jurisdiction, and the continuing shift in the rate of economic growth away from developed markets, it is not surprising that a CEO looking for premium flow would be attracted to the idea of expanding into new geographic markets and helping to close the gap in coverage. However, one wonders whether a sufficient level of skepticism and true conservatism will be employed in the process of deciding to expand coverage into new jurisdictions. One can imagine the temptation to argue by analogy with existing developed markets that the same assumptions and criteria can be used. Yet if, in developed markets, the existing models are being demonstrated to be no longer fit for purpose, what can make a (re)insurer’s Board comfortable that somehow the process will be easier or more accurate in a new market?

We are not saying that entering new markets is misguided; simply that current experience in supposedly well-known and hitherto understood developed markets should give pause for thought before blithely entering new ones, especially if, as is often the case, everyone thinks the same thing at the same time. It may sound absurd, but could “closing the gap” become the classic “crowded trade”, while the smart money re-engineers its processes and increases discipline in markets in which it has long experience?

At Awbury, we believe strongly in focusing on the areas- credit, economic and financial risks- in which we have a demonstrable and defensible track record. We adapt as markets and risks change, but we know that there are realistic boundaries to the scale and probability of losses that may occur, and that careful structuring can significantly mitigate the risk of loss. Contrast that with the CAT environment, in which the probability of full-limit losses is all too real, especially in a world beset by increasing loss creep.

The Awbury Team

The Dangers of Economic Dogma- Models and Financial Mayhem…

We recently came across an interesting paper by George Akerlof, the Nobel Prize-winning economist, in which he describes the unreality of the basic macro-economic models used by the profession in the half-century ending with the Great Financial Crisis (GFC).

In essence, Akerlof posits that the models used were misleading, because they failed to ascribe sufficient importance to the impact of the financial system on the wider economy. He also makes a point, often overlooked, that the choice of textbook used as the core for teaching a particular topic has far-reaching consequences, because it influences how students are taught and come to understand a subject, and thus how they apply their knowledge.

In the 1960s (before Friedman’s monetarism and economic neoliberalism took over the world), the basic model used (at least in most US universities, including Akerlof’s MIT) was the so-called Keynesian neoclassical synthesis, which was based upon the concept of finding equilibria between the various components of the underlying economic model, principally supply and demand. Unfortunately, in creating the “synthesis”, its acolytes decided that any changes to an equilibrium would be one step at a time and proportional. The models did not really address circumstances in which disorderly changes could occur- i.e., panic or crashes.

Somehow, they had forgotten, or overlooked, Keynes’ own “beauty contest” theory of market behaviour, under which individuals (and so corporations and banks) allocate their wealth and make financial decisions based not upon careful analysis of economic fundamentals, but rather on what they think others will see as the value of an asset- a version of the “greater fool” approach, which only works as long as the greater fool exists and behaves as expected!

As Akerlof explains, because the real world is a very complicated place, even for the DSGE (Dynamic Stochastic General Equilibrium) models now beloved of central banks, relying upon a model that essentially smooths out the impact of financial decisions is a recipe for macroeconomic mayhem, because it fails to account for the fact that systems and economies can appear very stable until, suddenly, they are not. In the case of banks, deregulation in the 1980s and 1990s removed both oversight and constraints, which, when coupled with malign incentives and dogma such as “housing prices cannot decline systemically”, created the conditions for the GFC, whose effects we are still living with today.

Strange as it may seem, the dominant economic models failed to include the impact of the financial system (as a system) on the wider economy. With a few honourable exceptions, the dismal science failed miserably in terms of its forecasting ability. Why? Because a model, which actually ignored a key tenet of its supposed creator (Keynes), became the basis for teaching a generation of economists- and questioning it was risky at the individual level.

One may ask what such a tale has to do with (re)insurance. Simply that there are dominant models and “orthodoxies” in the industry (as in many others) that are used to guide decisions, often without question. In reality, as we aim to do at Awbury, every time one uses a model one should always ask oneself not only whether it is appropriate for the decision that will be based upon it, but also whether there are any characteristics which place a boundary on the circumstances in which it will remain useful. In other words, is it truly fit for purpose?

As we all know, the real damage comes from the extreme left of the distribution (which, ironically, is another convention!); but, first, the distribution has to be grounded in some form of reality!

The Awbury Team

Dissonant Models and Distracting Measures?

At Awbury, we try to avoid being caught out by “framing” issues, in which the use of or adherence to particular cognitive pathways can lead to “blind spots” when analyzing or assessing risks.

It is axiomatic that measuring credit risk is about trying to identify the most important risks for a particular obligor, portfolio or scenario, and then assigning probabilities to one or more of them causing distress or default.

Problems arise when there is a lack of data on relevant past events, coupled with types of risk that are infrequent, such as systemic financial crises. We know they occur; but because they are infrequent, predicting them and their outcomes can be a futile exercise.

A good example of this is the point made by the research foundation Vox, in a paper entitled “The dissonance of the short and long term”: that an OECD member country suffers a crisis every 43 years on average, while true global financial crises are even less frequent- consider the time period between the Great Depression and the Great Recession. So, if modern financial markets are not even 200 years old, the sample size available for predictive purposes is very small.
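To put rough numbers on that small-sample problem (using the 1-in-43 frequency quoted above and an assumed 200-year observation window- our illustrative choice, not a figure from the paper), a back-of-the-envelope binomial calculation shows just how wide the uncertainty band around such a frequency estimate is:

```python
import math

p_true = 1 / 43   # the quoted figure: one crisis per OECD country every 43 years
n_years = 200     # assumed observation window - roughly the age of modern markets

expected = p_true * n_years  # crises we would expect to see in the whole record

# Crude 95% confidence band on an annual crisis probability estimated from
# n_years of observations (normal approximation to the binomial)
se = math.sqrt(p_true * (1 - p_true) / n_years)
lo, hi = p_true - 1.96 * se, p_true + 1.96 * se

print(f"expected crises observed: {expected:.1f}")
print(f"return periods consistent with the data: 1-in-{1 / hi:.0f} to 1-in-{1 / lo:.0f} years")
```

Even with two centuries of data, anything from roughly a 1-in-20 to a 1-in-400 year event is statistically consistent with the observed record- which is precisely the dissonance in question.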

Of course, models such as Monte Carlo simulations are supposed to be able to tease out the extremes of possible distributions. However, they are only models, not representations of the real world. As actual experience during the Great Financial Crisis amply demonstrated, events that (according to the then-existing models) were not supposed to be able to happen during the known life of the Universe nevertheless did, because the models which pronounced them impossible were deeply flawed.
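A toy Monte Carlo makes the point- using illustrative distributions rather than any real financial model: a thin-tailed normal and a fat-tailed Student-t (3 degrees of freedom, rescaled to the same unit variance) look much the same in the body of the distribution, yet disagree wildly about extreme moves:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Two models with identical mean and variance, differing only in tail shape
thin_tail = rng.standard_normal(n)                       # "it can't happen" model
fat_tail = rng.standard_t(df=3, size=n) / np.sqrt(3.0)   # fat-tailed alternative

# Count simulated "6-sigma-style" moves under each model
threshold = 6.0
print("thin-tailed exceedances:", int(np.sum(np.abs(thin_tail) > threshold)))
print("fat-tailed exceedances: ", int(np.sum(np.abs(fat_tail) > threshold)))
```

The normal model produces essentially no such events in a million trials, while the fat-tailed one produces them by the hundreds- so a modeller calibrating only to the centre of the data would never see the difference until it was too late.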

Another problem with risk measurement is that if people believe that they can track, measure and model a particular risk factor, they may tend to focus on it, because it can be measured; and so fall prey to being caught within a frame of reference. As a result, by focusing on short-term, measurable factors, they overlook or ignore the more important and potentially threatening longer-term ones. For example, economists are constantly seeking to measure the various factors which they believe are harbingers of recession (or of future trends in interest rates), yet it is a truism that they would generally be better off tossing a coin, because recessions (or movements in interest rates) are the result of the interplay of multiple complex factors, many of which are not (at least yet) truly measurable.

As Goodhart’s Law states: when a measure becomes a target, it is subsequently no longer a good measure. For example, if people anticipate the effect of a policy, and their actions therefore alter the policy’s outcome, the target was a bad measure. In other words, measuring isolated factors is a distraction from careful and thoughtful analysis. The “beauty” of the models is too alluring.

Using short-term measurements to drive complex decisions with long-term outcomes is simply foolish.

From Awbury’s point of view, while we, of course, use multiple types of models as part of our risk analysis and management process, we aim to avoid becoming seduced by their apparent certainty; always overlaying their outputs with an element of robust “but what if we’re wrong?” and “what might we have missed?” thought experiments- our “testing a thesis to destruction” approach.

The Awbury Team
