Loss Creep or Mission Creep…?

Until recently, the term “loss creep” was not one much heard publicly in (re)insurance circles. Reserve releases were generally the order of the day, and useful for primping Combined Ratios for public consumption.

Now, however, the phrase has become almost a trope, as increasing estimates for the overall claims cost of a greater number of major CATs (think of 2018’s Typhoon Jebi as the current “poster child”) mean that (re)insurers not only have to worry about increasing their reserves, but also about the risk of blowing through their retro covers. And just think what specialized writers of retro CAT must be feeling!

Such events are a further sign that, in the commoditized world of CAT, the “old certainties” need re-thinking. Hitherto “conservative” assumptions are now revealed as no longer fit for purpose. All this means that, rather than being content to be as wrong as everyone else, underwriters (and their CAT-modelling colleagues) are going to need to re-think their assumptions about what a “1-in-x” year event might look like, in terms of both frequency and scale. Relying upon prior “industry standard” models or estimates could become rather damaging to (re)insurers’ crucial reserve management methodologies, and ultimately to solvency levels.

Of course, this is likely to lead to executive managements wondering where they can look to assuage the pain of regularly missed “target” or “normalized” Combined Ratios. And what exactly is an appropriate “attritional” CAT Loss ratio or “budget” anymore?

Another phrase that is also becoming more prevalent is “closing the gap” when speaking about the availability of insurance coverage for natural disasters, particularly in so-called emerging markets. Given the demonstrated difference between economic and insured losses depending upon jurisdiction, and the continuing shift in the rate of economic growth away from developed markets, it is not surprising that a CEO looking for premium flow would be attracted to the idea of expanding into new geographic markets and helping to close the gap in coverage. However, one wonders whether a sufficient level of skepticism and true conservatism will be employed in the process of deciding to expand coverage into new jurisdictions. One can imagine the temptation to argue by analogy with existing developed markets that the same assumptions and criteria can be used. Yet if, in developed markets, the existing models are being demonstrated to be no longer fit for purpose, what can make a (re)insurer’s Board comfortable that somehow the process will be easier or more accurate in a new market?

We are not saying that entering new markets is misguided; simply that current experience in supposedly well-known and hitherto understood developed markets should give pause for thought before blithely entering new ones, especially if, as is often the case, everyone thinks the same thing at the same time. It may sound absurd, but could “closing the gap” become the classic “crowded trade”, while the smart money re-engineers its processes and increases discipline in markets in which it has long experience?

At Awbury, we believe strongly in focusing on the area in which we have a demonstrable and defensible track record: credit, economic and financial risks. We adapt as markets and risks change, but we know that there are realistic boundaries to the scale and probability of losses that may occur, and that careful structuring can significantly mitigate the risk of loss. Contrast that with the CAT environment, in which the probability of full-limit losses is all too real, especially in a world beset by increasing loss creep.

The Awbury Team


The Dangers of Economic Dogma- Models and Financial Mayhem…

We recently came across an interesting paper by George Akerlof, the Nobel Prize-winning economist, in which he describes the unreality of the basic macro-economic models used by the profession in the half-century ending with the Great Financial Crisis (GFC).

In essence, Akerlof posits that the models used were misleading, because they failed to ascribe sufficient importance to the impact of the financial system on the wider economy. He also makes a point, often overlooked, that the choice of textbook used as the core for teaching a particular topic has far-reaching consequences, because it influences how students are taught and come to understand a subject, and thus how they apply their knowledge.

In the 1960s (before Friedman’s monetarism and economic neoliberalism took over the world), the basic model used (at least in most US universities, including Akerlof’s MIT) was the so-called Keynesian neoclassical synthesis, which was based upon the concept of finding equilibria between the various components of the underlying economic model, principally supply and demand. Unfortunately, in creating the “synthesis”, its acolytes assumed that any changes to an equilibrium would occur one step at a time and in proportion. The models did not really address circumstances in which disorderly changes could occur- i.e., panics or crashes.

Somehow, they had forgotten, or overlooked, Keynes’ own “beauty contest” theory of market behaviour, under which individuals (and so corporations and banks) allocate their wealth and make financial decisions based not upon careful analysis of economic fundamentals, but rather on what they think others will see as the value of an asset- a version of the “greater fool” approach, which only works as long as the greater fool exists and behaves as expected!

As Akerlof explains, because the real world is a very complicated place, even for the DSGE (Dynamic Stochastic General Equilibrium) models now beloved of central banks, relying upon a model that essentially smooths out the impact of financial decisions is a recipe for macroeconomic mayhem, because it fails to account for the fact that systems and economies can appear very stable until, suddenly, they are not. In the case of banks, deregulation in the 1980s and 1990s removed both oversight and constraints, which, when coupled with malign incentives and dogma such as “housing prices cannot decline systemically”, created the conditions for the GFC, whose effects we are still living with today.

Strange as it may seem, the dominant economic models failed to include the impact of the financial system (as a system) on the wider economy. With a few honourable exceptions, the dismal science failed miserably in terms of its forecasting ability. Why? Because a model, which actually ignored a key tenet of its supposed creator (Keynes), became the basis for teaching a generation of economists- and questioning it was risky at the individual level.

One may ask what such a tale has to do with (re)insurance. Simply that there are dominant models and “orthodoxies” in the industry (as in many others) that are used to guide decisions, often without question. In reality, as we aim to do at Awbury, every time one uses a model one should ask oneself not only whether it is appropriate for the decision that will be based upon it, but also whether there are any characteristics which place a boundary on the circumstances in which it will remain useful. In other words, is it truly fit for purpose?

As we all know, the real damage comes from the extreme left of the distribution (which, ironically, is another convention!); but, first, the distribution has to be grounded in some form of reality!

The Awbury Team


Dissonant Models and Distracting Measures?

At Awbury, we try to avoid being caught out by “framing” issues, in which the use of or adherence to particular cognitive pathways can lead to “blind spots” when analyzing or assessing risks.

It is axiomatic that measuring credit risk is about trying to identify the most important risks for a particular obligor, portfolio or scenario, and then assigning probabilities to one or more of them causing distress or default.

Problems arise when there is a lack of data on relevant past events, coupled with types of risk that are infrequent, such as systemic financial crises. We know they occur; but because they are infrequent, predicting them and their outcomes can be a futile exercise.

A good example of this is the point made by the research foundation, Vox, in a paper entitled “The dissonance of the short and long term”- that an OECD member country suffers a crisis every 43 years on average; while true global financial crises are even less frequent- consider the time period between the Great Depression and the Great Recession. So, if modern financial markets are not even 200 years old, the sample size available for predictive purposes is very small.

Of course, models such as Monte Carlo simulations are supposed to be able to tease out the extremes of possible distributions. However, they are only models, and not representations of the real world. As actual experience during the Great Financial Crisis amply demonstrated, events that (according to the then-existing models) are not supposed to be able to happen during the known life of the Universe nevertheless do, because the predictive models which told one that was impossible were deeply flawed.
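The point can be made concrete with a toy Monte Carlo simulation (the distributions here are entirely illustrative assumptions, not calibrated market models): if a model assumes thin-tailed (Normal) losses while the “real world” is heavy-tailed, the model will report that a “5-sigma” loss is effectively impossible, even as reality produces such losses with some regularity.

```python
import math
import random

random.seed(42)
N = 100_000

# "Model world": losses assumed to follow a standard Normal distribution.
normal_draws = [random.gauss(0, 1) for _ in range(N)]

# "Real world": heavy-tailed losses, sketched here as a Student-t with
# 3 degrees of freedom (a purely illustrative choice of distribution).
def student_t(df):
    # A t-variate is a standard normal divided by sqrt(chi-squared / df).
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

t_draws = [student_t(3) for _ in range(N)]

threshold = 5  # a "5-sigma" event in the model's units
p_model = sum(1 for x in normal_draws if x > threshold) / N
p_real = sum(1 for x in t_draws if x > threshold) / N

print(f"P(loss > 5) under the Normal model:      {p_model:.5f}")
print(f"P(loss > 5) under heavy-tailed 'reality': {p_real:.5f}")
```

Under the Normal assumption the theoretical exceedance probability is below one in a million, so 100,000 simulated days will typically show no such event at all, while the heavy-tailed alternative produces them hundreds of times: the simulation is only ever as honest as the distribution fed into it.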

Another problem with risk measurement is that if people believe that they can track, measure and model a particular risk factor, they may tend to focus on it, because it can be measured; and so fall prey to being caught within a frame of reference. As a result, by focusing on short-term, measurable factors, they overlook or ignore the more important and potentially threatening longer-term ones. For example, economists are constantly seeking to measure the various factors which they believe are harbingers of recession (or of future trends in interest rates), yet it is a truism that they would generally do better tossing a coin, because recessions (or movements in interest rates) are the result of the interplay of multiple complex factors, many of which are not (at least yet) truly measurable.

As Goodhart’s Law states: when a measure becomes a target, it ceases to be a good measure. For example, if people anticipate the effect of a policy, and their actions therefore alter the policy’s outcome, the target was a bad measure. In other words, measuring isolated factors is a distraction from careful and thoughtful analysis. The “beauty” of the models is too alluring.

Using short term measurements to drive complex decisions with long term outcomes is simply foolish.

From Awbury’s point of view, while we, of course, use multiple types of models as part of our risk analysis and management process, we aim to avoid becoming seduced by their apparent certainty; always overlaying their outputs with an element of robust “but what if we’re wrong?” and “what might we have missed?” thought experiments- our “testing a thesis to destruction” approach.

The Awbury Team


Failure is an option…

As regular Readers of our blog will know, the Awbury Team is inherently paranoid (in the Andy Grove “Only the Paranoid Survive” sense). We are also regular readers of the transcripts from the Farnam Street blog’s excellent podcast series, which we can recommend as a window into the thinking of a diverse range of first-class minds.

So we read with particular interest a section in Shane Parrish’s recent conversation with Jim Collins (“Built to Last” and “Good to Great”, to name two of his published works) which dealt with how supposedly great businesses or institutions fail, often quite surprisingly in the eyes of the outside world.

In examining what causes such decline, Collins posited 5 stages, the first 3 of which are often hidden from outsiders, explaining why failures can often be unanticipated or unexpected.

Of course, one can reasonably ask how long in practical terms the potential lifespan is of any business model, but the point that Collins makes is that what become catastrophic or ultimately terminal failures usually have internal causes; and that failure should not be seen as inevitable for those who are aware of the risks.

The first stage harks back to the structures of classical Greek tragedy- when a character becomes so successful or powerful that this leads to arrogance, hubris in tragic terms. In the case of a company, its management comes to believe that it is somehow better than anyone else.

Interestingly, in stage two, Collins points out that, while one might think that this arrogance can lead to complacency, the real danger is overreach. Not satisfied with its level of achievement and market position, a company’s management aggressively seeks yet more dominance and growth, or believes it can translate its “success” into other areas. In essence, stage two behaviour amounts to a lack of discipline- say in the form of an ill-conceived but superficially attractive “transformative” acquisition. Clearly, there is a fine line here. There could be further apparent success, or stage two can imperceptibly shade into stage three, when hitherto unseen strains or imperceptible risks begin to surface, but management dismisses or chooses to ignore them- because they are so “successful” that the fault must lie elsewhere.

Hubris (having passed through “ate”, or folly) now leads to the potential for nemesis (inescapable doom). The problems become visible externally, and there is some sort of significant failure or mis-step, which cannot be hidden or suppressed. Even now, there is still the chance of redemption and recovery if management is able to discern and understand the causes, and responds in a reasoned and disciplined way. However, in many cases, it does the opposite. The team panics and acts incoherently without thinking through consequences, or somehow hopes for rescue from an external source.

If that happens, with resources and capital exhausted, and leadership absent or remaining in denial, the business slides into oblivion or irrelevance, leaving the way open for the cycle to start again elsewhere.

At Awbury, while we certainly aim to be the “best in class” at what we do, we have no intention of succumbing to hubris, as we continue methodically and patiently to build and extend our franchise. To behave otherwise would be folly!

The Awbury Team


Hi Ho, Hi Ho, it’s off to (We)Work We Go…

While the endgame for WeWork, following the debacle of its recent failed and withdrawn IPO, is still unfolding, and a lot of ex post facto schadenfreude has been exhibited, it is worth pointing to certain aspects of what has happened that demonstrate that reality eventually intrudes upon suspension of disbelief.

We have written before of how the need for a business to be profitable prior to a public listing seems to have become a rather quaint notion. The WeWork saga demonstrates that in spades.

However, there is more to it than that.

Consider the widely used financial metric of “EBITDA”, often as a proxy for cash available for debt service and capex. As anyone who has read a syndicated bank loan document knows, the definition of and adjustments to “EBITDA” show that its natural meaning can be tortured within an inch of its life. In addition, “Adjusted EBITDA” is a favourite of companies in corporate presentations to demonstrate that their prospects are somewhat better than the numbers produced by statutory accounting may suggest. Yet, WeWork, with its now much-mocked concept of “Community-Adjusted EBITDA”, took the distortion of reality to a new level. Nevertheless, that appears not to have prevented sundry investment and commercial banks, who should have known better, from reportedly promising the earth in terms of what WeWork should be valued at upon a public listing. USD 47BN was “conservative”. As an aside, according to the Financial Times, the SEC has recently issued fresh “guidance” to company CFOs on the topic of EBITDA, while the IASB is considering standardizing the definition of what operating profit is, presumably in an attempt to prevent what is a useful concept becoming discredited.
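A stylized calculation (with invented figures, not WeWork’s actual accounts) shows how each successive “adjustment” strips out another layer of real cost, until a heavy statutory loss can be presented as an apparent profit:

```python
# Hypothetical figures (USD millions) for an imaginary lease-heavy
# start-up; all numbers are invented for illustration only.
operating_loss = -1_700
depreciation_amortisation = 300
ebitda = operating_loss + depreciation_amortisation        # still -1,400

# "Adjusted EBITDA": add back supposedly one-off items and
# stock-based compensation.
one_offs_and_stock_comp = 250
adjusted_ebitda = ebitda + one_offs_and_stock_comp         # -1,150

# "Community-Adjusted EBITDA": remove even the corporate overheads
# (sales, marketing, G&A) needed to run the business at all.
overheads_removed = 1_300
community_adjusted_ebitda = adjusted_ebitda + overheads_removed

print(f"EBITDA:                    {ebitda}")
print(f"Adjusted EBITDA:           {adjusted_ebitda}")
print(f"Community-Adjusted EBITDA: {community_adjusted_ebitda}")
```

The arithmetic is trivial; the point is that each add-back is a real, recurring cash cost, so the final “profitable” figure measures nothing that a creditor could actually be paid from.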

Secondly, robust governance matters. The S-1 which WeWork issued ahead of its proposed IPO disclosed a catalogue of circumstances in which there were clear conflicts of interest between the company and its CEO/Co-Founder, but which WeWork’s Board had seemingly chosen to overlook, or even approved. Of course, in any organization, particularly one growing so fast and with a dominant and controlling founder, there is always the potential for agency issues and misaligned incentives to lead to outcomes that, when scrutinized, do not pass a reasonable test of propriety. In the case of WeWork, these seem to have become inextricably entangled with personal interests. Sadly, as the case of Theranos amply demonstrated, Boards often struggle to act as a check on a dominant CEO. WeWork is hardly alone on that score.

Thirdly, if a key investor pours in so much money that it removes any real incentive for the founders and managers of a start-up to exercise discipline in how they allocate capital and spend cash, it makes a mockery of the paradigm that a start-up should be “lean and hungry”. This is not because, at the other extreme, operating on “starvation rations” is somehow a virtue, but because a surfeit of capital, with no real controls on how it is spent, creates inflated expectations of the value supposedly being created, leading to “magical thinking”.

Fourthly, WeWork was treated and ostensibly valued as if it had somehow created a technology platform, when its core business model was, in fact, that of an entity that leased long and sub-let short. The mis-match between (un)predictable cash inflows and demonstrable lease obligations was breathtaking, even if an increasing proportion of its available space was leased to large corporations, which are less likely to be vulnerable to economic cycles.

And, finally, if there is no clear trajectory to real “cash” profitability or generating any return on capital invested, how is it possible to create a valuation model that has any credibility? If a valuation is based upon the expected Net Present Value of future cashflows and/or dividends, and no-one can explain how that number will ever become a positive one, valuing “potential” crosses over into the realm of fantasy.
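A back-of-the-envelope sketch makes the point (the cashflow figures below are purely hypothetical): if projected free cashflows never turn positive, no discount rate can rescue the Net Present Value, whereas a credible path to profitability at least gives the valuation model something to work with.

```python
def npv(rate, cashflows):
    # Net Present Value of a series of annual cashflows, the first at t=0.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical projections in USD millions, invented for illustration.
# Case 1: cash burn with no credible path to positive free cashflow.
burn_forever = [-500, -600, -700, -800, -900]

# Case 2: early losses, but a plausible trajectory to real cash profits.
path_to_profit = [-500, -300, -100, 200, 400, 500, 600, 700]

print(f"NPV, perpetual burn:   {npv(0.10, burn_forever):.1f}")
print(f"NPV, path to profit:   {npv(0.10, path_to_profit):.1f}")
```

With uniformly negative cashflows the NPV is negative at any discount rate, so any positive “valuation” must rest on something other than the discounted value of the business’s own cash generation.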

Of course, it is easy to mock. Clearly the spaces which WeWork created were attractive and showed the potential for improving the concept of an office or workspace. What was, and remains, troubling is the fact that somehow its business model was treated as if it were revolutionary, when it was nothing of the sort; and that a wide range of parties became invested in maintaining that fiction, because the alternative probably became too awful to contemplate.

At Awbury, we take the view that, while clearly there are and will be paradigm-shifting businesses created by those who have a vision of the new, because that is evidenced by long experience, the constraints which surround such entities remain the same- a path to profitability and positive cashflow, good governance, management accountability and robust accounting, to name a few. One can create a transformative business, but breaking free from reality is a lot harder.

The Awbury Team


Banana Split Thinking…

Surveys of (re)insurance industry participants are a common method of assessing those issues which are of most concern at any point in time. One can compare them with the World Economic Forum’s (WEF) annual “Global Risk Report”.

One of the longer-established such series is the annual CSFI/PwC “Banana Skins” survey, which (in the case of the recently published 2019 Reinsurance Survey) polled some 320 executives on what they saw as the major risks facing the industry. Strangely enough, WTW then published its own, much less frequent “Extreme Risks Report”, produced by its in-house “Thinking Ahead Institute”. Having one’s own “institute”, or access to one, is a growth industry…

A comparison of the respective “Top 5s” shows:

Banana Skins              Extreme Risks

1. Technology             1. Global Temperature Change
2. Cyber Risk             2. Global Trade Collapse
3. Climate Change         3. Cyber Warfare
4. Change Management      4. Resource Scarcity
5. Regulation             5. Currency Crisis

The “fear remit” of the Banana Skins is clearly more inward looking and industry-focused than that of Extreme Risks, with the former being based on a survey and the latter on a more formal internal research methodology.

Of course, as with the WEF’s own offering, anyone reading them is likely to conclude that they are statements of the obvious, hardly containing any new information; representing, as they do, the current perceptions of a group of supposedly expert individuals. They are not “forecasts”, nor are they “impact-weighted”, and they both suffer from the problem of familiarity threatening to breed contempt: none of the risks ranked at any level in either publication is, as we mentioned, exactly new, so “risk fatigue” is a concern.

The real question to be posed is: “Will any of this make a difference to the actions of any management team within the (re)insurance industry?” Frankly, we very much doubt it, because any executive who was not aware of and familiar with any of the risks articulated would not be performing at an acceptable level and should probably be cashiered.

It is, therefore, debatable whether such publications serve much purpose beyond telegraphing what the conventional thinking is. In providing a common taxonomy of perceived risks they also raise the issue of “framing” in the sense that, if the conventional thinkers and industry members are focused on what is published, perhaps that limits their exploration of risks not articulated? After all, the rankings are meant to convey a level of concern, so diverting attention from what is not there. Of course, many companies do have “Emerging Risks” as a remit for their ERM or Risk Management functions, although we wonder how much traction their findings get if they are (paradoxically) seen as “outside the mainstream”.

At Awbury, we do, naturally, make sure we are aware of what others are thinking, because we do not function in a vacuum, and the existence of such publications is useful in terms of understanding why others may behave in a certain way. However, we prefer to go our own way when it comes to risk identification, assessment, and ranking; and we worry about succumbing to risk orthodoxy, not out of any sense of superiority, but because it is the risks that you do not see and so do not address or prepare for that have a tendency to cause ruin.

A Publication of the Awbury Institute…


Is your model an ideology?

In finance, economics and (re)insurance (as elsewhere), it is accepted as a statement of the obvious that one’s model is only as good as the assumptions used to construct it. This depends upon our understanding of what the model’s purpose is; what we consider important; and what we omit.

Yet, as Karl Mannheim (a sociologist of the first half of the 20th Century, and yet another refugee from Nazism) pointed out: thinking (as in the construction of one’s model) is an activity that must be related to other social activity within a structural framework- i.e., in any complex environment, it never exists in isolation. It is, in fact, the product of a particular worldview and context, and so an ideology. Ironically (and echoing Marxism), this meant that any critique of an ideology was also ideological!

Having rescued ourselves from disappearing down that rabbit hole of Marxist dialectic, and barely avoided veering off into Platonic Ideals, the point to be made is that one must always test the assumptions used to build any model not only for their validity and relevance, but also for their origins. Are they the product of some currently dominant belief system, which may have introduced unconscious bias, or caused certain crucial factors to be ignored or overlooked? For example, a Marxist, Keynesian, or Neo-liberal economist would approach the same issue, or a particular fact set, from very different starting points, and using very different mental “toolboxes”. As a result, the outputs of their models, and so their consequences, would be likely to be very different.

Friedrich Hayek, the Austrian economist, was in no sense a member of the so-called “mathematical wing” of Economics, but rather set out models based upon his philosophical beliefs about the dangers of introducing any element of state influence into an economy. To claim that his most famous work, “The Road to Serfdom” (written in the depths of WWII), was “influential” is to understate the case. However, in reality, he had propounded an ideology, rather than created economic models, in the same way that Milton Friedman did a generation later. We live with the results still.

One might think that the world of (re)insurance must be above such influences, being devoted to the rational evaluation, pricing and management of risks across a broad spectrum of products. One might be wrong!

To give a couple of hypothetical examples: what if a Political Risk underwriter allowed his or her personal and subjective political beliefs to influence a decision, but in ways that were not visible in the underwriting file; or a D&O underwriter deliberately downplayed, or over-emphasized, certain risks to which a business was subject because of his or her own subjective beliefs about the particular industry in which it operated? Of course, any human being is subject to the consequences of his or her biases, preferences and beliefs, and “objective truth” can be a very elusive concept in many areas. However, the failure to consider relevant factors, or basing a decision upon a particular personal belief system, is, in reality, the product of ideology; which, quite interestingly, has an archaic usage referring to the science of ideas- i.e., the study of their origins and nature.

The team at Awbury is most definitely human, and a group with varied backgrounds and personal beliefs. However, we strive always to ensure that any models we build are fact-based and as free from any inherent bias as is possible, testing them to destruction to ensure their robustness.

The Awbury Team