Oily tails…

As with forecasting the timing of recessions, it is a truism that predicting the price of oil beyond the short term is usually delusional, given the sheer number of economic and geopolitical factors which have an impact.

How far will US tight oil production surge? Will OPEC decide to curb production with a view to maintaining prices at levels that support members’ already strained budgets? What will be the outcome of the US/PRC “trade war”? How accurate are forecasts of the demand side of the equation?

Amongst all this, there is that perennial favourite, Middle Eastern geopolitics. Venezuela’s production travails are seemingly yesterday’s news!

It can make one’s head spin to try to assess the various scenarios for mayhem that could unfold- especially in a “post-Abqaiq” world in which a few missiles and drones can take out more than 5% of global supply in an instant. In that instance, while there was a temporary move up in price, there was no sustained uptick, which raises the question of how bad things would have to be before there was a material, sustained change in expectations and so in prices.

While the world remains beset by far too many bilateral tensions (India/Pakistan being a good example of nuclear war as a tail risk), it is the Middle East which is the source of most state and state-supported conflicts, many of which, because of their seemingly interminable and intractable nature, elicit a jaded and potentially dangerous “so what?” response. What is interesting is that actions which would once have been considered a casus belli are now seemingly regarded as “background noise”: Israel attacks Iranian positions in Syria; Turkey invades northern Syria; Saudi Arabia is “certain” that Iran was behind the attack on its Abqaiq facility.

So one has to wonder what it would take for a real “shooting war” to break out that would actually get the oil market’s attention. After all, the conflict in Yemen seems only to elicit ennui.

Iran and Iraq are both much less stable than they may appear (which is saying something!). The governing elites in each, if they felt sufficiently threatened, could easily decide that a “patriotic” war (ruinous as it might be) was “necessary” to maintain power and disguise their increasing fragility. After all, there is a precedent in their war of 1980-88. Of course, this is just one scenario. Israel or Iran could each be seen by the other to have “gone too far” with a particular action, with either side goaded into a war of choice, simply because a failure to respond could be seen as an existential weakness.

These are the topics which “think tanks” and “pundits” enjoy speculating about- but without “skin in the game” it is mere idle chatter.

At Awbury, such things matter. The price of oil has both a macro and an idiosyncratic impact on portfolio risks, so, while we would not claim that we somehow have a special “edge” in forecasting, we most certainly do avoid “wishful thinking” and constantly update our knowledge and understanding of the key factors which do or could have an impact on the supply and price of oil. Regarding it as “background noise” would be foolish.

The Awbury Team


Dead before you know it…

Readers will be familiar with the concept of the post mortem, in both the literal bodily and the figurative business sense.

They may also be familiar with the somewhat modish “pre mortem”, now often used in a wide range of contexts to try to identify potentially unforeseen risks and flaws in a plan before a major, and potentially irreversible, decision is made. The foundational text which brought the technique to a wider business audience was Gary Klein’s 2007 HBR article, building upon earlier research on the concept of “prospective hindsight” and his book The Power of Intuition.

Some 12 years later, Mr. Klein (together with two co-authors from Columbia University, Paul Sonkin and Paul Johnson) has published a draft paper (Rendering a Powerful Tool Flaccid: The Misuse of Premortems on Wall Street), which describes an example of how not to conduct one, followed by a “how to” guide covering the right way to do so.

As with many concepts which become a ‘term of art’, intellectual sloppiness and shallow thinking can corrupt what should be a useful tool, rendering it dangerous in the hands of those who have not properly studied and assimilated the proper approach and the reasons for its effectiveness.

In the world of (re)insurance, as in many other industries, we suspect the use of the technique is fairly prevalent, particularly when it comes to reviews of potential M&A transactions.

So, it may be helpful (as set out in the recent paper) to re-visit the factors that make a pre mortem a valuable tool, as opposed to the potential precursor to a damaging mistake.

Firstly, problems and issues are reframed, because the premise is that the plan has already failed, and the purpose is to explain why- forcing the participants in the process to identify the causes.

Secondly, one cannot have a pre mortem team of one! One needs to assemble as diverse a team as possible, not just in terms of background, but also in terms of experience and expertise. Youth and inexperience may well not be a hindrance in this case, as questions should be raised which others may have dismissed as irrelevant based upon their greater “experience”. Cognitive diversity is essential.

Thirdly, because every team member is supposed to participate (no exceptions!), it is absolutely critical that each member feels safe doing so, without being overwhelmed by fear of mockery or retribution because those perceived to be in a position of power may not like what they hear. Therefore, the usual approach to mitigating this is for the leader or most senior member of the group (perhaps the CEO) to go first- in effect explaining why what may well be his or her “pet” project has failed. It should be a salutary lesson in intellectual humility.

Fourthly, each member of the team has to be treated equally, without incorporating bias or hierarchy in terms of their opportunity to express the concerns they have. Applying a weighting ab initio negates the purpose of the exercise.

And, finally, a pre mortem is not intended to be a leisurely exercise or symposium. There has to be urgency and pace, to avoid the danger of “over-thinking” or “discussion unto death”.

If done properly, the pre mortem is valuable. However, the title of the paper hints at, and mocks, the fact that overuse and abuse of the technique have led to it becoming more of a box-ticking exercise than an attempt to produce an effective, executable decision, or even to demonstrate the folly of proceeding with a particular course of action. One wonders whether those underwriting the IPO of WeWork made use of pre mortems. After all, what could possibly go wrong…?

As readers of this blog will know, at Awbury we are constantly exploring a diverse range of decision-making and risk-assessment tools. The pre mortem is part of our armoury- one approach to avoiding over-reach, ruin, or the destruction of value.

The Awbury Team


Loss Creep or Mission Creep…?

Until recently, the term “loss creep” was not one much heard publicly in (re)insurance circles. Reserve releases were generally the order of the day, and useful for primping Combined Ratios for public consumption.

Now, however, the phrase has become almost a trope, as increasing estimates for the overall claims cost of a greater number of major CATs (think of 2018’s Typhoon Jebi as the current “poster child”) mean that (re)insurers not only have to worry about increasing their reserves, but also about the risk of blowing through their retro covers. And just think what specialized writers of retro CAT must be feeling!

Such events are a further sign that, in the commoditized world of CAT, the “old certainties” need re-thinking. Hitherto “conservative” assumptions are now revealed as no longer fit for purpose. All this means that, rather than being just as wrong as everyone else, underwriters (and their CAT-modelling colleagues) are going to need to re-think their assumptions about what a “1-in-x” year event might look like, in terms of both frequency and scale. Relying upon prior “industry standard” models or estimates could become rather damaging to (re)insurers’ crucial reserve management methodologies, and ultimately to solvency levels.

Of course, this is likely to lead to executive managements wondering where they can look to assuage the pain of regularly missed “target” or “normalized” Combined Ratios. And what exactly is an appropriate “attritional” CAT Loss ratio or “budget” anymore?

Another phrase that is also becoming more prevalent is “closing the gap” when speaking about the availability of insurance coverage for natural disasters, particularly in so-called emerging markets. Given the demonstrated difference between economic and insured losses depending upon jurisdiction, and the continuing shift in the rate of economic growth away from developed markets, it is not surprising that a CEO looking for premium flow would be attracted to the idea of expanding into new geographic markets and helping to close the gap in coverage. However, one wonders whether a sufficient level of skepticism and true conservatism will be employed in the process of deciding to expand coverage into new jurisdictions. One can imagine the temptation to argue by analogy with existing developed markets that the same assumptions and criteria can be used. Yet if, in developed markets, the existing models are being demonstrated to be no longer fit for purpose, what can make a (re)insurer’s Board comfortable that somehow the process will be easier or more accurate in a new market?

We are not saying that entering new markets is misguided; simply that current experience in supposedly well-known and hitherto understood developed markets should give pause for thought before blithely entering new ones, especially if, as is often the case, everyone thinks the same thing at the same time. It may sound absurd, but could “closing the gap” become the classic “crowded trade”, while the smart money re-engineers its processes and increases discipline in markets in which it has long experience?

At Awbury, we believe strongly in focusing on the area- credit/economic/financial risks- in which we have a demonstrable and defensible track record. We adapt as markets and risks change, but we know that there are realistic boundaries to the scale and probability of losses that may occur, and that careful structuring can significantly mitigate risk of loss. Contrast that with the CAT environment, in which the probability of full-limit losses is all too real, especially in a world beset by increasing loss creep.

The Awbury Team


The Dangers of Economic Dogma- Models and Financial Mayhem…

We recently came across an interesting paper by George Akerlof, a Nobel Prize-winning economist, in which he describes the unreality of the basic macro-economic models used by the profession in the half-century ending with the Great Financial Crisis (GFC).

In essence, Akerlof posits that the models used were misleading, because they failed to ascribe sufficient importance to the impact of the financial system on the wider economy. He also makes a point, often overlooked, that the choice of textbook used as the core for teaching a particular topic has far-reaching consequences, because it influences how students are taught and come to understand a subject, and thus how they apply their knowledge.

In the 1960s (before Friedman’s monetarism and economic neoliberalism took over the world), the basic model used (at least in most US universities, including Akerlof’s MIT) was the so-called Keynesian neoclassical synthesis, which was based upon the concept of finding equilibria between the various components of the underlying economic model, principally supply and demand. Unfortunately, in creating the “synthesis”, its acolytes decided that any changes to an equilibrium would be one step at a time and proportional. The models did not really address circumstances in which disorderly changes could occur- i.e., panic or crashes.

Somehow, they had forgotten, or overlooked, Keynes’ own “beauty contest” theory of market behaviour, under which individuals (and so corporations and banks) allocate their wealth and make financial decisions based not upon careful analysis of economic fundamentals, but rather on what they think others will see as the value of an asset- a version of the “greater fool” approach, which only works as long as the greater fool exists and behaves as expected!

As Akerlof explains, because the real world is a very complicated place, even for the DSGE (Dynamic Stochastic General Equilibrium) models now beloved of central banks, relying upon a model that essentially smooths out the impact of financial decisions is a recipe for macroeconomic mayhem, because it fails to account for the fact that systems and economies can appear very stable until, suddenly, they are not. In the case of banks, deregulation in the 1980s and 1990s removed both oversight and constraints, which, when coupled with malign incentives and dogma such as “housing prices cannot decline systemically”, created the conditions for the GFC, whose effects we are still living with today.

Strange as it may seem, the dominant economic models failed to include the impact of the financial system (as a system) on the wider economy. With a few honourable exceptions, the dismal science failed miserably in terms of its forecasting ability. Why? Because a model, which actually ignored a key tenet of its supposed creator (Keynes), became the basis for teaching a generation of economists- and questioning it was risky at the individual level.

One may ask what such a tale has to do with (re)insurance. Simply that there are dominant models and “orthodoxies” in the industry (as in many others) that are used to guide decisions, often without question. In reality, as we aim to do at Awbury, every time one uses a model one should ask oneself not only whether it is appropriate for the decision that will be based upon it, but also whether there are any characteristics which place a boundary on the circumstances in which it will remain useful. In other words, is it truly fit for purpose?

As we all know, the real damage comes from the extreme left of the distribution (which, ironically, is another convention!); but, first, the distribution has to be grounded in some form of reality!

The Awbury Team


Dissonant Models and Distracting Measures?

At Awbury, we try to avoid being caught out by “framing” issues, in which the use of or adherence to particular cognitive pathways can lead to “blind spots” when analyzing or assessing risks.

It is axiomatic that measuring credit risk is about trying to identify the most important risks for a particular obligor, portfolio or scenario, and then assigning probabilities to one or more of them causing distress or default.
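
One standard formalization of that idea (a textbook convention, not Awbury's proprietary method) is expected loss as the product of probability of default, loss given default and exposure at default:

```python
# Expected loss = PD x LGD x EAD: probability of default times loss given
# default times exposure at default. All portfolio figures below are
# hypothetical and purely illustrative.
def expected_loss(pd: float, lgd: float, ead: float) -> float:
    return pd * lgd * ead

portfolio = [
    (0.02, 0.45, 10_000_000),   # obligor A: 2% PD, 45% LGD, USD 10MM exposure
    (0.005, 0.60, 25_000_000),  # obligor B: 0.5% PD, 60% LGD, USD 25MM exposure
]
total_el = sum(expected_loss(pd, lgd, ead) for pd, lgd, ead in portfolio)
print(f"Portfolio expected loss: USD {total_el:,.0f}")  # USD 165,000
```

The difficulty the following paragraphs describe lies, of course, in the inputs: the arithmetic is trivial, but the PDs themselves must be estimated from sparse history.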

Problems arise when there is a lack of data on relevant past events, coupled with types of risk that are infrequent, such as systemic financial crises. We know they occur; but because they are infrequent, predicting them and their outcomes can be a futile exercise.

A good example of this is the point made by the research foundation, Vox, in a paper entitled “The dissonance of the short and long term”- that an OECD member country suffers a crisis every 43 years on average; while true global financial crises are even less frequent- consider the time period between the Great Depression and the Great Recession. So, if modern financial markets are not even 200 years old, the sample size available for predictive purposes is very small.
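
The sample-size point can be made concrete. Assuming, purely for illustration, that crises arrive as a Poisson process with the 43-year average interval cited above, the chance of observing none at all over a long horizon remains surprisingly high:

```python
import math

AVG_INTERVAL_YEARS = 43  # average interval between crises, per the Vox paper

def prob_no_crisis(window_years: float) -> float:
    """P(zero crises observed in the window), under a Poisson arrival assumption."""
    return math.exp(-window_years / AVG_INTERVAL_YEARS)

for window in (10, 30, 100):
    print(f"{window:>3}-year window: P(no crisis) = {prob_no_crisis(window):.2f}")
```

On these assumptions, there is roughly a 50% chance that an analyst’s entire 30-year career passes without a single domestic crisis to observe- which is precisely why the available sample is so thin.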

Of course, models such as Monte Carlo simulations are supposed to be able to tease out the extremes of possible distributions. However, they are only models, and not representations of the real world. As actual experience during the Great Financial Crisis amply demonstrated, events that (according to the then-existing models) are not supposed to be able to happen during the known life of the Universe nevertheless do, because the predictive models which told one that was impossible were deeply flawed.

Another problem with risk measurement is that if people believe that they can track, measure and model a particular risk factor, they may tend to focus on it, because it can be measured; and so fall prey to being caught within a frame of reference. As a result, by focusing on short term, measurable factors, they overlook or ignore the more important and potentially threatening longer term ones. For example, economists are constantly seeking to measure the various factors which they believe are harbingers of recession (or future trends in interest rates), yet it is a truism that they generally would be better tossing a coin, because recessions (or movements in interest rates) are the result of the interplay of multiple complex factors, many of which are not (at least yet) truly measurable.

As Goodhart’s Law states: when a measure becomes a target, it is subsequently no longer a good measure. For example, if people anticipate the effect of a policy, and their actions therefore alter the policy’s outcome, the target was a bad measure. In other words, measuring isolated factors is a distraction from careful and thoughtful analysis. The “beauty” of the models is too alluring.

Using short term measurements to drive complex decisions with long term outcomes is simply foolish.

From Awbury’s point of view, while we, of course, use multiple types of models as part of our risk analysis and management process, we aim to avoid becoming seduced by their apparent certainty; always overlaying their outputs with an element of robust “but what if we’re wrong?” and “what might we have missed?” thought experiments- our “testing a thesis to destruction” approach.

The Awbury Team


Failure is an option…

As regular Readers of our blog will know, the Awbury Team is inherently paranoid (in the Andy Grove “Only the Paranoid Survive” sense). We are also regular readers of the transcripts from the Farnam Street blog’s excellent podcast series, which we can recommend as a window into the thinking of a diverse range of first-class minds.

So we read with particular interest a section in Shane Parrish’s recent conversation with Jim Collins (“Built to Last” and “Good to Great”, to name but two of his published works) which dealt with how supposedly great businesses or institutions fail, often quite surprisingly in the eyes of the outside world.

In examining what causes such decline, Collins posited 5 stages, the first 3 of which are often hidden from outsiders, explaining why failures can often be unanticipated or unexpected.

Of course, one can reasonably ask how long in practical terms the potential lifespan is of any business model, but the point that Collins makes is that what become catastrophic or ultimately terminal failures usually have internal causes; and that failure should not be seen as inevitable for those who are aware of the risks.

The first stage harks back to the structures of classical Greek tragedy- when a character becomes so successful or powerful that this leads to arrogance, hubris in tragic terms. In the case of a company, its management comes to believe that it is somehow better than anyone else.

Interestingly, in stage two, Collins points out that, while one might think that this arrogance can lead to complacency, the real danger is overreach. Not satisfied with its level of achievement and market position, a company’s management aggressively seeks yet more dominance and growth, or believes it can translate its “success” into other areas. In essence, stage two behaviour amounts to a lack of discipline- say in the form of an ill-conceived but superficially attractive “transformative” acquisition. Clearly, there is a fine line here. There could be further apparent success, or stage two can imperceptibly shade into stage three, when hitherto unseen strains or imperceptible risks begin to surface, but management dismisses or chooses to ignore them- because they are so “successful” that the fault must lie elsewhere.

Hubris (having passed through “ate”, or folly) now leads to the potential for nemesis (inescapable doom). The problems become visible externally, and there is some sort of significant failure or mis-step, which cannot be hidden or suppressed. Even now, there is still the chance of redemption and recovery if management is able to discern and understand the causes, and responds in a reasoned and disciplined way. However, in many cases, it does the opposite. The team panics and acts incoherently without thinking through consequences, or somehow hopes for rescue from an external source.

If that happens, with resources and capital exhausted, and leadership absent or remaining in denial, the business slides into oblivion or irrelevance, leaving the way open for the cycle to start again elsewhere.

At Awbury, while we certainly aim to be the “best in class” at what we do, we have no intention of succumbing to hubris, as we continue methodically and patiently to build and extend our franchise. To behave otherwise would be folly!

The Awbury Team


Hi Ho, Hi Ho, it’s off to (We)Work We Go…

While the endgame for WeWork, following the debacle of its recent failed and withdrawn IPO, is still unfolding, and a lot of ex post facto schadenfreude has been exhibited, it is worth pointing to certain aspects of what has happened that demonstrate that reality eventually intrudes upon suspension of disbelief.

We have written before of how the need for a business to be profitable prior to a public listing seems to have become a rather quaint notion. The WeWork saga demonstrates that in spades.

However, there is more to it than that.

Consider the widely used financial metric of “EBITDA”, often as a proxy for cash available for debt service and capex. As anyone who has read a syndicated bank loan document knows, the definition of and adjustments to “EBITDA” show that its natural meaning can be tortured within an inch of its life. In addition, “Adjusted EBITDA” is a favourite of companies in corporate presentations to demonstrate that their prospects are somewhat better than the numbers produced by statutory accounting may suggest. Yet, WeWork, with its now much-mocked concept of “Community-Adjusted EBITDA”, took the distortion of reality to a new level. Nevertheless, that appears not to have prevented sundry investment and commercial banks, who should have known better, from reportedly promising the earth in terms of what WeWork should be valued at upon a public listing. USD 47BN was “conservative”. As an aside, according to the Financial Times, the SEC has recently issued fresh “guidance” to company CFOs on the topic of EBITDA, while the IASB is considering standardizing the definition of what operating profit is, presumably in an attempt to prevent what is a useful concept becoming discredited.
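
A stylized illustration (the figures are invented, not WeWork’s actual accounts) of how successive add-backs can turn a statutory loss into a positive “adjusted” number:

```python
# All figures hypothetical, in USD millions.
revenue = 1_800
operating_expenses = -2_400          # rent, staff, marketing, etc.
depreciation_amortization = -300
interest_expense = -100

net_loss = revenue + operating_expenses + depreciation_amortization + interest_expense
# EBITDA: add back depreciation & amortization and interest (tax omitted here).
ebitda = net_loss - depreciation_amortization - interest_expense
# "Adjusted" EBITDA: add back whatever management deems "one-off" or "non-core".
adjusted_ebitda = ebitda + 450 + 350  # e.g. "one-time" costs, stock compensation

print(net_loss, ebitda, adjusted_ebitda)  # -1000 -600 200
```

A USD 1BN statutory loss becomes a positive “adjusted” figure without a dollar of cash changing hands- which is exactly why lenders scrutinize EBITDA definitions so carefully.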

Secondly, robust governance matters. The S-1 which WeWork issued ahead of its proposed IPO disclosed a catalogue of circumstances in which there were clear conflicts of interest between the company and its CEO/Co-Founder, but which WeWork’s Board had seemingly chosen to overlook, or even approved. Of course, in any organization, particularly one growing so fast and with a dominant and controlling founder, there is always the potential for the agency issue and misaligned incentives to lead to outcomes that, when scrutinized, do not pass a reasonable test of propriety. In the case of WeWork, these seem to have become inextricably entangled with personal interests. Sadly, as the case of Theranos amply demonstrated, Boards often struggle to act as a check on a dominant CEO. WeWork is hardly alone on that score.

Thirdly, if a key investor pours in so much money that it removes any real incentive for the founders and managers of a start-up to exercise discipline in how they allocate capital and spend cash, it makes a mockery of the paradigm that a start-up should be “lean and hungry”. This is not because, at the other extreme, operating on “starvation rations” is somehow a virtue, but because a surfeit of capital, with no real controls on how it is spent, creates inflated expectations of the value supposedly being created, leading to “magical thinking”.

Fourthly, WeWork was treated and ostensibly valued as if it had somehow created a technology platform, when its core business model was, in fact, that of an entity that leased long and sub-let short. The mis-match between (un)predictable cash inflows and demonstrable lease obligations was breathtaking, even if an increasing proportion of its available space was leased to large corporations less likely to be vulnerable to economic cycles.

And, finally, if there is no clear trajectory to real “cash” profitability or generating any return on capital invested, how is it possible to create a valuation model that has any credibility? If a valuation is based upon the expected Net Present Value of future cashflows and/or dividends, and no-one can explain how that number will ever become a positive one, valuing “potential” crosses over into the realm of fantasy.
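
The point can be made with a minimal discounted-cashflow sketch (the cashflows and the 10% discount rate are purely illustrative):

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value of cashflows arriving at the end of years 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

# Projections that never turn cash-positive cannot support a positive
# valuation, however large the claimed "potential".
always_negative = [-500.0, -400.0, -350.0, -300.0, -300.0]
print(round(npv(0.10, always_negative), 1))  # a large negative number
```

Any positive headline valuation then rests entirely on a terminal value whose assumptions no-one can defend- which is where valuing “potential” crosses into fantasy.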

Of course, it is easy to mock. Clearly the spaces which WeWork created were attractive and showed the potential for improving the concept of an office or workspace. What was, and remains, troubling is the fact that somehow its business model was treated as if it were revolutionary, when it was nothing of the sort; and that a wide range of parties became invested in maintaining that fiction, because the alternative probably became too awful to contemplate.

At Awbury, we take the view that, while clearly there are and will be paradigm-shifting businesses created by those who have a vision of the new (long experience evidences that), the constraints which surround such entities remain the same- a path to profitability and positive cashflow, good governance, management accountability and robust accounting, to name a few. One can create a transformative business, but breaking free from reality is a lot harder.

The Awbury Team