The wrong way to approach risk identification and assessment

The World Economic Forum’s (WEF) annual Davos global elite “love-in” is now behind us, and life goes on. As usual, just prior to its convening, the WEF published its “Global Risks Report”, a document we have written about before. The Report is intended to identify and publish a Top 10 list of risks in the categories of “Likelihood” and “Impact”, compiled by canvassing the views of those who form part of the self-same, self-selected elite who attend the WEF.

While the document is full of colour (and confusing diagrams), and is supposedly the product of much effort, its value in any real sense is becoming debatable. The FT’s Alphaville column used the telling phrase that the product was the result of “conference room homeopathy”: so diffuse as to have no demonstrable efficacy. And we doubt the “placebo effect” works on risks!

The Report’s content does demonstrate what its creators are most concerned about. However, as one reads through the lists, one notices that the wordings used are so vague and broad as to be practically meaningless, or laughably obvious: “Extreme Weather Events” (#1 in Likelihood), or “Weapons of Mass Destruction” (#1 in Impact). The “insights” are stunning in their banality.

We are quite sure that, individually, there are many deep-thinking and original minds within the group surveyed. Unfortunately, assessing risk by survey of a self-referential “elite” has completely obscured their existence, to such an extent that there is nothing controversial or thought-provoking in sight.

If one adds to that the “interconnections” maps, showing the supposed key links between various risk categories, one is then left with a presentation of information that has essentially lost any value. It conveys nothing other than visual noise: cognitive dissonance, not cognitive diversity.

Of course, it is easy to mock such earnest and well-meaning efforts, but one has to ask whether any policy-maker is going to have an epiphany as a result of reading the document, thus leading to a significant shift in behaviour or actions.

Turning to the real world (something whose existence seems to escape many Davos attendees): to have any value, risk assessments have to be specific, concrete and probabilistic in terms of timing and scale. The contrast between the approach of the WEF and that of, for example, Philip Tetlock’s Good Judgment Project is quite telling. While the latter also uses the “wisdom of crowds”, it asks very specific questions and seeks probabilistic answers, which can then be analyzed ex post facto to identify so-called “superforecasters” who have a demonstrable capability in assessing risk, even if only in relative terms.
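The point about probabilistic answers being analyzable ex post facto can be made concrete with the Brier score, a standard accuracy measure for probability forecasts (and the one associated with Tetlock’s work). A minimal sketch, with hypothetical forecasters and made-up numbers purely for illustration:

```python
def brier_score(forecasts):
    """Mean squared error between the forecast probability of 'yes'
    and the actual outcome (1 = happened, 0 = did not). Lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical forecasters answering the same three yes/no questions;
# each tuple is (stated probability, actual outcome).
hedger        = [(0.5, 1), (0.5, 0), (0.5, 1)]    # always says 50/50
calibrated    = [(0.9, 1), (0.2, 0), (0.8, 1)]    # confident and mostly right
overconfident = [(0.99, 0), (0.95, 1), (0.9, 0)]  # bold but often wrong

for name, f in [("hedger", hedger), ("calibrated", calibrated),
                ("overconfident", overconfident)]:
    print(f"{name}: {brier_score(f):.3f}")
```

Vague WEF-style statements cannot be scored this way at all: only a question specific enough to resolve yes-or-no by a given date lets one separate the calibrated forecaster from the perpetual hedger.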

As “ground-up” underwriters of very specific risks, the Awbury Team recognizes that it is much more effective to focus carefully on what actually matters in a particular set of circumstances than to worry about nebulous concepts that add nothing to the process of building a deep understanding of the risk being underwritten. Contexts, connections and correlations truly matter, but only to the extent that they are relevant to the matter at hand.

Reading the WEF Report itself is merely an exercise in witnessing “groupthink”, because an unexacting consensus is the goal: not reasoned dissent, a difference of perspective, or true originality.

The Awbury Team


Information is alpha, as long as you know what to do with it…

In a world that often seems to be drowning in data masquerading as information, how is “alpha” or an “edge” to be found?

There seem to be two main routes: either one has better information than one’s competitors; or, with the same information, one has a superior ability to identify and exploit the patterns within it, spotting both risks and opportunities.

It is obvious that there is an escalating “arms race” in acquiring “better” information, with the term “alternative data” now widely used in business and financial circles. For example, the use of data from commercial satellites is becoming increasingly common for hedge funds and others as a means to acquire “non-public” data. Yet, paradoxically, this simply leads towards a scenario in which all those who can afford it have it. The edge is increasingly blunted. This then leads to the search for the next “alternative” source, but raises the question of whether “the next big thing” has any real value.

So, what about superior pattern recognition? Human beings are, after all, programmed by evolution to look for patterns in what their senses perceive as a means to avoid the lion lurking in the underbrush. What began as a mechanism necessary for survival has become a dominant trait, with the ability to recognize patterns, for example, visually/spatially considered an essential component of intelligence.

In the world of credit and risk analysis, the ability to understand and forecast what may happen in respect of a particular obligor or scenario is essential. To a large extent, this involves the ability to discern patterns that one knows from experience and acquired knowledge are likely to lead to a particular outcome, good or bad- for example, over-leverage, or insufficient liquidity. However, it also involves being able to distinguish between patterns that are meaningful (a signal) and those which are merely distracting noise, as well as to recognize that there may be a new pattern or paradigm, because one can be lulled into a false sense of comfort by failing to question what one perceives or “knows”.

Naturally, the growth of AI has led to something of a frenzy in terms of interrogating data for patterns that no-one else has yet discovered. Within certain parameters, specialized AI (for that is all that exists at present), backed by ever-rising processing and computing power, has the potential to see things more quickly than, or differently from, human beings, no matter how experienced or skilled. One only has to look at the fact that AI systems can now overwhelm even the best human players of chess or Go (to mention only two examples) to understand that.

However, the world is a complex, non-linear place. This means that, for now at least, even if the existential risk from AI to the role of (re)insurance underwriters in high-volume, commoditized product lines looms ever nearer, human pattern-recognition should prosper for much longer in the more complex areas where understanding causation, correlation, constraints, and the nuances of game theory and human behaviour is critical.

While we at Awbury are eternally paranoid about mistaking noise for a signal, or about our thesis being simply wrong, we believe that there is hope for us yet, given our relentless focus on complex, non-standard risks!

The Awbury Team


The chaos beneath the surface…

“Civilization is hideously fragile… there’s not much between us and the horrors underneath, just about a coat of varnish”- CP Snow

Reading Seth Klarman’s (of Baupost fame and fortune) year-end letter set us thinking about how fragile seemingly stable environments can be.

Those of us who are fortunate enough to live in what are considered well-ordered and reasonably well-governed societies tend, rather smugly, to believe that “‘twill ever be thus”. This is a good example of recency bias and the availability heuristic in action: because things are so, and have been for a long time (in our terms), and because our experience has always been the same, we find it hard to believe that our world will not simply continue as before. In market terms, for example, there has been an almost 36-year bull market in US bonds. Entire careers have passed without the experience of a real bear market. Knowledge has been lost. What happens when a true reversal starts?

As the CP Snow quotation warns, there is often a fine line between order and chaos: systems or trends are stable, until they are not. Disruptive forces can evolve remarkably quickly, such that seemingly invincible and secure companies, long-standing markets or even governments find themselves at risk of degradation, dissolution or irrelevance. Who would have thought that December 2018 would bring the worst final month of the year for major equity indices since 1931, the depths of the Great Depression?

In this context, the quality, agility and effectiveness of analysis and decision-making become paramount.

Unfortunately, as Klarman pointed out in his letter, there are signs that US markets in particular are leveraged not just in monetary terms, but also in structure, algorithmic bias and investor psychology, such that the historic tendency to “herd” becomes potentially even more exaggerated in scope. Trading algorithms are still designed by human beings, and those human beings share the same experiences and biases; given that the majority of US stock trading (and probably, increasingly, that in many other markets) is initiated and conducted by algorithms, their sheer speed, once deployed, can overwhelm markets.

Turning to government (and no matter what one’s political affiliations may be), there are also worrying signs of a deterioration in the quality and rationality of policy- and decision-making in many jurisdictions. Of course, politicians acting irrationally and for partisan purposes is not exactly a new phenomenon, and by the standards of history, political discourse is actually quite restrained in most true democracies. However, in a complex world, where new media enable the dissemination of thoughts almost instantaneously, the risk of a statement or assertion causing disruption rises inexorably. By the time anyone actually stops to think, it is too late. The Latin tag “festina lente” (literally “hurry slowly”; more haste, less speed) is worth bearing in mind in this context.

And what of the world of (re)insurance? In a still consolidating industry, as market power becomes ever more concentrated within the traditional business models (and perhaps more volatile in the realm of alternative capital), there is a risk (always present in the industry) of doing what everyone else is doing, because “the market” cannot be wrong, when clearly it can. As we have written before, unbridled enthusiasm for certain types of risk (e.g., cyber) can lead to a deterioration in the quality of thought being applied to understanding, defining and managing the risks entailed.

We are in no sense saying that the “end is nigh”. However, we do think that, for example, (re)insurers should constantly re-assess and test the robustness and continuing fitness for purpose of their decision-making systems and processes to minimize the probability of tipping over into the abyss of fundamental error or misguided belief.

The Awbury Team