The concept of an AI-based reinsurer with the name “Bayesian Re” is an intriguing one. Unfortunately, the name “DeepMind Re” is probably off-limits, although perhaps Alphabet/Google has the name registered and warehoused somewhere?
Of course, any self-respecting (re)insurance industry actuary or statistician should be familiar with the now canonical theorem posited in the eighteenth century by the Reverend Thomas Bayes, a Presbyterian minister, and developed by his friend Richard Price, who published it posthumously in 1763. For years the theorem languished in obscurity; but after being "rediscovered" independently by the famed French mathematician Pierre-Simon Laplace, who published his own version in 1774, it began to receive wider attention, and today it is one of those "terms of art" with which one is expected to have at least a passing acquaintance.
In Laplace’s words, “the probability of a cause (given an event) is proportional to the probability of the event (given its cause)”. In plain English, and to use Bayes’ approach: Initial Belief + New Data -> Improved Belief. New information leads to new or modified conclusions.
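That "Initial Belief + New Data -> Improved Belief" loop can be made concrete with a minimal sketch in Python. The numbers below are entirely made up for illustration: a hypothetical underwriter believes 10% of a class of risks are "bad", and a screening signal fires for 80% of bad risks but also for 20% of good ones.

```python
from fractions import Fraction

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) from a prior P(H) and the
    likelihoods P(E | H) and P(E | not H), via Bayes' theorem."""
    # Total probability of observing the evidence at all
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative, made-up underwriting numbers (not from any real book):
prior = Fraction(1, 10)                     # initial belief: 10% "bad"
posterior = bayes_update(prior,
                         Fraction(8, 10),   # signal fires for 80% of bad risks
                         Fraction(2, 10))   # ...and 20% of good risks
print(posterior)  # 4/13, roughly 0.31
```

Seeing the signal moves the belief from 10% to about 31%: new information leads to a modified conclusion, exactly as the theorem prescribes.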
Like many things, it seems obvious in hindsight. As Keynes is supposed to have said: “When the facts change, I change my mind. What do you do, sir?”
Yet we know from experience that human beings find changing their minds, even when they should, remarkably difficult. We fall prey to numerous cognitive biases, become over-committed to already-acquired worldviews or probability assessments, and are notoriously unwilling to change a belief or opinion even in the face of new evidence.
A reinsurer based upon AI and incorporating Bayesian reasoning would, in theory, be able to overcome the fallibility, inconsistency and contrariness of human judgement through an iterative learning process freed from emotion (the Lloyd’s follow-form start-up, Ki, is perhaps a modest precursor). However, for now at least, algorithms are largely created in our own image, and so carry embedded biases, even if these are not evident. This is less of an issue in areas where a statistical basis already exists and the aim is to speed up and systematize decisions: motor insurance is probably the best current example. However, in other areas, such as evaluating and pricing cyber risk, are underwriters really going to be comfortable relying upon a system whose processes are likely to become externally unfathomable, even if the outcomes may, in fact, be better than those derived from purely human reasoning?
And when it comes to credit, economic and financial risks, is a specific AI yet capable of “thinking” in terms that reflect and take into account human behaviour, and could one be built that had broad application? That would seem a moot point for now at least, although one could envisage it being useful, as we are sure it already is, in categories such as consumer credit, where the volumes and timeframes of available historical data are material and can be used to create and train algorithms.
In reality, perhaps the best advantage that corporeal general intelligences (i.e., human beings!) have is being self-aware and having the ability to recognize that they are fallible and can never assume that the search for the perfect probability assessment is complete. In other words, we should always look at the world in Bayesian terms, and question the reliability and relevance of new evidence to update our judgement.
However, we do like the idea of Bayesian Re!
The Awbury Team