We are sure that our readers will have come across the famous Turing Test, named after one of the acknowledged founders of computing as we now understand it, Alan Turing, who introduced it in a famous paper from 1950, “Computing Machinery and Intelligence”. In very broad terms, a non-human intelligence (or Artificial Intelligence, AI) would pass the test if it could fool a human interlocutor into believing it was, in fact, human; or, as Diderot stated some centuries earlier in his Pensées: “If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation”. If you could not see the parrot, and its answers came via some intervening medium, how could you be sure it was not human?
So, let us pose further questions: How do you know, in a world of financial models and the use of “big data”, that an underwriting decision was actually made by a human being? And, if it were not, would you, or should you, care?
Some may think we are being flippant, or facetious. We are not. Consider how much has changed since Turing’s paper was published; and how algorithms and self-learning machine intelligence are not fantasy, but fact.
So, another question: why would you even need a human underwriter? What advantage do they provide? We are being completely serious when we state that in at least some insurance products, there is no longer any obvious benefit in having a sentient being make the underwriting decision; and that the days of the human underwriter are numbered, if not already gone. We shall leave it to our readers to fill in the blanks on which lines of business we might be referring to!
Of course, we all like to think that, if not individually indispensable, we at least have a sustainable economic value in the (re)insurance market as human agents, better able to carry out an underwriting role than an algorithm. But is that really so? Again, in an environment such as NatCAT, where margins are being eroded and human beings may be the most significant “cost element”, at what point does a model-driven, data-intensive underwriting process become the better alternative? Of course, at least for now, it is likely that a human being or two actually designed the model and arranged for the collection of the data; but is it really that difficult to imagine a scenario in which the entire process becomes self-governing, self-learning and “aware”?
We shall now attempt to calm those of our readers who are readying the Molotov Cocktails to burn down the data centres and incinerate the servers!
We are firmly of the opinion that it is naive and foolish to expect the world to continue as before, or to assume that underwriting any risk must by definition require a human agent. Massive datasets, pattern recognition and the law of large numbers mean that, in many cases, a human underwriter is no better, and may in fact be worse, than an AI. However (sighs of relief all round…), we would argue that there are still lines of business, with our E-CAT range being one, in which a human intellect, with appropriate education, knowledge and experience, supported by appropriate models, remains essential to the proper identification, analysis, execution and management of risk.
So that’s all right then!
– The Awbury Team