The uncanny valley… A future too perfect…?

There is a term used in robotics and now in Artificial Intelligence (AI): the uncanny valley.

The phrase was coined in 1970 by a Japanese engineering professor, Masahiro Mori, to suggest that a person’s response to robots would shift from affinity and comfort to revulsion as robots became increasingly lifelike, passing through the “uncanny valley”, whose far side leads to a “normal” human being.

Now, of course, increases in computing power and regular advances in AI are creating a situation in which the digital equivalent is becoming real, whether written, verbal or visual. The Turing Test would certainly seem to have been superseded and passed when it comes to written interactions online; while the ability to manipulate images, both static and moving, coupled with the synthesis of voice capabilities and “normal” speech, can make it very difficult to distinguish whether a “person” is real (“authentic”), or a realistic digital human avatar.

If you could present yourself as an image of your “perfect self”, with the semblance of reality far outstripping “Photoshop”, what would you decide?

Avatars that are clearly caricatures, or represent specific tropes, are not seen as threatening; but what of a perfectly-sculpted face that might or might not be “real”, such that one is not sure who or what one is interacting with?

As so much business is now conducted remotely and digitally (including via the infamous Zoom/Teams/Webex meeting), especially in the worlds of finance and (re)insurance, how can one be sure whether that broker or underwriter one is dealing with is truly human, or just plausibly so? What or who is controlling and directing the process?

Ironically, of course, it is the real humans who are imperfect, in appearance, computation, communication, and speech.

Until fairly recently, all of this need for assessing “reality” would have been seen as implausible or unlikely: a science fiction fantasy.

However, over the past year or so, we have moved on from a world in which some (re)insurers touted their “algorithmic capabilities” to one in which generative AI systems can execute very sophisticated processes, especially so-called “multi-modal” ones (which can process and communicate by, say, both text and images), and which increasingly have also been given the ability to use tools (such as a version of ChatGPT-4 with access to a Wolfram Alpha plug-in, which can perform complex mathematical calculations).

And given that all such models rely upon vast amounts of data, as well as access to significant processing capacity (all of which is expensive, even as efficiencies improve), one can see the potential for a “winnowing”, in which only the largest and/or the most specialized businesses will be able to survive and thrive, including in (re)insurance: yet another example of the hollowing out of the “great middle”. One can already see that the larger and more complex one’s data archives are, the more valuable they become.

While it is far too early to make any definitive predictions as to how the impact of generative AI will pan out within the industry, trying to pretend that it does not matter, or is irrelevant, would simply be foolish.

The whole point of the industry is to assess and understand new risks or other factors as they arise. Whether that will include self-analysis of whether existing business models remain viable remains to be seen.

In the meantime, you may care to ask yourself whether that underwriter with whom you have been communicating is actually human, or merely plausibly so…

The Awbury Team
