Artificial Intelligence: the Need for Counter-AI

The term Artificial Intelligence (AI) has become something of a cultural trope in the past few years, with arguments over its potential, its dangers, and the probability and likely timing of “the Singularity” (the point at which AI overtakes human intelligence and becomes General AI) growing ever more heated.

And now a report co-published by, amongst others, the Electronic Frontier Foundation (EFF), entitled “The Malicious Use of Artificial Intelligence”, sets out in some detail the ways in which those with malign intent could use and develop AI with the potential to cause great harm.

Of course, it is easy to express concern. However, the report makes cogent arguments that the risks of harm are increasing, because many AI capabilities, even now, are “dual-use”: not only can they be used for military as well as civilian purposes, but they also have both offensive and defensive applications. Compounding the problem, AI is inherently efficient, rapidly scalable and easy to diffuse, in that sense resembling existing software. Moreover, various AI systems already possess capabilities beyond those of even the most capable human practitioner, let alone the average one, whether in terms of speed, accuracy or skill. So far, these systems have been constructed to be benign (or, at least, those that have been disclosed are), but it is not hard to envisage AI systems being developed that are deliberately “weaponized” to attack digital, physical or political security.

Such issues should be of considerable concern to the (re)insurance industry, in which cyber-risk is something of an obsession as a source of premium, and in which AI promises more efficient processes, yet the risks posed by AI appear to have been given insufficient weight as a distinct threat vector that could affect multiple lines of business.

After all, it will be cold comfort if a “viral” video which purports to have been issued by an influential public figure, and which leads to, say, civil strife, property damage or even terrorism, turns out to be a very skillful fake produced by an AI algorithm whose sponsorship and attribution are unclear. Such occurrences are becoming ever more probable, as systems which use machine learning to create facsimiles almost impossible to distinguish from the “real thing” become ever more visible. It is the Turing Test gone rogue!

Additional areas in which malign AI is likely to proliferate include precision “spear-phishing”, in which individuals are targeted for sensitive information; impersonation through voice (and, eventually, appearance); and the creation of so-called “adversarial examples”, in which the inputs to AI systems (such as those guiding autonomous vehicles) are subtly manipulated to cause damage or mayhem (e.g., car crashes). In fact, the possibilities are probably limited only by the capabilities of the AI systems deployed.
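
To make the “adversarial example” threat concrete: the following is a minimal, illustrative sketch (our own, not drawn from the report) of the widely-known Fast Gradient Sign Method, assuming PyTorch and a hypothetical pre-trained image classifier. Every name in it is a placeholder, and real attacks are considerably more sophisticated.

```python
# Illustrative sketch only: the Fast Gradient Sign Method (FGSM),
# which nudges an input image just enough to fool a classifier.
# `model`, `image` and `label` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed so as to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a small distance in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixel values in a valid range
```

The disquieting point is how little is required: a perturbation imperceptible to a human can be sufficient to change a model’s output entirely.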

In the face of such risks, we believe that, in the same way that counter-intelligence is a recognized discipline in the area of defence, the business world (including (re)insurance) needs to create an approach that we would term “Counter-AI”, which consists of rather more than the fashionable White Hats and Red Teams. It will require tools that can detect the onset of an AI-directed attack in real time, assess the nature and scale of the threat, and develop and deploy counter-measures, also in real time.
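
By way of illustration of what the detection component of “Counter-AI” might look like in its very simplest form, here is a hedged sketch using an Isolation Forest (scikit-learn) to flag anomalous activity against a learned baseline. The features, thresholds and data are our own assumptions for demonstration, not a production design.

```python
# Illustrative sketch only: flagging traffic that *might* indicate a
# machine-driven attack, by learning a baseline and scoring new windows.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed features per time window: [requests per minute, error rate].
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[100, 0.5], scale=[10, 0.1], size=(1000, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def assess(window: np.ndarray) -> bool:
    """Return True if the latest traffic window looks anomalous."""
    return detector.predict(window.reshape(1, -1))[0] == -1

print(assess(np.array([100.0, 0.5])))  # typical traffic -> False
print(assess(np.array([900.0, 0.9])))  # sudden spike    -> True
```

Real Counter-AI tooling would, of course, need far richer features and the capacity to respond as well as to detect; the sketch simply shows that the first step (establishing and monitoring a baseline) is tractable.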

And, in the same way that (re)insurers currently audit and advise clients upon physical and cyber-security, they are going to have to extend that capability (out of self-interest, if nothing else) in order to try to control future loss experience, because one can envisage scenarios in which the use of malign AI triggers a cascading series of events that literally overwhelms a (re)insurer’s capacity to meet its obligations.

Naturally, at Awbury, given our focus on credit, financial and economic risks, we are constantly looking for evidence that the parameters of what were once “stable” assumptions are shifting, to make sure that the risks that we cover retain an appropriate risk/reward ratio.

The Awbury Team
