Intuitively, most of us understand that people with low self-control are more likely to indulge in riskier behaviours and may pose a greater threat not only to themselves but to others. Of course, in our messy and inconsistent world, some behaviours are considered less acceptable than others and so are subject to greater legal sanction or moral opprobrium: consider drugs vs. alcohol, or smoking vs. over-eating.
However, because of advances in neuroscience and the (re)insurance industry’s rising obsession with “cybersecurity”, the issue of an individual’s self-control is becoming rather more relevant.
No doubt many of our readers will have come across the Marshmallow Test, iconic in the field of psychology and conducted by Walter Mischel in the 1960s: could a pre-school child delay gratification for a short period (and receive two marshmallows as a reward), or not (and grab the single marshmallow nearby)? The test was regarded as one of self-control, or the ability to exercise will-power.
So, what, you may ask, have marshmallows to do with cybersecurity and insurance?
The world has moved on since the 1960s (and since the publication of Orwell’s “1984” in 1949): not only are there now batteries of psychometric tests for factors such as self-control, but neuroscientists are better able to map brain functions to behaviours.
In a recent study, said to be the first of its kind to be documented, a team led by Professor Qing Hu of Iowa State University screened volunteers and selected those at either the low or high end of a self-control spectrum. The participants were then given various scenarios describing system security breaches and had their brain activity monitored while they decided how to respond. This methodology helped overcome the probability that at least some low-self-control individuals would seek to mask their true intentions (something considered a feature of traditional criminology studies). According to the results, those with low self-control made decisions about major security-breach scenarios more quickly, as if they were not considering the consequences as deeply as their more self-controlled peers.
Naturally, this has led to suggestions that those in particularly sensitive positions should be subjected not only to the more traditional psychometric tests, but also to an “EEG test”. The problem, of course, is that such testing cannot provide definitive conclusions about likely behaviour, and may condemn perfectly acceptable candidates to be labelled “untrustworthy” and therefore barred from a particular role. This raises the question of where one can or should draw the line in trying to predict human behaviour. Those already concerned about intrusive, and perhaps hidden, surveillance will become yet more vocal about the need for boundaries; yet the research also poses a question for (re)insurers of cybersecurity risks: what nature and level of controls should an Insured have in place if it wishes to obtain cover, particularly for critical systems?
At Awbury, we do not provide such covers, but we are always interested in trying to understand human thought and decision-making processes, and how they may affect behaviours and risk.
Just pay no attention to the EEG machine in the corner of our offices. It is there merely to encourage appropriate, risk-aware behaviour…
– The Awbury Team