No, we are not setting a question for a philosophy exam, of the sort designed to torture the mind of the poor examinee and make him or her wonder why on earth studying the subject seemed “a good idea at the time”!
Rather we are alluding to two existential, and related, questions, which we believe will become of increasing importance to all of us. So, Dear Reader, bear with us.
The concept of a “stranded asset” is one that is by now well understood in the realm of business and risk management; consider, for example, German power utilities operating nuclear power plants. What were perfectly good and productive assets were rendered essentially worthless (or worse) by political fiat.
Similarly, “virtual reality” is becoming an increasing element of many people’s lives, as technology becomes able to provide individuals with sensory experiences that appear “real”, but are not.
What if, however, we human beings risked becoming “stranded assets” ourselves; or if you could create a “virtual life” that had no physical existence, but appeared to have an impact on society, business and government? Such concepts may appear to be from the realms of science fiction; but they are not.
The topic of Artificial Intelligence (AI) is one that is receiving greater attention, not just in terms of the impact of “Big Data” and the potential obsolescence of certain occupations, but also in terms of the tail risk of AI actually achieving true consciousness. Not surprisingly, visions of The Terminator are being conjured up (“2001: A Space Odyssey” being so “yesteryear”), as well as ones of a networked consciousness, able to replicate and enhance itself in environments in which human beings cannot survive. Of course, no-one really knows whether, when or how such an outcome might occur; and many dismiss it as science fantasy, the product of paranoid minds. Nevertheless, the rate of development of neural networks, expert systems and learning algorithms should not be ignored; nor should our increasing dependency upon them. One morning, we may wake up and find that we have been superseded.
However, the question of creating a “virtual life” (in the sense of an identity that could be used to fool government bureaucracies and financial institutions in ways that make “identity theft” seem second-rate) is one that should be of more immediate concern. So much of modern life is conducted without physical interaction that the potential for deception is, paradoxically, very real. At the latest DefCon hacker-fest, a security researcher named Chris Rock demonstrated that he could create what amounted to a “virtual life”; one that could be made to appear to develop, borrow, trade stocks and then “die” in order to collect life insurance. Given the concurrent ability to create digital avatars, is it really that far-fetched to imagine a “recluse”, seemingly averse to human interaction, being created and manipulated? It gives a whole new meaning to KYC, or Know Your Customer!
After all, DNA is simply a form of code.
So, why should we at Awbury care about such matters? For a very simple reason: in our world of trying to identify, assess and manage complex risks, it is the ones that can seem “implausible” or “improbable” that often lead to disaster and oblivion; and we have a distinct aversion to the prospect of becoming “stranded” ourselves, while believing strongly in the continuing value of real, face-to-face human contact! Being able to discern and connect seemingly disparate or uncorrelated risks can make all the difference between profit and failure.
So, are you sure you are real…? [Which is, of course, the age-old question that does torture the apprentice philosopher!]
The Awbury Team