It ain’t necessarily so…

As underwriters, it is all too easy for our thought processes to become rather “hard-wired”, or linear, because, through the passage of time and the accumulation of experience and knowledge, we begin to assume that we “know the answers”, or that our predictive abilities are more accurate than they actually are.

While, quite clearly, there are parameters and boundaries around many outcomes, and, given a sufficiently large portfolio, one can take a stochastic approach (again, within a likely range), there is always the risk of a surprise or a significant outlier.

As human beings, we operate as prediction engines in literally everything we do. Our brains can only function through the veil of our sensory perceptions; nothing is direct or unmediated. Naturally, this creates issues that extend into all areas of our lives, including that of trying to make predictions and decisions as an underwriter, and there is no escaping that. After all, each of us “sees” a different reality, because our minds are physically isolated; we are not the Borg!

Of course, as human beings, we have developed very sophisticated verbal and symbolic communication systems, although our ability to misunderstand or miscommunicate with each other remains remarkable! After all, where would lawyers be if the meaning of a specific wording were always absolutely clear and unarguable?

So, underwriters (like everyone else) are always dealing with problems of ambiguity and uncertainty, and even when they are “100% sure”, they are often wrong.

Neuroscience is beginning to provide some tentative explanations of what is going on within our “wetware” (or brains). For a start, as mentioned above, we are prediction engines; it is how we function. In that sense, the brain is Bayesian, always updating what it believes it knows with further observations or inputs. Seen that way, when an underwriter assigns a probability to an outcome, he or she is simply performing explicitly what the brain does implicitly.
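
To make that explicit form concrete, here is a minimal, purely illustrative sketch in Python of such a Bayesian update, using a hypothetical Beta-Binomial model of the probability that a given account produces a loss year. The prior and the observations are invented for illustration, not drawn from any real portfolio.

```python
# A minimal, purely illustrative Beta-Binomial update; the numbers are hypothetical.

def bayes_update(prior_a: float, prior_b: float, losses: int, years: int):
    """Return the posterior Beta parameters after observing `losses` loss years out of `years`."""
    return prior_a + losses, prior_b + (years - losses)

# Prior belief: roughly a 10% chance of a loss year (Beta(2, 18) has mean 0.10).
a, b = 2.0, 18.0
print(f"Prior mean loss probability:     {a / (a + b):.3f}")

# Further observations: 3 loss years in the last 10 years.
a, b = bayes_update(a, b, losses=3, years=10)
print(f"Posterior mean loss probability: {a / (a + b):.3f}")
```

With these toy numbers, the estimate moves from 0.100 to roughly 0.167: the prior belief is revised, but not discarded, in the light of the new evidence.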

However, there is an interesting potential “wrinkle” to this: that, in processing and updating its “knowledge”, the brain may often “privilege” what it already knows over the additional information it subsequently receives; in other words, it assigns a greater weight to its existing knowledge. Quite clearly, this could cause conflict, or cognitive dissonance. It is easier to follow the well-worn paths than to try to create new ones.

The consequence of this would be that our prior expectations sculpt how data are processed and weighted when we form conclusions or take action. In essence, because our brains have a view on how they expect things to turn out, they can have an unrecognized bias when predicting future outcomes.
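
Continuing the hypothetical sketch above, one way to picture this “privileging” of the prior is an update in which the new evidence is discounted by a weight of our own choosing: the smaller the weight, the more the posterior stays anchored to what was already believed.

```python
# Hypothetical sketch: the same Beta-Binomial update, but with the new evidence
# discounted by a factor w, so that existing beliefs dominate when w is small.

def weighted_update(prior_a: float, prior_b: float, losses: int, years: int, w: float):
    """Update as before, but give the new observations only weight w (0 < w <= 1)."""
    return prior_a + w * losses, prior_b + w * (years - losses)

a, b = 2.0, 18.0           # same prior as above: mean 0.10
k, n = 3, 10               # same evidence: 3 loss years in 10

for w in (1.0, 0.5, 0.1):  # 1.0 = unbiased; smaller values privilege the prior
    pa, pb = weighted_update(a, b, k, n, w)
    print(f"evidence weight {w}: posterior mean {pa / (pa + pb):.3f}")
```

At full weight the posterior mean moves to about 0.167, as before; at a weight of 0.1 it barely shifts from the prior’s 0.10, which is, in effect, what an unrecognized bias towards existing knowledge looks like.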

One can see the risks of this, as amply demonstrated by the continuing consequences of the pandemic.

At Awbury, one of our institutional defence mechanisms against bias or complacency is always to ask ourselves: “But what if we are wrong? How extreme an outcome is feasible?”

We find that it definitely helps to be intellectually humble, rather than assuming that what we expect is what will happen.

The Awbury Team
