In our previous post, we argued that, while financial models are an essential part of decision-making, they can often be surpassed in usefulness in unstable or chaotic markets (as a recent Bank of England paper points out) by relatively simple heuristics, or so-called “fast-and-frugal” decision trees (FFTs).
Such an approach can be particularly useful when there is concern about systemic failures and about which institutions are most likely to fail; or when, as in the case of capital ratios, financial models tend to smooth out the tail distributions, so that a risk might be assessed as a “1-in-1,000-year” event (i.e., at a 99.9% confidence level), whereas experience teaches that highly leveraged institutions, such as fractional-reserve banks, suffer stress and risk of failure with a much higher frequency. One has only to look back at the financial history of the past seven years, let alone the past century, to see that reality conflicts with theory!
The Bank of England’s researchers reviewed the performance of the relatively crude risk-weighting measures of the Basel I and Basel II Standard models and compared their robustness with the results of the Internal Ratings-Based (IRB) approach used by most of the supposedly sophisticated banks. They found that in most cases (particularly under Basel II) the Standard model was a much better basis for calculating the levels of capital needed to withstand actual default experience. Naturally, we at Awbury are shocked (shocked!) that banks would seek to minimize their capital needs. Of course, the advent of Basel III and the other regulatory measures currently being promulgated and imposed is intended to minimize the probability of future failures, or at least to ensure that they do not come at significant expense to the public purse.
One of the key conclusions of the researchers’ exercise was that complex models work best when information is plentiful and robust, and data-generating processes are generally stable. Intuitively, this seems rational, and it underscores the need for experience and the maintenance of longer “institutional memories” to counteract the blandishments of quants trying to fit outcomes into their models.
Another instructive analysis was to construct a possible FFT for assessing the vulnerability of banks to failure, to complement the complex regression analyses beloved of economists and analysts for such purposes. In essence, the researchers tried to determine which relatively simple metrics might be used to flag a higher potential for failure. Perhaps not surprisingly (and, again, intuitively), they settled on four metrics: one for “raw” leverage; one for risk-weighted capital; and two related to liquidity (the percentage of wholesale funding and the loan-to-deposit ratio). The choice of these metrics may explain why bank regulators are introducing a similar combination of key metrics under Basel III and its equivalents. No single approach is foolproof, but the research demonstrates that a fairly simple FFT is generally as robust as much more complex models; importantly, it does not need significant amounts of data to be effective, thereby assisting swift decision-making, while its relative simplicity makes it much more difficult to game.
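To make the idea concrete, an FFT of this kind can be sketched in a few lines of code: each cue is checked in a fixed order, and the first breached threshold exits the tree immediately. The sketch below uses the four metrics described above, but the threshold values are purely illustrative placeholders of our own, not figures from the Bank of England paper.

```python
def flag_vulnerable(leverage_ratio, risk_weighted_capital_ratio,
                    wholesale_funding_pct, loan_to_deposit_ratio):
    """Return True if a bank is flagged as potentially vulnerable.

    A fast-and-frugal tree asks one question per cue, in order, and
    the first breached threshold ends the search immediately.
    All threshold values here are hypothetical, for illustration only.
    """
    if leverage_ratio < 0.04:               # "raw" leverage: equity / total assets
        return True
    if risk_weighted_capital_ratio < 0.08:  # capital / risk-weighted assets
        return True
    if wholesale_funding_pct > 0.30:        # reliance on flighty wholesale funding
        return True
    if loan_to_deposit_ratio > 1.20:        # lending well beyond the deposit base
        return True
    return False                            # passed every cue: not flagged


# A bank with thin raw leverage exits at the very first cue,
# regardless of how the remaining metrics look.
print(flag_vulnerable(0.03, 0.10, 0.20, 0.90))  # True
print(flag_vulnerable(0.06, 0.12, 0.15, 0.85))  # False
```

Note how little data the tree needs: four ratios per bank, no estimation sample, no fitted coefficients; and because each cue is a hard threshold checked independently, there is no single model parameter to game.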
The lessons that we at Awbury take from this are that, in analyzing risk, one needs to strike a judicious balance between financial models and relatively simple heuristics; that a combination of the two produces a more robust outcome than either in isolation; and that experience matters, because only with experience comes the ability to recognize and understand the need for such judgement.
-The Awbury Team