Since 2010 or thereabouts, the use of artificial intelligence (AI) by active managers has been increasing in a most absorbing manner. To borrow from the Roman historian Suetonius: “AI investing is not going away.”

At a 2017 conference it organised, J.P. Morgan asked 237 investors about big data and machine learning. The survey found that “70 per cent thought that the importance of these tools (of AI) will gradually grow for all investors, and a further 23 per cent said they expected a revolution, with rapid changes to the investment landscape”.

But this investor interest in AI also signals a certain frustration with current active managers, and specifically quant managers, as well as the nascent promise shown by AI hedge funds. Whatever the cause, AI investing presents consultants and asset owners with a serious challenge.

The investment industry takes as a universal truth that investment processes should be understandable. As part of the kabuki theatre that investment due diligence often is, asset owners and consultants require that a manager be able to explain all the strategies and models it uses. Many managers reveal just enough of their investment processes to provide allocators with a cairn from which to orient themselves and continue their assessment.

We are heading into a black future, full of black boxes

It should be recognised that a traditional manager’s degree of disclosure is a wilful act. The manager could reveal more, but often chooses not to because, it is claimed, doing so would put the process at risk. It is probably truer that sharing too much would reveal the paucity of the process itself. An AI managing company, however, can provide a general overview of its approach (“We use a recurrent neural network”) but no authentic narrative of its investment process, not because of wilful deflection but because, unlike a traditional manager, it has not hand-built its investment model. The model builds itself, and the manager cannot fully explain that model’s investment decisions.

Consider Deep Blue, a human-designed programme that used preselected technologies, such as decision trees, as tools to defeat world chess champion Garry Kasparov in 1997. Think of an AI manager instead as DeepMind’s AlphaGo, which used deep learning to beat some of the world’s best Go players. (Go is an ancient Chinese board game that is far more complex than chess, with more possible board configurations than there are atoms in the observable universe.) Without explicit human programming, AlphaGo created its own model, one that allowed it to make better decisions than its human opponents.

With enough time and training, it was possible to explain why Deep Blue made a certain chess move at a certain time. Although it is possible to observe how AlphaGo plays Go, it is not possible to explain why it made a specific move at a specific point in time.

Yoshua Bengio, a pioneer of deep-learning research, has described it this way: “As soon as you have a complicated enough machine, it becomes almost impossible to completely explain what it does.” This is why an AI managing organisation often cannot explain its investment processes. The requirement of interpretability brings the assessment of – and, by extension, the investment in – AI strategies to a screeching halt.

With AI investing, allocators face a new choice. Currently, in an act of complicity, they choose access over knowledge – accepting a manager’s wilfully limited disclosure of its narrative while naively believing that the narrative does exist and is known to the manager’s illuminati. The new choice facing all AI consumers is more fundamental. According to Aaron M. Bornstein, a researcher at the Princeton Neuroscience Institute, the choice is: “Would we want to know what will happen with high accuracy, or why something will happen, at the expense of accuracy?”

Requiring interpretability of investment strategies is a vestige of old-world assumptions and is entirely unsatisfactory for reasons that transcend investing. We either forswear certain types of knowledge (e.g. deep learning-generated medical diagnoses) or force such knowledge into conformity, thereby lessening its discovered truths (do we really want our smart cars to be less smart, or our investment strategies to be less powerful?). Moreover, this requirement smacks of hypocrisy: given what Erik Carleton of Textron calls “the often flimsy explanations” of traditional active managers, investors really don’t know how their money is invested. And, conversely, who would not suspend this criterion given the opportunity to invest in the Medallion Fund?

There is a serious need to invest beneficiaries’ assets better. AI investing can help, but its adoption compels us to judge AI strategies not by their degree of interpretability but by their results. As the scientist Selmer Bringsjord puts it: “We are heading into a black future, full of black boxes.” Embrace it. There is no alternative route. If we ask what is impeding the rise of AI investors, the answer may well lie in the old-school way in which we evaluate them.

Dr John Consiglio teaches in the Faculty of Economics, Management and Accountancy at the University of Malta.
