Without attempting to separate hype from genuine opportunity, the reality is that for many industries the algorithms and technologies emerging from this AI-inspired shift will be game changing, giving rise to completely new business processes. For other industries, particularly those that have traditionally been heavily driven by information, the adoption of AI and machine learning will likely be more of an evolutionary extension of existing business processes.

Investment decision making

The investment management industry falls squarely into this information-driven category. Without entering into an Efficient Markets Hypothesis (EMH) debate, it is fairly well accepted that asset markets are at least efficient enough that substantial amounts of information (data), coupled with skilful interpretation of that data, are a prerequisite to beating them.

The data used and the interpretation layer employed will differ by asset class and by the style of the investment process, but it is the combination of these, coupled with portfolio construction, risk management and execution skill, that ultimately differentiates investment managers.

Technology, in many forms, has always been a dominant force in investment management, driving constant evolution of investment strategies. Even the most traditional of investors have adapted their approach to accommodate more data and more tools, and have even adjusted how they interpret data based on experience, judgement and evidence, albeit sometimes reluctantly. Rationally though, this adaptation tends to be in ways that remain consistent with the investor’s prevailing underlying beliefs. The same is true for quantitative investors who look to leverage the potential of AI and Machine Learning.

The bold potential of AI and Machine Learning in investment strategies lies both in allowing much larger volumes and types of data to inform investment decisions and in potentially improving how that data is interpreted into those decisions. Much debate is taking place on both fronts. Early adopters are investing significantly, and entire product categories are forming to promote alternative data sources, tools and expertise in interpreting them using previously fringe Machine Learning techniques.

So it is no surprise that quantitative investors are also taking interest in this space. More data and new techniques to help uncover and validate perceived relationships between data and subsequent asset returns is what gets quants out of bed each day. That and coffee.

There is a fair degree of similarity between traditional quantitative investing approaches and purely AI/Machine Learning approaches, but there are also some fundamental differences that are important to understand.

Traditional Quantitative versus Machine Learning 

Traditional quantitative investing

A typical quantitative investment strategy is based on identifying “factors” (or measurable characteristics) of assets that are predictive of subsequent asset returns. An essential ingredient in this endeavour is that a factor typically first emerges from human intuition or experience about what drives asset returns. Subsequent analysis, or model building, is then focused on confirming that the evidence in the data is consistent with the intuition.

The intuition behind a factor may be fundamental (i.e. based on a company’s current or forecast financial position), behavioural (i.e. based on the observed behaviour of market participants) or structural (i.e. based on the dynamics of markets), but the case for why a factor helps to predict asset returns is grounded in a sound, pre-formed human insight. Think back to hypothesis testing in your high school statistics and science lessons: this approach amounts to proposing a hypothesis and using available data (evidence) to attempt to reject it as unsubstantiated. It is also similar to a typical scientific investigation process.
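To make the analogy concrete, here is a minimal sketch, illustrative only and not a description of any particular manager’s process, of treating a factor as a hypothesis to be tested. It assumes a hypothetical panel of monthly stock data with invented column names (month, book_to_price, fwd_return) and asks whether the average monthly rank correlation between the factor and next-month returns is reliably different from zero.

    # Hypothetical data: one row per stock per month, with a candidate factor
    # (book_to_price) and the following month's return (fwd_return).
    import pandas as pd
    from scipy import stats

    def monthly_information_coefficients(panel: pd.DataFrame) -> pd.Series:
        # Spearman rank correlation between the factor and forward return, per month.
        return panel.groupby("month").apply(
            lambda m: stats.spearmanr(m["book_to_price"], m["fwd_return"])[0]
        )

    def test_factor_hypothesis(panel: pd.DataFrame) -> None:
        ics = monthly_information_coefficients(panel)
        # One-sample t-test: can we reject the null hypothesis that the factor
        # has no relationship with subsequent returns?
        t_stat, p_value = stats.ttest_1samp(ics, popmean=0.0)
        print(f"mean IC = {ics.mean():.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")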

The typical analysis technique for a quantitative analyst is a backtest, in which a hypothetical strategy is devised that allocates between assets based on the proposed factor or set of factors. The strategy is tested back through time using as much available historical data as possible, and its historical outcome is observed. The robustness of the evidence gained from this process is assessed using techniques such as testing over long periods (spanning multiple business cycles), across different investment universes (countries, regions, and potentially other asset classes) and in different market or economic environments.
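As a rough illustration of what such a backtest can look like, the sketch below ranks stocks on a factor each month, holds the top quintile long and the bottom quintile short in equal weights, and records the next month’s spread return. It again assumes a hypothetical panel (columns month, factor, fwd_return) and ignores real-world details such as transaction costs and liquidity; the robustness checks described above would then be applied to the resulting return series.

    # Simplified long/short quintile backtest (hypothetical data, equal weights,
    # no transaction costs).
    import pandas as pd

    def quintile_backtest(panel: pd.DataFrame) -> pd.Series:
        def month_spread(m: pd.DataFrame) -> float:
            # Split the month's universe into five factor quintiles.
            q = pd.qcut(m["factor"].rank(method="first"), 5, labels=False)
            long_leg = m.loc[q == 4, "fwd_return"].mean()   # highest-factor quintile
            short_leg = m.loc[q == 0, "fwd_return"].mean()  # lowest-factor quintile
            return long_leg - short_leg
        # One long-minus-short return per month, back through history.
        return panel.groupby("month").apply(month_spread)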

Bloomberg reporter Dani Burger1 recently published an amusing but very pointed note suggesting a strategy of investing in companies with “cat” in their name. The backtests showed this strategy had been a real winner, based on the historical data. This was done to make the precise point that, without a solid rationale or intuition behind a chosen factor, it is not terribly hard to find strong, apparently predictive relationships through data mining. Investing in such strategies amounts to a leap of faith (or delusion).

Given this requirement for intuition and rationale, the data used and the factors that have tended to make their way into traditional quantitative investment strategies have had quite a direct relationship with the assets in question. It is far easier to get behind the intuition that a company’s future return is in some way linked to its reported financial accounts or the history of its share price than it is to draw conclusions from the unstructured and noisy information embedded in Twitter mentions of the company’s products. Importantly, that is not to suggest such a link doesn’t exist or isn’t significant.

AI/Machine Learning approach to investing

The underlying goal of a purely Machine Learning driven investment strategy is essentially no different to that of a traditional quantitative investment strategy: to find strong relationships between elements of data and future asset returns and exploit them. However, the approach to finding these relationships differs, in that the data elements used, often referred to in Machine Learning models as “features”, tend to be identified entirely from the data (evidence) and do not require upfront rationale or intuition from the analyst.

This purely data-led outcome is both the strength of Machine Learning techniques and their greatest challenge. Machine Learning models can derive unique insights from the data that may not have been considered by a quantitative analyst working with the same data. However, they can also identify relationships in the data that are spurious or temporary. The field of machine learning devotes much of its attention to techniques for dealing with such problems. This is also one of the reasons that the “Man + Machine” approach is the one more commonly being pursued by the investment industry to date.
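One common safeguard, sketched below purely for illustration, is to let the model find its own relationships but only trust those that hold up out of sample. The example assumes a hypothetical, time-ordered feature matrix X and forward-return target y, and uses a walk-forward split so that spurious or temporary relationships show up as poor performance on unseen data rather than as apparent skill.

    # Illustrative only: walk-forward validation of a data-driven model.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import TimeSeriesSplit

    def walk_forward_scores(X: np.ndarray, y: np.ndarray) -> list:
        scores = []
        for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
            # Fit only on earlier data, then score on the later period.
            model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
            scores.append(model.score(X[test_idx], y[test_idx]))
        # Consistently weak out-of-sample scores are the warning sign that an
        # in-sample relationship may be spurious or temporary.
        return scores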

Another key challenge, and the focus of much energy in the Machine Learning field for decades, is that with some Machine Learning techniques, such as Deep Neural Networks, the features that the network identifies as having meaningful explanatory power are not always easily interpreted by the human analysts who built the model: they are the ultimate “black box”.

A recently publicized example of this problem was Man Group’s2 decision to hold what appeared to be a highly profitable strategy in a testing phase for a prolonged period before finally allowing it to make live investment decisions. They were observing strong results from the strategy, but without the ability to interpret exactly what in the data the model was using to derive its decisions under the hood, they opted to use the passage of time as a check on the model’s robustness.

One very interesting aspect of machine learning techniques is their ability to work with vast amounts of unstructured data such as text, audio, video and images. In fact, we already see many incarnations of this in areas like speech recognition and image recognition in the software and online services we use today (e.g. Alexa, Siri, Facebook). The applications for this are far reaching, and the field is advancing at a rapid pace.


Traditional quantitative versus machine learning strategies

Evolution for quantitative investment strategies

The pursuit of new factors for quantitative investment strategies using Machine Learning presents some significant challenges, but it is a rational extension of traditional quantitative approaches. The desire to exploit new data sources and data types, and the need to work with them differently from traditional financial industry datasets, are forcing quants to build out their toolkits and hire specific new skillsets.

An early example is the analysis of sentiment in text, such as news articles, analyst notes, AGM presentations and analyst calls, which requires Machine Learning techniques, specifically Natural Language Processing. Sentiment analysis is now a fairly well trodden path for many quants, and while it carries its own unique challenges and success has been mixed, the rationale behind the notion that the sentiment conveyed in text or speech can have a link to the fundamentals of a company is sound.
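As a deliberately oversimplified illustration of the idea, the sketch below scores a piece of company text against small hand-made word lists. The word lists and scoring rule are invented for this example; real sentiment work typically relies on trained Natural Language Processing models rather than a fixed lexicon.

    # Toy lexicon-based sentiment score; the word lists are illustrative only.
    POSITIVE = {"beat", "growth", "upgrade", "strong", "record"}
    NEGATIVE = {"miss", "downgrade", "weak", "impairment", "lawsuit"}

    def sentiment_score(text: str) -> float:
        # Net sentiment in [-1, 1]: (positive hits - negative hits) / total hits.
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

    # Example: sentiment_score("record quarter as earnings beat estimates") returns 1.0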

The abundance and rapid growth of alternative datasets with direct or indirect relevance to financial asset returns is opening up a golden age not just for AI, but also for traditional quantitative approaches. New hypotheses will be able to be tested on more data and with more techniques, which will require quants to learn new ways of working with that data. The line between quant and AI/Machine Learning will blur, and perhaps it’s safe to assume the quants won’t be replaced by the robots. At least not just yet.

 

Ben Dunn

Head of Quantitative Solutions Group,

Eastspring Investments
