Telemetry & the Trader: Transforming Technology Performance
Just how well are your algorithms and trading platforms performing? Exactly how much value are you deriving from your trading technology? These questions naturally dominate the technology-savvy trading community as market participants and their technology providers jostle for business and seek new ways to distinguish themselves from the competition. Unfortunately, they are questions without easy answers.
Traditionally, performance has been measured at the macro level, when reviewing a portfolio. That assessment is an integral and critical component of any successful investment strategy. With the application of telemetry, we can break open black-box trading strategies and review their discrete decision logic like never before.
What is currently missing is the ability to distinguish between trading skill and technological capability. There has been no objective method of assessing whether a given algorithm actually achieves the desired outcome. In other words, there is no way of knowing whether underperformance is the fault of a bad workman blaming his tools or of genuinely bad tools. Nor has there been a way of understanding the real impact of a new or upgraded algorithm. This is where telemetry can fill the void and provide insight into the effectiveness of the technology employed by the trader.
The arrival of telemetry in the trading community is therefore something to be applauded. Very simply, telemetry is the process by which specific pieces of data are gathered automatically from specific points within a system, and then transmitted electronically for monitoring, measurement, and eventual storage. Already widely deployed in industries as diverse as energy and healthcare, telemetry has the potential to transform the way trading businesses view and assess performance.
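To make the definition concrete, here is a minimal sketch of that pattern in Python: events are captured at specific points in a system and handed to a background thread for transmission and storage, so the measured code path is barely disturbed. All names here (TelemetryEmitter, record, on_order_decision) are illustrative assumptions, not taken from any particular product.

```python
import json
import queue
import threading
import time

class TelemetryEmitter:
    """Gathers events at instrumented points and ships them asynchronously."""

    def __init__(self):
        self._events = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def record(self, point: str, **fields) -> None:
        # Called at an instrumented point; a cheap enqueue keeps the
        # hot path fast while the worker handles transmission.
        self._events.put({"point": point, "ts": time.time_ns(), **fields})

    def _drain(self) -> None:
        # In production this would transmit to a collector for monitoring,
        # measurement, and storage; here we simply serialize to stdout.
        while True:
            print(json.dumps(self._events.get()))

telemetry = TelemetryEmitter()

# A hypothetical instrumented decision point inside an algorithm:
def on_order_decision(order_id: str, decision: str, latency_us: float) -> None:
    telemetry.record("decision_logic", order_id=order_id,
                     decision=decision, latency_us=latency_us)

on_order_decision("ORD-1", "route_to_venue_A", 42.0)
time.sleep(0.1)  # give the background worker a moment to flush
```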
The ambiguity around measuring trading technology performance is undesirable for traders and technology developers alike. With no truly objective means of assessing the individual elements that constitute good trading performance, it is much harder to make the improvements needed to refine that performance. As the recent renewed interest in the emotional and psychological drivers behind trading decisions suggests, there is no real objectivity when human decision-making is involved. Instead, we have a situation where all participants are overly dependent on inferences drawn from observed behavior and outcomes, rather than on rigorously acquired, empirical evidence.
This lack of objectivity is not the only problem presented by common measurement techniques. Although questions of performance often come from business users, finding the answers is the responsibility of a limited number of technology experts. Given that the details of the algorithm and its decision logic are in the hands of only a few people, any investigation into product performance can be inefficient and its results untimely, owing to the volume of data that must be reviewed.
Finally, and perhaps most crucially, current methods of measurement are generally performed by third-party components or solutions. This is problematic for two reasons. First, third-party observers cannot distinguish between the multiple pieces of discrete functionality that reside within a single algorithm. They observe a message at the point it enters the system, and can show when that same message exits the system some time later.
However, they are unable to pinpoint what happens in between, and they cannot judge whether the observed latency is an acceptable payoff for other elements of performance. There is, of course, a very good reason for the limitations of third-party observation: the understandable and underlying fear of losing or compromising IP. The specifics of the algorithm are rightly proprietary to the firm using it, and opening it up to third-party inspection is never going to be an option. Second, the limited transparency afforded to third-party systems means they can make only generic observations: the speed and number of messages passing through the system, and the information present at the time an order was being processed. Understanding the role and impact of the decision logic behind those observations is anything but generic.
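The contrast can be sketched in a few lines. A third-party observer sees only the wire-in and wire-out timestamps below; internal telemetry can attribute the interval in between to discrete stages of the algorithm. The stage names are hypothetical, and this is an illustration of the idea rather than any specific system.

```python
import time

def handle_message(msg: dict) -> dict:
    """Stamp each internal stage so latency can be attributed, not just totaled."""
    stamps = {"wire_in": time.perf_counter_ns()}   # all a third party sees...

    # ... risk checks (hypothetical stage) ...
    stamps["risk_checked"] = time.perf_counter_ns()

    # ... decision logic (hypothetical stage) ...
    stamps["decision_made"] = time.perf_counter_ns()

    stamps["wire_out"] = time.perf_counter_ns()    # ...and the other thing it sees
    return stamps

stamps = handle_message({"symbol": "XYZ", "qty": 100})
points = list(stamps)
for prev, cur in zip(points, points[1:]):
    print(f"{prev} -> {cur}: {(stamps[cur] - stamps[prev]) / 1_000:.1f} us")
```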
In effect, telemetry means that a system monitors itself, immediately dispensing with third-party observation and its associated difficulties. It makes actionable data available in easily accessible formats in near real time, with no material effect on the speed or latency of the system it is measuring. It provides transparency to portfolio managers, compliance teams, and product managers. It also provides a repeatable process: measurements are taken in exactly the same way, regardless of the time of day or who is evaluating the results, eliminating much of the subjectivity currently obscuring performance measurement.
One specific case of telemetry in action is ensuring that an algorithm's IOC (immediate-or-cancel) hit rates stay above a specific threshold. Changes in these levels can be correlated with changes in the software, in the infrastructure, and on the exchange, and remediation can begin immediately if necessary. Another case is quickly identifying increases in exchange-side latency and providing evidence the exchange can use to make necessary adjustments to load balancing. Developers and product managers can use internal displays of pertinent algorithm statistics to review their code and provide timely answers to user questions about performance on specific orders.
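A hedged sketch of the IOC hit-rate check might look like the following. The window size and threshold are illustrative assumptions; in practice they would be calibrated per venue and per strategy, and the alert hook is where a drop would be correlated with software releases, infrastructure changes, or exchange-side behavior.

```python
from collections import deque

class IocHitRateMonitor:
    """Tracks the rolling fill rate of IOC orders against a threshold."""

    def __init__(self, window: int = 1000, threshold: float = 0.60):
        self.results = deque(maxlen=window)  # rolling record of fills (True/False)
        self.threshold = threshold

    def on_ioc_result(self, filled: bool) -> None:
        self.results.append(filled)
        if len(self.results) < self.results.maxlen:
            return  # wait until the window is full before judging
        rate = sum(self.results) / len(self.results)
        if rate < self.threshold:
            self.alert(rate)

    def alert(self, rate: float) -> None:
        # In practice this would be debounced and routed to monitoring,
        # not printed on every breaching event.
        print(f"IOC hit rate {rate:.1%} below threshold {self.threshold:.1%}")

monitor = IocHitRateMonitor(window=100, threshold=0.60)
for _ in range(100):
    monitor.on_ioc_result(filled=True)          # healthy period: 100% hit rate
for i in range(100):
    monitor.on_ioc_result(filled=(i % 2 == 0))  # degradation: rate drifts toward 50%
```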
Telemetry is, in short, the high-tech canary in the coal mine: a clear, swift, and essential indication of all crucial success factors at any given time. It enables users to track performance across software updates, validate functional changes, and ensure they perform as intended. It enables trading decisions to be presented to a broader audience for review and validation. Do you know how well your technology is performing? Telemetry finally makes that question easier to answer.
Daljit Bhartt is Managing Director and Head of Technology at ITG Canada Corp. He has been with ITG since 2000; prior to that, he worked for Deutsche Bank in London, UK, and then in Toronto.