Electronic Trading: The Latency Challenge
Since electronic trading began to dominate the capital markets, low latency has become one of the main concerns for financial organizations, and a crucial weapon in the battle for competitive edge. Increasing demand has given rise to a noticeable gap between what the market expects and what vendors can deliver, and this gap is widening all the time.
For example, at a recent conference, vendors claimed to have been asked by clients to reach sub-microsecond latencies, in the nanosecond (10^-9 s) if not picosecond (10^-12 s) range. Low-latency methods such as proximity hosting are expanding beyond the realm of large banks to hedge funds, which are now locating their automated trading systems a few yards from the exchange. Despite such intense urgency and interest, a number of myths and misconceptions still surround low latency. For example, popular opinion holds that latency is generated mainly by the quality of the network and that everything else is negligible.
Using real-world tests in the trading environments of top-tier investment banks in London, we have discovered that the majority of latency is generated within the banks' own applications. We found that the network, on average, contributes just 13% of a system's overall latency, making it the second-smallest contributor, ahead only of messaging at just 2%. In real, quantifiable terms, network latency and messaging are no longer major concerns. What our recent research makes clear is that there are new latency culprits: applications at 65% and firewalls at 20%, which together contribute 85% of overall latency today.
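The breakdown above can be sketched as a simple tally. The figures below are the article's reported percentages; the ranking logic is the same one a firm would apply to its own measured numbers.

```python
# The article's reported shares of overall system latency, by component.
contributions = {
    "applications": 65,
    "firewalls": 20,
    "network": 13,
    "messaging": 2,
}

# Rank components by their share of overall latency, largest first.
ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
for component, pct in ranked:
    print(f"{component:12s} {pct:3d}%")

# The two new culprits together account for 85% of latency.
print(contributions["applications"] + contributions["firewalls"])  # 85
```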
It is essential for financial institutions to ensure that time and resources are spent analyzing their own business needs and choosing the right solutions for different areas of the business. There are achievable and highly effective latency strategies that can improve a trading entity's performance.
Firstly, know the environment: look step by step at the components traversed through the entire trade flow. Examine the network, hardware and applications, and put measurement metrics in place: ideally end-to-end where possible, but inferred latency measured step by step is a good starting point. That may mean some applications have to be modified to generate latency statistics. Rank each step in decreasing order of latency contribution, and compare each one against industry benchmarks.
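The step-by-step approach above can be sketched as follows. The stage names and the instrumentation are hypothetical, standing in for real components in a trade flow; a production system would typically use hardware or wire-level timestamps rather than in-process timers.

```python
import time

def measure_flow(stages, payload):
    """Run an ordered trade flow and infer per-stage latency from
    timestamps taken before and after each component, then rank the
    stages in decreasing order of latency contribution."""
    timings = []
    for name, fn in stages:
        start = time.perf_counter()
        payload = fn(payload)
        timings.append((name, time.perf_counter() - start))
    return sorted(timings, key=lambda t: t[1], reverse=True)

# Toy stages standing in for pricing, risk checks and the exchange gateway.
stages = [
    ("pricing", lambda order: order),
    ("risk_check", lambda order: order),
    ("gateway", lambda order: order),
]
report = measure_flow(stages, {"symbol": "VOD.L", "qty": 100})
for name, latency in report:
    print(f"{name:12s} {latency * 1e6:8.1f} us")
```

Each ranked entry can then be compared against an industry benchmark for that component class, which is exactly the comparison the text recommends.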
Secondly, keep it simple. Eliminate every unnecessary step in the business flow, as complex architectures are never fast, and demutualise each part of the infrastructure where the cost of doing so is reasonable.
Finally, optimize what you have. Take the big latency contributors (usually pricing or order management applications) and break them in two flavors: a reduced, low latency version for the instruments/activities that are latency sensitive, and a full version with rich functionality for the other products. But if that is still not enough, consider moving some of your chain outside of the firewall (risk/performance trade-off). Also consider lateral thinking such as sensitivity pricing instead of accurate calculations for pricing apps and only use multicast middleware where strictly necessary (7+ subscribers per item).
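The "7+ subscribers" rule of thumb can be read as a break-even: fanning out N unicast copies costs roughly N times the per-copy cost, while multicast pays a fixed per-item overhead regardless of fan-out. The cost model and units below are hypothetical, chosen only so the break-even lands at the article's threshold.

```python
def cheaper_transport(subscribers, unicast_cost=1.0, multicast_overhead=7.0):
    """Pick a transport per item: unicast cost grows linearly with the
    subscriber count, multicast pays a fixed overhead regardless of
    fan-out. Costs are in arbitrary, illustrative units."""
    if subscribers * unicast_cost >= multicast_overhead:
        return "multicast"
    return "unicast"

print(cheaper_transport(3))   # unicast
print(cheaper_transport(7))   # multicast
```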
Frederic Ponzo is managing director at Net2S.