Last year, volatility returned to the markets, which was good news for market makers and other trading firms. High volatility in a given security, or group of securities, can increase the potential to generate an above-average profit in a below-average amount of time. Trading firms that consume fast-changing market data more quickly stand to make the best trades.
As firms chase those opportunities, trading volumes increase, which in turn increases the volume of market data transmitted by each exchange. More volatile stock markets mean that market data rates spike more often than before, putting more stress on trading infrastructures.
Even a super-efficient, high-performing algorithm will miss opportunities if its incoming data feeds have gaps, or if the data arrives more slowly than it should. This is particularly true when the algorithms were not built to handle bursty data. Similarly, algorithms that depend on seeing a complete market picture can suffer when gaps appear in market data feeds due to internal network inefficiencies or service-provider hiccups. If these misses start happening every time prices begin to change rapidly, you are in trouble.
In my experience, it’s during those highly volatile times that firms start to suspect they are experiencing gaps or slowdowns in their market data feeds, but without Corvil they cannot confirm exactly when, where, and which ticks they missed (let alone why).
Without Corvil, the hunt for evidence often starts with looking for bandwidth spikes in the connection between the data provider and the trading infrastructure, then working through each hop in the infrastructure. Armed only with traditional network monitoring tools, teams are often looking at bit rates averaged over minutes (switches in particular typically report 5-minute averages), which often appear to indicate no issues. Without an obvious spike indicating when or where to look, the team then begins the tedious, time-consuming task of manual packet analysis to find answers, which can stretch into days with inconclusive results.
The problem is that traditional monitoring tools are not granular enough to see the microburst congestion issues causing gaps or slowdowns. This is illustrated in the figure below. The first chart shows gapping occurring on a market data feed. The yellow band in the second chart shows the feed’s bandwidth consumption averaged over a minute, which is well below the network’s capacity of 50Mbps.
Corvil solved this microvisibility problem for all types of market participants -- market makers, banks, service providers, and more. This is also illustrated in the figure below. The maximum microburst band in the second chart tells an entirely different story. It’s easy to see the flat-top, mesa-like pattern at the 50Mbps mark. This is a clear indication of the market data feed’s traffic bursting beyond network capacity, resulting in dropped packets and the missing market data.
Figure: Bandwidth Consumption Averaged Over Minutes Fails to Diagnose Market Data Gaps
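The averaging effect is easy to reproduce with back-of-the-envelope arithmetic. The sketch below uses hypothetical numbers (a 5 ms burst at 400 Mbps inside an otherwise quiet minute on a 50 Mbps link, not figures from the chart above) to show why a minute-scale average looks healthy while the microsecond-scale peak is far beyond capacity.

```python
# Hypothetical illustration: why minute-scale averages hide microbursts.
# Assume a 50 Mbps link, a steady 10 Mbps background flow, and a single
# 5 ms burst at 400 Mbps somewhere inside a one-minute window.

LINK_CAPACITY_MBPS = 50

burst_bits = 400e6 * 0.005        # bits sent during the 5 ms burst
background_bits = 10e6 * 60       # 10 Mbps steady flow over 60 seconds

# The minute-scale average smears the burst across the whole window.
one_minute_avg = (burst_bits + background_bits) / 60 / 1e6   # in Mbps

# The rate actually hitting the wire during those 5 ms.
microburst_rate = 400 + 10                                    # in Mbps

print(f"1-minute average: {one_minute_avg:.2f} Mbps")  # well under capacity
print(f"rate during burst: {microburst_rate} Mbps")    # 8x over capacity
```

The one-minute average comes out around 10 Mbps -- exactly the kind of reassuring number a 5-minute switch counter would show -- while the link is being overdriven by 8x during the burst itself.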
If a switch doesn’t have the capacity to handle a traffic burst, it will start queuing the packets. Even if a low-latency switch has the capacity, the network on the other side might not. For example, some of the most voluminous feeds can burst up to 20-24Gbps, but many trading firms operate on 10Gbps connections with a steady traffic flow of 6-8Gbps. As a result, packets are queued because the firm’s connection can’t propagate them fast enough. If queues form and the buffer overflows, the switch will drop packets.
Because market data is delivered over UDP, there is no retransmission option -- if queued packets are dropped because the buffer filled up, there is no way of recovering the market data contained in those packets. Those packets are gone for good, which means there is a gap in the market data. If that gap contains pricing data for an actively traded security, it looks like there was a sudden jump in price. Being on the wrong side of a price jump that didn’t really happen is not healthy for trading profitability.
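Gaps like this are typically detected from the per-packet sequence numbers that most UDP feed protocols carry; the exact field name and layout vary by exchange, so the following is a generic sketch rather than any particular feed format:

```python
# Sketch: spotting market data gaps from per-packet sequence numbers.
# Assumes the feed assigns a monotonically increasing sequence number to
# each packet (common in exchange feed protocols; details vary by venue).

def find_gaps(seq_numbers):
    """Return inclusive (first_missing, last_missing) ranges of lost packets."""
    gaps = []
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        if cur > prev + 1:
            gaps.append((prev + 1, cur - 1))  # everything in between was lost
    return gaps

received = [101, 102, 103, 107, 108, 112]
print(find_gaps(received))  # [(104, 106), (109, 111)]
```

Detecting the gap is the easy part; as the next paragraph notes, recovering the missing ticks is what costs you time.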
Of course, the application, or human trader, suspecting a gap will request a retransmission from the exchange. However, that takes time -- human time of minutes, if not hours -- and in the meantime you are officially trading off stale data.
Trading tends to be a winner-takes-all competition, so the only way you win on stale data is if no other trader on Earth will act on the market data lost when that microburst caused your packets to drop.
What makes these microbursts difficult to detect is that the time it takes a queue to fill is measured in microseconds. Therefore, if you’re not monitoring, analyzing, and reporting peaks with that microsecond level of timestamping precision, you will never see them.
Corvil timestamps packets with nanosecond precision, and our analytics are optimized to identify and report bit rates over microsecond time frames, making it easy for you to validate your suspicions and act on them.
Most market participants take in multiple market data feeds at the same time, some carrying duplicate information. A rapid increase in market data rates for a security, or group of securities, can be mirrored across multiple feeds -- all trying to pass through the same network infrastructure at the same time. As a result, the combination of multiple feeds and rapidly changing information can overwhelm network capacity more frequently, leading to more unexplained gaps and slowdowns -- unless you have Corvil.
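Those duplicate feeds are usually reconciled by line arbitration: whichever copy of a sequence number arrives first wins, and the later duplicate is discarded. The sketch below is a deliberately simplified illustration of that idea (a real arbitrator also handles reordering, timeouts, and failover between lines):

```python
# Simplified sketch of A/B line arbitration: keep the first-arriving copy
# of each sequence number, drop duplicates from the slower line.

def arbitrate(arrivals):
    """Yield (seq, payload) pairs, suppressing duplicate sequence numbers.

    `arrivals` is an iterable of (seq, payload) tuples in arrival order,
    already interleaved from the A and B lines.
    """
    seen = set()
    for seq, payload in arrivals:
        if seq in seen:
            continue  # duplicate from the other line; drop it
        seen.add(seq)
        yield seq, payload

# Packets 1 and 2 arrive on both lines; packet 3 only survives on one.
arrivals = [(1, "a"), (1, "a"), (2, "b"), (3, "c"), (2, "b")]
print(list(arbitrate(arrivals)))  # [(1, 'a'), (2, 'b'), (3, 'c')]
```

Note that during a volatility spike both lines burst simultaneously, so arbitration doubles the instantaneous load on shared network segments even though only one copy of each packet is ultimately kept.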
However, that’s not the end of the story because there are several other issues beyond network utilization that can wreak havoc on market data quality and performance -- and Corvil has analytics to find them as well.
Learn more about Corvil Analytics for Market Data.