Latency in the Cloud: You Can’t Eliminate It, But You Can Manage It

Performance is one of the top concerns enterprise executives cite when it comes to fully trusting the cloud. It is also tricky to nail down, because it encompasses such a wide range of disciplines – everything from resource scale and availability to system and architecture governance. But while most detriments to performance can be addressed through increasingly advanced technology, there is one factor that is exacerbated by the mere use of the cloud: network latency.

If you think of the cloud as a giant pool of discrete resources, you start to see the problem latency poses as data architectures become distributed across town, across the country and around the world. Advanced fiber optic networks have reduced latency on the web to mere milliseconds, but at the end of the day, there is no technological fix for the fact that the farther a bit has to travel, the longer it takes to reach its endpoint.
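To put a rough floor under that physics, consider the back-of-the-envelope sketch below. The ~200,000 km/s signal speed in fiber (about two-thirds the speed of light in a vacuum) and the route distances are standard approximations, not figures from the article:

```python
# Back-of-the-envelope propagation delay: light in optical fiber travels
# at roughly two-thirds of c, or about 200,000 km/s, so distance alone
# puts a hard floor on round-trip time no matter how good the network is.
SPEED_IN_FIBER_KM_PER_S = 200_000  # approximate signal speed in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds, ignoring
    routing detours, queuing and processing (real paths are slower)."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

# Illustrative great-circle distances (approximate):
for route, km in [("same metro", 50),
                  ("US coast to coast", 4000),
                  ("New York to London", 5600),
                  ("New York to Sydney", 16000)]:
    print(f"{route:>20}: >= {min_rtt_ms(km):6.2f} ms")
```

Even under these idealized assumptions, a New York-to-Sydney round trip costs roughly 160 ms before a single byte of application work is done.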

But while latency can never be eliminated (not without quantum mechanics, anyway), there are ways to keep it to the barest minimum. For one thing, says networking consultant John Shepler, you need to change the way you think about your wide area network (WAN). All that “network transparency” you built into your LAN? It now has to extend across the WAN, where capacity is limited and carrier protocols are less flexible. Private line connectivity to the cloud will be a must for high-performance applications, ideally in the form of a Dedicated Internet Access link that combines private connectivity with core Internet backbone performance.
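Before investing in a dedicated link, it helps to baseline what your current path delivers. A minimal sketch, assuming a Python environment and a placeholder hostname (substitute your own provider's endpoint), that estimates round-trip latency by timing TCP connection setup:

```python
# Rough RTT baseline: timing TCP connection establishment approximates
# one network round trip to the target host. Useful for comparing a
# public Internet path against a private or dedicated link.
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds as a rough RTT estimate."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

# 'example.com' is a placeholder; point this at your cloud endpoint.
print(f"median connect RTT: {tcp_rtt_ms('example.com'):.1f} ms")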

Some cloud providers are already stressing their low-latency capabilities in order to woo top-end enterprise functions. Microsoft Azure, for example, offers a range of tools like the ExpressRoute private line and content delivery network (CDN) solutions that incorporate near-site cache and acceleration services. The company’s Ashish Thapliyal also stresses that changes to application-level protocols and network utilization modules can reduce latency, particularly for apps that are migrating from local data centers to the cloud.
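One such application-level change is easy to see in miniature: every fresh HTTPS request pays TCP and TLS handshake round trips that a pooled connection avoids. A hedged sketch using the third-party requests library against a placeholder URL (not an Azure API):

```python
# Connection reuse as a latency saver: a fresh HTTPS request repeats
# DNS lookup plus TCP and TLS handshakes; a Session keeps the socket
# open between calls, so later requests skip those round trips.
import time
import requests

URL = "https://example.com/"  # hypothetical endpoint; substitute your own

def timed(label, fn, n=10):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    print(f"{label}: {(time.perf_counter() - start) / n * 1000:.1f} ms/request")

# New connection per request: full handshake cost on every call.
timed("fresh connection", lambda: requests.get(URL, timeout=5))

# Pooled connection: the session reuses the established socket.
with requests.Session() as session:
    timed("reused connection", lambda: session.get(URL, timeout=5))
```

On a long-haul path, the reused connection typically shaves one or more full round trips per request, which is exactly the kind of saving these protocol-level tweaks target.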

But it’s not just the distances involved in the cloud that contribute to latency, says tech journalist and author David Strom; it’s the chaos. Cloud infrastructure is highly dynamic, encompassing a swirling miasma of many-to-many connections, and it is only expected to become more so as software-defined networking (SDN) and network functions virtualization (NFV) usher in end-to-end abstracted architectures. User endpoints now come in a variety of fixed and mobile devices, and applications will be tasked with negotiating an increasingly diverse networking environment between individual nodes.

If you can’t control the latency characteristics of external infrastructure, it helps to kick your own network speeds into the highest gear possible. As Redis Labs’ Yiftach Shoolman points out, average Internet latency is about 50 ms, while the end-to-end expectation for modern apps is 100 ms at most. That leaves roughly 50 ms for everything the app does: traversing web-facing systems like firewalls and load balancers, then processing across the web, application and database tiers, with most functions requiring multiple calls to the database for a single response. This is the primary reason why Hadoop clusters and other Big Data architectures are increasingly turning to in-memory solutions for more than just caching.
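The arithmetic behind that budget is worth making explicit. The sketch below uses illustrative figures (the per-tier costs are assumptions, not Shoolman's numbers) to show how quickly disk-backed database calls exhaust a 100 ms target, and why in-memory stores change the picture:

```python
# Latency budget arithmetic: ~50 ms of Internet latency is spent before
# the request reaches the app, so a 100 ms end-to-end target leaves
# ~50 ms for all server-side work. Figures are illustrative assumptions.
END_TO_END_BUDGET_MS = 100
INTERNET_LATENCY_MS = 50   # average client <-> front door
EDGE_OVERHEAD_MS = 5       # firewall, load balancer, TLS termination
APP_TIER_MS = 10           # web + application tier processing

def ms_remaining(db_calls: int, ms_per_call: float) -> float:
    """Milliseconds left in the budget (negative means it is blown)."""
    server_side = END_TO_END_BUDGET_MS - INTERNET_LATENCY_MS
    spent = EDGE_OVERHEAD_MS + APP_TIER_MS + db_calls * ms_per_call
    return server_side - spent

# Five disk-backed queries at ~10 ms each blow the budget...
print(ms_remaining(db_calls=5, ms_per_call=10))    # -15.0
# ...while five calls to an in-memory store fit with room to spare.
print(ms_remaining(db_calls=5, ms_per_call=0.5))   # 32.5
```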

In the cloud, then, latency cannot be addressed simply by adding bandwidth. The distances involved, the architectures in play, and the ability to engineer application environments that coordinate a wide variety of moving parts will all influence latency, and thus performance.

In this light, latency is not a problem to be solved, but a condition to be managed. And the most effective tool at your disposal is the ability to spin up new environments in the cloud and then dispose of them quickly and easily if they do not live up to your expectations.

Editor’s Note: This issue of managing latency is exactly why NuoDB’s geo-distributed database management capabilities are so intriguing. By minimizing the amount of data that has to travel back and forth over the Internet to complete a business transaction, approaches like this improve the performance of global applications while keeping latency in check.