Research shows that direct connections to the cloud can reduce latency by a factor of 50 or more.

Summary: For enterprises using public cloud as part of their data center strategy, network latency is a significant issue. Some organizations do not consider this delay until they have already committed to public cloud, at which point it becomes a costly problem. Public cloud can only be fully leveraged when latency is kept out of the critical path.

The problem lies with the internet itself. Internet connections are fast, but they are not instantaneous. Even under optimal conditions, data traveling to and from servers (whether on-premises or in a data center) takes time, potentially slowing processes down or making them unworkable. When a bottleneck occurs, often a function of location, the entire system can become nearly unusable.

Security may also be a concern: organizations are cautious about sending sensitive data over the public internet, even when it is encrypted.

Large cloud providers have long recognized this issue and, for a fee, offer enterprises direct wired access under programs such as AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect. Data center providers have quickly followed suit, offering direct connection services that can link to multiple cloud providers, and customers appear to favor this option.

It is clear that sending data via direct connection is far more efficient than using the internet, though until now there has been no standardized metric to measure the difference. A study released by Krystallize Technologies on behalf of data center operator Digital Realty compared IBM’s direct connection hosting service to its internet-connected Bluemix servers housed in Digital Realty’s data centers.

Chris Sharp, CTO of Digital Realty, stated, “We wanted some actual metrics because I don’t think many companies are truly using it for Use Case A versus Use Case B, and there’s a difference between the two. That’s why we thought this would be so different—educating the market with real metrics.”

The study focused on IBM’s Bluemix, as the company has at least five data centers that directly connect clients to Bluemix servers on-site. While the research compared internal direct connections and those made via the internet, it also included measurements of other direct connection methods, such as those through metropolitan area networks (MAN).

The study examined three metrics: first, “file read latency,” the time for a read request to reach the file system and return; second, “file read throughput,” the number of bytes of storage transmitted per second; and finally, “application performance,” or how the application performs in the test configuration.

The differences were somewhat akin to comparing driving on a track versus driving on a highway.

  • File Read Latency: Internet connection time was 0.3 seconds, while direct connection time was 0.0044 seconds, meaning direct connection reduced latency to less than 1/50th of internet latency. Using a metropolitan area network (MAN) brought it down to 0.088 seconds, still a marked improvement.
  • File Read Throughput: Direct connection delivered 55.4 times the throughput. The internet speed was 413.76 kB/s, the MAN speed was 6,739.10 kB/s, and the Bluemix server direct connection speed was 22,904.26 kB/s.
  • Application Performance: In this test, a 5.5 MB non-optimized page rendered in 0.3 seconds over direct connection, compared to 25.8 seconds over the internet. Under optimal conditions, with full caching and parallel processing, direct connection cut rendering time to 0.2 seconds, while the internet took 13.3 seconds.
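The reported ratios follow directly from the figures above and can be sanity-checked with simple arithmetic (all numbers are the study’s, as quoted in this article):

```python
# Throughput figures from the study (kB/s)
internet_kbps = 413.76
man_kbps = 6739.10
direct_kbps = 22904.26

# Direct connection vs. internet: the study's reported 55.4x
throughput_ratio = direct_kbps / internet_kbps
print(round(throughput_ratio, 1))  # 55.4

# Latency figures: 0.3 s (internet) vs. 0.0044 s (direct).
# The ratio comfortably clears the "at least 50 times" headline claim.
latency_ratio = 0.3 / 0.0044
print(round(latency_ratio))  # 68
```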

Although the study did not test security, direct connection has an obvious advantage there as well: traffic never traverses the public internet.