
Reduce Network Latency: How to Beat the Internet at its Own Game

Recently, Mirantis announced that we are collaborating with Equinix, which provides data centers all over the world. In some ways, this isn't surprising; we have customers who need computing resources, and Equinix provides those resources in more than 18 data centers around the world, including the ability to provision bare metal servers on demand.
Now, joint customers will be able to use Mirantis Container Cloud to provision Kubernetes clusters on these bare metal servers, on demand -- just as Container Cloud does on virtual infrastructures like AWS, VMware, and OpenStack, and on bare metal servers installed on your own premises.
One exciting part of this announcement is that it will enable Mirantis customers to create data centers that essentially use "wormholes" to create shortcuts to the rest of the internet by taking advantage of internet exchange points, or interchanges. These interchanges are normally the province of large enterprises and internet providers, but with this announcement virtually any organization can take advantage of them.
Let's look at how this works.

How "distance" works on the internet

The first thing we need to do is understand what we mean when we say that something is "close" or "far away" on the internet. To do that, we need to understand how an internet request actually works.
The internet was originally designed to be resilient, so that if one "path" was inoperable for some reason, the request could take an alternate route.  That means that when you make a request to, say, Google, it doesn't look like this:

Instead, it looks like this:

Because of this, if any of these machines is down, the request could take an alternate route to the destination:

You can see this for yourself if you open a command prompt on your computer and type:
 % traceroute www.google.com
traceroute to www.google.com (172.217.4.100), 64 hops max, 52 byte packets
 1  linksys17676.home (192.168.1.1)  5.082 ms  4.613 ms  2.497 ms
 2  192.168.0.1 (192.168.0.1)  3.922 ms  5.477 ms  5.877 ms
 3  65-131-218-254.mnfd.centurylink.net (65.131.218.254)  22.779 ms  21.116 ms  21.967 ms
 4  mnfd-agw1.inet.qwest.net (75.160.81.1)  22.217 ms  23.351 ms  21.894 ms
 5  cer-edge-22.inet.qwest.net (205.171.139.22)  31.832 ms  32.097 ms  33.238 ms
 6  208.47.121.146 (208.47.121.146)  32.513 ms  32.893 ms  38.344 ms
 7  108.170.244.1 (108.170.244.1)  32.879 ms
    108.170.243.225 (108.170.243.225)  34.953 ms  33.656 ms
 8  108.170.233.109 (108.170.233.109)  32.374 ms  34.908 ms
    108.170.233.111 (108.170.233.111)  34.074 ms
 9  ord36s04-in-f4.1e100.net (172.217.4.100)  30.803 ms  32.879 ms  33.090 ms
(For Windows machines, the command is tracert.)
As you can see, my request started at my home router (linksys17676.home), took about 4 ms to reach my cable modem (192.168.0.1), then another 15 ms or so to reach CenturyLink, my ISP (65.131.218.254). Then we bounce around for a little while within the Qwest network, and then back out. Eventually, we make it to our final destination, 172.217.4.100, which is actually Google.
Traceroute tests each hop 3 times, which gives us an interesting perspective: not only do we see three timings for each hop, but in hops 7 and 8 we can also see that the system took an alternate route for some of the attempts.
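You can take the same kind of repeated samples from your own code. Here's a minimal sketch in Python (purely illustrative, not part of any Mirantis tooling) that times three probes to a host, much like traceroute's three attempts per hop. It measures how long a TCP connection to port 443 takes, rather than using traceroute's trick of sending probes with increasing TTL values, so it only sees the end-to-end round trip, not the individual hops:

import socket
import time

def probe_rtt(host, port=443, attempts=3, timeout=5.0):
    # Time a TCP handshake to host:port, once per attempt, in milliseconds.
    times = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass  # we only care how long the connection took to open
        except OSError:
            times.append(None)  # unreachable or filtered, like a '*' in traceroute
            continue
        times.append((time.perf_counter() - start) * 1000)
    return times

for host in ("www.google.com", "vu.ac.th"):
    samples = probe_rtt(host)
    print(host, " ".join("*" if t is None else f"{t:.3f} ms" for t in samples))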
Most importantly, we can see that every hop adds more time. Going from my laptop to my router took only about 5 ms, but going all the way to Google took about 32 ms. It gets worse if I try to hit something farther away:
 % traceroute vu.ac.th
traceroute to vu.ac.th (110.164.131.109), 64 hops max, 52 byte packets
 1  linksys17676.home (192.168.1.1)  6.326 ms  3.739 ms  3.630 ms
 2  192.168.0.1 (192.168.0.1)  4.806 ms  7.126 ms  5.091 ms
 3  65-131-218-254.mnfd.centurylink.net (65.131.218.254)  26.450 ms  28.679 ms  22.221 ms
 4  mnfd-agw1.inet.qwest.net (75.160.81.1)  20.465 ms  27.064 ms  23.255 ms
 5  cer-edge-20.inet.qwest.net (205.171.93.154)  34.171 ms  35.752 ms  50.300 ms
 6  ae-11.edge2.chicago10.level3.net (4.68.71.117)  41.908 ms  50.376 ms  58.911 ms
 7  ae-0-11.bar1.sanfrancisco1.level3.net (4.69.140.145)  85.388 ms  79.626 ms  89.848 ms
 8  jastel-netw.bar1.sanfrancisco1.level3.net (4.35.160.122)  102.251 ms  116.314 ms  131.003 ms
 9  mx-ll-110.164.0-248.static.3bb.co.th (110.164.0.248)  241.559 ms
    mx-ll-110.164.0-22.static.3bb.co.th (110.164.0.22)  245.553 ms
    mx-ll-110.164.0-42.static.3bb.co.th (110.164.0.42)  237.278 ms
10  mx-ll-110.164.0-206.static.3bb.co.th (110.164.0.206)  292.938 ms
    mx-ll-110.164.0-48.static.3bb.co.th (110.164.0.48)  291.441 ms
    mx-ll-110.164.0-154.static.3bb.co.th (110.164.0.154)  292.022 ms
11  mx-ll-110.164.1-106.static.3bb.co.th (110.164.1.106)  305.651 ms
    mx-ll-110.164.1-20.static.3bb.co.th (110.164.1.20)  324.819 ms
    mx-ll-110.164.1-106.static.3bb.co.th (110.164.1.106)  408.247 ms
12  * * *
From here, traceroute packets were blocked by the firewall, but you get the idea; after we leave the Level3 network, our times slow down considerably.  
In this case, the request took about 300ms for a round trip (as far as we got). Now, that doesn't sound like much, but look at the room you're in right now. A car traveling at 55 miles per hour can barrel through that entire room in that time. 
Twice.
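In case you're curious about that math, it's quick to check (a back-of-the-envelope calculation, assuming a room roughly 12 feet across):

MPH_TO_FEET_PER_SECOND = 5280 / 3600      # 1 mile = 5,280 feet; 1 hour = 3,600 seconds

speed_fps = 55 * MPH_TO_FEET_PER_SECOND   # about 80.7 feet per second
round_trip_seconds = 0.300                # the ~300 ms round trip above
distance_feet = speed_fps * round_trip_seconds

print(f"{distance_feet:.1f} feet")        # about 24 feet -- twice across a 12-foot room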
So for situations that require very low latency, such as video conferencing, real-time device control, gaming, or even applications that are heavily based on microservices (as many cloud native applications are), reducing the length of that trip is crucial.

How to reduce network latency by taking a shortcut

All of this means that for every hop you can cut out of the trip, the distance gets shorter, and the time for that round trip goes down.
Sometimes that means being physically closer to the target. This is why so many computerized traders pay large amounts of money to have their servers physically located inside the stock exchange. When a few milliseconds can mean your trade beats out the competition, those milliseconds can be worth hundreds of thousands of dollars, or even more.
This is the same reason that, in the physical world, so many companies locate themselves in the Memphis, TN area -- near the FedEx hub.
With the Internet, however, geography isn't the only way to get closer to the user. There's also the issue of how many stops you make.
Think about it this way: let's say you were traveling from San Francisco to New York. If you were to fly from SFO to LaGuardia Airport (LGA), you'd wind up making a stop in Orlando, Charlotte, Dallas, or Denver, depending on which airline you're using and where its hub is located. The trip would take about 8 hours from the time you left SFO to the time you arrived at LGA.
If, on the other hand, you flew from SFO to Newark, NJ (EWR), you could fly non-stop, because you can find an airline that uses EWR as its hub. The trip would take about five and a half hours. Even allowing for a cab over to LGA once you're in New York City, you're saving about 2 hours door-to-door.
Frequent travelers are very familiar with these airline "hubs", such as Atlanta, Dallas, Charlotte, and Chicago. Once you get to one of these cities, you can get virtually anywhere non-stop.
The internet equivalent of these hubs is an interchange. These are the "neutral locations" in which major internet companies exchange traffic. If we go back to our traceroute:
% traceroute www.google.com
traceroute to www.google.com (172.217.4.100), 64 hops max, 52 byte packets
 1  linksys17676.home (192.168.1.1)  5.082 ms  4.613 ms  2.497 ms
 2  192.168.0.1 (192.168.0.1)  3.922 ms  5.477 ms  5.877 ms
 3  65-131-218-254.mnfd.centurylink.net (65.131.218.254)  22.779 ms  21.116 ms  21.967 ms
 4  mnfd-agw1.inet.qwest.net (75.160.81.1)  22.217 ms  23.351 ms  21.894 ms
 5  cer-edge-22.inet.qwest.net (205.171.139.22)  31.832 ms  32.097 ms  33.238 ms
 6  208.47.121.146 (208.47.121.146)  32.513 ms  32.893 ms  38.344 ms
 7  108.170.244.1 (108.170.244.1)  32.879 ms
    108.170.243.225 (108.170.243.225)  34.953 ms  33.656 ms
 8  108.170.233.109 (108.170.233.109)  32.374 ms  34.908 ms
    108.170.233.111 (108.170.233.111)  34.074 ms
 9  ord36s04-in-f4.1e100.net (172.217.4.100)  30.803 ms  32.879 ms  33.090 ms
That traffic had to get from my ISP's network (CenturyLink/Qwest) over to Google's network; chances are it crossed over at an interchange.

How to get closer to your users

An interchange, or internet exchange point, is "the physical infrastructure through which Internet service providers (ISPs) and content delivery networks (CDNs) exchange Internet traffic between their networks." Like a busy airport, an interchange provides more direct access to other parts of the internet. 
For example, let's say I have a website, and it's hosted on a commercial hosting provider, MyHostCo. Traffic between my users and my website might look like this:

If, on the other hand, my web server were near an interchange, the traffic would look like this:
By decreasing the number of hops, I'm shortening the request time. I can shorten it even more by placing my server near an interchange that's physically close to my user, or that's a small number of hops away from one.
This is the same strategy used by content delivery networks (CDNs), which place content close to interchanges so that users can access it more quickly.
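If you want to see how much difference this makes for a site of your own, one simple check is to count the hops to the far-away origin versus a closer copy of the same content. Here's a rough sketch that shells out to the system traceroute and counts the hops it reports; it assumes a macOS or Linux machine, and the two hostnames are placeholders for your own origin server and CDN (or interchange-adjacent) endpoint:

import re
import subprocess

def hop_count(host):
    # Run the system traceroute with one probe per hop and count the hops reported.
    out = subprocess.run(["traceroute", "-q", "1", host],
                         capture_output=True, text=True, timeout=120).stdout
    # Each hop line begins with its hop number; the largest one is the hop count.
    hops = [int(m.group(1)) for m in re.finditer(r"^\s*(\d+)\s", out, re.MULTILINE)]
    return max(hops) if hops else None

# Placeholder hostnames -- substitute your own origin and edge/CDN endpoints.
for host in ("origin.example.com", "edge.example.com"):
    print(host, hop_count(host), "hops")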

How this all relates to multi-cloud

All of this internet geography becomes even more important when your "users" are components of a multi-cloud application. If your application is running on multiple Google Cloud servers, you're not going to experience much latency. But if you have some components in AWS, and others on Google, and others in your local data center, now you're getting into complex routing.
Normally, you don't ever have to think about it. That's the beauty of the internet; things get where they're going.
Eventually.
But what if you could put your data center in the middle of a hub where all of these networks pick up traffic? By locating your servers near an interchange, you're essentially creating a shortcut between the different clouds you're using, increasing speed and reliability and decreasing latency.
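You can get a feel for what your components are currently paying for those trips with a quick probe from each environment. Here's a small sketch along those lines; the service URLs are entirely hypothetical, so substitute the health-check endpoints of your own components and run it from each cloud (and your local data center) in turn:

import time
import urllib.request

# Hypothetical endpoints for one application's components, spread across clouds.
PEERS = [
    "https://payments.example-aws.internal/healthz",
    "https://catalog.example-gcp.internal/healthz",
    "https://inventory.example-onprem.internal/healthz",
]

for url in PEERS:
    start = time.perf_counter()
    try:
        urllib.request.urlopen(url, timeout=5).close()
        print(f"{url}: {(time.perf_counter() - start) * 1000:.1f} ms")
    except OSError as err:
        print(f"{url}: unreachable ({err})")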

How the Mirantis/Equinix collaboration fits in

Which brings us back to where we started: the collaboration between Mirantis and Equinix.
Equinix Metal data centers are logically close to key markets like finance and advertising, and to huge aggregations of both customers and compute. Equinix has sophisticated backbones and interchanges among its data centers, so anything you put there is going to fly.
In other words, this 'internet closeness' makes this a different kind of offering from the bare metal clouds of other hosting providers: it solves the problem of strategic data center location while also sparing you the problem of having to mind physical machines.
Finally, with Mirantis Container Cloud added to the picture, you're getting both Kubernetes on bare metal -- the most performant way of running Kubernetes -- and the benefits of 'cloud' in the sense that you can start up machines and Kubernetes clusters on demand.
So if you ever wanted to see what it was like to sit at the center of everything, now's your chance! If you're interested in hearing more about these solutions, please don't hesitate to contact us.
