Reduce Network Latency: How to Beat the Internet at its Own Game
Now, joint customers will be able to use Mirantis Container Cloud to provision Kubernetes clusters on Equinix Metal bare metal servers, on demand -- just as Container Cloud does on virtual infrastructures like AWS, VMware, and OpenStack, and on bare metal servers installed on your own premises.
One exciting part of this announcement is that the collaboration will enable Mirantis customers to create data centers that essentially use "wormholes" -- shortcuts to the rest of the internet -- by taking advantage of internet exchange points, or interchanges. These interchanges are normally the province of large enterprises and internet providers, but with this announcement virtually any organization can take advantage of them.
Let's look at how this works.
How "distance" works on the internet

The first thing we need to do is understand what we mean when we say that something is "close" or "far away" on the internet. To do that, we need to understand how an internet request actually works.
The internet was originally designed to be resilient, so that if one "path" was inoperable for some reason, the request could take an alternate route. That means that when you make a request to, say, Google, your traffic doesn't travel over a single direct wire from your machine to Google's server. Instead, it passes through a chain of intermediate routers, each one forwarding the request a step closer to its destination. Because of this, if any of those machines is down, the request can take an alternate route to the destination.
You can see this for yourself if you open a command prompt on your computer and type:
% traceroute www.google.com
traceroute to www.google.com (126.96.36.199), 64 hops max, 52 byte packets
 1  linksys17676.home (192.168.1.1)  5.082 ms  4.613 ms  2.497 ms
 2  192.168.0.1 (192.168.0.1)  3.922 ms  5.477 ms  5.877 ms
 3  65-131-218-254.mnfd.centurylink.net (188.8.131.52)  22.779 ms  21.116 ms  21.967 ms
 4  mnfd-agw1.inet.qwest.net (184.108.40.206)  22.217 ms  23.351 ms  21.894 ms
 5  cer-edge-22.inet.qwest.net (220.127.116.11)  31.832 ms  32.097 ms  33.238 ms
 6  18.104.22.168 (22.214.171.124)  32.513 ms  32.893 ms  38.344 ms
 7  126.96.36.199 (188.8.131.52)  32.879 ms
    184.108.40.206 (220.127.116.11)  34.953 ms  33.656 ms
 8  18.104.22.168 (22.214.171.124)  32.374 ms  34.908 ms
    126.96.36.199 (188.8.131.52)  34.074 ms
 9  ord36s04-in-f4.1e100.net (184.108.40.206)  30.803 ms  32.879 ms  33.090 ms

(For Windows machines, the command is tracert.)
As you can see, my request started at my home router (linksys17676.home), took about 4 ms to reach my cable modem (192.168.0.1), then another 15 ms or so to reach CenturyLink, my ISP. Then we bounce around within the Qwest network for a while, and then head back out. Eventually, we make it to our final destination, which is actually Google.
Traceroute tests each hop three times, which gives us an interesting perspective: not only do we see three timings for each hop, but in hops 7 and 8 we can see that the request took an alternate route on some of the attempts.
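If you want to work with these numbers programmatically, you can pull the per-hop timings out of traceroute's output. The sketch below is a hypothetical helper (the hostname and IP in the sample line are placeholders, and it handles only the common Unix output format):

```python
import re

def parse_hop(line):
    """Parse one traceroute output line into (hop_number, [rtts_in_ms]).

    Illustrative only: real traceroute output varies by platform,
    and lines with timeouts ('* * *') would need extra handling.
    """
    fields = line.split()
    hop = int(fields[0])
    # Each round-trip time appears as a number followed by the literal "ms"
    rtts = [float(m) for m in re.findall(r"([\d.]+)\s+ms", line)]
    return hop, rtts

# Placeholder sample line in the usual Unix traceroute format
sample = " 5  cer-edge-22.inet.qwest.net (10.0.0.5)  31.832 ms  32.097 ms  33.238 ms"
hop, rtts = parse_hop(sample)
print(hop, min(rtts))  # → 5 31.832
```

Taking the minimum of the three probes per hop is a common trick, since the fastest probe is closest to the "true" path latency without queuing noise.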
Most importantly, we can see that every hop adds time. Going from my laptop to my router took only about 5 ms, but getting all the way to Google took about 32 ms. It gets worse if I try to reach something further away:
% traceroute vu.ac.th
traceroute to vu.ac.th (22.214.171.124), 64 hops max, 52 byte packets
 1  linksys17676.home (192.168.1.1)  6.326 ms  3.739 ms  3.630 ms
 2  192.168.0.1 (192.168.0.1)  4.806 ms  7.126 ms  5.091 ms
 3  65-131-218-254.mnfd.centurylink.net (126.96.36.199)  26.450 ms  28.679 ms  22.221 ms
 4  mnfd-agw1.inet.qwest.net (188.8.131.52)  20.465 ms  27.064 ms  23.255 ms
 5  cer-edge-20.inet.qwest.net (184.108.40.206)  34.171 ms  35.752 ms  50.300 ms
 6  ae-11.edge2.chicago10.level3.net (220.127.116.11)  41.908 ms  50.376 ms  58.911 ms
 7  ae-0-11.bar1.sanfrancisco1.level3.net (18.104.22.168)  85.388 ms  79.626 ms  89.848 ms
 8  jastel-netw.bar1.sanfrancisco1.level3.net (22.214.171.124)  102.251 ms  116.314 ms  131.003 ms
 9  mx-ll-110.164.0-248.static.3bb.co.th (126.96.36.199)  241.559 ms
    mx-ll-110.164.0-22.static.3bb.co.th (188.8.131.52)  245.553 ms
    mx-ll-110.164.0-42.static.3bb.co.th (184.108.40.206)  237.278 ms
10  mx-ll-110.164.0-206.static.3bb.co.th (220.127.116.11)  292.938 ms
    mx-ll-110.164.0-48.static.3bb.co.th (18.104.22.168)  291.441 ms
    mx-ll-110.164.0-154.static.3bb.co.th (22.214.171.124)  292.022 ms
11  mx-ll-110.164.1-106.static.3bb.co.th (126.96.36.199)  305.651 ms
    mx-ll-110.164.1-20.static.3bb.co.th (188.8.131.52)  324.819 ms
    mx-ll-110.164.1-106.static.3bb.co.th (184.108.40.206)  408.247 ms
12  * * *

From here, traceroute packets were blocked by the firewall, but you get the idea: after we leave the Level3 network, our times slow down considerably.
In this case, the round trip took about 300 ms (as far as we got). Now, that doesn't sound like much, but look at the room you're in right now: a car traveling at 55 miles per hour could barrel through that entire room in that time.
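To sanity-check that comparison, here's the back-of-the-envelope arithmetic (assuming the roughly 300 ms round trip measured above):

```python
mph = 55
round_trip_s = 0.300           # round-trip time from the traceroute, in seconds
feet_per_mile = 5280

feet_per_second = mph * feet_per_mile / 3600   # ≈ 80.7 ft/s
distance_ft = feet_per_second * round_trip_s
print(round(distance_ft, 1))   # → 24.2
```

About 24 feet -- the length of a good-sized room.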
So for situations that require very low latency, such as video conferencing, real-time device control, gaming, or even applications that are heavily based on microservices (as many cloud native applications are), reducing the length of that trip is crucial.
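One simple way to put a number on that latency from your own code is to time a TCP handshake. This is a rough sketch, not a precise measurement tool; the example host at the bottom is just a placeholder:

```python
import socket
import time

def tcp_connect_rtt_ms(host, port, timeout=3.0):
    """Time a TCP handshake to host:port, in milliseconds.

    A rough proxy for one network round trip, since connect()
    returns after the SYN / SYN-ACK exchange completes.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # we only care about how long the handshake took
    return (time.perf_counter() - start) * 1000.0

# Example usage (placeholder target):
# print(tcp_connect_rtt_ms("www.google.com", 443))
```

Repeating the measurement and taking the minimum gives a more stable estimate, for the same reason traceroute probes each hop several times.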
How to reduce network latency by taking a shortcut

All of this means that for every hop you can cut out of the trip, the distance gets shorter, and the round-trip time goes down.
Sometimes that means being physically closer to the target. This is why so many computerized traders pay large amounts of money to have their servers physically located inside the stock exchange. When a few milliseconds can determine whether your trade beats the competition, those milliseconds can be worth hundreds of thousands of dollars, or even more.
This is the same reason that, in the physical world, many companies are located in the Memphis, TN area -- near the FedEx hub.
With the Internet, however, geography isn't the only way to get closer to the user. There's also the issue of how many stops you make.
Think about it this way: let's say you were traveling from San Francisco to New York. If you were to fly from SFO to LaGuardia airport (LGA), you'd wind up making a stop in Orlando, Charlotte, Dallas, or Denver, depending on which airline you're flying and where its hub is located. The trip would take about 8 hours from the time you left SFO to the time you arrived at LGA.
If, on the other hand, you flew from SFO to Newark, NJ (EWR), you could fly non-stop, because you can find an airline that uses EWR as its hub. The trip would take about 5 and a half hours. Even allowing for a cab ride from EWR into New York City, you're saving 2 hours door-to-door.
Frequent travelers are very familiar with these airline "hubs", such as Atlanta, Dallas, Charlotte, and Chicago. Once you get to one of these cities, you can get virtually anywhere non-stop.
The internet equivalent of these hubs is an interchange. These are the "neutral locations" in which major internet companies exchange traffic. If we go back to our tracert:
% traceroute www.google.com
traceroute to www.google.com (220.127.116.11), 64 hops max, 52 byte packets
 1  linksys17676.home (192.168.1.1)  5.082 ms  4.613 ms  2.497 ms
 2  192.168.0.1 (192.168.0.1)  3.922 ms  5.477 ms  5.877 ms
 3  65-131-218-254.mnfd.centurylink.net (18.104.22.168)  22.779 ms  21.116 ms  21.967 ms
 4  mnfd-agw1.inet.qwest.net (22.214.171.124)  22.217 ms  23.351 ms  21.894 ms
 5  cer-edge-22.inet.qwest.net (126.96.36.199)  31.832 ms  32.097 ms  33.238 ms
 6  188.8.131.52 (184.108.40.206)  32.513 ms  32.893 ms  38.344 ms
 7  220.127.116.11 (18.104.22.168)  32.879 ms
    22.214.171.124 (126.96.36.199)  34.953 ms  33.656 ms
 8  188.8.131.52 (184.108.40.206)  32.374 ms  34.908 ms
    220.127.116.11 (18.104.22.168)  34.074 ms
 9  ord36s04-in-f4.1e100.net (22.214.171.124)  30.803 ms  32.879 ms  33.090 ms

That traffic had to get from CenturyLink to Qwest; chances are it went through an interchange.
How to get closer to your users

An interchange, or internet exchange point, is "the physical infrastructure through which Internet service providers (ISPs) and content delivery networks (CDNs) exchange Internet traffic between their networks." Like a busy airport, an interchange provides more direct access to other parts of the internet.
For example, let's say I have a website hosted with a commercial hosting provider, MyHostCo. Traffic between my users and my website has to wind its way through a chain of intermediate networks before it reaches MyHostCo's servers.

If, on the other hand, my web server were near an interchange, most of those intermediate hops would disappear; traffic would flow from my users to the interchange and then almost directly to my server.
By decreasing the number of hops, I'm shortening the request time. I can shorten it even more by placing my server near an interchange that's physically close to my user, or that's a small number of hops away from one.
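As a back-of-the-envelope illustration (these per-hop delays are made up, not measured), the total delay is just the sum of the per-hop delays, so every hop you remove comes straight off the total:

```python
# Hypothetical per-hop delays in ms for two routes to the same server
via_hosting_provider = [5, 4, 15, 1, 9, 3, 2]  # seven hops through intermediate networks
via_interchange = [5, 4, 15, 6]                # four hops via an interchange

print(sum(via_hosting_provider))  # → 39
print(sum(via_interchange))       # → 30
```

The saving compounds, of course, for chatty applications that make many round trips per user interaction.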
This is the same strategy used by content delivery networks (CDNs), which place content close to interchanges so users can access it more quickly.
How this all relates to multi-cloud

All of this internet geography becomes even more important when your "users" are components of a multi-cloud application. If your application is running on multiple Google Cloud servers, you're not going to experience much latency. But if you have some components in AWS, others on Google, and others in your local data center, now you're getting into complex routing.
Normally, you don't ever have to think about it. That's the beauty of the internet; things get where they're going.
But what if you could put your data center in the middle of a hub where all of these networks pick up traffic? By locating your servers near an interchange, you're essentially creating a shortcut between the different clouds you're using, increasing speed and reliability and decreasing latency.
How the Mirantis/Equinix collaboration fits in

Which brings us back to where we started: the collaboration between Mirantis and Equinix.
Equinix Metal data centers are logically close to key markets like finance and advertising, and to huge aggregations of both customers and compute. Equinix operates sophisticated backbones and interchanges among its data centers, so anything you put there is going to fly.
In other words, this 'internet closeness' makes this a different kind of offering from the bare metal clouds of other hosting providers: it solves the problem of strategic data center location, while also relieving you of having to mind physical machines.
Finally, with Mirantis Container Cloud added to the picture, you get both Kubernetes on bare metal -- the most performant way of running Kubernetes -- and the benefits of 'cloud,' in the sense that you can start up machines and Kubernetes clusters on demand.
So if you ever wanted to see what it was like to sit at the center of everything, now's your chance! If you're interested in hearing more about these solutions, please don't hesitate to contact us.