Mirantis and Equinix have combined forces to create cloud solutions that comply with GDPR privacy regulations while still enabling performant and scalable applications. In this webinar, Rick Pugh, Sr. Product Manager at Mirantis, and James Malachowski, Head of Solution Architects at Equinix Metal, discuss those solutions and other topics related to maintaining GDPR compliance with a global cloud infrastructure.
Below is a transcript. You can access the full webinar, including footage of the demo, here.
- The business problem: Secure, global access to GDPR data for e-commerce
- The solution: Mirantis Container Cloud on Equinix Metal
- Metal fills the gap between colocation and public cloud
- About Equinix Fabric
- About Mirantis Container Cloud
- Demo overview
- Demo Part 1: Private network configuration
- Demo Part 2: Kubernetes cluster management
- Demo Part 3: Geographically distributed e-commerce app
The business problem: Secure, global access to GDPR data for e-commerce
Rick Pugh: We’re gonna talk a little bit about the business problem that’s been set up, the solution overview that helps to solve this problem of GDPR data transit security. We’ve got a demo and we’ll have time at the end for some questions and answers.
Setting up this problem – late last year, several opportunities came to us that shared a very similar shape and set of use cases: several of them had containerized e-commerce applications running on Kubernetes. They held user data in European Union countries that needed to be accessed outside of the EU, and therefore they needed to comply with GDPR regulations. And they needed a global solution: they wanted to run on an on-demand bare metal compute infrastructure globally, and they wanted to figure out how they could secure this data. The solutions that were proposed to those folks were very similar to today’s demonstration.
In layman’s terms, the GDPR applies to the transfer of personal data to non-EU countries – this is all quite summarized, and there are many chapters written in the GDPR itself about this. It’s about how businesses must implement a robust set of security controls to protect that data in transit: encryption and network access controls are required. Best practices dictate that businesses should use a private network for these transfers, that the data in transit should be encrypted, and that data can only be accessed by an authorized entity.
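As a concrete illustration of the encryption-in-transit requirement (this sketch is not part of the webinar), here is a minimal Python client-side TLS configuration that verifies certificates, checks hostnames, and refuses anything older than TLS 1.2:

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Client-side TLS context suitable for GDPR data in transit:
    certificates are verified, hostnames are checked, and anything
    older than TLS 1.2 is refused."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

# A database or API client would wrap its socket with this context.
ctx = strict_tls_context()
```

This is one piece of the control set; the private network and access controls discussed above are separate layers on top of it.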
The solution: Mirantis Container Cloud on Equinix Metal
What we’re gonna show today is a setup that looks like this, where we’ve got the Mirantis Container Cloud in the Equinix Metal data center in Silicon Valley. There are two child Kubernetes clusters—one in Silicon Valley and one in Frankfurt, Germany.
Those two data centers have been connected with a private network by way of Equinix Fabric. Through that private network, the e-commerce application has been deployed such that the front end services are running in Silicon Valley while the back end databases, and specifically the user data, are running in Frankfurt, Germany. All the connections go through this private network, which is a 10 Gigabit network with dedicated ports. We’ll talk more about all this as we go.
The enablers for these solutions are Equinix Metal, the on-demand bare metal cloud brought to us by Equinix, with 18 global metros around the world, and Equinix Fabric, a flexible global interconnection mechanism between Equinix Metal data centers, other Equinix colo data centers, public clouds, and in fact many third party data centers as well. It’s a very low latency network interconnect, very, very fast, and you’ll see through the demo that there’s absolutely no lag or delay regardless of where the data is coming from.
Mirantis Container Cloud on Equinix Metal — Container Cloud is a hybrid cloud/ multi-cluster Kubernetes management platform that supports on-prem and public cloud, so you can have it with Equinix Metal, our Premier Partner. You can also run it within AWS as well as VMware, OpenStack, and bare metal, and it gives you that single pane of glass, the full cluster lifecycle management with automated upgrades, logging, monitoring, and alerting. I’ll turn it over to James here to talk a little bit about Metal.
Metal fills the gap between colocation and public cloud
James Malachowski: Metal fills the gap between colocation and public cloud. What we’re providing is an alternative to public cloud infrastructure as a service. The goal is to give you the control and efficiency of colocation – the choice around hardware and management of your own infrastructure – but the real time experience and consumption model of cloud.
What we provide is a globally operated interconnect behind that. We handle all of the top of rack switching, the rack and stack, the cabling, and then we give you nice, easy, simple automated access via APIs and, more specifically, access to partner solutions such as Mirantis Container Cloud. We really focused on delivering all the right metros—18 sites today, expanding to almost 30 in 2022. Today, we’re spread throughout the globe and have some new sites opening in Seattle, Helsinki, and a few other places as well.
With Mirantis, what we’re really providing here is a strategic alternative to public cloud, and I’ll turn it back over to Rick to really talk about the value of what we’re bringing here with the offering.
Rick Pugh: Equinix Metal has become one of our supported providers for the automation that Mirantis Container Cloud brings to Kubernetes orchestration management. You can see that Container Cloud has a very similar model: it’s got monthly and annual subscriptions, and there are multiple support options, 24/7 or fully managed. As James talked about, Equinix Metal gives you compute, storage, and network – self-provisioning bare metal at software speeds – and the on-demand self-service model. It’s a great union between two great service offerings.
About Equinix Fabric
James Malachowski: One of the things that makes Equinix Metal especially powerful is, right behind it, we have a global interconnect — both a global interconnected fabric as well as the ability to simply direct connect fiber straight into Equinix Metal.
What this allows us to do is support a variety of use cases that are really about extending your cloud and your data center as though they were in the same place. Bare metal to cloud is one use case: if I have resources in public cloud and perhaps I want to burst, or provide some DR, or add a second provider, Fabric allows you to do that. You simply create a virtual circuit, and it’s up.
Bare Metal to colocation — maybe you have specific dedicated resources in colo, and you want to connect that with fiber. Direct connect into metal, we support that as well. And then we also include the Internet. Best of breed, best of class, multi-provider Internet, batteries included. It’s just there for you to consume per Gigabyte as you need it, but you can also bring in your own circuits, your own providers if it makes sense.
Rick Pugh: I’ll just say one more thing on this one, which is that today’s demo is all about the Equinix Metal data center to Equinix Metal data center private connection.
About Mirantis Container Cloud
Just an introduction for those that don’t know about Container Cloud: it’s a continuously updated, multi-cloud, self-service, full stack Kubernetes platform, which means that it automates everything to do with Kubernetes and putting down our Mirantis Kubernetes Engine on any of our supported providers. As you’ll see today, with Equinix, you can choose any of the locations that Equinix has on that world map, and you can deploy child clusters in any of them. Through the networking connections supported by Equinix, you can have the whole variety of connections that we just saw on the previous page.
Quick differentiators – we’ve got the single pane of glass so you can manage all of these clusters in both a single cloud like all in Equinix Metal, or it can be a hybrid in combination of Metal and AWS and maybe on-prem VMware or OpenStack.
We’ve got integrated monitoring built in with our logging, monitoring, and alerting (LMA) solution, a powerful part of Container Cloud that makes it trivial to add LMA to Kubernetes clusters. We do full stack lifecycle management, so everything you can do with Kubernetes clusters relative to creating, adding, and removing worker nodes as required for scaling purposes, and creating and deleting clusters, is very, very simple.
Full access control integration with the identity management and role based access controls. It’s all meant to be self-service, so the whole purpose was really to democratize the use of Kubernetes. Nobody has to be a Kubernetes expert or know how to edit a dozen different YAML files to make things happen. It’s all fully automated for you, and the multi-cloud support that we’ve got through the partners that we listed here. There’s also an on-prem bare metal solution offered by Mirantis.
A little bit of an overview of the webinar. We did some pre-setup: we created that private network via Fabric between Silicon Valley and Frankfurt, and we deployed Container Cloud in Silicon Valley. Then we used Container Cloud to deploy two Mirantis Kubernetes Engine child clusters—those are the Kubernetes engines—one in Silicon Valley and one in Frankfurt.
We’ve then established the endpoints between those two clusters, so we’ve got secure data communication between all of the microservices running in those two clusters. Then we deployed a demo application called Sock Shop, which is a simple but effective e-commerce app selling socks, where we’ve got the front end and many of the microservices running in Silicon Valley, but the critical user data as well as the catalogue database are running in Frankfurt. All the access for that critical user data is coming from Frankfurt.
Critical data basically means anything where you can correlate a name to an email address, name to an address, name to a phone number. That becomes critical personal data, as well as any kind of transaction history or purchases and those kinds of things are all sensitive data for GDPR.
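Rick’s rule of thumb – a name correlated with contact details, or any transaction history, becomes critical personal data – can be sketched as a simple check. The field names below are hypothetical, chosen only to illustrate the rule:

```python
# Fields that, when correlated with a person's name, make a record
# GDPR-critical per the rule of thumb above (hypothetical field names).
CONTACT_FIELDS = {"email", "address", "phone"}
SENSITIVE_FIELDS = {"transaction_history", "purchases"}

def is_gdpr_critical(record: dict) -> bool:
    """A record is critical if a name is correlated with contact
    details, or if it carries transaction/purchase history."""
    populated = {k for k, v in record.items() if v}
    if "name" in populated and populated & CONTACT_FIELDS:
        return True
    return bool(populated & SENSITIVE_FIELDS)
```

A name alone is not critical under this rule; a name plus an email address, or a purchase history on its own, is.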
Here’s the view of the demo setup. We’ve got Mirantis Container Cloud running in the Silicon Valley Metal data center. We’ve got two child clusters, one in Silicon Valley, one in Frankfurt. We’ve got that private interconnect network between those two data centers set up with a 10 Gigabit dedicated port, and you can see the access points up on the top, left and right. The Container Cloud itself has a user interface and an API, and those endpoints are in Silicon Valley. Equinix Fabric has its own portal and API setup for access through the Equinix Fabric portal, and we’ll be showing all of those things.
A little bit deeper look into the network view, what we’ve got in Silicon Valley and Frankfurt. In Silicon Valley, we’ve got our management Container Cloud cluster, that then has created and is managing the child cluster here in Silicon Valley, that’s housing on these worker nodes the front end of that e-commerce application.
All the communication, whether that’s control plane information or actual microservice communications, is going through a router back over this private network into another router and then distributed on to the controllers and the workers that are running here in Frankfurt. We have a seed node here. That’s a day one activity to deploy Container Cloud. That machine can be reused once the deployment is complete and is no longer needed. It’s only needed for the bootstrap component of the day one activity.
The e-commerce app itself has got a number of microservices, as I’ve said before. Everything’s running in Silicon Valley with the exception of these two databases: the catalogue database, which is MySQL, and the user database, which is MongoDB, both running in Frankfurt. Everything else is running in Silicon Valley. Now I’m going to turn it over to James, and he’s going to walk through setting up that private network configuration within Equinix Fabric and Metal.
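That split means the Silicon Valley services reach the Frankfurt databases across the private interconnect. As a hedged sketch of what the connection settings might look like – the private IPs, database names, and ports here are invented placeholders, not the demo’s actual addressing plan:

```python
# Hypothetical private addresses on the Fabric-connected VLAN; a real
# deployment would use its own addressing plan and secrets management.
USER_DB_HOST = "10.64.20.5"       # MongoDB user database in Frankfurt
CATALOGUE_DB_HOST = "10.64.20.6"  # MySQL catalogue database in Frankfurt

def mongo_uri(host: str, db: str = "users") -> str:
    # tls=true keeps the session encrypted even on the private network.
    return f"mongodb://{host}:27017/{db}?tls=true"

def mysql_params(host: str, db: str = "socksdb") -> dict:
    # Generic connection parameters a MySQL driver would consume;
    # ssl_disabled=False keeps encryption in transit on.
    return {"host": host, "port": 3306, "database": db,
            "ssl_disabled": False}
```

The point is that even though the network is private, the connections themselves remain encrypted, matching the GDPR best practices described earlier.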
Demo Part 1: Private network configuration
James Malachowski: What you’re looking at here is the Equinix Metal portal. I’m specifically looking at the Connections tab where we are showing basically two connections, what we call a dedicated port, which is in Silicon Valley. This is basically giving Mirantis their own physical 10 Gig port in Silicon Valley that they can then add virtual circuits to, which represent different networks that you want to pass over our Fabric from Silicon Valley all the way back to Frankfurt.
And then we have the same thing on the Frankfurt side, which represents the remote end, where we have a single connection that represents a single VLAN we’re passing over to that side from Silicon Valley. Just to give a quick overview of Metal and what it is, I am a collaborator on Mirantis’ org. I have participation in many different orgs. We allow you to affect policy around your participation in different orgs and projects, and they’ve given me access to a single customer demo environment where we can collaborate on things like this.
Deploy a server on demand
If I wanted to, for example, create a new server, we have three different models: on demand, reserved, and then spot market. Reserved would be servers that you’ve contracted and ordered with your name on them dedicated to you. On demand is basically grabbing something from our physical pool that we have deployed globally at our 18 plus sites. I’m a super user, so I not only get to see our main sites, but also our private sites and other things that we’ve built. Sometimes, it takes a little while to come up because it’s quite the inventory.
The other component is our Equinix Fabric portal. This is where you effect connectivity across our fabric. Think of Metal as the compute side. This is really the WAN side of things: How do I get remote connectivity from my environment anywhere? As I come back to the screen to get ready to deploy a server, I can pick Silicon Valley as my site.
Let’s say I wanted to expand that existing cluster. I can grab one of these c3.smalls, which is one of the standard nodes that we use in this type of a deployment model. And I can grab an Ubuntu image, which will deploy in about a minute. Effectively, what we do for you is make it really easy and fast to grab bare metal. This is a real computer we’re going to grab here in a second.
We give you a whole variety of operating systems, everything from VMware, Windows, and Linux to anything you could imagine with Custom iPXE. You can deploy multiple servers, and one of the really powerful things for GDPR and security is the ability to deploy without a public IP.
One of the things you may want to do in your deployment model is have servers that never come up on the public Internet. Maybe they quickly attach to your private network back over the Fabric or the private connectivity that you’ve established to your data center or to a remote site. This would allow you to do this without ever exposing that asset.
But I’ll stick with our generic deployment model for now. Effectively, we’re going out and grabbing a server from our pool, a c3.small. It’s a physical server. It has a 3 plus Gigahertz Intel processor. It’s got 32 Gigabytes of memory – looks like it has a couple of disks.
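Everything in this provisioning flow can also be driven from the Equinix Metal REST API. Purely as a sketch (the field names follow the API as commonly documented, but treat them as assumptions and check the current API reference; this code only builds the request payload rather than sending it):

```python
# Sketch of a device-create request body for the Equinix Metal API
# (POST /projects/{id}/devices). Field names are assumptions based on
# public documentation, not verified against the live API.
def device_payload(hostname: str, public_ip: bool = True) -> dict:
    payload = {
        "hostname": hostname,
        "metro": "sv",                  # Silicon Valley
        "plan": "c3.small.x86",         # the c3.small used in the demo
        "operating_system": "ubuntu_20_04",
    }
    if not public_ip:
        # The GDPR-friendly option James mentions: request only a
        # private address so the server never touches the public Internet.
        payload["ip_addresses"] = [
            {"address_family": 4, "public": False}
        ]
    return payload

payload = device_payload("demo-node-01", public_ip=False)
```

Sending it would be a single authenticated POST; the same payload shape is what a Terraform provider assembles under the hood.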
Create a VLAN
While that’s coming up, I’m going to create a VLAN, so, I can, for example, create a new connection from Silicon Valley back to Frankfurt. It’s as simple as creating a new VLAN that is local to that site, and I would call this my Silicon Valley to FR connection. I would give it a VLAN ID. I can pick anything I want.
If I want to, let’s say, match tags on both sides or make things really simple from a documentation perspective, and it’s as simple as clicking the button. All of this can be automated via Terraform. Everything I’m doing is interfacing with our RESTful API. From the browser, you can grab all of these calls and see exactly what I’m doing on the back end.
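Since the portal is just a front end for the RESTful API, the VLAN step can be scripted as well. A minimal sketch, with a placeholder project ID and VLAN tag (the endpoint path reflects the Metal API’s virtual-networks resource as publicly documented; verify before relying on it):

```python
import json

METAL_API = "https://api.equinix.com/metal/v1"

def vlan_request(project_id: str, metro: str, vxlan: int,
                 description: str) -> tuple[str, str]:
    """Return the (url, body) for creating a metro-local VLAN,
    mirroring the 'new VLAN local to that site' step in the portal."""
    url = f"{METAL_API}/projects/{project_id}/virtual-networks"
    body = json.dumps({"metro": metro, "vxlan": vxlan,
                       "description": description})
    return url, body

# Placeholder project ID and tag, matching the SV-to-Frankfurt naming.
url, body = vlan_request("PROJECT_ID", "sv", 1040, "Silicon Valley to FR")
```

With an `X-Auth-Token` header and a POST, this is the same call the button click issues.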
Request a connection
Now that I have that VLAN, the next step is to request a connection. We have two of these here. I have a dedicated 10 Gig port in Silicon Valley and then I have a shared connection in Frankfurt. We’ll go ahead and request a new shared connection in Silicon Valley, just as an example.
We have Fabric everywhere we have metal. If we have metal there, you can quickly get on Fabric. It’s as simple as giving it a name, and I can choose if this is redundant or not, and then I can choose if I want that dedicated port or not.
I’ll just go ahead and submit that. Once I submit that, I’m given this token, and this token is what we’re going to use to connect the metal connection to the Fabric connection.
Create a Layer 2 connection to Equinix Metal
Once I came back to Fabric, it’s really as simple as creating a connection. When I come to the portal here, I will be greeted with a whole bunch of options. In this case, we’ve got a bunch of different service providers. You can directly connect into any number of public cloud providers — Google, Amazon, Azure, you name it, including private connectivity to other customers that you may have.
We’ll select Equinix Metal in this case. I’m going to create a Layer 2 connection on Equinix Metal. If I was making a redundant one, I would’ve selected that other option. And if I had my port existing from my data center, that would show up here. If this was your colocation, you could select one of your ports that was already on Fabric, or I could perhaps set up a virtual router within this environment and connect from there, but for this demo, I’ll do a port.
In this case, I’m coming from Silicon Valley and I select the port I want to use. I’ll just grab one of these test ports and click Next. And then over here, I’m gonna select my remote site. So, let’s say I wanted to go to Frankfurt. It’s 140 milliseconds away – not too bad, going all the way across the U.S. and the Atlantic. That’s, again, private over our global backbone, not public Internet, and that figure is the average round trip delay between these metros as of moments ago. If I click Next, I’ll be greeted with some options. This is where that token goes that I created, and this is where that VLAN goes that I created — same tag. Then this is my Silicon Valley to Frankfurt connection, and I would pick my speed for this virtual circuit.
A 10 Gig is fairly expensive, but again, that’s a 10 Gig private connection from Silicon Valley to Frankfurt all the way across the United States across the Atlantic Ocean, and it’s as simple as clicking Next and reviewing that order. If I click Submit, Mirantis will get charged a lot of money, so, I won’t click that button just yet.
That is creating connectivity in a nutshell. Once this gets submitted, you effectively have what we see here, which is an active connection between Silicon Valley and Frankfurt. I’ll turn it back over to you, Rick.
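The token-plus-VLAN pairing is what stitches the two sides together. As a plain-data illustration of the inputs the portal collects (this is deliberately not the Fabric API; the record type and its fields are invented for the sketch):

```python
from dataclasses import dataclass

@dataclass
class VirtualCircuitOrder:
    """Not the Fabric API: just a record of the inputs the portal
    asked for when stitching Silicon Valley to Frankfurt."""
    token: str        # from the Metal "request a connection" step
    vlan_id: int      # the metro-local VLAN created earlier
    origin: str       # e.g. "SV" (Silicon Valley)
    remote: str       # e.g. "FR" (Frankfurt)
    speed_mbps: int   # the virtual circuit speed you pay for

order = VirtualCircuitOrder(token="CONNECTION_TOKEN", vlan_id=1040,
                            origin="SV", remote="FR", speed_mbps=10_000)
```

Submitting the equivalent of this order is the step James stops short of, since a 10 Gig transatlantic circuit bills from the moment it goes active.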
Demo Part 2: Kubernetes cluster management
Create clusters and machines
Rick Pugh: Thank you, James, that was great. I’m going to move over to Container Cloud and talk a little bit about some of its features and work back into the demo that we’ve got running in Frankfurt and Silicon Valley.
This is the UI for Container Cloud. A couple of things to point out. We have the notion of multi-tenancy through something we call Projects, and there are about 20 different projects of which I have access to two, the Product Management Team and the Services Team.
I can’t see any of the other projects, clusters, or any other management functions and they can’t see mine. If somebody was added to this team, they would then be able to see it. If I wanted to change the context and switch to a different project, I would simply do that here.
Since Container Cloud is automating all the creation of all this infrastructure to create the Kubernetes child clusters, we’ll need the credentials for the API to go in. I’ve got Equinix Metal credentials set here. All of the other credentials are also set. If you wanted to add SSH keys, you could simply upload the SSH key. Then, when you were deploying your child clusters, you could indicate that you wanted to use those keys and they would be put on each of the machines that gets deployed through Container Cloud.
We’ve got the clusters, and most of the time is spent in this view. We’ve got some pre-existing Kubernetes child cluster setup, two of them on metal, one in Amsterdam and one in Dallas, and we’ve got a cluster running in AWS. If I was to go into Amsterdam, I could go to the actual child cluster’s UI, which is the Mirantis Kubernetes Engine, and we’ve got a single sign on enabled from Container Cloud to its child MKE clusters.
We’re in that child cluster’s MKE, and you can do anything that you can normally do within such an MKE installation. One of the things I can show is the users. When Container Cloud creates child clusters, it creates two users: a local admin, and the person who was actually running Container Cloud, which was me. All the activities you can normally do within MKE, you can do through this shortcut that Container Cloud provides.
Let’s go back and see how easy it is to create a cluster. I’m going to choose Equinix as the provider. I’m going to give it a name. Maybe I’ll go back to Dallas today and we’ll create a cluster there. I pick the release version and, for SSH keys, again, I can pick a key to put down. For the provider, here’s where I can select any of those Equinix Metal data centers that we were looking at.
I’ll go to Dallas 11 today. For Kubernetes, the network CIDR blocks and things, we’ve got some very sane defaults that rarely need changing. On the LMA side, here is where you would enable monitoring through Prometheus and logging through Elasticsearch; all the alerting is done with Alerta, so you’d be able to set thresholds and define the routing of those alerts through a number of different mechanisms, whether that’s Slack or e-mail or PagerDuty or ServiceNow or any of the standard kinds of routing for those alerts.
I’ve got that set up. I’ll create that. Now I’ve got a cluster that has no machine. Let’s add machines to it. We’re going to need three manager nodes for it. We’re going to use Ubuntu 18.04. As James was pointing out when he was going through his metal UI, you can see that there are the same machines that are available here. This is all automated, so you don’t have to go through that portal to do this, and it also shows you what the hourly charge rate is for that particular machine.
If you had reserved hardware with Equinix Metal, you could put the hardware ID here, and that would be used instead of the on demand type. I’ve got these three manager nodes set up, and we’ll create those. We can do the same thing for worker nodes. We’ll say two worker nodes. I’ll pick small here, and I’ll also add StackLight to those nodes, so StackLight will be put down on the worker nodes as they’re created. In about 25 minutes, these will all turn green and be a fully functional Kubernetes cluster.
Automatically upgrade Kubernetes cluster
The other thing I’d like to show, one of the things we spoke about for Container Cloud is its ability to do automated upgrades. We’ve got a cluster that was created with an older version. It’s saying that a newer version is available.
To do that upgrade is super simple, I simply see this release update notification. I look and see what the differences are. Okay, I’ve got a change log. I want to make this change, I say update. That will take that cluster, and first of all, it’ll do a backup, then it will take each of the manager nodes one by one and do a cordon drain, do the upgrades, and then bring it back. It does that in a serial fashion, all the managers, and then all the workers.
That allows the cluster to stay up and running and the workloads. Typically, all the microservices are able to stay up and running with no downtime for the cluster or the workload itself. That’s a quick overview of Container Cloud.
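The serial cordon-drain-upgrade loop Rick describes can be sketched in a provider-agnostic way. This is an illustration of the pattern, not Container Cloud’s actual implementation; the callbacks stand in for whatever client performs each step:

```python
def rolling_upgrade(nodes, cordon, drain, upgrade, uncordon):
    """Upgrade nodes one at a time so the cluster and its workloads
    stay up: cordon (no new pods scheduled), drain (evict pods),
    upgrade, then return the node to service. Callers pass managers
    first, then workers, matching the order described above."""
    for node in nodes:
        cordon(node)
        drain(node)
        upgrade(node)
        uncordon(node)

# Exercise the pattern with a fake client that records the call order.
log = []
rolling_upgrade(
    ["manager-1", "manager-2", "worker-1"],
    cordon=lambda n: log.append(("cordon", n)),
    drain=lambda n: log.append(("drain", n)),
    upgrade=lambda n: log.append(("upgrade", n)),
    uncordon=lambda n: log.append(("uncordon", n)),
)
```

Because each node is fully returned to service before the next one is touched, replicated workloads keep at least n-1 replicas available throughout.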
View clusters, services, applications and pods
Let me log out and log back in. Now, this is the one that is in Silicon Valley. Go back to our picture and we have the Silicon Valley and we have the Container Cloud management cluster as well as two child clusters, one in Silicon Valley and one in Frankfurt. If we go and look at the Frankfurt machines, you’ll see that they’re all running in Frankfurt.
I can also go into the MKE in Frankfurt and we can look at what are the services that are running there. I’ll log in and sign in with Keycloak. I’ll tell it to ignore the licenses. And if I go down and I look at the services here in Frankfurt, we see that there’s two services running as we had said in the setup. There is a catalog database and a user database.
We’ll also bring up Lens, which is our open source operational IDE for Kubernetes. It’s super popular, with over 400,000 users using it on a daily basis. We’re looking at Frankfurt from the view of Lens, and it’s showing the same microservices running; there’s no smoke and mirrors here. And here are the applications running in Silicon Valley: we’ve got these microservices running in Silicon Valley, and those two views match as well.
One of the things we can look at is Grafana, which is the Prometheus side of our logging monitoring alerting. I could go look at, for example, the pods, choose the Sock Shop name space, and we’re specifically looking at all the pods that are running for that application in Silicon Valley. It’s super simple to do that. The dashboards are pre-set up for you.
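Under the hood, those Grafana panels are Prometheus queries. An equivalent ad-hoc query against the Prometheus HTTP API might look like this; the endpoint URL is a placeholder for wherever StackLight exposes Prometheus, and the metric assumes kube-state-metrics is scraped:

```python
from urllib.parse import urlencode

# Placeholder endpoint; StackLight would expose its own Prometheus URL.
PROM_URL = "http://prometheus.example.internal:9090/api/v1/query"

def pods_in_namespace_query(namespace: str) -> str:
    """Build a Prometheus HTTP API URL listing pods in a namespace,
    via the kube_pod_info metric from kube-state-metrics."""
    promql = f'kube_pod_info{{namespace="{namespace}"}}'
    return f"{PROM_URL}?{urlencode({'query': promql})}"

url = pods_in_namespace_query("sock-shop")
```

A GET against that URL returns one series per pod in the namespace, which is essentially what the pre-built dashboard is rendering.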
Demo Part 3: Geographically distributed e-commerce app
With that, let’s roll over to the actual e-commerce application. This is a simple application written by Weaveworks to show microservices in a containerized fashion, running on the Kubernetes clusters we just saw. The first thing I can do is log in with a password. Now that I’m logged in, I can look at the catalog. I can choose some holey socks, and if I wanted to add them to my cart, I’d simply say “Add to cart.”
Again, all this functionality is split between the catalog database in Frankfurt and the front end services running in Silicon Valley. Now let’s look at the data we’ve been talking about, because this whole webinar is about securing GDPR data: user names correlated with things like addresses, emails, and phone numbers. This data is coming from Frankfurt over that private network, and it’s the whole reason we had to follow the GDPR regulations and guidelines, because I’ve now got a person’s name correlated with their address.
That was all collected in Frankfurt and stays in Frankfurt. The data is going over that secure pipeline through that private network back to the front end to be displayed here. There’s virtually zero chance that somebody who doesn’t have authorized access would be able to gain access to this critical data.
We saw the Equinix Fabric being set up, as James showed. It’s very simple to do that. All of the things that you saw with Container Cloud, you didn’t see the deployment action. We did that prior to the webinar, but you can see all of the activity, how you would create new clusters, how you can manage and scale them, you can add machines and remove machines – you can do all that.
We happened to use Lens to deploy the application, but you can use any tool to do the actual workload deployment onto those clusters, knowing that it’s secure between the databases running in Frankfurt and the front end accessing those in Silicon Valley. You also noticed how fast it was. There was absolutely no lag in getting any of that data back that was coming from Frankfurt.
Moderator: So, Rick, how do users get access to Mirantis Container Cloud hosted trials?
Rick Pugh: It’s very simple. Go to our website. There’s a “Try It Now” button featured on the front page. There’s two methods that you can use. One is our hosted trial, which is actually running on Equinix Metal. After you sign up, you get a token through the email. It’s very simple – just log in and you’ve got a fully functional Container Cloud that you can then deploy MKE child clusters within the Equinix Metal global set of data centers, fully functional if you wanted to. You could also deploy a workload and play around with that. There’s also a free trial that you can download and deploy on any of the supported providers that we’ve got, which were listed previously — public cloud, AWS, Azure, GCP coming this year, on prem for Bare Metal by Mirantis as well as OpenStack and VMware. All that’s available in two methods. One is a hosted trial, and the other is a download trial.
Maybe this is on people’s minds before we sign off. It was a question that we got yesterday: how long did it take us to set up everything for this demo? The first thing we did was set up the private network that James talked about between Frankfurt and Silicon Valley, which took about an hour. Then there was a 24 hour wait while an actual, physical network cable was plugged in, and they let you know when that’s done.
Once that was completed, the deployment of Container Cloud into this environment took about an hour, then another hour for various configurations and setups; the application deployment itself, splitting the databases into Frankfurt and the rest into the front end, probably didn’t take that long. Once we had the connection, it took about three hours to get this thing fully functional.
Thanks for reading! You can access a recording of the webinar, including footage of the demo, here.