Any application running at production scale should have an “Ingress” to expose itself to the outside world. While Kubernetes provides the “Ingress” resource for this purpose, its feature set is limited depending on the kind of Ingress Controller (usually nginx) being used.
Alternatively, you can leverage Istio and take advantage of its more feature-rich Ingress Gateway resource, even if your application pods themselves are otherwise running on plain Kubernetes. You can do so by incrementally adopting a single Istio feature, the Ingress Gateway, which uses the Envoy proxy as the gateway (as opposed to nginx). We explore these two approaches in the following webinar, which includes demos so you can see how Istio helps manage your ingress traffic.
Nick Chase: All right. Good morning, good afternoon, good evening wherever you are. I am Nick Chase from Mirantis and I am here to welcome you to today’s webinar, Your App Deserves More than Kubernetes Ingress: Kubernetes Ingress vs. Istio Gateway. I want to thank you all for being here, and I especially want to thank today’s presenter, Andrew Lee. Andrew, tell us a little bit about yourself.
Andrew Lee: Hello, everyone. Thank you for joining on your Tuesday morning or afternoon. My name is Andrew. I’m a technical instructor here at Mirantis. I’ve been on our training team for the past four years, and more recently we have been delivering Kubernetes courses. So we have our first Istio course coming up, and we wanted to give you an overview of some of the features of Istio, which is today’s topic: Istio Ingress versus Kubernetes Ingress. So please enjoy the presentation. I think Nick has a few housekeeping items.
Nick Chase: Yes, a little bit of housekeeping before we go on. Yeah, so as you will notice, there is a panel usually to the right-hand side of your screen. If you have questions as we go along, please go ahead, feel free to go ahead and add them to the questions pane. We’re gonna go ahead and save the questions for the end, but you don’t have to. I’m gonna go ahead and answer the most common question right now. Yes, you will get a link to the slides and to the video of the webinar after we are done. So with that, Andrew, I’ll ask you to tell us a little bit about what we’re gonna learn today, and then jump right in.
What is Kubernetes Ingress
Andrew Lee: Sure. All right. So today we’re gonna have a short presentation, about 10 to 15 minutes, about what is Ingress, specifically the Kubernetes Ingress and the Ingress Controller. And then we’ll discuss Istio’s version of Ingress, which is called Gateway. After we have this discussion, I’m gonna spend the rest of the time doing a demo of these two topics. All right?
Entrypoint for your application
So first we have to understand what Kubernetes Ingress is. Right? And there are many ways to expose your app in Kubernetes. If you have just started using Kubernetes, maybe you’re familiar with the NodePort type of service. Right? So you get your node up and running, and you expose a port on that node, which maps to your service, which then maps to your pod. Right? And in a more advanced scenario, you may have deployed a LoadBalancer type service using your cloud provider, or even an Ingress with an Ingress Controller.
So we will look at these three types just in case you’re not familiar with them. So here’s what a NodePort entry point for your app might look like. From the Internet, I can access my Kubernetes node. And when I create a service of type NodePort, it takes one of the ports from all of my Kubernetes nodes. It’s showing one in this diagram, but all your nodes will have this node port taken. Now when I talk to this node IP on this NodePort, by the power of iptables, my request is forwarded to my application container.
Entrypoint for your Application: NodePort
So it’s a very simple and easy way to get up and running to expose your app. Right? Now the problem comes when we start looking at, okay, what are my end users seeing? Right? I have this really cool application, and then I tell my users, okay, my app is available at 22.214.171.124:30126. So obviously that’s not really an ideal scenario to give to your end users. So one of the issues is that it’s not a standard port range. Right? The default NodePort range is 30000 to 32767. Okay?
The other one is that we only get one NodePort per service. So I just have one application exposed; if I had another one, I would have to create another NodePort service to expose that set of pods. Right? So with these limitations, it’s typically not for production use. Now, looking at Ingress: okay, we still have our traffic coming into the node, except now our Ingress Controller, which is a pod in this case, is going to be the one doing the proxying. And that proxy is choosing which service to forward the user traffic to. Okay?
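A minimal NodePort Service of the kind described above might look like the following sketch. The app name and port numbers are illustrative (the `nodePort` matches the one that shows up in the demo later), not taken from the actual demo manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flaskapp
spec:
  type: NodePort
  selector:
    app: flaskapp       # matches the labels on the application pods
  ports:
  - port: 5000          # port the service exposes inside the cluster
    targetPort: 5000    # container port on the pod
    nodePort: 32015     # must fall in the default 30000-32767 range
```

With this in place, the app is reachable at `<any-node-ip>:32015`, which is exactly the awkward user experience described above.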
Entrypoint for your Application: Ingress
So in this case, I now have two applications, service A and B. And you specify some rules about how to forward traffic to each service, and that’s called an Ingress. In fact, there are several components to Kubernetes Ingress that we’ll look at: one being the Ingress rule itself, and then the controller that is actually doing the forwarding. Okay? And for a full-blown production environment, we just wanted to give you an idea about the architecture.
So typically you’ll have some load balancer which is the single point of entry for users. That load balancer will pick one of your Kubernetes worker nodes, and then your Ingress Controller will be exposed through that load balancer service. Okay? But what we’re gonna do in the demo today will look more like the previous diagram, where we exposed a host port. When I talk to the host port, I’m actually talking to the Ingress Controller. So keep that in mind. Okay?
Kubernetes Ingress Advantages
What’s great about Ingress is that we can do path- and host-based routing. So I mentioned earlier that we can define how our traffic gets routed to service A or B. Well, if service A is some blog application and service B is some shopping cart, right, I can say: if a request comes in for mysite.com/blog, let’s send it to service A.
And that comes from the Ingress rule. If it comes in for mysite.com/shop, let’s send it to service B. That’s defined through the rules, and we can expose just one port; depending on the user request, we send it to the correct service. So: multiple services behind a single cloud load balancer. And that’s the key. To do this, you need three components. We have our Ingress rule, which is the Kubernetes resource where we define that path-based routing.
So we’ll see that shortly in the lab, along with the Ingress Controller, which is the actual pod doing the proxying. Okay? The most common and recommended tool for that proxy is Nginx. Okay? Otherwise, if you’re using cloud providers, they may implement this Ingress Controller in a different way, in conjunction with a load balancer. So this acts on the Ingress definitions provided. And then there’s the default backend: if a request doesn’t match any rules, it will be sent to this backend pod. Okay?
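An Ingress rule implementing the blog/shop split just described might look roughly like this. The hostname and service names are illustrative, and the manifest uses the current `networking.k8s.io/v1` schema, which is newer than the API that was current at the time of this webinar:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog-shop
spec:
  rules:
  - host: mysite.com
    http:
      paths:
      - path: /blog
        pathType: Prefix
        backend:
          service:
            name: service-a   # the blog application's Service
            port:
              number: 80
      - path: /shop
        pathType: Prefix
        backend:
          service:
            name: service-b   # the shopping cart's Service
            port:
              number: 80
```

Both paths share one host and one port; the controller reads these rules and proxies each request to the matching Service.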
Now this, too, has some limitations. Out of the box, okay, if I don’t get an NGINX Plus subscription, there are limited observability toolsets available for my Kubernetes cluster. There’s also no good way to do advanced traffic control or release strategies. So if you think about, okay, let’s send ten percent of my traffic to version two and 90 percent to version one, there’s no way to specify percent-based releases. Okay?
Istio Ingress Gateway
So you have some limited capabilities about traffic control. Okay? And also there’s no service resiliency features either. All right. So keeping that in mind, let’s take a look at Istio and how it can solve some of those shortcomings. All right? So primarily Istio is the control plane for Envoy, and that’s the way I like to think about Istio. And Envoy is going to be our proxy for the Ingress gateway. All right?
Think about Envoy as sort of a direct replacement for Nginx. So, Envoy being more featureful, okay, let’s put Envoy at the edge and have Istio control and program Envoy. So rather than having an Ingress Controller here, we now have a component called the Istio Ingress gateway, which is another pod with an Envoy container running. And this gateway is programmed by creating a Gateway Kubernetes resource and a VirtualService Kubernetes resource. Now these are defined as custom resources, and we’ll also see that in the demo.
In similar fashion, we have service A and B, where we define the rules about which service to forward traffic to. And you can imagine we can also specify percent-based traffic routing as well. Now, what I’ve done here is highlight an Envoy in application A as well. If you’re not familiar with Istio: you will have what’s called a sidecar container in every pod in your workload.
In fact, all of my application pods will have this Envoy container in each of them. Okay? And that’s going to be your data plane. And we’ll see why a little bit later. To utilize the Istio Ingress Gateway, you need three resources. Okay? You need the Gateway resource. This is a custom resource where you configure ports, the protocol, and certificates, if any. As well as a VirtualService; this is where you’re defining your routes.
So, the VirtualService is very similar to the Ingress Kubernetes resource we saw earlier; it will be a replacement for that resource. And then lastly, there’s your Ingress Gateway, which is the pod with Envoy that does the routing. Okay? So the Ingress Gateway, then, is a replacement for an Ingress Controller in native Kubernetes. All right?
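A minimal Gateway resource of the kind just described might look like this sketch. The resource name and host are illustrative; the selector shown is the label that Istio’s default ingress gateway pod carries:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway   # bind this config to the Istio ingress gateway pod
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "mysite.com"          # which Host headers this gateway accepts
```

On its own, this only opens the port on the Envoy at the edge; the routes come from a VirtualService that references this gateway.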
Istio Gateway Advantages
The advantages are that Envoy can handle layer seven traffic. So can Ingress, but Envoy has additional advanced features as listed here. Okay?
So things like more advanced routing rules, percent-based routing, distributed tracing, limiting the rate of traffic, checking policies. And one of the very important feature sets of Istio is that we’re able to observe our microservices better, more holistically. So metrics collection is also very simple when you have Envoy. It also natively supports gRPC. So if your applications use gRPC to communicate, then Envoy and Istio are a good choice.
And secondly, there is dynamic configuration. So no longer do you update a config map and then restart your pod. So Envoy has a feature called hot reload where you don’t have to drop any traffic before loading a new config. Lastly, Ingress rules are also supported.
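The percent-based routing listed above (for example, the 90-percent-to-v1, ten-percent-to-v2 release mentioned earlier) can be expressed as a VirtualService along these lines. All names here, including the referenced gateway, are illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - "mysite.com"
  gateways:
  - my-gateway            # a Gateway resource exposing this host
  http:
  - route:
    - destination:
        host: my-app-v1   # Kubernetes Service for version one
      weight: 90
    - destination:
        host: my-app-v2   # Kubernetes Service for version two
      weight: 10
```

Envoy splits traffic per request according to the weights, which is exactly the kind of canary release that plain Kubernetes Ingress cannot express.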
Istio Gateway Disadvantages and Alternatives
And some disadvantages you may think of is that it requires yet another control plane component.
So Envoy by itself, you can use it as sort of a data plane component, but it’s easier to do so with a control plane like Istio. So then it requires you to learn this new tool, new commands, and how it works. Another thing is that if you talk to a company or product like Ambassador, they’ll say, “Well, Istio is primarily for internal traffic management. It’s for your service mesh. It’s not specialized for exposing traffic to the outside. It’s not for north-south; rather, it’s more specialized for east-west communication.” So that may be a disadvantage.
So, some alternatives. There are toolsets called API gateways, which do specialize in this north-south edge traffic: Ambassador, and you may have heard of Traefik or Kong. Now, these are mainly commercial products, whereas Istio and Envoy are completely open source; Envoy specifically is a CNCF project. All right?
Installation and Configuration of Kubernetes Ingress Demo
Okay. So without further ado, let’s take a look at the demo overview. Okay?
We’re gonna have an application which gives us some cat gif. And we’re going to progressively expose it by better and better methods. Okay? So we’ll start with NodePort, and then we’ll move on to Kubernetes Ingress, and we’ll take a look at what domain name is configured. And we will live install and configure Istio into our Kubernetes node. And, lastly, we’ll look at the Istio Ingress Gateway. And as a bonus, we’ll take a look at what observability features are available using Istio.
All right. So here I have my environment. This is just a single-node Kubernetes deployment. Right now, there’s nothing deployed on here except the Kubernetes control plane. So if we get the pods in kube-system, we have our Kubernetes components running. What I’m gonna do is navigate to the manifest directory. And here I have an application called Flask that’s ready to be deployed.
So the basics of this app: it’s very simple. We have a frontend Flask service, which serves the Flask app pod. And it’s Python code which retrieves cat gif URLs from Redis. So we’ll go ahead and deploy this and see what it looks like. We’re going to deploy the backend first to make the cat gifs available. So we’ll apply the Redis app first. And while it’s deploying, let’s take a look at the deployment spec. There you can see a service of type ClusterIP.
This Redis service does not need to be exposed outside, so the ClusterIP type is fine. Now we have a deployment of the Redis app. Okay? We’re gonna have a mounted volume, which is going to serve a Redis configuration that we passed in a config map earlier. And this is not part of the recording, but I have created this config map with the redis.conf inside. Okay? So let’s take a look at that pod. It’s up and running.
Deploying Our Flask App Deployment
And let’s go ahead and deploy our Flask app deployment.
And the Flask app deployment YAML contains the NodePort type service, which we will use to access this application, as well as the Flask app container. Okay. So let’s look at our pods now. Now we have our Flask deployment and the Redis deployment running. What that means is we can now do a get services, and we’ll notice that our Flask service has a node port associated with it. And that means I can go to my IP address, which I have assigned a domain name for.
So it’s andrew.mirantis.com, and if I go to port 32015, which matches the NodePort, then I should get my application. So this is my cat application. Later, we’ll deploy a dog gif, but for now we just have the cat gif of the day. Okay? So that’s a very simple way to get up and running with Kubernetes to expose my application.
But as we discussed in the webinar, one of the issues is that, well, this is not a standard port. Okay? And we must have different ports for different applications. So now what we’ll do is create an Ingress resource. And that’ll allow us to do path-based routing. Okay? So let’s take a look at the Ingress manifest, the cat/dog Ingress. And this is saying anything destined for the /cats path, let’s send it to the Flask service. Anything for /dogs, let’s send it to the Flask dogs service.
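The cat/dog Ingress just described would look roughly like this. The service names and port are assumptions based on the demo narration, and the manifest uses the current `networking.k8s.io/v1` schema rather than the one current at the time of the webinar:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cat-dog-ingress
spec:
  rules:
  - host: andrew.mirantis.com
    http:
      paths:
      - path: /cats
        pathType: Prefix
        backend:
          service:
            name: flaskapp        # the cat service
            port:
              number: 5000
      - path: /dogs
        pathType: Prefix
        backend:
          service:
            name: flaskapp-dogs   # the dog service
            port:
              number: 5000
```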
But that doesn’t mean we’re going to go to two different hosts. We’re still going to be able to go to andrew.mirantis.com; but depending on the path that we request, we’ll be forwarded to a different service. So what we’ll do now is deploy our dog application, the Flask dog app. Okay? We’ll make sure that’s running. It’s been created, and now it’s running. So let’s go ahead and apply the Ingress rule. This one, the cat/dog Ingress. And now the Ingress is created.
Because we already have the Ingress controller (this is the name of the Ingress controller, and we have this pod up and running), it’s ready to enforce this rule. And so if we describe the Ingress called cat-dog-ingress, we should see that it’s being served by the controller. Okay? And this Ingress resource also has an address. Now it’s ready to be used. So then let’s go back to here. Okay? Go to port 80, the standard port. So my browser doesn’t quite like that, but we will proceed and ignore that error for now. Since I didn’t specify any path, we’re getting the default backend here. So that’s the desired behavior.
But if I specify /cats, okay, I get my cat gif of the day. That’s our cat service. If I go to /dogs, I get our dog gif of the day. Right? So that’s the power of Ingress: having a single entry point on a standard port, but being able to do path-based as well as host-based routing. It’s very simple to do. And all it took was the deployment of the Ingress controller, as well as the Ingress rules that we created. Right? Okay.
Deploying and Creating Ingress Rules for Istio
So then how do we go about deploying and creating Ingress rules for Istio? What we’re going to do first is take a look at the ingress-nginx namespace. This default ingress-nginx will actually interfere with the Istio Ingress gateway; it’s going to try to take the same port 80 from the host. So we’re going to get rid of this namespace entirely, as well as delete the Ingress resource. So let’s do some housekeeping clean-up with Ingress. We’ll delete the cat/dog Ingress, and then we’ll delete the ingress-nginx namespace as well.
That’ll take a second just to delete all the resources. Afterwards, we’ll take a look at the installation of Istio using Helm charts. Just a few more seconds and our Ingress namespace should be deleted. There we go. Okay. So the next step is to download this repository. Now if you are following along from home, this is just a standard Istio, version 1.1.9, that we’ve just downloaded from the Internet. Okay? And it comes with these Helm charts. The first one we’re gonna run is called istio-init, and that’s going to initialize my custom resources.
And by custom resources, I mean, if we do kubectl get… istio-system… let’s see, that’s not the list I was looking for. Right? If we do kubectl get crd, you see that there are a lot of these custom resources with the istio.io suffix, and these are all related to Istio. Right? So this will allow us to later on install Istio and have Istio manage these resources. The next Helm chart to run is the actual control plane components themselves. Right?
And so this one will spawn all of the Istio control plane components with the following options: gateway type NodePort, Kiali enabled, Prometheus enabled, tracing, Grafana, and so on. So pretty much everything enabled. We’re gonna run that, and that’ll take a few moments to install as well. Eventually what we want to get to is the Istio control plane: take a look at the Istio Ingress gateway, and be able to forward traffic to our existing dog and cat apps. Instead of using Nginx, we want to use Envoy. So it looks like it’s been deployed. Let’s look at the istio-system namespace.
So a couple of my pods are still coming up. Some are still being created, so let’s give it a moment. Get the pods. Now the Ingress gateway is running. The sidecar injector is still being created, and so is Prometheus. Prometheus is running, and the Ingress gateway. So we should have all pods fairly quickly. Okay? Now, if you take a look at my existing workloads, right, they each have one container. But in an Istio-enabled environment, all of my workload pods should have two containers, because each of them will run an Envoy sidecar.
And Istio makes it really easy to automatically enable injection, and that works as follows. We’re gonna label whichever namespace my workload is running in; in this case, it’s default. We’re gonna put a label on it: istio-injection=enabled. Okay? Now that’s not gonna change anything just yet. What we need to do is actually delete the pods if they already exist, and we’ll just let the ReplicaSet controllers recreate the pods. And there are some dependencies between these applications:
Redis should be up and running first. So it will take a few seconds, a couple of minutes, to wait for these pods to come back up.
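The injection steps just described boil down to two commands, assuming the workload runs in the default namespace:

```shell
# Tell Istio's sidecar injector to act on new pods in this namespace
kubectl label namespace default istio-injection=enabled

# Delete the existing pods; their ReplicaSets recreate them with the Envoy sidecar
kubectl delete pods --all -n default
```

The label alone does nothing to running pods; injection only happens at pod creation, which is why the delete-and-recreate step is needed.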
So let’s take a look right now. Okay. We now have all the pods with two containers. If you describe them, you’ll be able to see what I’m talking about. Describe the Redis app (describe the pod, that is), and you see that in the containers field there is a new container called istio-proxy. That’s the sidecar which is handling the traffic for this pod. Okay? And that is the Envoy container running within the pod. Okay.
Our workload is ready to go with Istio. And so we’re going to go ahead and start working on the Ingress gateway of Istio. Now, the Ingress gateway has already been deployed for you. If we go to the istio-system namespace, you can see that we have an istio-ingressgateway pod. Right? Now, for this demo specifically, let me show you one thing. We’re going to edit the deployment spec of the istio-ingressgateway.
And we’ll actually map the host port 80 to this pod directly, to the Ingress gateway directly. That way any traffic coming to the host will be forwarded to this gateway pod. Otherwise, we would probably create something like a load balancer service, and then ensure that the load balancer forwards traffic to this gateway. But this is just one way of doing it, and this is for demo purposes. Okay? So we will put a hostPort here and save that spec.
Configuring the Istio Gateway
All right. Our Ingress gateway pod is set up. What we’ll do next is start configuring the gateway. So let’s go back to our manifest directory. I said that in order to use the Istio gateway, you need to create both a gateway spec and a virtual service spec. Okay? The gateway will configure my host and port for this pod, and then the virtual service will configure the rules. All right? So I have both of those here. My cat/dog gateway will expose port 80, protocol HTTP, and my host name will be andrew.mirantis.com.
My virtual service will look like the following. It’ll utilize that gateway, but we also have a couple of rules. Right? The first part of the rule says: if the request’s User-Agent header matches Chrome, then let’s send it to the dog service. Okay? Otherwise, for any other request, let’s send it to the regular cat service. All right? Let’s go ahead and apply those two resources on Kubernetes: kubectl apply the cat/dog gateway, and apply the dog-Chrome virtual service.
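The two resources just applied might look roughly like this. The resource and service names are assumptions based on the narration:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cat-dog-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "andrew.mirantis.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dog-chrome
spec:
  hosts:
  - "andrew.mirantis.com"
  gateways:
  - cat-dog-gateway
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Chrome.*"   # Chrome browsers get the dog service
    route:
    - destination:
        host: flaskapp-dogs
  - route:                      # everyone else gets the cat service
    - destination:
        host: flaskapp
```

Match blocks are evaluated in order, so the catch-all cat route only applies when the Chrome match fails.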
Okay. So we now have our gateway resource and the virtual service resource. So now I go back to my website, just the root directory. So I navigated to andrew.mirantis.com. My browser is actually forcing me into HTTPS, so what I’ll do is open a new incognito window and go to the site directly again. Because this is Chrome, I am getting the dog gif of the day. And if I open a browser that is not Chrome (here’s Safari), okay, Safari will get the cat gif of the day.
How to Retrieve the Kiali Dashboard
And this is all thanks to our virtual service, which is routing our traffic to the respective services. Right? That is the power of the virtual service. All right. So the last thing I want to show you, as a bonus, is how to retrieve the Kiali dashboard. Kiali will allow us to visualize our microservices. So let’s take a look at the Kiali observability tool. We’ll just expose it as a NodePort.
So let’s go ahead and do that. Edit the service, kiali. Okay. Change the type to NodePort, like so. And that service is live. And let me make sure that I have my secret; it’s called kiali. Okay, I have my Kiali secret with the admin dashboard credentials. So the location of Kiali is then the node IP and that NodePort, at /kiali. And here’s the console. Log in with admin/admin, which comes from the secret that we created earlier.
And what’s great about Kiali is we have various views we can select. Today’s focus will be on the graph view. So we want to select the namespace, and our workload is in the default namespace, so we’ll select that one. Now, when no traffic is flowing, it doesn’t look particularly flashy. So what we can do is run some commands to simulate traffic. We’ll do a while loop curling the domain, and we will also curl as Chrome, all right, against andrew.mirantis.com.
And let’s curl another resource. This time, we will pass an invalid URL just to see what happens. While that’s going, it’s running some curl commands against my service, and shortly I will see some responses in Kiali. What’s great about Kiali is I can see exactly where the failures are occurring. So for example, my 404 errors, the path-not-found errors, are happening at the Flask service going to the Flask app. Right?
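The traffic-simulation loop described above might look like the following; the hostname is taken from the demo, and the exact flags are an assumption:

```shell
# Generate steady traffic so Kiali's graph has something to draw
while true; do
  curl -s http://andrew.mirantis.com/                 # default route (cat service)
  curl -s -A "Chrome" http://andrew.mirantis.com/     # User-Agent match (dog service)
  curl -s http://andrew.mirantis.com/invalid          # deliberately bad path, produces errors
  sleep 1
done
```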
And we can see that our dog app is returning fine, so we know we can pinpoint our debugging efforts to our Flask service and the Flask app. So in fact, let’s cancel that for a sec. What we can do then is get the logs of one of our Flask app deployments to see what is going on. We do kubectl logs (not kubectl get logs), and we will select the Flask app container.
And so, because we’ve been able to easily pinpoint where it is, we can see that a couple of users have been requesting this location, /invalid; otherwise, our requests have returned 200. Okay. So you can start to imagine, as you have bigger applications, bigger microservices, more dependencies, this graph here will come in very handy. And even if you are an operator who is new to the team, new to the application, you can get a very good overview of what the architecture looks like.
For example, we can see here that our cat application actually retrieves from Redis, whereas our dog application does not. Okay? The dog gif URLs are embedded in its HTML, whereas the cat app’s are not. So that’s all I wanted to show for today. Thank you for watching.
Nick Chase: All right. So let us go on – because you’re here, we want to thank you for being here. And we’re going to get into questions in a minute. Andrew, if you can go to the next slide. The training department has offered a 15 percent off discount. You can use this coupon code. This is your reward for sticking around to the end. You can see here we have Kubernetes classes.
Actually, Andrew, you could probably speak to what these classes are a bit more than I can. I know we’ve got some syllabi available in the handout section. But do you want to tell us a little bit more about these courses, Andrew?
Andrew Lee: Yeah, we have the handouts available. Basically if you’re new to the Kubernetes world and you want to get an introduction, then the KD 100 class is a great place to begin. I know that CKA is and has been a hot topic, and so we have a course to prepare you for the CKA as well. So if you sign up for the 250, KD 250, that’s a great comprehensive course starting from knowing nothing about Docker and Kubernetes to being able to pass the CKA.
And more recently, we have our Istio training coming up this week. The first course will take place this Thursday. So that’ll be our introduction to Istio and Service Mesh. And then talking more about –
Nick Chase: You’ll be teaching that, won’t you?
Andrew Lee: That’s right. I’ll be teaching this one. And the second day we’ll focus on security, resiliency, and monitoring using Istio.
Nick Chase: All right. Fantastic. Okay. So, again, that’s at https://training.mirantis.com/. You can use the code WEBMIR2019 for a 15 percent discount. All right. So let’s get into Q&A. If you have not submitted any questions yet, now’s a good time.
Questions and Answers for Istio Ingress Gateway vs. Kubernetes Ingress
I see we’ve got a bunch of them. Let me get to the list and we’ll take a look. All right.
Question: “So how resource-intensive is it to run Istio in your Kubernetes cluster?”
Andrew Lee: So I found that in the demo environment specifically, it was pretty intensive for me. So let’s take a look; we can do a describe node. Right now, it’s using about two virtual CPUs and about 70 percent of my provisioned memory. Okay? So I found it pretty intensive to run the additional control plane components, but of course it will help to have additional Kubernetes nodes. Your master nodes may end up quite a bit larger because of the additional control plane components you’re deploying.
Nick Chase: Okay. All right.
Question: “Is Istio the only way to have Envoy proxy at the edge?”
Andrew Lee: No, because let’s see. So when we talked about some of the alternatives, these guys, like Ambassador is a product where it’s still using Envoy, but it’s just a different control plane component. So Istio is not the only way. There are definitely alternatives on the market.
Question: “Are these classes online or at a campus?”
Andrew Lee: These classes are both online and available onsite. So we have public classes which are periodically scheduled throughout the weeks, and those are both virtual and onsite. So for the same session, you can pick and choose whichever method of delivery you’d like.
Nick Chase: And I’ve actually taken some of these classes virtually. It’s very convenient.
Andrew Lee: Yeah, absolutely.
Question: “If the main difference between Kubernetes, Ingress, and Istio Ingress is Nginx versus Envoy, why is Envoy better?”
Andrew Lee: “If the main difference is Nginx versus Envoy, why is Envoy better?” So Envoy is more featureful than Nginx. And it’s not as locked behind a paywall; it’s more community-driven. Many of the current big companies, like Google and Lyft, are adopting Envoy and contributing back to its code, so this is a great ecosystem to get into currently. And we can see that when we talked about advanced routing rules, the tracing, the metrics view that we saw in Kiali, these are all available because of Envoy’s features.
Nick Chase: No, I was just gonna say you said that Envoy is more featureful. Could you give us some examples of that?
Andrew Lee: Yeah. So some of these metrics that we’re gathering about how much traffic is being sent, for example, the request percentage, these are all kept track by Envoy. Because Envoy is right at the data plane and it is the medium in which your microservices communicate with each other, it can see all the communication channels between all of your services. Okay?
Whereas with just pure Kubernetes, you’ll have to install some additional metrics toolsets in order to see this kind of info. And the same thing applies to the edge: if you just had Nginx, you wouldn’t have as much visibility or as many features as with Envoy.
Question: “Could you just use Envoy as the Ingress Controller without Istio?”
Andrew Lee: So I think that’s a similar question as before. There are alternatives to Istio. Another one that I found is called Contour. So without having the extra custom resource definitions from Istio, Contour is a good project. It’s still using the Envoy proxy, but it’s implemented in a different way than Istio, so it’s a little bit more lightweight. If that’s what you’re looking for, then maybe that’s a good option for you to look at.
Question: “Will it be possible to deploy Istio pods with MCP 2.0 GUI?”
Andrew Lee: Yeah, definitely. Well, when you say MCP 2.0, I’m guessing you’re talking about our CaaS. And so in our CaaS product, we will have a checkbox to deploy Istio, and that’s about all you need to do.
Nick Chase: Gotcha. Okay. All right. We have a couple of other questions that need a bit more elaboration than we can probably handle in this Q&A, so we’ll have to deal with them separately. But if anyone else has any other questions, I would encourage you to go ahead and drop them in now so that we can get to them before we wrap up for the day. And if we didn’t get to your question, we will get to your question; we will get you an answer.
All right. Okay. Well, going once. Going twice. Sold. Well, I want to thank today’s speaker Andrew Lee for giving us a really nice informative presentation today. And I want to thank all of you for joining us here. We know how busy you are and we appreciate you taking the time to be with us today. And as always I wanted to thank our super producer Michelle Yakura, without whom we could never do any of this.
And I want to thank all of you again for putting up with my cold. And we’ll see you next time when hopefully I will be able to speak. But again, thank you again, Andrew, and we’ll see you all again next time.
Andrew Lee: Thank you, Nick. Thanks, everyone.