Introducing the NGINX Ingress Controller in Mirantis Kubernetes Engine 3.5.0
Most Kubernetes services don’t just exchange information within their cluster — outside users and services need to be able to access them. The Kubernetes Ingress resource makes this possible, describing a set of routing rules based on hostname and/or path. An ingress controller implements those routing rules to route incoming HTTP traffic to a service in your cluster.
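For instance, a minimal Ingress resource might route requests for a given hostname and path to a backend Service. The hostname and Service names below are purely illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ing          # illustrative name
spec:
  rules:
  - host: app.example.com    # route by hostname...
    http:
      paths:
      - path: /api           # ...and by path
        pathType: Prefix
        backend:
          service:
            name: api-svc    # hypothetical backend Service
            port:
              number: 8080
```

The ingress controller watches for resources like this and reconfigures its proxy to match.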
With the release of Mirantis Kubernetes Engine (MKE) v3.5.0, we are switching the ingress controller from Istio to NGINX Ingress Controller (ingress-nginx), an open-source ingress controller for Kubernetes maintained by the Kubernetes community.
Why NGINX? The Ingress controller is an essential piece of the Kubernetes puzzle, and NGINX provides a world-class open source solution, driving scalability, security, and high performance for your clusters as they interface with the outside world — all with the dependability of industry-standard open source tooling.
What is the NGINX Ingress Controller?
The NGINX Ingress Controller is based on the vendor-neutral Kubernetes Ingress resource. NGINX Ingress Controller (ingress-nginx) is a community-supported ingress controller for Kubernetes. It configures and manages a Layer 7 reverse proxy (an NGINX server) according to the rules defined in the Ingress resource to route traffic.
NGINX provides a number of features that benefit Mirantis Kubernetes Engine users:
- High speed, performance, and scalability
- Ease of monitoring with Prometheus, enabling high observability and uptime, whether by an in-house team or through Lens DevOpsCare
- The dependability of industry-standard, community-supported open source software
See the official Kubernetes Ingress documentation for more information on the Ingress specs.
What does this mean if I’m using, or wish to continue using, Istio?
If you upgrade to Mirantis Kubernetes Engine 3.5.0, you will need to migrate your existing Istio resources to the built-in Kubernetes Ingress resource in order to take advantage of the embedded NGINX Ingress Controller. There are tools to help with this, such as https://github.com/istio-ecosystem/istio-ingress-migrate.
Alternatively, if you wish to continue using Istio, you can install your own Istio, or license Tetrate’s enterprise version of Istio via Mirantis; your resources can be carried forward (possibly with some modifications if your Istio version is newer than ours). Mirantis will continue to support Istio via Mirantis Kubernetes Engine version 3.4.x until its End-of-Life on April 11, 2023.
How to enable the Ingress Controller in MKE 3.5.0
Enabling the ingress controller in MKE 3.5.0 is as easy as flipping a switch. Follow the steps below to enable and configure the ingress controller:
- Log into MKE.
- Click <username> → Admin Settings → Ingress.
- Under Kubernetes, click the slider to enable HTTP Ingress Controller for Kubernetes. Next, you can configure the proxy so Kubernetes knows which ports to use for the Ingress Controller Service. (Note that for a production application, you would typically expose your services via a load balancer created by your cloud provider, but for now we’re just looking at how Ingress works.)
- Set specific ports for incoming HTTP requests.
- Add the external IP address of the MKE server (if necessary).
- Click Save to save your settings.
That’s it! In a few seconds, the configuration will be applied, and MKE will deploy and configure the ingress controller.
Note on Ingress resources: The HTTP Ingress Controller in MKE will target Ingress resources from any namespace with an IngressClassName of nginx-default. Also, any new Ingress created without an IngressClassName after enabling the HTTP Ingress Controller will be given a default IngressClassName of nginx-default.
Note on upgrading from v3.4.x and prior: There is no in-place upgrade path from the Istio-based ingress controller to the NGINX Ingress Controller. Before upgrading, Istio ingress needs to be disabled (from the UI, or using the MKE configuration file). After upgrading, the NGINX Ingress Controller can be enabled; the previous configuration used for Istio ingress will be ported over to the NGINX Ingress Controller.
Now, let’s deploy a sample application and route traffic to that service using Ingress.
Deploying a sample application with NGINX Ingress Controller
In this example, we will deploy the httpbin app, which enables you to experiment with HTTP requests. You can deploy the application via either the MKE GUI or the CLI.
Installing the application via the MKE user interface
To install the application using the UI, log into Mirantis Kubernetes Engine and follow these steps:
- Go to Kubernetes → Create.
- Select the default namespace and paste the following YAML into the editor:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin-svc
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/asankov/httpbin:1.0
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
- Click Create.
This creates a Deployment with one replica of httpbin, a Service to create a stable IP and domain name within the cluster, and a ServiceAccount under which to run the application.
Click on Kubernetes → Controllers to check if the application has been deployed successfully:
Now that we have our application running in the cluster, we can expose it to the outside world using Ingress.
Create Ingress
Kubernetes Ingress defines rules to route external traffic to an application running inside the cluster. In this step, we will create an Ingress with rules to route the traffic to our httpbin application:
- Go to Kubernetes → Ingresses.
- Click Create and create a new Ingress named httpbin-ing that routes any traffic on the root path (/) to our httpbin-svc on port 8000.
- Click Generate YAML.
- Select the default namespace.
- Click Create to create the Ingress.
Now you can access your application by pointing your browser at the proxy address and the node port you specified when you configured the ingress controller:
<PROTOCOL>://<IP_ADDRESS>:<NODE_PORT>
In my case, we set up the service with the HTTP protocol, so the URL would be: http://34.221.92.31:33000
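You can also exercise the application from the command line. A sketch, using the example address above (substitute your own proxy address and node port); httpbin serves a /get endpoint that echoes the request back as JSON:

```shell
# Hypothetical values from the example above; substitute your own.
PROXY_IP=34.221.92.31
NODE_PORT=33000

# httpbin's /get endpoint echoes the incoming request back as JSON.
URL="http://${PROXY_IP}:${NODE_PORT}/get"
echo "$URL"
# curl "$URL"   # uncomment to send the request through the ingress
```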
Install the application via CLI
In order to use the CLI with MKE, you first need to download a client bundle and run the environment script to point kubectl at the current cluster. (You can find instructions for how to do that in the MKE documentation.)
Now we are ready to start.
Start by deploying the httpbin app, along with a ServiceAccount and a Service:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin-svc
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/asankov/httpbin:1.0
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
EOF
Next, create an Ingress to route the traffic. You can apply this manifest the same way, by piping it to kubectl apply -f -:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: httpbin-ing
spec:
  ingressClassName: nginx-default
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpbin-svc
            port:
              number: 8000
To view the ports and verify that all your services are functioning properly, run the following in the command line:
kubectl get svc --all-namespaces
Now that you have all of this in place, you can reach your application at:
<PROTOCOL>://<IP_ADDRESS>:<NODE_PORT>
Next Steps
At this point you know how to use the Ingress Controller to route external traffic to your applications. For step-by-step examples of more complex scenarios such as canary deployments, traffic splitting, and sticky sessions, see the MKE documentation, as well as the examples in the Ingress Controller documentation, for even more use cases related to ingress traffic.
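As a taste of what those scenarios look like, ingress-nginx implements canary releases through annotations on a second Ingress that shadows the first. A sketch, assuming a hypothetical httpbin-svc-v2 Service running a new version of the application:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin-ing-canary
  annotations:
    # Mark this Ingress as the canary for the matching rule...
    nginx.ingress.kubernetes.io/canary: "true"
    # ...and send roughly 20% of the traffic to it.
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  ingressClassName: nginx-default
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpbin-svc-v2  # hypothetical Service for the new version
            port:
              number: 8000
```

Once the new version looks healthy, you can raise the weight, and eventually promote it to the main Ingress.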
If you don’t have Mirantis Kubernetes Engine installed, you can try our free trial here.