
Using Kubernetes Services, Part 2

Eric Gregory - July 27, 2022

One of the biggest challenges for implementing cloud native technologies is learning the fundamentals—especially when you need to fit your learning into a busy schedule. 

In this series, we’ll break down core cloud native concepts, challenges, and best practices into short, manageable exercises and explainers, so you can learn five minutes at a time. These lessons assume a basic familiarity with the Linux command line and a Unix-like operating system—beyond that, you don’t need any special preparation to get started.

In the last lesson, we began our survey of the Service abstraction in Kubernetes, which serves as a sort of domain name for a workload. Services help us to connect either users or apps to functionality that may be served across any number of identical, ephemeral Pods.

To understand the Kubernetes Service object, we started an exercise in which we will connect two pieces of a simple book recommendation app across a cluster: a backend API and a web interface. Last time, we created a Service for our API. Today, we'll complete the exercise by launching our web interface.

Table of contents

  1. What is Kubernetes?

  2. Setting Up a Kubernetes Learning Environment

  3. The Anatomy of a Kubernetes Cluster

  4. Introducing Kubernetes Pods 

  5. What are Kubernetes Deployments?

  6. Using Kubernetes Services, Part 1

  7. Using Kubernetes Services, Part 2 ← You are here


These lessons assume a basic understanding of containers. If you need to get up to speed on Docker and containerization, download our free ebook, Learn Containers 5 Minutes at a Time. This concise, hands-on primer explains:

  • The key concepts underlying containers—and how to use core tools like image registries
  • Fundamentals of container networking that are essential for understanding container orchestrators like Kubernetes
  • How to deploy containerized apps in single-container and multi-container configurations


Exploring Service types

In the last lesson, we launched our Randomreads API Service with a command in kubectl. We deleted the Service at the end of the lesson, so we’ll need to launch the Service again—but this time, we’re going to do it a little differently.

As before, we’ll create our Deployment with the following command. (Remember, if you wish to use a pre-created container image rather than your own, you can substitute <Your Docker Hub ID> with mine, ericgregory.)

% kubectl create deployment randomreads-api --image=<Your Docker Hub ID>/randomreads-api --port=80 --replicas=3

This time, we’re going to launch our API Service with a YAML manifest. In your randomreads base project directory, create a new directory called manifests. (The dedicated directory isn’t strictly necessary–your manifests can be located anywhere—but it will help us keep organized.) Here, create a new file called service.yml and include the following:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: randomreads-api
    env: prod
    tier: backend
  name: randomreads-api
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: randomreads-api
  type: ClusterIP

Take special note of the ClusterIP type on the last line. This is where we’re making a change—and now I have to make a confession:

In the first part of this exercise, we specified the NodePort type and didn’t dwell on why. Here’s the reason: I wanted to end that lesson with a satisfying, tangible sign that our API was running on the cluster. Using a NodePort Service type made it easy to see our API output in a web browser. That’s nice for learning and development purposes, but it’s not how we want our Service to work in production.

So how would we want it to work? Well, there are four types of Services in Kubernetes. Let’s consider them one by one.


ClusterIP

A ClusterIP Service has a virtual IP address accessible within—and only within—the Kubernetes cluster (hence the name). This is well-suited for many backend Services that simply don’t need to be available beyond the cluster, affording a layer of isolation from the wide world outside.

In practice, this is probably the Service type you will use most often. It’s also the default Service type.
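If you’d like to see that isolation for yourself, one quick way is to make a request from a throwaway Pod inside the cluster, where the Service’s name resolves. (This is a sketch; it assumes the randomreads-api Service from the manifest above is running.)

```shell
# Launch a temporary Pod and query the Service by name from inside the cluster.
# The --rm flag removes the Pod automatically when the command exits.
kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never -- \
  curl http://randomreads-api
```

The same request made against the cluster IP from your local machine would fail, since ClusterIP addresses are only routable inside the cluster.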


NodePort

Of course, sometimes we do want to be able to access a Service from outside our cluster. Perhaps we’re exposing a Service to outside users or clients, or perhaps we simply want to test a piece of functionality in development. The NodePort Service type is one way to do this, but it’s not ideally suited to production.

When you create a NodePort Service, every node on your cluster will listen for requests on a given port. External requests can reach the Service at the IP address of a node followed by the specified port:

<IP address of the node>:<NodePort>

The NodePort API we launched last lesson, for example, was reachable at exactly that kind of address: one of our node’s IP addresses, followed by the high-numbered port Kubernetes assigned to the Service.

When requests come in, the Service will route them to a ClusterIP, which in turn will route the request to an appropriate Pod. So a NodePort Service isn’t so much an alternative to ClusterIP as an addition.

When you expose a NodePort Service, all of your nodes will need to have a port open to the outside world for each Service. If you’re running multiple Services, every node will need to open a port for each. This is fine for development and learning, but it’s a blunt approach to the problem of Service exposure that doesn’t scale well and won’t help you route traffic intelligently between your nodes. For that, you’ll need to use…
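For comparison, a NodePort version of our API Service would only need a different type and, optionally, an explicit port from the NodePort range. This is a hypothetical variant of the manifest above, not something we’ll apply:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: randomreads-api
spec:
  type: NodePort
  selector:
    app: randomreads-api
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30080   # optional; must fall in the default 30000-32767 range
```

If you omit the nodePort field, Kubernetes picks an open port from that range for you.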


LoadBalancer

The LoadBalancer Service type is designed to help your cluster efficiently apportion external traffic across nodes. It will help you operate at scale and it’s appropriate for production.

When your cluster’s load balancer receives an external request, it selects an appropriate node and routes the request to that node via an internally available NodePort Service, which routes the request to a ClusterIP, and then finally to an appropriate Pod. (So again, we’re building on top of the other types, not replacing them.)

Notably, the LoadBalancer type depends on external load balancing solutions from a cloud provider. LoadBalancer is designed for clusters on clouds like AWS, Azure, or Google Cloud, all of which have their own tools for load balancing. As we’ll see in a moment, Minikube also has its own way of handling LoadBalancer Services.

This extensible approach follows directly from the design philosophy of Kubernetes, which aims for flexibility and portability. The particulars of a given load balancer implementation are abstracted away. For the most part, all developers have to do is interact with the Service object. Your code and your manifests will be the same regardless of which cloud provider or load balancing tool the system is using.


ExternalName

Now, what if we wanted to create a Service object that routed not to a workload on our cluster, but to some external application? There’s a Service type for that: ExternalName.

Suppose that we wanted an application running on our cluster to be able to resolve requests to an external API using a consistent alias. An ExternalName Service enables us to do just that, specifying an external CNAME like example-app.com. 

There is an important caveat here, however: ExternalName Services don’t communicate over HTTP or HTTPS very well, since the target names in request headers sent from your Services won’t match the destination domain. 
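As a sketch, an ExternalName Service needs no selector or ports at all—just the external domain to alias. (The name books-api and the domain here are illustrative.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: books-api
spec:
  type: ExternalName
  externalName: example-app.com
```

Pods on the cluster can then resolve books-api, and cluster DNS answers with a CNAME record pointing at example-app.com.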

Building the web client

Now that we’ve surveyed our Service types, we understand why ClusterIP is the best choice for our Randomreads API: it only needs to be accessible to the web client, which also runs on the cluster. Let’s go ahead and create the Service from the YAML manifest.

% kubectl apply -f service.yml

Now the API is running again, and it’s available to other Pods within the cluster via the randomreads-api domain. Let’s build our web client.

We’ll use the same tools as last time, Node.js and the express module, but remember that we could just as readily use a different toolkit here–if we were working on a different team than the developers of the backend API, we wouldn’t necessarily be beholden to their favored languages and frameworks. In this case, however, we’ll stay consistent for simplicity’s sake. 

As before, if you would rather skip the coding and deploy from my container image, you’re welcome to do so; likewise, for an additional challenge, you may wish to re-create the front-end in the language of your choice. My code is available on GitHub, and the container image for the web client is on Docker Hub.

In your randomreads project directory, create a new sub-directory called rr-web. Then, inside your new directory, initialize the project, create a new file called index.js, and install the express and express-handlebars modules:

% npm init -y
% touch index.js
% npm i express express-handlebars

Add the following code to index.js:

// Below we're requiring our two dependencies and setting
// a very important constant: the API endpoint that we wish
// to consume. We're simply using the name of our
// randomreads-api Service.
const express = require('express');
const { engine } = require('express-handlebars');
const ENDPOINT = 'randomreads-api';
const app = express();

// Express-Handlebars gives us a super-simple view engine
// to render our webpage. Here we're configuring it to look
// for webpage files in a directory called 'views' and to use
// a wrapper file called 'main' as the default layout.
app.engine('handlebars', engine({
    defaultLayout: 'main',
}));
app.set('view engine', 'handlebars');
app.set('views', './views');

// Now we've reached the juicy stuff. This will trigger
// when a GET request is made to the server. Here we're using
// fetch (an experimental but functional feature in Node
// at the time of this writing in July 2022), and we're using
// it to grab the JSON available at the API endpoint we
// defined up top. Then we're funneling the values inside the
// JSON payload into two variables that we can utilize when
// we build the index page.
app.get('/', async (req, res) => {
    fetch(`http://${ENDPOINT}`)
        .then(response => response.json())
        .then(data => res.render('index', {
            title: data.title,
            author: data.author
        }));
});

// Like the API, the web server is running on port 80,
// which is the expected default.
app.listen(80, () => {
    console.log('The web server has started on port 80');
});

Now we’ll create a new directory for views (with a layouts sub-directory inside). Here we’ll add two simple files.

% mkdir views
% mkdir views/layouts
% touch views/index.handlebars
% touch views/layouts/main.handlebars

Add the following code to main.handlebars. At its simplest, the layout only needs to render each view’s content with the body expression:

{{{body}}}

(Told you it was a simple file.)

Now add this code to index.handlebars:


<h2>Need a read? How about...</h2>

<p><strong>{{title}}</strong> by {{author}}</p>


Again, pretty simple. Here we’re making use of the title and author variables.

Our web client is done. Now we can write a Dockerfile (review the last lesson if you need a refresher) then build and publish a container image the same way we did before:

% docker build . -t <Your Docker Hub ID>/randomreads-web
% docker push <Your Docker Hub ID>/randomreads-web
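If you’d like a refresher without flipping back, the web client’s Dockerfile can follow the same pattern as the API’s. This is a minimal sketch, assuming a Node 18 base image (where the fetch API used in index.js is available):

```dockerfile
# Sketch of a Dockerfile for the randomreads web client.
FROM node:18
WORKDIR /app
# Install dependencies first so Docker can cache this layer.
COPY package*.json ./
RUN npm install
# Copy in the application code and views.
COPY . .
EXPOSE 80
CMD ["node", "index.js"]
```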

If you prefer to use my image for the next step, it’s available at ericgregory/randomreads-web. Create a Deployment with three replicas:  

% kubectl create deployment randomreads-web --image=<Your Docker Hub ID>/randomreads-web --port=80 --replicas=3

Now both of our deployments are running on the cluster. All that remains is to expose the Service for the web client.

Understanding labels and selectors

Return to your existing service.yml file (located at randomreads/manifests). We’re going to make the following changes to the manifest:

  • Our new Service will be called randomreads-web

  • The “app” label will have the value “randomreads-web”

  • The “env” label will have the value “dev”

  • The “tier” label will have the value “frontend”

  • The app selector will point to the randomreads-web Deployment, which it will find using the label on that Deployment

  • The Service type will be LoadBalancer

With the changes incorporated, your service.yml file should look like this: 

apiVersion: v1
kind: Service
metadata:
  labels:
    app: randomreads-web
    env: dev
    tier: frontend
  name: randomreads-web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: randomreads-web
  type: LoadBalancer

Now go ahead and use kubectl to add this Service to the cluster:

% kubectl apply -f service.yml

Note that we’re using the same file we used before, but we’re creating an entirely new Service. The service.yml file is totally decoupled from any particular entity on the cluster; it’s simply a vehicle for our intentions. 

The env and tier labels in these manifests are completely arbitrary keys–they don’t carry any inherent meaning to the system, but we can use them to organize and manage Services. Imagine that randomreads-web is a dev deployment, not yet ready for production. Labels could help us keep track of which Services are operating in which environments. 
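Because labels are plain metadata, you can also change them on a live object without touching the manifest. For example, once randomreads-web graduates from dev, you could relabel it in place. (The env value here is illustrative; the --overwrite flag is required when the key already has a value.)

```shell
# Relabel the running web Service as production-ready.
kubectl label service randomreads-web env=prod --overwrite
```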

Labels are simple but powerful tools that are essential to how Kubernetes works. Suppose we want information on all of the Services running in our dev environment. We can use selectors to identify those Services via labels (specified here using the -l argument):

% kubectl get services -l env=dev
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
randomreads-web   LoadBalancer   <cluster IP>   <pending>     80:31126/TCP   2m

What if we only want to see backend services in production?

% kubectl get services -l tier=backend,env=prod
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
randomreads-api   ClusterIP   <cluster IP>   <none>        80/TCP    14m

The same technique works for Pods or Deployments; we can use labels and selectors to organize and select objects throughout the system.
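For instance, kubectl create deployment labels each Deployment (and its Pods) with an app key matching the Deployment’s name, so the same -l syntax narrows those objects too:

```shell
# List the Pods and Deployments behind the web client.
kubectl get pods -l app=randomreads-web
kubectl get deployments -l app=randomreads-web
```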

All right, that’s enough delay. We’ve deployed our web Service, so let’s see if everything works correctly! Minikube enables you to access LoadBalancer Services with the following command:

% minikube tunnel

This will start a running process in your current terminal tab (and may require sudo permission). Once you start the tunnel, you should be able to access the web client at localhost in your browser.

Et voilà! We have two Services connected across our Kubernetes cluster, one of which is available externally.

That’s it for today. Stop Minikube for now. (Note that stopping the cluster preserves its state, so your Services and Deployments will still be there when it starts again; if you want a clean slate instead, you can remove everything with minikube delete.)

% minikube stop

We’ve spent the last two lessons building apps specifically for Kubernetes, but a great deal of cloud native development entails taking monolithic applications apart. In the next lesson, we’ll start decomposing an actual monolith—and what’s more, it’ll be a stateful monolith. To run a persistent application on our cluster, we’ll need to explore how volumes work in Kubernetes.

See you next time!