Cloud Native 5 Minutes at a Time: Using an Image Registry

Eric Gregory - February 25, 2022

One of the biggest challenges for implementing cloud native technologies is learning the fundamentals—especially when you need to fit your learning into a busy schedule.

In this series, we’ll break down core cloud native concepts, challenges, and best practices into short, manageable exercises and explainers, so you can learn five minutes at a time. These lessons assume a basic familiarity with the Linux command line and a Unix-like operating system—beyond that, you don’t need any special preparation to get started.

In the last lesson, we learned how to create container images from Dockerfiles. This time, we’ll practice using a public container registry to store and share our container images.

Table of Contents

  1. What is a Container?
  2. Creating, Observing, and Deleting Containers
  3. Build Image from Dockerfile
  4. Using an Image Registry ← You are here
  5. Volumes and Persistent Storage
  6. Container Networking and Opening Container Ports
  7. Running a Containerized App
  8. Multi-Container Apps on User-Defined Networks

Digging deeper into registries

Here’s a secret: we’ve been quietly using an image registry all along. Whenever we’ve built on top of container images like alpine, the Docker engine has downloaded those images from Docker Hub, a public container registry managed by Docker, Inc. It can seem like magic—the software we want appears almost as quickly as we can summon it by name!

But what do we mean, exactly, when we say that Docker Hub is a public image registry? In short, it’s a repository (or collection of repositories) that anyone can access to download or upload container images.

In our first lesson, we noted that Docker Hub isn’t the only container image registry. There are public registries managed by other entities, and it is also possible for organizations to create private registries using tools like Mirantis Secure Registry. Anyone who needs to establish a secure software supply chain will need to be able to trust the provenance and contents of their container images, and should therefore use a private registry of some form.

The swiftness, accessibility, and built-in nature of Docker Hub make it a natural choice for learners, but it’s a good idea to cultivate security-consciousness and other best practices from the outset. One way to do this is to investigate images you intend to use through Docker Hub’s web interface at hub.docker.com. There are several labels there that can help you identify vetted images.

Let’s take a look at alpine, for example, at hub.docker.com/_/alpine:

Docker Hub page for alpine

This has the Official Image label, meaning it is part of a set of images curated and published by Docker. These are very commonly used images that most beginners will want to use—verified as the upstream official version and exemplifying container best practices in design and documentation. Many are OS base images like alpine or ubuntu, but there are also images for popular languages like Python, Go, or Node; data stores like MySQL or Redis; web servers like Nginx; and much more.

The Docker registry also provides another label for a Verified Publisher. Docker has confirmed that images with this label are published and maintained by the entities that produced them. For example, users of Amazon Web Services (AWS) can download a container image for their command-line interface (CLI) that is confirmed to come from Amazon.

There is no such thing as perfect security—by accident or malice, vulnerabilities can creep into even official images—but as you get started using containers, develop a habit of asking questions about the provenance of your images and seeking out validated sources. The catalog of Docker images that are either designated Official Images or from Verified Publishers is a good place to start.

Exercise: Uploading our first container

If you haven’t done so already, you’ll need to sign up for a Docker ID, which will serve as your credentials for Docker Hub. If you’re using Docker Desktop on macOS or Windows, this is the same ID you created to log in. (If you don’t have an ID yet, you can sign up for one at hub.docker.com.)

On the command line, type:

docker login

You’ll be prompted for your username and password. Once you've successfully entered your credentials, you’re ready to get started.

First, we’ll create a new container based on the official Python image and enter an interactive session with a bash shell:

docker run -it python bash

Now we should be working in a bash shell within our new container. Here, let’s use the apt package manager to install the nano command-line text editor:

apt update
apt install nano

Now we’re going to write a simple Python program within our container. Use nano to create and open a new Python file (we’ll call it d6.py, after the six-sided die it will simulate):

nano d6.py
In this file, we’re going to write a simple Python program that produces a random integer between 1 and 6, inclusive. In other words, we’re writing a program that rolls a virtual die! Don’t worry if you’re unfamiliar with Python—you can simply copy and paste the code below into your file:

#import the randint function
from random import randint
#assign a random integer between 1 and 6, inclusive, to a variable
roll = randint(1, 6)
#print the variable
print(roll)

Press CTRL-O to write the file, Enter to confirm, and then CTRL-X to exit nano. You can test your new program by running:

python d6.py

In your container’s bash shell, you’ll receive a randomized result from 1 to 6.
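If you have Python installed on your host machine as well, you can sanity-check the same die-roll logic outside the container with a hypothetical one-liner like this (assuming a python3 binary on your PATH):

```shell
# Run the same die-roll logic on the host (assumes python3 is installed)
roll=$(python3 -c "from random import randint; print(randint(1, 6))")
echo "Rolled: $roll"

# randint(1, 6) is inclusive on both ends, so the result is always 1 through 6
[ "$roll" -ge 1 ] && [ "$roll" -le 6 ] && echo "Roll is in range"
```

This is just a convenience for experimentation; the version that ships in your image is the d6.py file inside the container.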


Without exiting the container, open another command line session on your host machine. (In Terminal on macOS, for example, this is a simple matter of pressing command-T to open another tab.) Your Python container should still be running, so you can enter…

docker container ls
…to retrieve its container ID.
CONTAINER ID     IMAGE    COMMAND   CREATED       STATUS         PORTS     NAMES
<Your ID here>   python   "bash"    1 hour ago    Up 4 minutes             <Name>

Now we’re going to commit the current state of our container to a new image—including the Python program we just wrote.

docker commit -m "First commit" <Your container ID> d6:1.0

Let’s take a moment to walk through this commit step by step. The -m flag enables us to append a short descriptive message; this is a good practice for recording the headline changes made within a commit, whether you’re working on a team or leaving breadcrumbs for yourself or others in the future.

Next, you’re specifying the container ID that forms the basis of your commit.

Finally, you’re assigning the image name—in this case, “d6”—and tagging it with a version number (here, that’s 1.0).
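Putting those pieces together, the naming convention is <name>:<tag> for a local image and <Docker ID>/<name>:<tag> for an image destined for Docker Hub. Here’s a quick sketch of how the references are composed (DOCKER_ID is a hypothetical placeholder for your own Docker ID):

```shell
# Hypothetical values; substitute your own Docker ID
DOCKER_ID="yourname"
IMAGE="d6"
TAG="1.0"

# Local reference, as produced by the docker commit above
echo "${IMAGE}:${TAG}"              # prints: d6:1.0

# Fully qualified reference used when pushing to Docker Hub
echo "${DOCKER_ID}/${IMAGE}:${TAG}" # prints: yourname/d6:1.0
```

We’ll use exactly this fully qualified form in a moment when we tag and push the image.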

The commit has created a new image on our local machine. We can see it on our system with…

docker image ls

You should see the new image name at the top of the listing, with output something like this:

REPOSITORY         TAG         IMAGE ID         CREATED         SIZE
d6                 1.0         <Your ID here>   1 hour ago      939MB

Let’s take that one step further and upload our new image to Docker Hub. We’ll start by tagging our image for upload, and then push it online:

docker tag d6:1.0 <Your Docker ID>/d6:1.0
docker push <Your Docker ID>/d6:1.0

The command line output will show you each of the layers being pushed:

The push refers to repository [<Your Docker ID>/d6]
e06d6e649287: Pushed 
51a0aba3d0a4: Mounted from library/python 
e14403cd4d18: Mounted from library/python 
8a8d6e9f7282: Mounted from library/python 

Now your containerized die-rolling app is available for anyone to download as an image through the Docker registry. You should see it when you navigate to hub.docker.com/u/<Your Docker ID>.
d6 Docker Hub page

In this lesson, we’ve created a very simple “static” app—it produces output, but doesn’t have to store any data for later use. Next time, we’ll explore how to handle persistent data stores with containers.
