Creating and accessing a Kubernetes cluster on OpenStack, part 3: Run the application

Nick Chase - November 14, 2016

In part 2, you created the cluster itself, so you're finally ready to interact with the Kubernetes API that you installed. The general process goes like this:

  • Define the security credentials for accessing your applications.
  • Deploy a containerized app to the cluster.
  • Expose the app to the outside world so you can access it.

Let’s see how that works.

Define security parameters for your Kubernetes app

The first thing you need to understand is that while you have a cluster of machines tied together with the Kubernetes API, it can support multiple environments, or contexts, each with its own security credentials.

For example, if you were to create an application with a context that relies on a specific certificate authority, I could then create a second application with a context that relies on a different certificate authority. In this way, we each control our own destiny, but neither of us gets to see the other's application.
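To make that concrete, here's a minimal sketch of how two sets of credentials might live side by side in a single kubeconfig. (The user and context names here are hypothetical, purely for illustration; we'll build the real thing step by step below.)

    $ # each user presents a certificate signed by their own CA
    $ kubectl config set-credentials alice-admin --client-certificate=alice.pem --client-key=alice-key.pem
    $ kubectl config set-credentials bob-admin --client-certificate=bob.pem --client-key=bob-key.pem
    $ # each context pairs the cluster with one set of credentials
    $ kubectl config set-context alice-context --cluster=default-cluster --user=alice-admin
    $ kubectl config set-context bob-context --cluster=default-cluster --user=bob-admin
    $ # switching contexts switches identities
    $ kubectl config use-context alice-context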

The process goes like this:

  1. First, you need to create a new certificate authority, which will be used to sign the rest of your certificates. Create it with these commands:
    $ sudo openssl genrsa -out ca-key.pem 2048
    $ sudo openssl req -x509 -new -nodes -key ca-key.pem -days 10000 \
    -out ca.pem -subj "/CN=kube-ca"
  2. At this point you should have two files: ca-key.pem and ca.pem. You’ll use them to create the cluster administrator keypair. To do that, you’ll create a private key (admin-key.pem), then create a certificate signing request (admin.csr), then sign it to create the public key (admin.pem).
    $ sudo openssl genrsa -out admin-key.pem 2048
    $ sudo openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
    $ sudo openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -out admin.pem -days 365
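Before moving on, it's worth verifying that the admin certificate really was signed by your new CA. Assuming all three files are in the current directory, OpenSSL can check the chain directly:

    $ openssl verify -CAfile ca.pem admin.pem
    admin.pem: OK
    $ # and, if you're curious, inspect the subject and issuer
    $ openssl x509 -in admin.pem -noout -subject -issuer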

Now that you have these files, you can use them to configure the Kubernetes client.

Download and configure the Kubernetes client

  1. Start by downloading the kubectl client on your machine. In this case, we're using Linux; adjust appropriately for your OS.
    $ curl -O \
    https://storage.googleapis.com/kubernetes-release/release/v1.4.3/bin/linux/amd64/kubectl
  2. Make kubectl executable:
    $ chmod +x kubectl
  3. Move it to your path:
    $ sudo mv kubectl /usr/local/bin/kubectl
  4. Now it’s time to set the default cluster. To do that, you’ll want to use the URL that you got from the environment deployment log. Also, make sure you provide the full location of the ca.pem file, as in:
    $ kubectl config set-cluster default-cluster --server=[KUBERNETES_API_URL] \
    --certificate-authority=[FULL-PATH-TO]/ca.pem

    In my case, this works out to:

    $ kubectl config set-cluster default-cluster --server=http://172.18.237.137:8080 \
    --certificate-authority=/home/ubuntu/ca.pem
  5. Next you need to tell kubectl where to find the credentials, as in:
    $ kubectl config set-credentials default-admin \
    --certificate-authority=[FULL-PATH-TO]/ca.pem \
    --client-key=[FULL-PATH-TO]/admin-key.pem \
    --client-certificate=[FULL-PATH-TO]/admin.pem

    Again, in my case this works out to:

    $ kubectl config set-credentials default-admin \
    --certificate-authority=/home/ubuntu/ca.pem \
    --client-key=/home/ubuntu/admin-key.pem \
    --client-certificate=/home/ubuntu/admin.pem
  6. Now you need to set the context so kubectl knows to use those credentials:
    $ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
    $ kubectl config use-context default-system
  7. Now you should be able to see the cluster:
    $ kubectl cluster-info
    
    Kubernetes master is running at http://172.18.237.137:8080
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
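If cluster-info doesn't respond the way you expect, two quick sanity checks can help narrow things down (your output will differ, of course):

    $ # show the kubeconfig that the commands above assembled
    $ kubectl config view
    $ # confirm that the worker nodes have registered with the API server
    $ kubectl get nodes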

Terrific! Now we just need to run something on it.

Running an app on Kubernetes

Running an app on Kubernetes is pretty simple; it boils down to firing up a container. We'll go into the details of what everything means later, but for now, just follow along.

  1. Start by creating a deployment that runs the nginx web server:
    $ kubectl run my-nginx --image=nginx --replicas=2 --port=80

    deployment "my-nginx" created
  2. By default, containers are visible only to other members of the cluster. To expose your service to the public internet, run:
    $ kubectl expose deployment my-nginx --target-port=80 --type=NodePort

    service "my-nginx" exposed
  3. OK, so now it’s exposed, but where?  We used the NodePort type, which means that the external IP is just the IP of the node that it’s running on, as you can see if you get a list of services:
    $ kubectl get services

    NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
    kubernetes   11.1.0.1      <none>        443/TCP   3d
    my-nginx     11.1.116.61   <nodes>       80/TCP    18s
  4. So we know that the “nodes” referenced here are kube-2 and kube-3 (remember, kube-1 is the API server), and we can get their IP addresses from the Instances page…
    [Screenshot: the four nodes on the Instances page]
  5. … but that doesn’t tell us what the actual port number is.  To get that, we can describe the service itself (a scripted way to pull this port out appears after the list):
    $ kubectl describe services my-nginx

    Name:                   my-nginx
    Namespace:              default
    Labels:                 run=my-nginx
    Selector:               run=my-nginx
    Type:                   NodePort
    IP:                     11.1.116.61
    Port:                   <unset> 80/TCP
    NodePort:               <unset> 32386/TCP
    Endpoints:              10.200.41.2:80,10.200.9.2:80
    Session Affinity:       None
    No events.
  6. So the service is available on port 32386 of whatever machine you hit.  But if you try to access it, something’s still not right:
    $ curl http://172.18.237.138:32386

    curl: (7) Failed to connect to 172.18.237.138 port 32386: Connection timed out
  7. The problem here is that by default, this port is closed, blocked by the default security group.  To solve this problem, create a new security group you can apply to the Kubernetes nodes.  Start by choosing Project->Compute->Access & Security->+Create Security Group.
  8. Specify a name for the group and click Create Security Group.
  9. Click Manage Rules for the new group.
    [Screenshot: the Access & Security page, showing the Manage Rules function]
  10. By default, there’s no access in; we need to change that.  Click +Add Rule.
    [Screenshot: the expanded view of the Manage Rules dropdown menu]
  11. In this case, we want a Custom TCP Rule that allows Ingress on port 32386 (or whatever port Kubernetes assigned the NodePort). You can specify access only from certain IP addresses, but we’ll leave that open in this case. Click Add to finish adding the rule.
    [Screenshot: the expanded Add Rule window]
  12. Now that you have a functioning security group, you need to add it to the instances Kubernetes is using as worker nodes, in this case the kube-2 and kube-3 nodes.  Start by clicking the small triangle on the button at the end of the line for each instance and choosing Edit Security Groups.
  13. You should see the new security group in the left-hand panel; click the plus sign (+) to add it to the instance:
    [Screenshot: the new security group about to be added to an instance]
  14. Click Save to save the changes.
    [Screenshot: the new security group added to the instance]
  15. Add the security group to all worker nodes in the cluster.
  16. Now you can try again:
    $ curl http://172.18.237.138:32386

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>

    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>

    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

    As you can see, you can now access the Nginx container you deployed on the Kubernetes cluster.
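Incidentally, once the security group is in place, you don't have to read the port out of the describe output by hand every time. Here's a hedged sketch of scripting the same check; the jsonpath expression assumes the service shape shown above:

    $ # grab the NodePort that Kubernetes assigned to the service
    $ PORT=$(kubectl get service my-nginx -o jsonpath='{.spec.ports[0].nodePort}')
    $ curl http://172.18.237.138:$PORT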

Coming up, we’ll look at some of the more useful things you can do with containers and with Kubernetes. Got something you’d like to see?  Let us know in the comments below.

Sound interesting? If you live in Austin, Texas, you’re in luck; we’ll be presenting Kubernetes 101 at OpenStack Austin Texas on November 15, and at the Cloud Austin meetup on Nov 16, or you can dive right in and sign up for Mirantis’ Kubernetes and Docker Boot Camp.
