How to deploy Spinnaker on Kubernetes: a quick and dirty guide

It would be nice to think that open source applications are as easy to use as they are to get, but unfortunately, that’s not always true. This is particularly the case when a technology is very new and has little idiosyncrasies that aren’t always well documented. In this article, I’m going to walk you through all the steps necessary to install Spinnaker, including the “magic” steps that aren’t always clear in the docs.

In general, we’re going to take the following steps:

  1. Create a Kubernetes cluster. (We’ll use a Google Kubernetes Engine cluster, but any cluster that meets the requirements should work.)
  2. Create the Kubernetes objects Spinnaker will need to run properly.
  3. Create a single pod that will be used to coordinate the deployment of Spinnaker itself.
  4. Configure the Spinnaker deployment.
  5. Deploy Spinnaker.

Let’s get started.

Create a Kubernetes cluster

You can deploy Spinnaker in a number of different environments, including on OpenStack and on your local machine, but for the sake of simplicity (and because a local deployment of Spinnaker is a bit of a hefty beast) we’re going to do a distributed deployment on a Kubernetes cluster.

In our case, we’re going to use a Kubernetes cluster spun up on Google Kubernetes Engine, but the only requirement is that your cluster has:

  • at least 2 vCPU available
  • approximately 13GB of RAM available (the default of 7.5GB isn’t quite enough)
  • at least one schedulable (as in untainted) node
  • functional networking (so you can reach the outside world from within your pod)

You can quickly spin up such a cluster by following these steps:

  1. Create an account on https://cloud.google.com and make sure you have billing enabled.
  2. Configure the Google Cloud SDK on the machine you’ll be working with to control your cluster.
  3. Go to the Console and scroll the left panel down to Compute->Kubernetes Engine->Kubernetes Clusters.
  4. Click Create Cluster.
  5. Choose an appropriate name.  (You can keep the default.)
  6. Under Machine Type, click Customize.
  7. Allocate at least 2 vCPU and 13GB of RAM, in line with the requirements above.
  8. Change the cluster size to 1.
  9. Keep the rest of the defaults and click Create.
  10. After a minute or two, you’ll see your new cluster ready to go.
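If you prefer the command line, the console steps above can be collapsed into a single gcloud command. This is just a sketch: the cluster name and zone are placeholders, and custom-2-13312 is a custom machine type with 2 vCPU and 13GB of RAM.

```shell
# Create a single-node cluster sized for Spinnaker.
# "spinnaker-cluster" and the zone are placeholder values; adjust as needed.
gcloud container clusters create spinnaker-cluster \
  --zone us-central1-a \
  --num-nodes 1 \
  --machine-type custom-2-13312
```

Either way, you’ll still need to fetch kubectl credentials afterwards, as described in the next section.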

Now let’s go ahead and create the objects Spinnaker is going to need.

Create the Kubernetes objects Spinnaker needs

In order for your deployment to go smoothly, it will help to prepare the way by creating some objects ahead of time. These include namespaces, accounts, and the services you’ll use later to access the Spinnaker UI.

  1. Start by configuring kubectl to access your cluster.  How you do this will depend on your setup; to configure kubectl for a GKE cluster, click Connect on the Kubernetes clusters page then click the Copy icon to copy the command to your clipboard.
  2. Paste the command into a command line window:
    gcloud container clusters get-credentials cluster-2 --zone us-central1-a --project nick-chase
    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for cluster-2.
  3. Next we’re going to create the accounts that Halyard, Spinnaker’s deployment tool, will use.  First create a text file called spinacct.yaml and add the following to it:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: spinnaker-service-account
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: spinnaker-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - namespace: default
      kind: ServiceAccount
      name: spinnaker-service-account

    This file creates an account called spinnaker-service-account, then assigns it the cluster-admin role. You will, of course, want to tailor this approach to your own security situation.

    Save and close the file.

  4. Create the account by running the script with kubectl:
    kubectl create -f spinacct.yaml
    serviceaccount "spinnaker-service-account" created
    clusterrolebinding "spinnaker-role-binding" created
  5. We can also create accounts from the command line.  For example, use these commands to create the account we’ll need later for Helm:
    kubectl -n kube-system create sa tiller
    serviceaccount "tiller" created
    kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
    clusterrolebinding "tiller" created
  6. In order to access Spinnaker, you have two choices: you can use SSH tunneling, or you can expose your installation to the outside world. BE VERY CAREFUL IF YOU’RE GOING TO EXPOSE IT, as Spinnaker doesn’t have any authentication attached to it; anybody who has the URL can do whatever your Spinnaker user can do, and remember, we made that user a cluster-admin. For the sake of simplicity, and because this is a “quick and dirty” guide, we’re going to go ahead and create two services: one for the front end of the UI, and one for the scripting that takes place behind the scenes. First, create the spinnaker namespace:
    kubectl create namespace spinnaker
    namespace "spinnaker" created
  7. Now you can go ahead and create the services. Create a new text file called spinsvcs.yaml and add the following to it:
    apiVersion: v1
    kind: Service
    metadata:
      namespace: spinnaker
      labels:
        app: spin
        stack: gate
      name: spin-gate-np
    spec:
      type: LoadBalancer
      ports:
      - name: http
        port: 8084
        protocol: TCP
      selector:
        load-balancer-spin-gate: "true"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: spinnaker
      labels:
        app: spin
        stack: deck
      name: spin-deck-np
    spec:
      type: LoadBalancer
      ports:
      - name: http
        port: 9000
        protocol: TCP
      selector:
        load-balancer-spin-deck: "true"

    Here we’re creating two load balancers: spin-deck-np on port 9000 for the UI, and spin-gate-np on port 8084 for the API. If your cluster doesn’t support LoadBalancer services, you’ll need to adjust accordingly or just use SSH tunneling.

  8. Create the services:
    kubectl create -f spinsvcs.yaml
    service "spin-gate-np" created
    service "spin-deck-np" created

While the services are being created and IP addresses allocated, let’s go ahead and configure the deployment.
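Incidentally, if you end up preferring the SSH tunneling route over LoadBalancer services, the usual approach is kubectl port-forward. A rough sketch, to be run after Spinnaker is deployed; the pod names are placeholders you’d look up with kubectl get pods -n spinnaker:

```shell
# Forward the Deck UI (9000) and the Gate API (8084) to localhost.
# <deck-pod> and <gate-pod> are placeholders for the real pod names.
kubectl port-forward -n spinnaker <deck-pod> 9000:9000 &
kubectl port-forward -n spinnaker <gate-pod> 8084:8084 &
```

With the tunnels up, the UI is available at http://localhost:9000.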

Prepare to configure the Spinnaker deployment

Spinnaker is configured and deployed through a configuration management tool called Halyard.  Fortunately, Halyard is easy to get: it’s available as a Docker image.

  1. Create a deployment to host Halyard:
    kubectl create deployment hal --image gcr.io/spinnaker-marketplace/halyard:1.5.0
    deployment "hal" created
  2. It will take a minute or two for Kubernetes to download the image and instantiate the pod; in the meantime, you can edit the hal deployment to use the new spinnaker account. First execute the edit command:
    kubectl edit deploy hal
  3. Depending on the operating system of your kubectl client, you’ll either see the configuration in the command window, or a text editor will pop up.  Either way, you want to add the serviceAccountName to the spec just above the containers:
    ...
        spec:
          serviceAccountName: spinnaker-service-account
          containers:
          - image: gcr.io/spinnaker-marketplace/halyard:1.5.0
            imagePullPolicy: IfNotPresent
            name: halyard
            resources: {}
    ...
  4. Save and close the file; Kubernetes will automatically edit the deployment and start a new pod with the new credentials.
    deployment "hal" edited
  5. Get the name of the pod by executing:
    kubectl get pods
    NAME                   READY   STATUS              RESTARTS   AGE
    hal-65fdf47fb7-tq4r8   0/1     ContainerCreating   0          23s

    Notice that the container isn’t actually running yet; wait until it is before you move on.

    kubectl get pods
    NAME                   READY   STATUS    RESTARTS   AGE
    hal-65fdf47fb7-tq4r8   1/1     Running   0          4m
  6. Connect to bash within the container:
    kubectl exec -it <CONTAINER-NAME> bash

    So in my case, it would be
    kubectl exec -it hal-65fdf47fb7-tq4r8 bash

    This will put you into the command line of the container.  Change to the spinnaker user’s home directory:

    spinnaker@hal-65fdf47fb7-tq4r8:/workdir# cd
    spinnaker@hal-65fdf47fb7-tq4r8:~# 
  7. We’ll need to interact with Kubernetes, but fortunately kubectl is already installed; we just have to configure it:
    kubectl config set-cluster default --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    kubectl config set-context default --cluster=default
    token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    kubectl config set-credentials user --token=$token
    kubectl config set-context default --user=user
    kubectl config use-context default
  8. Another tool we’re going to need is Helm; fortunately that’s also exceedingly straightforward to install:
    spinnaker@hal-65fdf47fb7-tq4r8:~# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  6689  100  6689    0     0  58819      0 --:--:-- --:--:-- --:--:-- 59194
  9. The script needs some quick updates to run without root or sudo access:
    sed -i 's/\/usr\/local\/bin/\/home\/spinnaker/g' get_helm.sh
    sed -i 's/sudo //g' get_helm.sh
    export PATH=/home/spinnaker:$PATH
  10. Now go ahead and run the script:
    spinnaker@hal-65fdf47fb7-tq4r8:~# chmod 700 get_helm.sh
    spinnaker@hal-65fdf47fb7-tq4r8:~# ./get_helm.sh
    Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.8.2-linux-amd64.tar.gz
    Preparing to install into /home/spinnaker
    helm installed into /home/spinnaker/helm
    Run 'helm init' to configure helm.
  11. Next we’ll have to run it against the actual cluster. We want to make sure we use the tiller account we created earlier, and that we upgrade to the latest version:
    helm init --service-account tiller --upgrade
    Creating /root/.helm
    Creating /root/.helm/repository
    Creating /root/.helm/repository/cache
    Creating /root/.helm/repository/local
    Creating /root/.helm/plugins
    Creating /root/.helm/starters
    Creating /root/.helm/cache/archive
    Creating /root/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Adding local repo with URL: http://127.0.0.1:8879/charts
    $HELM_HOME has been configured at /root/.helm.
    
    Tiller (the Helm server-side component) has been installed into your Kubernetes
    Cluster.
    
    Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
    For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
    Happy Helming!
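Before moving on, it doesn’t hurt to confirm that both tools can actually reach the cluster from inside the container; both of these should come back without errors:

```shell
# Confirm kubectl is talking to the cluster using the service account token
kubectl get nodes

# Confirm the Helm client can reach the Tiller pod we just installed
helm version
```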

OK!  Now we’re ready to do the actual configuration.

Configure the Spinnaker deployment

Deploying Spinnaker involves defining the various choices you’re going to make, such as the Docker repos you want to access or the persistent storage you want to use, then telling Halyard to go ahead and do the deployment.  In our case, we’re going to define the following choices:

  • Distributed installation on Kubernetes
  • Basic Docker repos
  • Minio (an AWS S3-compatible project) for storage
  • Access to Kubernetes
  • Version 1.8.1 of Spinnaker itself
  • UI accessible from outside the cluster

Let’s get started.

  1. We’ll start by setting up the Docker registry. In this example, we’re using Docker Hub; you can find instructions on using other registries here. In addition, we’re specifying just one public repo, library/nginx. From inside the halyard container, execute the following commands:
    ADDRESS=index.docker.io
    REPOSITORIES=library/nginx 
    hal config provider docker-registry enable
    hal config provider docker-registry account add my-docker-registry \
       --address $ADDRESS \
       --repositories $REPOSITORIES

    As you can see, we’re enabling the docker-registry provider, then configuring it using the environment variables we set:

    + Get current deployment
      Success
    + Add the my-docker-registry account
      Success
    + Successfully added account my-docker-registry for provider
      dockerRegistry.
  2. Now we need to set up storage. The first thing to do is install Minio, the storage provider. We’ll do that by pointing Helm at the Mirantis Helm chart repo, which includes a custom Minio chart:
    helm repo add mirantisworkloads https://mirantisworkloads.storage.googleapis.com
    "mirantisworkloads" has been added to your repositories
  3. Next you need to actually install Minio:
    helm install mirantisworkloads/minio
    NAME:   eating-tiger
    LAST DEPLOYED: Sun Mar 25 07:16:47 2018
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1beta1/StatefulSet
    NAME                DESIRED   CURRENT   AGE
    minio-eating-tiger  4         1         0s
    
    ==> v1/Pod(related)
    NAME                  READY   STATUS              RESTARTS   AGE
    minio-eating-tiger-0  0/1     ContainerCreating   0          0s
    
    ==> v1/Secret
    NAME                TYPE     DATA   AGE
    minio-eating-tiger  Opaque   2      0s
    
    ==> v1/ConfigMap
    NAME                DATA   AGE
    minio-eating-tiger  1      0s
    
    ==> v1/Service
    NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
    minio-svc-eating-tiger  ClusterIP   None          <none>        9000/TCP         0s
    minio-eating-tiger      NodePort    10.7.253.69   <none>        9000:31235/TCP   0s
    
    NOTES:
    Minio chart has been deployed.
    
    Internal URL:
        minio: minio-eating-tiger:9000
    
    External URL:
    Get the Minio URL by running these commands:
        export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services minio-eating-tiger)
        export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
        echo http://$NODE_IP:$NODE_PORT

    Make note of the internal URL; we’re going to need it in a moment.

  4. Set the endpoint based on the internal URL you saved a moment ago, inserting the namespace (default) between the service name and the port.  For example, my internal URL was:
    minio: minio-eating-tiger:9000

    So I’d set my endpoint as follows:

    ENDPOINT=http://minio-eating-tiger.default:9000
  5. Set the access key and secret key, then configure Halyard with your storage choices:
    MINIO_ACCESS_KEY=miniokey
    MINIO_SECRET_KEY=miniosecret
    echo $MINIO_SECRET_KEY | hal config storage s3 edit --endpoint $ENDPOINT \
       --access-key-id $MINIO_ACCESS_KEY \
       --secret-access-key
    hal config storage edit --type s3
  6. Now we’re ready to set up Spinnaker to use Kubernetes:
    hal config provider kubernetes enable
    hal config provider kubernetes account add my-k8s-account --docker-registries my-docker-registry
    hal config deploy edit --type distributed --account-name my-k8s-account
  7. The last standard parameter we need to define is the version:
    hal config version edit --version 1.8.1
    + Get current deployment
      Success
    + Edit Spinnaker version
      Success
    + Spinnaker has been configured to update/install version "1.8.1".
      Deploy this version of Spinnaker with `hal deploy apply`.
  8. At this point we can go ahead and deploy, but if we do, we’ll have to use SSH tunneling.  Instead, let’s configure Spinnaker to use those services we created way back at the beginning.  First, we’ll need to find out what IP addresses they’ve been assigned:
    kubectl get svc -n spinnaker
    NAME           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
    spin-deck-np   10.7.254.29    35.184.29.246    9000:30296/TCP   35m
    spin-gate-np   10.7.244.251   35.193.195.231   8084:30747/TCP   35m
  9. We want to set the UI to the EXTERNAL-IP for port 9000 and the API to the EXTERNAL-IP for port 8084, so for me it would be:
    hal config security ui edit --override-base-url http://35.184.29.246:9000 
    hal config security api edit --override-base-url http://35.193.195.231:8084

OK!  Now we are finally ready to actually deploy Spinnaker.

Deploy Spinnaker

Now that we’ve done all of our configuration, deployment is paradoxically easy:

hal deploy apply

Once you execute this command, Halyard will begin cranking away for quite some time. You can watch the console to see how it’s getting along, but you can also check in on the pods themselves by opening a second console window and looking at the pods in the spinnaker namespace:

kubectl get pods -n spinnaker

This will give you a running look at what’s happening.  For example:

kubectl get pods -n spinnaker
NAME                                    READY   STATUS    RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1     Running   0          1m
spin-orca-bootstrap-v000-xkhhh          0/1     Running   0          36s
spin-redis-bootstrap-v000-798wm         1/1     Running   0          2m

kubectl get pods -n spinnaker
NAME                                    READY   STATUS    RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1     Running   0          2m
spin-orca-bootstrap-v000-xkhhh          1/1     Running   0          49s
spin-redis-bootstrap-v000-798wm         1/1     Running   0          2m
spin-redis-v000-q9wzj                   1/1     Running   0          7s

kubectl get pods -n spinnaker
NAME                                    READY   STATUS    RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1     Running   0          2m
spin-orca-bootstrap-v000-xkhhh          1/1     Running   0          54s
spin-redis-bootstrap-v000-798wm         1/1     Running   0          2m
spin-redis-v000-q9wzj                   1/1     Running   0          12s

kubectl get pods -n spinnaker
NAME                                    READY   STATUS              RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1     Running             0          2m
spin-clouddriver-v000-jswbg             0/1     ContainerCreating   0          3s
spin-deck-v000-nw629                    0/1     ContainerCreating   0          5s
spin-echo-v000-m5drt                    0/1     ContainerCreating   0          4s
spin-front50-v000-qcpfh                 0/1     ContainerCreating   0          3s
spin-gate-v000-8jk8d                    0/1     ContainerCreating   0          4s
spin-igor-v000-xbfvh                    0/1     ContainerCreating   0          4s
spin-orca-bootstrap-v000-xkhhh          1/1     Running             0          1m
spin-orca-v000-9452p                    0/1     ContainerCreating   0          4s
spin-redis-bootstrap-v000-798wm         1/1     Running             0          2m
spin-redis-v000-q9wzj                   1/1     Running             0          18s
spin-rosco-v000-zd6wj                   0/1     Pending             0          2s
As you can see, the pods come up as Halyard gets to them.  The entire process can take half an hour or more, but eventually, you will see that all pods are running and ready.

NAME                                    READY   STATUS    RESTARTS   AGE
spin-clouddriver-bootstrap-v000-pdgqr   1/1     Running   0          8m
spin-clouddriver-v000-jswbg             1/1     Running   0          6m
spin-deck-v000-nw629                    1/1     Running   0          6m
spin-echo-v000-m5drt                    1/1     Running   0          6m
spin-front50-v000-qcpfh                 1/1     Running   1          6m
spin-gate-v000-8jk8d                    1/1     Running   0          6m
spin-igor-v000-xbfvh                    1/1     Running   0          6m
spin-orca-bootstrap-v000-xkhhh          1/1     Running   0          7m
spin-orca-v000-9452p                    1/1     Running   0          6m
spin-redis-bootstrap-v000-798wm         1/1     Running   0          8m
spin-redis-v000-q9wzj                   1/1     Running   0          6m
spin-rosco-v000-zd6wj                   1/1     Running   0          6m
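Incidentally, rather than rerunning kubectl get pods by hand, you can let kubectl stream status changes until everything is ready:

```shell
# Stream pod status changes in the spinnaker namespace (Ctrl-C to stop)
kubectl get pods -n spinnaker --watch
```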

When that happens, point your browser to the UI URL you configured in the last section; it’s the address for port 9000. For example, in my case it is:

http://35.184.29.246:9000 

You should see the Spinnaker “Recently Viewed” page, which will be blank because you haven’t done anything yet.

To make sure everything’s working, choose Actions->Create Application.


Enter your name and email address and click Create.  


You should find yourself on the Clusters page for your new app.


So that’s it!  Next time, we’ll look at actually creating a new pipeline in Spinnaker.

(Thanks to Andrey Pavlov for walking me through the mysteries of how to make this work!)

 

Nick Chase (@NickChase on Twitter)

Nick Chase is head of Content for Mirantis and author of over a dozen technical books, including Machine Learning for Mere Mortals.
