

Kubernetes Secrets in a Multi-Cluster Environment

When leveraging Kubernetes in large enterprise environments, we often need to spread infrastructure out over multiple clusters. We call this architectural pattern a multi-cluster environment, and while it can enable great benefits, it also comes with new challenges. In this post, we will explore the common problem of handling confidential information in multi-cluster Kubernetes environments -- and how to make that job easier on your team by using Kubernetes Secrets.

Why do organizations adopt multi-cluster architectures?

Organizations’ reasons for leveraging multi-cluster architectures often come down to their particular needs and environment. However, there are some common trends:
  • Resiliency: Multi-cluster environments limit fault domains, allowing an organization to recover quickly when a single cluster suffers an outage or temporary failure.
  • Separation of roles, teams, and environments: Certain policies, privacy laws, or security concerns might push an organization to separate tenants into their own clusters. It's also good practice to separate development, staging, and production environments while still managing them in a unified manner.
  • Geographical Concerns: Running a cluster in each region where you have users keeps latency low and improves their experience. For example, you might set up clusters in both Europe and the US to better serve your consumers in each region.
  • Eliminate Vendor Lock-In: With multi-cluster deployments, organizations can switch between vendors, compare costs, and avoid dependence on any single provider.

What are Kubernetes Secrets?

Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or a container image. Think of Secrets as objects in Kubernetes that store restricted data so that it can be used without being revealed.
Secrets can be implemented in Kubernetes in a variety of ways.
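As a quick illustration, here's a minimal Secret manifest; the name db-credentials and its value are hypothetical, and note that values under data must be base64-encoded:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQxMjM=   # base64 encoding of "password123"
You could create the same object imperatively with kubectl create secret generic db-credentials --from-literal=password=password123, which handles the base64 encoding for you.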

Types of Kubernetes Secrets

  • Arbitrary user-defined data
  • Service Account Token
  • Docker Login info (for private container registries -- see the example after this list)
  • Credentials for Basic Authentication
  • SSH Keys
  • TLS certificates
  • Sealed Secrets
  • Any other custom secret your application or architecture requires.
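As one concrete example from this list, a Docker registry credential Secret -- which we'll reference later in this tutorial as regcred -- can be created directly from the command line; the server, username, and password values here are placeholders:
kubectl create secret docker-registry regcred \
  --docker-server=<private-registry-url> \
  --docker-username=<your-username> \
  --docker-password=<your-password>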
For more information about the different types of secrets and their explanations, check out the official documentation. In this example, we're going to use a Sealed Secret.
For a more in-depth look at Kubernetes secrets management via HashiCorp's open source Vault and new Secrets Operator, check out our recent Tech Talk on the topic.

Difficulties with managing Kubernetes Secrets in multi-cluster environments

Kubernetes is a declarative system whose object definitions live in YAML (or JSON) files, which makes version control the natural home for configuration -- but adding confidential information to a version-controlled file that anyone can view is against any security best practice.
By default, Secrets data in Kubernetes is stored in etcd unencrypted at rest. This means that if someone has access to the etcd disk, your Secrets are visible to them. Furthermore, as discussed above, Secrets can't safely be versioned, which makes change management difficult and tracking changes harder.
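You can mitigate the etcd exposure by enabling encryption at rest for Secrets. As a rough sketch (configuring this properly is outside the scope of this tutorial), the API server accepts an EncryptionConfiguration file via its --encryption-provider-config flag; the key name and the base64-encoded 32-byte key below are values you would generate yourself:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}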
In addition, because of the way many services operate, applications that consume Secrets may store them insecurely. In the same vein, Pods can transmit -- and inadvertently reveal -- Secret values. And anyone who gains root access to any node in the cluster can read Secrets by impersonating the kubelet.

How to create a federated multi-cluster Kubernetes environment

Multi-cluster environments consist of at least two Kubernetes clusters: one acts as the host (or parent) cluster, and the rest are child clusters. For this tutorial, we're going to use two clusters, a parent and a child, and we will execute most commands from the parent cluster.
Once our clusters are up and running, make sure both are accessible from your terminal; the simplest check is to confirm that each cluster's nodes are up:
kubectl get nodes
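If both clusters are already in your kubeconfig, you can run the same check against each one explicitly; the context names below are placeholders for whatever your kubeconfig actually contains:
kubectl --context=old_context_name_1 get nodes
kubectl --context=old_context_name_2 get nodes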
Next, rename each cluster's context to make subsequent commands more readable. To do this, list the contexts in your kubeconfig and rename them appropriately.
kubectl config get-contexts

kubectl config rename-context old_context_name_1 cluster1
kubectl config rename-context old_context_name_2 cluster2
Once this is done, we will set our context to the parent cluster -- in this case, cluster1 -- and install the KubeFed project to the cluster using Helm. (You will need to install Helm if you don’t have it already.)
kubectl config use-context cluster1

helm repo add kubefed-charts \
  https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts

helm --namespace kube-federation-system upgrade -i \
  kubefed kubefed-charts/kubefed --create-namespace
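Before moving on, it's worth confirming that the KubeFed control plane came up cleanly; its pods in the kube-federation-system namespace should reach Running status:
kubectl -n kube-federation-system get pods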
For KubeFed to work, we will join the clusters using the kubefedctl CLI. If you don’t have kubefedctl already, you will need to install it.
kubefedctl join cluster1 --cluster-context cluster1 \
    --host-cluster-context cluster1 --v=2
kubefedctl join cluster2 --cluster-context cluster2 \
    --host-cluster-context cluster1 --v=2
Next, we'll verify the status of the joined clusters. We expect to see a True status report from the command below:
kubectl -n kube-federation-system get kubefedclusters
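The output should look something like the following sketch, with READY reporting True for both clusters (ages will vary):
NAME       AGE   READY
cluster1   1m    True
cluster2   1m    True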
Finally, we'll create a YAML file called namespace.yaml in our working directory, using the specifications below. 
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: test
  namespace: test
spec:
  placement:
    clusterSelector: {}
This will allow us to set up a federated namespace called test, where we will install all of our federated resources moving forward.
kubectl apply -f namespace.yaml
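Because the FederatedNamespace above uses an empty clusterSelector, it propagates to every member cluster. You can confirm that the namespace now exists on both clusters:
kubectl --context=cluster1 get namespace test
kubectl --context=cluster2 get namespace test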

How to handle Kubernetes Secrets in a multi-cluster environment

There's no one-size-fits-all solution for managing Kubernetes Secrets, and there are several viable ways to handle Secrets management across clusters.
For this post, we're going to take advantage of Sealed Secrets. To demonstrate Secret management with Sealed Secrets, we'll create a deployment with KubeFed that uses a Docker image from a private container registry, and then use Sealed Secrets to encrypt the authentication to the container registry.

Federated deployment

Here, we'll create a deployment configuration in the federated namespace we created earlier. This deployment pulls its image from a private Docker registry. A federated deployment looks like this:
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: <app-name>
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: <app-name>
      template:
        metadata:
          labels:
            app: <app-name>
        spec:
          tolerations:
            - effect: NoExecute
              key: node.kubernetes.io/unreachable
              operator: Exists
              tolerationSeconds: 30
            - effect: NoExecute
              key: node.kubernetes.io/not-ready
              operator: Exists
              tolerationSeconds: 30
          containers:
            - image: <private-registry-url>
              name: <app-name>
          imagePullSecrets:
            - name: regcred
  placement:
    clusterSelector: {}
Enter the specifications above into a YAML file -- let’s call it fed-deploy.yaml -- and apply it. 
kubectl apply -f fed-deploy.yaml
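The deployment should likewise propagate to both member clusters. You can check each one as follows -- though note that the pods won't become Ready until the regcred image pull secret referenced above actually exists:
kubectl --context=cluster1 -n test get deployment test-deployment
kubectl --context=cluster2 -n test get deployment test-deployment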

Download Sealed Secrets

First, visit the Sealed Secrets release page and download the latest version of the controller.yaml file.
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/controller.yaml

Federate Kubernetes APIs

We'll need to federate certain Kubernetes API types, since Sealed Secrets depends on them.
kubefedctl enable role
kubefedctl enable rolebinding
kubefedctl enable clusterrolebinding
kubefedctl enable customresourcedefinition

Install federated Sealed Secrets

Next, we generate a federated YAML of the Sealed Secrets controller. 
kubefedctl federate --filename controller.yaml > fed-controller.yaml
Since this is going to be a federated installation, you will need to edit the newly generated fed-controller.yaml file: throughout the YAML document, replace kube-system with the name of the federated namespace we created earlier, test.
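If you'd rather script that substitution than edit the file by hand, a one-liner along these lines should work (assuming GNU sed):
sed -i 's/kube-system/test/g' fed-controller.yaml
Then apply the edited file to the cluster: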
kubectl apply -f fed-controller.yaml
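Give the controllers a moment to start, then check that the Sealed Secrets controller pod (named sealed-secrets-controller in the upstream manifest) is running in the test namespace on each cluster:
kubectl --context=cluster1 -n test get pods
kubectl --context=cluster2 -n test get pods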

Deploy Docker Secrets to our multi-cluster environment

At this point, Sealed Secrets has been installed. Now we need to generate a sealed secret and apply it to each cluster; since the controllers share the same private keys, the same sealed secret can be decrypted on both.
For this example, we will need a secret.yaml document -- the Secret file for a Docker credentials configuration, which you can learn more about here. For the purposes of this guide, we can use a simple YAML file that looks like this:
apiVersion: v1
kind: Secret
metadata:
  name: bigsecret
data:
  USER_NAME: 8gsfjbK7
  PASSWORD: HHJsj6j4kfbn
Next, we’ll use the kubeseal CLI to create our Sealed Secret. (If you haven’t installed kubeseal previously, you can find instructions to do so here.)
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml --controller-namespace test

kubectl apply -f sealed-secret.yaml
kubectl config use-context cluster2
kubectl apply -f sealed-secret.yaml
kubectl config use-context cluster1
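On each cluster, the local Sealed Secrets controller should now unseal the applied SealedSecret into an ordinary Secret named bigsecret -- in your context's default namespace, since secret.yaml doesn't specify one. You can verify this on either cluster:
kubectl --context=cluster2 get secret bigsecret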

Conclusion

In this article, we've taken a look at what multi-cluster Kubernetes environments are, why organizations adopt them, and some of the methods for standing them up. We've also examined the challenges of handling Secrets in a multi-cluster environment, and used KubeFed and Sealed Secrets to show how to address those challenges in your own environment.

Eric Gregory

Eric Gregory is a Senior Technical Writer at Mirantis, based out of North Carolina.
