Kubernetes 1.8 was planned as a stabilization release, but that doesn’t mean there’s nothing interesting to look forward to. The release includes early versions of a number of different developments that provide additional features and control, including a fundamental change to how Kubernetes runs.
Deployment and operations: Self-hosting Kubernetes on Kubernetes
Which came first, the chicken or the egg? How do you compile a compiler? What kind of infrastructure runs infrastructure software? That’s the question that’s been facing Kubernetes developers: Kubernetes is a great infrastructure on which to host robust applications, but Kubernetes itself can benefit from those advantages.
The solution is a “self-hosted” architecture, in which the Kubernetes control plane (that is, the components that make Kubernetes run) is itself hosted by Kubernetes. This software “inception” makes it possible to both operate and use a Kubernetes cluster with the same set of skills.
In Kubernetes 1.8, we have the first experimental version of a self-hosted cluster, easily created with the kubeadm tool. At this point you still have to enable the feature, but the community plans to make this the default for Kubernetes 1.9.
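As a sketch of what that looks like in practice, the opt-in goes through a kubeadm feature gate (gate name per the 1.8 release notes; this requires a machine prepared to run kubeadm):

```shell
# Bootstrap a cluster whose control plane runs as pods on the
# cluster itself (alpha in 1.8, so the gate must be enabled explicitly).
kubeadm init --feature-gates=SelfHosting=true
```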
New ways to take control
Kubernetes 1.8 includes a number of different alpha-level features that provide more control over your cluster.
Many of the changes in Kubernetes 1.8 involve storage. For example, you can increase the size of a volume, though this is currently implemented only in the Gluster backend, and at this stage it only grows the volume itself without resizing the filesystem on it. Also, you can now use the Kubernetes API to create a volume snapshot. This functionality is at the “prototype” level; for the moment it doesn’t pause processes that are writing to the volume (a step called “quiescing”), so there’s a possibility that your snapshot may be inconsistent. Still, it’s a look at what’s to come.
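Assuming the alpha volume-expansion feature is enabled on your cluster and the claim is backed by a Gluster volume, growing a volume is a matter of raising the requested size on the claim. The claim name here is hypothetical:

```shell
# Request a larger size on an existing PersistentVolumeClaim;
# the backend grows the volume, but the filesystem is not resized.
kubectl patch pvc my-gluster-claim \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```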
On the server side, NFV developers in particular will be glad to hear of the arrival of alternative container-level affinity policies, as well as the ability to request pre-allocated hugepages.
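A hugepages request might look like the following minimal pod sketch, assuming the alpha feature gate is on and 2Mi hugepages have already been reserved on the node (image and names are illustrative):

```yaml
# A pod asking the scheduler for pre-allocated 2Mi hugepages.
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi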
Perhaps the biggest feature, however, is that you now have the ability to create your own binary extensions to the kubectl Kubernetes client. You do this by creating a plugin that provides a new subcommand for kubectl.
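In the 1.8 alpha plugin mechanism, a plugin is described by a small descriptor file placed under the kubectl plugins directory; the plugin name, script, and path below are hypothetical:

```yaml
# ~/.kube/plugins/hello/plugin.yaml
# Registers a "hello" subcommand that runs a script in the same directory.
name: hello
shortDesc: "Prints a greeting"
command: ./hello.sh
```

With the descriptor in place, the new subcommand is invoked through kubectl, along the lines of `kubectl plugin hello`.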
On the security front, Kubernetes 1.8 makes it possible to figure out exactly what permissions apply to a particular command. Kubernetes uses Role-Based Access Control (RBAC), which can make things complicated, but you can now feed a file of roles, rolebindings, clusterroles, or clusterrolebindings to the kubectl auth reconcile command and get back a proper list of rules that includes all of the appropriate implied permissions.
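Usage is a one-liner against a manifest of RBAC objects (the filename is illustrative):

```shell
# Reconcile the RBAC objects in the file with the cluster, filling in
# the implied permissions the server computes as required.
kubectl auth reconcile -f my-rbac-rules.yaml
```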
Also, there’s a new SelfSubjectRulesReview API (now in beta), which provides a list of actions a particular user can perform in a particular namespace; this will make it easier for UI developers to show the appropriate choices.
Networking and storage improvements
Networking and storage have seen some major work this cycle as well; it’s now possible to specify network policies not just for what can come into a pod, but also what can go out of it. You can also specify rules by IP block. These changes are considered beta.
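Put together, an egress rule with an IP block might look like this sketch, which restricts outbound traffic from pods labeled `role: db` to a single CIDR (all names and addresses are illustrative):

```yaml
# A NetworkPolicy limiting where "db" pods may send traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-egress
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5432
```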
Also in early alpha is support for an IP Virtual Server (IPVS) mode for kube-proxy, which is designed to provide both better performance and more sophisticated load-balancing algorithms than the current iptables-based architecture.
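Opting in involves both the proxy-mode flag and the alpha feature gate (gate name per the 1.8 release notes; the node also needs the IPVS kernel modules loaded):

```shell
# Run kube-proxy in IPVS mode instead of the default iptables mode.
kube-proxy --proxy-mode=ipvs --feature-gates=SupportIPVSProxyMode=true
```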
Meanwhile, StorageClass now provides the opportunity to configure the reclaim policy for dynamically provisioned volumes, rather than always defaulting to Delete. You can also use the new VolumeMount.Propagation field (still in alpha) to share mounts between containers, or even between containers and the host.
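For example, a StorageClass that keeps dynamically provisioned volumes around after their claims are deleted might look like this sketch (the name and provisioner are illustrative):

```yaml
# A StorageClass whose volumes are retained rather than deleted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow-retained
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
```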
Developers have also been working on improving the ability to automatically discover and initialize new out-of-tree volume drivers, called Flexvolume drivers.
Look before you leap
Of course, an upgrade always means changes in behavior that you need to be aware of before committing to the new software so nothing bites you. For example, the release notes point out that “kubectl delete no longer scales down workload API objects prior to deletion. Users who depend on ordered termination for the Pods of their StatefulSet’s must use kubectl scale to scale down the StatefulSet prior to deletion.”
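Following the release notes’ advice, the ordered-termination workaround amounts to scaling to zero before deleting (the StatefulSet name here is hypothetical):

```shell
# Scale down first so pods terminate in order, then delete the object.
kubectl scale statefulset web --replicas=0
kubectl delete statefulset web
```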
In fact, the release notes specify a number of specific actions you should take before upgrading. Some are simple, such as changing the version specifications for your objects, but others require more care, such as the removal of the deprecated ThirdPartyResource (TPR) API (migrate to CustomResourceDefinition to keep your data) and the fact that the pod.alpha.kubernetes.io/initialized annotation for StatefulSets is now ignored, so dormant StatefulSets for which this value is false “might become active after upgrading”.
Just be sure to check the release notes before you upgrade.