Kubernetes 1.12 is scheduled for release in the next week or so, and we wanted to take a minute to look at some of the new features and changes you can expect.
Before we start, note that until the release actually happens, none of this is set in stone; things happen, and features can be added and removed before the actual release. Also, we’re going to concentrate on global features, though there are also some that are specific to individual providers, such as Azure and Google Compute Engine.
So let’s start with features that are graduating to stable.
Stable features in Kubernetes 1.12
Stable features have been through several versions of Kubernetes, and are ready for use in production systems.
- Mount namespace propagation: One of the recent changes in how volumes are handled is a container’s ability to mount a volume as rshared so that container mounts are part of the host’s mount namespace. Note that this will not be the default, but when enabled, it makes it possible to containerize volume plugins, among other capabilities.
- Server-side printing in kubectl: This feature takes the responsibility for returning data from kubectl commands such as get and describe from the client and moves it to the server, so that this logic doesn’t have to be reimplemented for each client.
- Egress support for Network Policy: When considering Network Policies, the ability to make a connection into a pod (Ingress) is what typically comes to mind, but this feature, now considered stable, enables you to specify what can leave the pod as well.
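As a quick sketch, here is what an egress policy might look like; the app: web label and the DNS-only rule are hypothetical, chosen just to illustrate the shape of the API:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  # Applies to pods carrying this (hypothetical) label
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  # With no "to" clause, this allows DNS (UDP 53) to any destination;
  # all other outbound traffic from the selected pods is blocked.
  - ports:
    - protocol: UDP
      port: 53
```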
- ipBlock for Network Policies: Kubernetes 1.12 promotes the ability to specify the CIDR for egress/ingress network policies as an official field to stable status, rather than requiring users to specify the information as annotations.
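For example (the CIDRs here are arbitrary), a rule restricting egress to a subnet while carving out an exception might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: limit-egress-cidr
spec:
  podSelector: {}   # all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/16      # allow traffic to this range...
        except:
        - 10.0.5.0/24          # ...but not to this subnet
```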
Beta features in Kubernetes 1.12
Beta features aren’t necessarily ready for use in production systems, but their interfaces have stabilized and won’t be changing, so you can begin to test your applications against them.
- Quota by Priority: In some ways, this is a bit of a misnomer, in that quotas are still scoped to namespaces rather than to priorities. This capability does, however, enable you to scope a namespace’s quota to pods of particular priority classes, so that, for example, high-priority workloads can be guaranteed a larger share of resources.
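A sketch of what this looks like, assuming a PriorityClass named high already exists in the cluster; the quota values are arbitrary:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota
spec:
  hard:
    cpu: "20"
    memory: 40Gi
    pods: "10"
  # Only pods whose priorityClassName is "high" count against this quota
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]
```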
- Arbitrary/Custom Metrics in the Horizontal Pod Autoscaler: This new feature makes it possible to automatically scale based on arbitrary or custom metrics, rather than just the current “percentage of requested CPU”. These metrics can be standard Kubernetes objects, or even metrics associated with the pods themselves.
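As a sketch, using the autoscaling/v2beta2 API introduced in this release, an HPA targeting a hypothetical per-pod metric (which would have to be exposed through a metrics adapter) might look like:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker   # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: messages_in_flight   # hypothetical custom metric
      target:
        type: AverageValue
        averageValue: "30"         # scale to keep ~30 messages per pod
```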
- Kubelet Device Plugin Registration: Kubelet now has a standard way to discover what plugins, such as CSI drivers, GPU, and so on, are available to the cluster, through the use of the Registration service. Once the plugin is registered, kubelet makes those resources available to the API server.
- Horizontal Pod Autoscaler to reach proper size faster: This isn’t so much a new feature as a performance improvement. The algorithm used to determine how many pods should be active has been adjusted to improve the time-to-completion, but from the user’s perspective, nothing has changed.
- Resource Quota API restrictions by default: At the moment, if you don’t specify a quota for a particular resource, it is effectively unlimited. To limit the use of a particular resource, you simply create quotas that users must stay within. With this change, you now have the ability to set resources that are limited by default — that is, unless a user has a quota for these resources, they are not available. You can use this to, say, limit access to particularly expensive resources.
- Separate repo for generic cli utils: This feature is more for plugin developers, enabling plugins to be structured as a typical kubectl command.
- Updated Plugin mechanism for kubectl: This new mechanism makes it possible to create new kubectl commands simply by writing a binary, naming it with the kubectl- prefix, and dropping it into the user’s PATH. This makes it possible to, say, create shortcut commands for commonly used (but cumbersome) tasks, or to create new commands that don’t yet exist. Note that plugins cannot override built-in kubectl commands, which always take precedence.
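As a minimal sketch, creating a plugin is just a matter of dropping an executable with the right name onto your PATH; the plugin name and directory here are arbitrary:

```shell
# Any executable named kubectl-<name> on the PATH becomes a plugin;
# once it's in place, `kubectl hello` would invoke it.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/kubectl-hello" <<'EOF'
#!/bin/sh
echo "hello from a kubectl plugin"
EOF
chmod +x "$HOME/bin/kubectl-hello"
export PATH="$HOME/bin:$PATH"

# Invoked directly here, since no cluster (or kubectl) is assumed:
kubectl-hello
```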
- Topology aware dynamic provisioning: Topology aware provisioning makes it possible for Kubernetes to more intelligently provision resources. The idea is that Kubernetes makes note of all of a pod’s requirements, including resource requirements and affinity policies, before requesting a Persistent Volume. This way you don’t wind up with situations such as those where a pod can’t start because the storage resources it needs are in a different zone. This includes a few associated features: Kubernetes CSI topology support, AWS EBS topology support, and GCE PD topology support.
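The user-facing piece of this is the volumeBindingMode field on a StorageClass; here is a sketch using the GCE PD provisioner as an example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: kubernetes.io/gce-pd
# Delay provisioning until a pod using the claim is scheduled,
# so the volume can be created in the zone the pod lands in
volumeBindingMode: WaitForFirstConsumer
```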
- Dynamic Maximum volume count: There are already limits to the number of volumes you can add to an AWS or GCE node, but otherwise the process of specifying volume attachment limits is currently inconvenient at best, and inflexible at worst. Kubernetes 1.12 brings with it the ability for any volume plugin to specify attachment limits, even providing the ability to specify different limits for different node types.
- Schedule DaemonSet Pods by kube-scheduler: DaemonSets run on each node, so it made sense when they were created to use their own scheduler, but that precludes them from taking advantage of advancements in the general scheduler. Now Kubernetes 1.12 uses the general scheduler to schedule DaemonSets, which enables them to benefit from new improvements such as the ability to prioritize critical pods.
- Configurable Pod Process Namespace Sharing: You can now choose whether the containers in a pod should share a single process namespace, which enables those processes to see and signal each other in a way that was impossible before.
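This is controlled by a single pod-level field; a minimal sketch (the container images are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid
spec:
  # All containers in this pod share one process namespace,
  # so the sidecar can see and signal the app's processes
  shareProcessNamespace: true
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```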
- Encryption at rest KMS integration: As you might imagine, a lot of data goes into etcd in the course of using a Kubernetes cluster, and some of it is sensitive and needs protection. This change enables Kubernetes to use a Key Management Service such as Google KMS to encrypt that data.
- Kubelet Server TLS Certificate Rotation: Kubelet uses self-signed certificates for accepting TLS connections, but you can now generate a key locally and use it to issue a certificate signing request (CSR) to the cluster API server, which returns a certificate signed by the cluster’s Certificate Authority; the certificate is then renewed automatically before it expires. Note that this release also promotes the initial Kubelet TLS bootstrap, which generates the CSR, to stable status.
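On the kubelet side, this is driven by the serverTLSBootstrap field of the kubelet configuration, together with the RotateKubeletServerCertificate feature gate; a fragment might look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Request the serving certificate from the cluster CA via a CSR,
# rather than generating a self-signed one
serverTLSBootstrap: true
```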
- Vertical Scaling of Pods: While Kubernetes is of course aimed at “cattle”, the reality of the world is that “pets” still exist in the wild, and that means it’s important to be able to scale those pods up when their applications need more resources (or down when they don’t). This feature enables you to do that.
New alpha features in Kubernetes 1.12
Alpha features can be the most exciting, because they’re new, but you definitely shouldn’t be using them in production systems. The community would love for you to experiment with these features, but when you’re building your applications be aware that both the features and their APIs may change at any time.
- Easier installation through componentconfig: The premise here is to move from individual flags on the command line to versioned API objects, making it easier to understand and control what’s going on in the diverse environments in which Kubernetes is installed. This is a continuation of existing work on ComponentConfig.
- SCTP support for Services, Pod, Endpoint, and NetworkPolicy: Stream Control Transmission Protocol (SCTP), which enables multiple streams of data to be sent over the same connection, is widely used in telecommunications, but without explicit support in Kubernetes, these applications have been unable to use all of Kubernetes’ routing and discovery features. This feature adds SCTP support to ContainerPort, Service, and NetworkPolicy. The community also intends to enable SCTP ingress and egress.
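Once the SCTPSupport feature gate is enabled, a Service simply declares the protocol; the selector and port here are made up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sctp-service
spec:
  selector:
    app: telco        # hypothetical label on the backing pods
  ports:
  - protocol: SCTP    # previously only TCP and UDP were accepted
    port: 9999
    targetPort: 9999
```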
- Pass Pod information in CSI calls: The Container Storage Interface (CSI) is supposed to be platform independent, so information about a pod shouldn’t be relevant to it. In the real world, however, that’s not necessarily the case. This change passes information about the pod as part of NodePublishVolumeRequest.volume_attributes, but only to drivers that explicitly ask for it (i.e., Kubernetes CSI drivers).
- CSI Cluster Registration Mechanism: This feature makes it possible (but not necessary) for a CSI driver to register itself with the Kubernetes API. This registration means that users can see what drivers are available to the cluster, and drivers can customize how Kubernetes interacts with them. This feature is implemented through the use of the new CSIDriver and CSINodeInfo Custom Resource Definitions.
- Scheduler checks feasibility and scores a subset of all cluster nodes: This feature is an attempt to speed up the scheduling of pods by enabling the scheduler to choose a “good enough” solution rather than the “best” solution. Right now, the scheduler checks all nodes and schedules the pod on the node with the best “score”; this feature tells the scheduler to stop looking once it has a large enough pool of “feasible” nodes to choose from, even if it hasn’t considered every single option.
- TTL after finish: When you run a Job or a Pod to completion, it would be good if it automatically cleaned itself up after some period of time, rather than you having to clean it up manually to make more room. This feature will enable the creation of a Time To Live parameter, after which Jobs and pods that are complete will automatically be cleaned up.
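The parameter is ttlSecondsAfterFinished on the Job spec (behind the TTLAfterFinished feature gate in this release); a sketch:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-demo
spec:
  # Delete this Job (and its pods) 5 minutes after it finishes
  ttlSecondsAfterFinished: 300
  template:
    spec:
      containers:
      - name: work
        image: busybox
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never
```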
- RuntimeClass: While Kubernetes made its name primarily with Docker containers, the fact is that it easily supports multiple runtimes using the Container Runtime Interface (CRI). RuntimeClass is a new field on the PodSpec that enables users to designate the specific runtime they want to use (such as Virtlet or Windows containers), and also to specify parameters related to that runtime.
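As a sketch using the 1.12 alpha API (the gvisor/runsc names are illustrative; the handler must match a runtime configured in your CRI implementation, and later releases restructured this API):

```yaml
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: gvisor
spec:
  runtimeHandler: runsc   # CRI handler name, configured node-side
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed
spec:
  runtimeClassName: gvisor   # run this pod with the selected runtime
  containers:
  - name: app
    image: nginx
```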
- APIServer DryRun: This feature is, as they say, just what it says on the tin. It enables you to do a “dry run” of an operation to see what would happen without actually persisting those changes on the cluster.
- Defaulting and Pruning for Custom Resources: This feature changes the behavior of Kubernetes so that if you add an “undefined” field to your custom resource JSON object, that field gets dropped, or pruned, when the object is persisted in etcd. This may seem like a small thing, but it prevents a possible situation in which old data can interact in an unexpected (read: broken or insecure) way with a new version of the resource that adds the previously undefined (and therefore unvalidated) field. This feature also adds any default values for fields that are missing.
- Snapshot / Restore Volume Support for Kubernetes (CRD + External Controller): When this feature is completely implemented, it will be possible to create and restore snapshots, or copies of a volume at a particular point in time. These snapshots can be used for backup purposes, or possibly as a means of replicating a system’s state.
Of course, that’s just the tip of the iceberg in terms of what’s new in this release. Want to learn more? Join me on October 4 for a fast-paced look at the changes that will affect you.