Last week we took a look at some of the new features and other changes in Kubernetes 1.14 ahead of Monday’s release. You can view the entire webinar here, but we also wanted to take a few minutes to answer some questions that came up during the webinar.
What is the status of RunAsGroup?
RunAsGroup, which enables you to specify the primary group ID under which a Pod's processes will run, is now beta and enabled by default. You can set it in both the PodSpec and a PodSecurityPolicy. (Not all container runtimes support this capability, however.)
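For example, a minimal Pod spec setting it might look like the following (the name, image, and IDs are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rungroup-demo        # placeholder name
spec:
  securityContext:
    runAsUser: 1000          # processes run as user ID 1000...
    runAsGroup: 3000         # ...with primary group ID 3000
  containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "id && sleep 3600"]
```

Running `id` inside the container should then report gid=3000 rather than the image's default of root.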
If you’re using OpenStack, can you limit the number of Cinder volumes?
Yes, as part of the OpenStack provider, you can limit the number of Cinder volumes, just as you can limit the number of volumes of other types users can create.
Can you use Kustomize by itself? What about for non-K8s YAML?
You don’t have to have Kubernetes installed to use Kustomize, but if you try to create non-Kubernetes YAML, you’ll get an error.
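As a quick illustration, a minimal `kustomization.yaml` might look like this (the resource file names and label are placeholders for your own manifests):

```yaml
# kustomization.yaml -- minimal sketch; deployment.yaml and
# service.yaml stand in for whatever Kubernetes manifests you have.
resources:
  - deployment.yaml
  - service.yaml
namePrefix: staging-    # prepended to the names of all generated resources
commonLabels:
  app: my-app           # added to every resource and selector
```

You can render the result with `kustomize build .`, or, as of Kubernetes 1.14, apply it directly with `kubectl apply -k .`.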
What will Ingress be replaced [with]?
Ingress itself isn't being replaced; it's just been moved to the networking API group.
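In practice, that means referencing the `networking.k8s.io` group in your manifests. A minimal example (the host, name, and backend service are placeholders) might look like:

```yaml
apiVersion: networking.k8s.io/v1beta1   # the networking group, as of 1.14
kind: Ingress
metadata:
  name: example-ingress                 # placeholder name
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service   # placeholder backend service
              servicePort: 80
```

The older `extensions/v1beta1` path continues to work for now, but new manifests should use the networking group.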
In order to run k8s on OpenStack what are the pros / cons compared to Rancher and Magnum?
This is probably a topic to which we could devote an entire blog article, but briefly: Rancher is solely a container management system, while running Kubernetes on OpenStack means you have both environments available in case you also need to run one or more VMs. The same applies to Magnum, since it is itself an OpenStack component. However, running Kubernetes via Magnum means you're tied to whatever Kubernetes version your version of Magnum supports.
Can you resize a PersistentVolume without restarting your pods to pick up the changes?
It doesn’t appear so. It’s not so much a matter of the pods not picking up the change as it is that the change can’t happen until the pod is terminated.
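As a sketch of what the resize itself looks like, assuming your StorageClass has `allowVolumeExpansion: true` (the names and sizes below are placeholders): you increase the storage request on the PVC and re-apply it, and the change takes full effect once the pod using the claim is restarted.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim             # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: expandable-sc   # placeholder; must allow volume expansion
  resources:
    requests:
      storage: 20Gi            # increased from an original, smaller request
```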
Concerning kubeadm: must we have a load balancer in front of the masters, as the docs suggest?
The community is currently simplifying the process of using load balancers with kubeadm, but for the moment, yes, you need to set one up. For example, you can use HAProxy as your load balancer. You can find more information here: https://kubernetes.io/docs/setup/independent/high-availability/.
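As a rough sketch, an HAProxy configuration balancing TCP traffic across three control-plane nodes might look like this (the addresses are placeholders for your own masters):

```
# Minimal sketch: TCP load balancing for the Kubernetes API server.
# Replace the server addresses with your actual control-plane nodes.
frontend kubernetes-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-masters

backend kubernetes-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.168.0.11:6443 check
    server master2 192.168.0.12:6443 check
    server master3 192.168.0.13:6443 check
```

Note the use of `mode tcp`: the API server terminates its own TLS, so the load balancer just passes the encrypted stream through.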