Last week we presented a webinar on the new features in Kubernetes 1.11. If you missed it, you can catch the replay here, but in the meantime we wanted to present some of the questions and answers, including those we didn’t get to live.
Will 1.10 applications run with no changes on 1.11?
Whether or not an application that was built for and is running on 1.10 will run with no changes on 1.11 depends on two things: how much you are depending on the actual Kubernetes infrastructure, and whether or not the parts of that infrastructure you are depending on have changed.
For example, if you have an application where the application itself is just an isolated container that runs in a pod by itself — it doesn’t do anything special, it doesn’t use any particular security policies or anything like that — it’s going to run with no problem.
If your application is a custom resource, and it hooks in with the API server, and it does all kinds of security things, then you obviously are going to want to test it out before you do the upgrade to make sure nothing breaks.
What about Windows support?
I touched on this briefly when we were talking about CRI, but SIG Windows and the rest of the community have been working pretty hard on achieving parity between running Kubernetes on Windows and running Kubernetes on Linux, and they’re getting pretty close. A lot of that also has to do with the fact that Microsoft wants you to be able to run Kubernetes on Azure, so SIG Azure is doing a ton of work there as well. You’ve got azuredisk, which now supports the same volume expansion and so on. So it’s definitely getting there.
Can I adjust runtime arguments for my etcd?
Yes, actually you can. You can pass extra arguments to etcd when you are firing up Kubernetes, so you can use that to adjust the heartbeat interval or whatever else you need to do in terms of passing in those extra arguments.
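For example, on a kubeadm-based cluster you can set these in the kubeadm configuration file. This is a minimal sketch using the v1alpha2 config format that shipped around the 1.11 timeframe; the flag values shown are just illustrative:

```yaml
# Sketch of a kubeadm configuration passing extra flags through to etcd.
# etcd.local.extraArgs entries become command-line flags on the etcd binary.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
etcd:
  local:
    extraArgs:
      heartbeat-interval: "200"   # milliseconds; illustrative value
      election-timeout: "2000"    # milliseconds; illustrative value
```

You would then pass this file to kubeadm with the `--config` flag when initializing the cluster.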
Is there a way to control the output when I ask kubectl about my CRDs?
I assume you mean the objects created by your CRDs, but yes, you can. Objects created as a custom resource are treated like regular objects, so they have the regular columns you would get, like the status, the age, and the name, but you can also add additional columns using a field called spec.additionalPrinterColumns when you are creating your custom resource definition. That will enable you to control what gets printed out.
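As a sketch, a custom resource definition with extra printer columns might look like the following; the group, kind, and paths here are hypothetical, and each column’s JSONPath selects the value to display from the object:

```yaml
# Hypothetical CRD illustrating spec.additionalPrinterColumns
# (apiextensions.k8s.io/v1beta1, the version current as of Kubernetes 1.11).
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  additionalPrinterColumns:
  - name: Spec
    type: string
    description: The cron spec defining the interval
    JSONPath: .spec.cronSpec
  - name: Age
    type: date
    JSONPath: .metadata.creationTimestamp
```

With this in place, `kubectl get crontabs` would show the Spec column alongside the standard ones.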
What is the difference between custom resources and API aggregation?
Custom resources are more of a declarative type of thing; you create the definition using YAML. They don’t necessarily involve any programming, although obviously they can. API aggregation, on the other hand, means writing your own API server that the main Kubernetes API server delegates certain requests to. That is much more complex from a programming standpoint, but it lends itself better to situations that are not good for the declarative model. So which one you want to use is going to depend on what it is you are trying to do, and what your use case is.
Please say a few words on IPv4/v6 dual stack support in 1.11
IPv4/v6 dual stack support was originally planned for 1.11, but it was pushed to 1.12, which will be out in three months.
What’s the status of multi-network interface support for the pods?
At the moment you can achieve multi-network support using a plugin such as Multus. According to the Kubernetes documentation, “Multus supports all reference plugins (eg. Flannel, DHCP, Macvlan) that implement the CNI specification and 3rd party plugins (eg. Calico, Weave, Cilium, Contiv). In addition to it, Multus supports SRIOV, DPDK, OVS-DPDK & VPP workloads in Kubernetes with both cloud native and NFV based applications in Kubernetes.”
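As an illustration, with Multus installed you define each extra network as a NetworkAttachmentDefinition custom resource and then reference it from a pod annotation. This sketch assumes a macvlan network on a hypothetical eth0 host interface; the names are made up:

```yaml
# Hypothetical Multus secondary network: a macvlan attachment using DHCP.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net              # hypothetical network name
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }'
---
# A pod requesting a second interface on that network via annotation.
apiVersion: v1
kind: Pod
metadata:
  name: multi-net-pod            # hypothetical pod name
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
  - name: app
    image: nginx
```

The pod still gets its normal cluster-network interface; the annotation adds the macvlan interface alongside it.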
If a pod is preempted, what happens to its shutdown? Does it get to do a graceful shutdown?
Yes. If a pod gets preempted by the pod priority and preemption process, it does get to do a graceful shutdown. It’s not as though the pod suddenly disappears and everything crashes. If you have a 30 second graceful shutdown period on that pod, it will go through that graceful shutdown first, but it will ultimately get shut down.
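That grace period is the pod’s ordinary terminationGracePeriodSeconds, which you can set in the pod spec; a minimal sketch (the name and image are placeholders):

```yaml
# Hypothetical pod setting an explicit termination grace period.
apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo                  # hypothetical name
spec:
  terminationGracePeriodSeconds: 30    # time allowed between SIGTERM and SIGKILL
  containers:
  - name: app
    image: nginx
```

On preemption (or any other eviction), the container receives SIGTERM and has those 30 seconds to clean up before being forcibly killed.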
What are the plans to add a new Master node to cluster via Kubeadm, like we can add a new node?
Current plans are to add this feature in 1.12, though of course it’s always possible that it may slip. You can track the feature issue here.
What are your recommendations on the actual upgrade?
How to perform the upgrade itself is going to depend on what system you’re using; the instructions for upgrading with Kubeadm are of course different from, say, an MCP upgrade.
However, as far as whether and when to do it, I will say that this release has been pretty stable. There doesn’t seem to have been too much in the way of last-minute breakage, though it’s important to check the release notes for any known issues. Also, if you are affected by any of the things under the “Before you upgrade” section, make sure that you take those into consideration before you move on with the upgrade.
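For a kubeadm-based cluster, for instance, the core of the upgrade boils down to something like the following sketch; run it on the master node, and review the plan output before applying (the version shown is just an example):

```shell
# Show the current cluster version, the versions you can upgrade to,
# and any preflight problems.
kubeadm upgrade plan

# Upgrade the control plane to the chosen version.
kubeadm upgrade apply v1.11.0
```

You would still need to upgrade kubelet and kubectl on each node separately, per the kubeadm upgrade documentation for your distribution.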