

Introducing Ask an Engineer!


If you've subscribed to our Mirantis Minute newsletter (and if you haven't, you should!), you may have noticed our new feature, "Ask an Engineer." We have such a deep bench of engineering talent here at Mirantis that we thought it would be a shame not to share it, so we're opening the floor to your questions!

We're looking for questions on any cloud-related topic, such as:

  • Infrastructure management 

  • Kubernetes 

  • Application delivery 

  • YAML 

  • Anything open source 

  • Deployments 

  • Web hosting 

  • Lens usage 

  • Anything cloud-native or cloud related 

You can ask your questions in a variety of ways: send them to askanengineer@mirantis.com, fill out this form, or, even better, join the Ask an Engineer Discord at https://bit.ly/AskAMirantisEngineer!

Meanwhile, we did get a couple of questions ahead of this process, so we wanted to go ahead and put them out here for you.

Question: Kubernetes uses manifest files to define and deploy applications and infrastructure, but these are static. How should we manage multiple environments (dev, QA, prod, etc.) with Kubernetes manifests?

Answer from Mirantis Engineer Zack Fanelle: Helm and Kustomize were built for exactly this use case: templatizing Kubernetes manifest files across multiple environments. One of the challenges with plain manifests is keeping multiple environments in sync; an out-of-sync change, such as a change that was only ever deployed in development, can mean a production outage. Development, QA, and production will likely share 80% or more of their application and infrastructure elements in common, and Helm and Kustomize also support defining the elements that are unique to each environment. For example, the production environment may need a load balancer while the QA environment does not.
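As a sketch of the Kustomize approach (the file names and the load-balancer Service are illustrative, not from the original answer), a base directory holds the shared manifests and each environment applies a small overlay on top of it:

```yaml
# base/kustomization.yaml -- manifests shared by every environment
resources:
  - deployment.yaml
  - service.yaml
```

```yaml
# overlays/prod/kustomization.yaml -- production-only additions and patches
resources:
  - ../../base
  - loadbalancer-service.yaml   # prod gets a load balancer; QA does not
patches:
  - path: replica-count.yaml    # e.g. patch the Deployment's replica count up for prod
```

Running `kubectl apply -k overlays/prod` (or `kustomize build overlays/prod` to preview) renders the merged manifests for that environment, so the shared 80% lives in one place and only the differences are maintained per environment.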

Question: Installing / updating the Docker Desktop software for Windows 10 turns into a nightmare most of the time. It's partially related to the corporate internet proxy, which I'm forced to use. I have to tweak some weird places (e.g. C:\Windows\System32\drivers\etc\hosts), configure some weird virtual proxy addresses (host.docker.internal), etc.   [Editor -- Internal details snipped.]

Answer:  

While it's impossible to cover every specific situation, we reached out to Docker, which still owns the Docker Desktop platform, for advice, and they said:

For Business customers with restrictive corporate firewalls, these are the endpoints that must be enabled for all of Desktop's functionality to work effectively: endpoints for authentication and Registry Access Management, endpoints for Kubernetes, and, of course, the general Docker Desktop endpoints. (The full, current list is maintained in Docker's documentation.)

Again, there are lots of specifics for every environment, but that should at least get you started.
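On the proxy side specifically, one documented approach is to declare the proxy in the Docker client configuration file, which is then propagated to containers. The sketch below uses placeholder hostnames and ports; substitute your corporate proxy's actual values:

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1,host.docker.internal"
    }
  }
}
```

This goes in `~/.docker/config.json` (on Windows, `%USERPROFILE%\.docker\config.json`); Docker Desktop also exposes equivalent proxy settings in its Settings UI under Resources.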

Question: What are some of the approaches we should consider in edge and IoT computing use cases?

Answer from Mirantis Engineer Zack Fanelle: Outside of software architecture patterns on top of Kubernetes, we have seen the following approaches used in the wild. Of these, the most common are local and regional clusters.

  • Megacluster: A central hub of manager nodes runs at the 'mothership' and schedules all workloads for an organization, while the worker nodes run on machines near the devices. Technical organizations use Kubernetes namespaces to isolate teams, applications, and environments. This approach is not recommended at high worker counts (>100 workers); although megaclusters are certainly possible, technical teams tend to avoid them due to the larger blast radius of a system failure and latency concerns.

  • Regional stretch clusters: A stretch cluster is deployed per region to reduce latency. For example, a telecom company might run a stretch cluster for each region of the US (Northeast, Midwest, etc.). Regional clusters can be enriched with more complex topologies; see the Kubernetes documentation on zones and topology spread constraints for more details.

  • Local clusters: Each edge location is given its own Kubernetes cluster.
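To illustrate the topology spread constraints mentioned above for regional stretch clusters, the scheduler can be asked to spread an application's replicas evenly across zones. The Deployment below is a hypothetical sketch (the app name and image are placeholders, not from the original answer):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-api                # hypothetical application name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: edge-api
  template:
    metadata:
      labels:
        app: edge-api
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                # zones may differ by at most one pod
          topologyKey: topology.kubernetes.io/zone  # standard zone label on nodes
          whenUnsatisfiable: DoNotSchedule          # hold pods rather than skew the spread
          labelSelector:
            matchLabels:
              app: edge-api
      containers:
        - name: api
          image: example.com/edge-api:1.0           # placeholder image
```

With six replicas across three zones, this constraint keeps roughly two pods per zone, limiting the blast radius if a single zone in the stretch cluster fails.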

Terminology used above:

  • Node: A server (physical or VM) dedicated to Kubernetes

  • Manager Nodes: Control Plane Nodes

  • Stretch cluster: A Kubernetes cluster whose manager and worker nodes are in different geographic zones

  • Worker Nodes:  Nodes running applications in pods

  • Device: A small-form-factor computing device that sends and receives data to and from edge locations and/or the cloud. Example: a smart thermometer in a home.

  • Edge location: A location where devices communicate with workers. An example would be a worker node in a retail store tracking all of the local devices.

You know you've got questions, and we'd love to hear them!  Again, please join us at https://bit.ly/AskAMirantisEngineer or send us an email at askanengineer@mirantis.com.

