
Announcing k0rdent v1.2.0 | OpenStack Hosted Control Plane and More Platform Engineering


We’re excited to announce the release of k0rdent v1.2.0, a significant milestone in simplifying distributed container management at scale.

Why this matters: v1.2.0 brings an OpenStack hosted control plane template, an exciting development from the end-user standpoint, and improved ARM64 support, with documentation on current limitations and how the community can work around them.

For those of you who are new to k0rdent: k0rdent is a composable Kubernetes Management Platform designed for platform engineers managing infrastructure at scale. Acting as a “super control plane”, it enables centralized, template-driven lifecycle management of clusters and services across on-prem, cloud, and hybrid environments.

Built on open standards, k0rdent makes it easier to create secure, consistent Internal Developer Platforms (IDPs) for modern workloads.

  • k0rdent Cluster Manager (KCM) — Lifecycle, upgrades, and scaling of Kubernetes clusters via Cluster API.

  • k0rdent State Manager (KSM) — Deployment and management of services (e.g., Istio, Flux, cert-manager) using templated, declarative ServiceTemplates.

  • k0rdent Observability & FinOps (KOF) — Metrics, logging, dashboards, and cost visibility through integrations with VictoriaMetrics and OpenCost.
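To make the KSM model concrete, here is a minimal sketch of how services are attached to a cluster declaratively via ServiceTemplates. The cluster name and the infrastructure template version below are hypothetical; the cert-manager template name matches the one discussed later in this post:

```yaml
apiVersion: k0rdent.mirantis.com/v1beta1
kind: ClusterDeployment
metadata:
  name: demo-cluster                  # hypothetical cluster name
  namespace: kcm-system
spec:
  template: aws-standalone-cp-1-2-0   # hypothetical template version
  credential: aws-cluster-identity-cred
  serviceSpec:
    services:
      # Each entry rolls out a service onto the child cluster
      # from a declarative ServiceTemplate.
      - template: cert-manager-v1-16-4
        name: cert-manager
        namespace: cert-manager
```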

Documentation: QuickStart Guide

Source Code: GitHub

Highlights from the First k0rdent Community Call

The k0rdent community hosted its first community call on July 24th (the fourth Thursday of the month); you can watch the recording here.

During the call, maintainer Dina Belova introduced k0rdent to the community and shared a recap of the k0rdent v1.1.0 release. The event saw positive engagement, with more questions around k0rdent usage and inspiring stories of why community members are choosing k0rdent to power their platforms.

Updates to the k0rdent Cluster Manager (KCM)

OpenStack Hosted CP template added (docs)

Many community users requested an OpenStack hosted control plane template. One of the reasons it wasn’t part of KCM was persistent upstream issues with CAPO (Cluster API Provider OpenStack).

The issue reported in Panic in OpenStackMachineReconciler if OpenStackCluster.Status.Network is nil (Hosted Control Plane scenario) · Issue #2380 · kubernetes-sigs/cluster-api-provider-openstack was reproducible only when the OpenStack cluster was annotated with cluster.x-k8s.io/managed-by: k0smotron. This annotation was recommended by k0smotron for hosted cluster scenarios and indicates that the cluster is self-managed. However, it causes the OpenStackCluster status to remain empty, which leads to a panic in CAPO.

To work around this, we omitted the cluster.x-k8s.io/managed-by: k0smotron annotation and instead configured the deployment to skip network, subnet, router, and load balancer creation by CAPO. The user is required to provide an existing network, subnet, and router. This approach is similar to what we already have in the docs for hosted clusters on AWS and Azure: using the same network as the management cluster is a supported and valid option.

When the required networking components are provided, CAPO won’t create new infrastructure resources, but it will correctly populate the OpenStackCluster status with references to the existing ones, avoiding the panic during reconciliation.

This setup effectively replicated the behavior of using the managed-by annotation, without triggering the bug in the current version of CAPO.

This approach has no downsides and enabled faster delivery of the template, avoiding the need to build and maintain a custom k0rdent enterprise build while the CAPO fix is pending upstream.

This PR adds a new template for hosted OpenStack cluster deployment.

The deployment requires you to provide an existing network, subnet and router.

Example of the ClusterDeployment configuration:

apiVersion: k0rdent.mirantis.com/v1beta1
kind: ClusterDeployment
metadata:
  name: ekaz-hosted
  namespace: kcm-system
spec:
  template: openstack-hosted-cp-1-1-1
  credential: openstack-cluster-identity-cred
  config:
    clusterLabels: {}
    clusterAnnotations: {}
    workersNumber: 2
    flavor: kaas.small
    image:
      filter:
        name: ubuntu-20.04
    externalNetwork:
      filter:
        name: "public"
    identityRef:
      name: "openstack-cloud-config"
      cloudName: "openstack"
      region: RegionOne
    network:
      filter:
        name: k8s-clusterapi-cluster-kcm-system-openstack-ekaz
    router:
      filter:
        name: k8s-clusterapi-cluster-kcm-system-openstack-ekaz
    subnets:
    - filter:
        name: k8s-clusterapi-cluster-kcm-system-openstack-ekaz
    ports:
    - network:
        filter:
          name: k8s-clusterapi-cluster-kcm-system-openstack-ekaz

Azure templates configuration made more flexible (docs)

Previously, Azure templates defined image.marketplace as the default in values.yaml. This limited flexibility, as it prevented users from specifying alternative image sources such as image.id or image.computeGallery (admission validation fails when a user provides another image source, since exactly one of marketplace, id, or computeGallery is allowed).

This was problematic when users needed a different image source, for example for alternative architectures such as ARM64 via image.id or image.computeGallery. With v1.2.0, the templates no longer hard-code the marketplace default, so any one of the supported image sources can be supplied.
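As an illustration, here is a hedged sketch of a ClusterDeployment config using a compute gallery image source instead of the marketplace default. The cluster name, template version, VM size, and gallery values are all hypothetical:

```yaml
apiVersion: k0rdent.mirantis.com/v1beta1
kind: ClusterDeployment
metadata:
  name: azure-arm64-example           # hypothetical name
  namespace: kcm-system
spec:
  template: azure-standalone-cp-1-2-0 # hypothetical template version
  credential: azure-cluster-identity-cred
  config:
    vmSize: Standard_D4ps_v5          # an ARM64-capable VM size
    image:
      computeGallery:                 # exactly one of marketplace/id/computeGallery
        gallery: myGallery            # hypothetical gallery name
        name: ubuntu-arm64            # hypothetical image definition
        version: 1.0.0
```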

Existing ARM64 limitations documented

k0rdent can be deployed on ARM64-based infrastructure, but there are some current limitations to be aware of.

Infoblox CAPI Provider Compatibility

The Infoblox Cluster API IPAM provider does not currently support the ARM64 architecture. See the upstream issue for details: Multi-arch support.

As a result, the Infoblox provider will fail to start during the installation process, and the management object will remain in a non-ready state. This blocks the successful deployment of k0rdent on ARM64 platforms.

Workaround

To install k0rdent without the Infoblox provider, you should use a custom management configuration that excludes cluster-api-provider-infoblox from the list of enabled providers. Follow the official configuration guide here: Extended Management Configuration Guide.

This will allow k0rdent to be deployed successfully on ARM64 infrastructure without relying on unsupported components.
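Following that guide, the exclusion can be sketched roughly as below. This is a minimal sketch, assuming the provider-list shape from the Extended Management Configuration Guide; the providers listed besides the excluded one are illustrative:

```yaml
apiVersion: k0rdent.mirantis.com/v1beta1
kind: Management
metadata:
  name: kcm
spec:
  providers:
    # List only the providers you need, and omit
    # cluster-api-provider-infoblox so it is never started on ARM64.
    - name: k0smotron
    - name: cluster-api-provider-aws
    - name: cluster-api-provider-azure
    - name: projectsveltos
```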

Support for overriding k0smotron manager parameters added (docs)

As part of #1369 (PR https://github.com//pull/1392), support for configuring manager parameters was added for all providers except k0smotron, due to a bug in the Cluster API Operator (https://github.com/kubernetes-sigs/cluster-api-operator/issues/787).

Now that the bug has been resolved, we can proceed with adding the same functionality to k0smotron to ensure consistency across all providers.
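By way of illustration, a per-provider manager override might look like the following. This is a hedged sketch: it assumes the config is passed through to the Cluster API Operator's manager spec, and the values shown are hypothetical:

```yaml
apiVersion: k0rdent.mirantis.com/v1beta1
kind: Management
metadata:
  name: kcm
spec:
  providers:
    - name: k0smotron
      config:
        # Assumed to pass through to the Cluster API Operator's
        # manager spec; values are illustrative only.
        manager:
          verbosity: 5
          maxConcurrentReconciles: 10
```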

Changelog

The complete changelog for the k0rdent Cluster Manager repo, covering new features 🚀, notable fixes 🐛, and other notable changes, is linked at the end of this post.

Updates to k0rdent observability and FinOps (KOF)

The k0rdent Observability & FinOps (KOF) module is purpose-built for organizations scaling Kubernetes operations across cloud and on-prem environments, where observability and financial accountability become critical and robust metrics, logging, and cost visibility must be integrated.

  • [kof] Switched to opentelemetry-kube-stack collectors, metrics, dashboards (docs adjusted)

  • [kof] Optional crossNamespace discovery of regional cluster (docs adjusted)

Following up on #173 (comment), #200 and #179

We have four collectors:

  1. Kube cluster collector (cluster-stats)

    Enables as much as possible for kube-level collection.

  2. Node daemon collectors

    Host metrics.

    Additional scrape config for "kubernetes-pods" jobs and kubelet metrics (/cadvisor, /metrics, /metrics/resource, /metrics/probes).

  3. k0s components collector (host network; polls etcd and kube-controller-manager)

    A Prometheus receiver with a scrape config to poll pods such as kube-controller-manager, scheduler, and etcd (launched by k0s).

    A syslog collector that also extracts content using Grok patterns, since, for instance, the default Ubuntu 24.04 log format forwarded by systemd to rsyslog is not in any way syslog-RFC-compliant.

  4. Dedicated target-allocator collector

    Works only against Prometheus objects. The target allocator is enabled independently because it modifies how node collectors receive targets to scrape, and it affects the part of the daemon scrape config that relies on workarounds with environment variables such as OTEL_KUBE_NODE_NAME. We therefore keep these two daemons separate so they don't step on each other's toes.

Collectors are augmented with attribute transformers that populate node/job/instance and/or their OpenTelemetry counterparts (e.g., service.instance.id), so that when we use kube-prometheus-stack dashboards and alerts there is no label discrepancy.

Possible known issues with this version:

  1. Some attributes still need to be renamed ('/hostfs' for /var/log/syslog).

  2. Some collectors are commented out (journald is still alpha; it can be parametrised to be enabled, but requires JSON parsing and ugly hacks with LD_LIBRARY_PATH, and is honestly to be removed in this version).

  3. Additional filtering is required for redundant and very noisy metrics, such as some of the kube-apiserver latency buckets.

  4. Some ServiceMonitors might collect the same metrics as other collectors (e.g., node-exporter for the daemon collector and node-exporter via ServiceMonitor; to be cleaned up as well).

  5. The otel-operator requires an explicit fallbackStrategy setting to collect non-Node ServiceMonitors such as apiserver.

P.S. The otel-kube-stack directory is a sandbox playground to be removed in a subsequent commit.

The community suggested we use the opentelemetry-kube-stack chart (from the opentelemetry-helm-charts repo) as a subchart and move to it completely, as it seems to provide everything these features mention and more.

As part of the KOF 1.2.0 overhaul of metrics collection and representation, we switched from the victoria-metrics-k8s-stack metrics and dashboards to opentelemetry-kube-stack metrics and kube-prometheus-stack dashboards.

KOF data (metrics, logs, and traces) can be collected from each cluster and stored in dedicated locations.

Some of the previously collected metrics have slightly different labels.

  • If consistency of time-series labeling is important, users are advised to relabel the corresponding time series in the metric storage using a retroactive relabeling procedure of their preference.

  • A possible reference solution here would be to use Rules backfilling via vmalert.

  • The labels that would require renaming are these:

    • Replace job="integrations/kubernetes/kubelet" with job="kubelet", metrics_path="/metrics".

    • Replace job="integrations/kubernetes/cadvisor" with job="kubelet", metrics_path="/metrics/cadvisor".

    • Replace job="prometheus-node-exporter" with job="node-exporter".
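As a rough illustration of the vmalert backfilling approach, the renames above could be expressed as recording rules and replayed over historical data. This is a hedged sketch, not a tested procedure; the metric name and replay time range are illustrative:

```yaml
# rules.yaml: re-record old series under the new labels, then replay, e.g.:
#   vmalert -rule=rules.yaml -datasource.url=... -remoteWrite.url=... \
#           -replay.timeFrom=2025-01-01T00:00:00Z \
#           -replay.timeTo=2025-08-01T00:00:00Z
groups:
  - name: kof-label-migration
    rules:
      # Illustrative: rewrite cadvisor series to
      # job="kubelet", metrics_path="/metrics/cadvisor"
      - record: container_cpu_usage_seconds_total
        expr: |
          label_replace(
            label_replace(
              container_cpu_usage_seconds_total{job="integrations/kubernetes/cadvisor"},
              "job", "kubelet", "job", ".*"),
            "metrics_path", "/metrics/cadvisor", "metrics_path", ".*")
```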

Also, to upgrade from the cert-manager-1-16-4 template to cert-manager-v1-16-4, please apply this patch to the management cluster:

kubectl apply -f - <<EOF
apiVersion: k0rdent.mirantis.com/v1beta1
kind: ServiceTemplateChain
metadata:
  name: patch-cert-manager-v1-16-4-from-1-16-4
  namespace: kcm-system
  annotations:
    helm.sh/resource-policy: keep
spec:
  supportedTemplates:
    - name: cert-manager-v1-16-4
    - name: cert-manager-1-16-4
      availableUpgrades:
        - name: cert-manager-v1-16-4
EOF

Check out the full release notes here.

Full Changelog: v1.1.0...v1.2.0

Learn More

Join the k0rdent Community

Platform engineers today face increasing demands—but they don’t have to tackle them alone. k0rdent is 100% open-source and community-driven, offering the flexibility, tools, and ecosystem needed to manage distributed infrastructure efficiently.

Built by an international team of passionate developers, k0rdent thrives on collaboration. We welcome contributions and ideas to expand and improve the project.

Get Involved:

  • Drop a star to support the k0rdent project

  • Explore the k0rdent Community repo on GitHub

  • Join the #k0rdent channel on CNCF Community Slack (sign up for CNCF Slack, then join #k0rdent)

  • Sign up via our Community Invitation Form to attend Team k0rdent’s regular Office Hours

Be part of the movement—let’s build the future of Kubernetes-native infrastructure together!

Getting Started

You can try out k0rdent v1.2.0 today by following the QuickStart guide and deployment instructions. For feedback, questions, or support, please reach out via email, Slack, or GitHub Issues. We look forward to hearing about your experience.

Prithvi Raj

Community Manager & Developer Advocate

Mirantis simplifies Kubernetes.

From the world’s most popular Kubernetes IDE to fully managed services and training, we can help you at every step of your K8s journey.

Connect with a Mirantis expert to learn how we can help you.

CONTACT US