Kubernetes 1.30 Available, New Version of the Container Orchestrator


Kubernetes 1.30, named "Uwubernetes," has been released as the new stable version of the well-known container orchestrator, which originated at Google and now lives under the Cloud Native Computing Foundation, part of The Linux Foundation. The developers have described this release as "the most beautiful" in the history of the software and have announced a total of 45 new features: 17 in stable phase, 18 in beta, and 10 in alpha. Here we will only mention the most important ones.

The first notable update in Kubernetes 1.30 is the graduation to stable of the robust VolumeManager reconstruction after a kubelet restart. This refactoring of the volume manager allows the kubelet to populate additional information about how existing volumes are mounted during the kubelet's startup. In general, this makes volume cleanup more robust after a restart of the kubelet or of the machine.

Also arriving in stable phase is the prevention of unauthorized volume mode conversion during volume restoration. The control plane now blocks unauthorized changes to the volume mode when a snapshot is restored into a PersistentVolume. Cluster administrators must grant permission to the appropriate trusted identities if this kind of change needs to be allowed at restore time.
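As a sketch of how an administrator might opt in, the feature is controlled through an annotation on the VolumeSnapshotContent object; the driver name and snapshot handle below are hypothetical:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: example-snapshot-content
  annotations:
    # Grants permission to change the volume mode when restoring from
    # this snapshot; without it, the conversion is rejected.
    snapshot.storage.kubernetes.io/allow-volume-mode-change: "true"
spec:
  deletionPolicy: Delete
  driver: example.csi.k8s.io          # hypothetical CSI driver
  source:
    snapshotHandle: snap-0123456789   # hypothetical snapshot handle
  volumeSnapshotRef:
    name: example-snapshot
    namespace: default
```

Because the annotation lives on the cluster-scoped VolumeSnapshotContent rather than the namespaced snapshot, only identities the administrator trusts can enable the conversion.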

Another feature that has reached stable status in Kubernetes 1.30 is Pod Scheduling Readiness. It lets the orchestrator hold back scheduling of a pod that has been defined while the cluster does not yet have the resources provisioned to bind that pod to a node. This adds a custom control point over when a Pod becomes eligible for scheduling, which users can leverage to implement quota mechanisms, security controls, and more.

Marking pods as not yet ready for scheduling reduces the scheduler's workload, since it avoids churning on pods that cannot or should not yet be placed on the cluster's nodes. If cluster auto-scaling is active, using scheduling gates also reduces load on the scheduler and can lead to cost savings: without scheduling gates, the autoscaler might provision a node that is not actually needed.
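A minimal sketch of a gated Pod, assuming a hypothetical gate name managed by an external controller:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
    # The scheduler ignores this Pod until every gate is removed.
    - name: example.com/quota-check   # hypothetical gate name
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```

Once the controller that owns the gate verifies, for example, that quota is available, it removes the entry from spec.schedulingGates and the Pod becomes eligible for scheduling.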

Minimum domains for PodTopologySpread constraints, via the minDomains parameter, are another feature that graduates to stable with the release of Kubernetes 1.30. The parameter defines the minimum number of domains over which pods should be spread, and it is designed to be used with the Cluster Autoscaler, which can provision nodes in new domains.
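For illustration, a constraint asking for at least three zones might look like this (the app label is hypothetical; minDomains only takes effect with whenUnsatisfiable set to DoNotSchedule):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: web   # hypothetical label
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      minDomains: 3   # treat fewer than 3 zones as a skew violation
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```

When fewer than three zones hold matching pods, the pod stays pending, which in turn can prompt the Cluster Autoscaler to bring up nodes in an additional zone.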

The last change to arrive in stable phase is that the Kubernetes repository now uses Go workspaces. This should not impact end users, but it is relevant to developers of downstream projects, as the shift to Go workspaces has triggered some significant changes in the flags of various tools from k8s.io/code-generator.

In terms of features that have entered the beta phase, the first is NodeLogQuery, which must be enabled for the node and requires the kubelet configuration options enableSystemLogHandler and enableSystemLogQuery to be set to true. On Linux, the assumption is that service logs are available via journald; on Windows, that they are available from the application log provider.
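The required kubelet settings could be sketched as follows in a KubeletConfiguration file (a sketch; the feature gate must also be enabled on the node):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true
enableSystemLogHandler: true
enableSystemLogQuery: true
```

With this in place, node service logs can be fetched through the API server's node proxy, for example with kubectl get --raw "/api/v1/nodes/<node-name>/proxy/logs/?query=kubelet", where <node-name> is a placeholder for the target node.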

Another feature reaching beta phase is CRD Validation Ratcheting (CRDValidationRatcheting), which applies to all CustomResourceDefinitions in the cluster. With it, the API server accepts updates to resources that are invalid after the update, provided that each part of the resource that fails validation was not changed by the update operation.

Contextual Logging, also in beta phase, allows developers and operators to inject customizable, correlated contextual details such as service names and transaction IDs into logs through WithValues and WithName. This simplifies the correlation and analysis of log data across distributed systems, significantly improving the efficiency of troubleshooting.

The LoadBalancerIPMode feature, which controls load balancer behavior, is now enabled by default. It allows setting .status.loadBalancer.ingress.ipMode for a Service whose type is LoadBalancer; this field specifies how the load balancer's IP behaves.
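An illustrative status fragment, as a cloud provider's controller might set it (the address is an example):

```yaml
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10   # example address
        # "VIP" (the default): traffic arrives with the destination set
        # to the load balancer's IP and port.
        # "Proxy": the load balancer delivers traffic to the node or
        # pod directly.
        ipMode: Proxy
```

The field is informational for clients and tooling; the cloud provider integration, not the user, is expected to populate it.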

Structured Authentication Configuration is a first step toward correcting certain limitations of the container orchestrator, such as the inability to use multiple authenticators of the same type or to change the configuration without restarting the API server. It also provides a more extensible way to configure the authentication chain in Kubernetes.
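A minimal sketch of such a configuration file, passed to the API server via its --authentication-config flag (the issuer URL and audience below are hypothetical):

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://issuer.example.com   # hypothetical OIDC issuer
      audiences:
        - my-cluster                    # hypothetical audience
    claimMappings:
      username:
        claim: sub
        prefix: ""
```

Because jwt is a list, several token issuers can coexist, which is one of the limitations of the old command-line-flag approach that this feature addresses.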

And we close the beta features with the Structured Authorization Configuration, which aims to provide a more structured and versatile configuration of the authorization chain.
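A sketch of what such an authorization configuration might contain, chaining a webhook with the built-in RBAC authorizer (the webhook name and kubeconfig path are hypothetical):

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: policy-webhook   # hypothetical name
    webhook:
      timeout: 3s
      subjectAccessReviewVersion: v1
      failurePolicy: NoOpinion   # fall through to RBAC on failure
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig  # hypothetical
  - type: RBAC
    name: rbac
```

Authorizers are consulted in order, so placing the webhook before RBAC lets an external policy engine take precedence while RBAC remains the fallback.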

Moving on to the alpha phase, the first thing highlighted by the project is that version 1.27 included an optimization that applies SELinux labels to the contents of volumes in constant time, which Kubernetes achieves by using a mount option. The novelty in Kubernetes 1.30 is that this SELinux mount option support is extended to all volumes as an alpha feature, while the more limited support for ReadWriteOncePod volumes, introduced in version 1.27, is in beta phase.

Recursive read-only mounts are another feature arriving in alpha phase and add a new layer of protection for data: they allow volumes and their submounts to be configured as read-only, preventing accidental modifications.
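As a sketch, the new recursiveReadOnly field sits alongside readOnly in the volume mount (the host path is hypothetical, and the field requires readOnly to be true):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rro-example
spec:
  volumes:
    - name: data
      hostPath:
        path: /var/data   # hypothetical host path containing submounts
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true
          recursiveReadOnly: Enabled   # submounts become read-only too
```

With plain readOnly, only the top-level mount is protected; recursiveReadOnly: Enabled extends that protection to every submount beneath it.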

Indexed Jobs support .spec.successPolicy as of Kubernetes 1.30 to define when a Job can be declared successful based on successful pods. The policy supports two types of criteria: succeededIndexes, which declares the Job successful once the listed indexes have succeeded even if others failed, and succeededCount, which declares the Job successful once the number of successful indexes reaches the given count.
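A sketch of an Indexed Job using the first criterion (successPolicy only applies when completionMode is Indexed; the image is a placeholder):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: success-policy-example
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed   # required for successPolicy
  successPolicy:
    rules:
      # Declare the Job successful once indexes 0, 2 and 3 succeed,
      # regardless of the remaining indexes.
      - succeededIndexes: "0,2-3"
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.k8s.io/pause:3.9   # placeholder image
```

This pattern suits workloads such as simulations where only designated "leader" indexes determine the overall outcome.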

The spec.trafficDistribution field has been introduced in the Kubernetes service in alpha phase and allows expressing preferences on how traffic should be routed to the service endpoints. While traffic policies focus on strict semantic guarantees, traffic distribution allows expressing preferences, which can help optimize performance, cost, or reliability. The field can be used by enabling the ServiceTrafficDistribution feature gate for the cluster and all nodes.

Continuing with traffic distribution, the PreferClose value indicates a preference for routing traffic to endpoints that are topologically close to the client. Setting this value allows implementations to make different trade-offs.
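A sketch of a Service expressing that preference (the selector and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web   # illustrative selector
  ports:
    - port: 80
  # Prefer endpoints topologically close to the client (e.g. same zone)
  # when such endpoints are available.
  trafficDistribution: PreferClose
```

Unlike the stricter traffic policies, this is only a hint: if no close endpoints exist, traffic still reaches the service rather than being dropped.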

And the last notable novelty of Kubernetes 1.30, also in alpha phase, is the new API for storage version migration (StorageVersionMigration). The orchestrator relies on API data being actively rewritten to support certain maintenance activities related to storage at rest, and the new API allows such migrations of stored objects to be requested declaratively.
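A sketch of requesting a migration through the new alpha API (the resource chosen and the object name are illustrative):

```yaml
apiVersion: storagemigration.k8s.io/v1alpha1
kind: StorageVersionMigration
metadata:
  name: secrets-migration   # illustrative name
spec:
  resource:
    group: ""        # core API group
    version: v1
    resource: secrets
```

Creating such an object asks the control plane to rewrite all stored objects of the given resource, for example after rotating an encryption-at-rest key or changing the preferred storage version.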

And these are all the notable new features of Kubernetes 1.30. All details are published in the official announcement and release notes, while the software's source code can be obtained from the release section of the project's GitHub repository.

