Table of contents
- Introduction
- Deprecation and Removal Features in Kubernetes v1.29
- 1. Deprecation of .status.nodeInfo.kubeProxyVersion:
- 2. Deprecation of Legacy In-Tree Cloud Provider Integrations:
- 3. Deprecation of Legacy Service Account Token Cleanup:
- 4. Deprecation of DisableCloudProviders and DisableKubeletCloudCredentialProviders Feature Gates:
- 5. Deprecation of NodeLegacyController in Node Lifecycle:
- Improvements that graduated to stable in Kubernetes v1.29
- New alpha features
Introduction
The release of Kubernetes v1.29 marks another milestone in the continuous evolution of the Kubernetes ecosystem. This release, characterized by 49 impactful enhancements, demonstrates the commitment to delivering excellence within the Kubernetes development cycle. The strength of our community, known for its vibrancy and collaboration, is prominently showcased as we unveil a host of features, with 11 graduating to Stable, 19 entering Beta, and another 19 achieving the esteemed Alpha status.
As Kubernetes matures, each release brings a wealth of innovations, improvements, and increased stability. Let's delve into the key features that have emerged from the crucible of development, signaling the ever-growing capabilities of Kubernetes v1.29.
Join us on this exploration of the latest stable, beta, and alpha features, and discover how Kubernetes v1.29 is shaping the future of container orchestration.
Deprecation and Removal Features in Kubernetes v1.29
In Kubernetes v1.29, several features have been marked for deprecation or removal, signaling changes in the ecosystem that users should be aware of. Understanding these changes is crucial for maintaining compatibility, adopting best practices, and ensuring a smooth transition to newer technologies. Let's delve into the details of the deprecated and removed features in this release:
1. Deprecation of .status.nodeInfo.kubeProxyVersion:
Overview:
- The field .status.nodeInfo.kubeProxyVersion for Node objects is deprecated in v1.29. This field historically stored information about the kube-proxy version.
Rationale:
- The kubelet historically managed this field, but it did not have accurate information about the kube-proxy version. Therefore, relying on this field for kube-proxy version information is not reliable.
Impact:
- Client software relying on this field for kube-proxy version should transition to alternative methods.
Recommendation:
- Users are advised to use alternative means to obtain kube-proxy version information, for example by checking the version reported by the kube-proxy binary or container image itself; the excerpt below shows where the deprecated field appears on a Node object.
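For reference, here is a hedged excerpt of a Node object's status showing where the deprecated field lives; the version values are purely illustrative:
# Excerpt of a Node object's status (illustrative values)
status:
  nodeInfo:
    kubeletVersion: v1.29.0
    # Deprecated in v1.29: the kubelet cannot guarantee this value is accurate,
    # so do not rely on it for the actual kube-proxy version.
    kubeProxyVersion: v1.29.0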
2. Deprecation of Legacy In-Tree Cloud Provider Integrations:
Overview: In Kubernetes v1.29, a significant change is introduced with the deprecation of the in-tree cloud provider integrations. These integrations pertain to popular cloud platforms such as Azure, GCE (Google Compute Engine), AWS (Amazon Web Services), OpenStack, and vSphere.
Rationale: The decision to deprecate these in-tree cloud provider integrations is driven by an overarching goal within the Kubernetes community—to enhance extensibility and maintainability. By deprecating these integrations, Kubernetes aims to encourage a more modular and adaptable approach, promoting better extensibility for future developments.
Recommendation: The recommended course of action for users is to transition from the deprecated in-tree cloud provider integrations to external cloud controller managers. External cloud controller managers are standalone components that can be managed separately from the core Kubernetes codebase. They offer a more flexible and modular approach, allowing users to stay up-to-date with the latest cloud provider features without waiting for Kubernetes releases.
Example Scenario: Consider a Kubernetes cluster that initially used in-tree AWS integration. With the deprecation, users can adopt an external AWS cloud controller manager, which can be updated independently of the main Kubernetes version. This ensures that AWS-related functionalities remain current and aligned with AWS API changes.
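As a rough sketch of what that migration typically involves, the kubelet is started with --cloud-provider=external so the out-of-tree cloud-controller-manager can take over node initialization. The kubeadm excerpt below is one common way to set this; it assumes a kubeadm-managed cluster and does not show deploying the external controller itself:
# Hypothetical kubeadm excerpt: hand cloud-specific work to an external
# cloud-controller-manager by starting the kubelet with the "external" provider.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external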
3. Deprecation of Legacy Service Account Token Cleanup:
Overview:
- Legacy service account tokens are marked for cleanup in Kubernetes v1.29.
Rationale:
- Cleaning up legacy service account tokens is essential for security reasons and aligning with evolving best practices.
Impact:
- Users with applications relying on deprecated service account tokens should update their configurations.
Recommendation:
- Remove or update applications that still rely on legacy, long-lived service account token Secrets, and prefer short-lived, automatically rotated tokens (for example via a projected serviceAccountToken volume, sketched below) to ensure security and compatibility.
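A minimal sketch of the modern alternative, assuming an application that reads its token from a file: a projected serviceAccountToken volume gives the pod a short-lived, audience-scoped token instead of a legacy Secret-based one. The names, audience, and expiry below are illustrative.
# Pod using a projected, time-bound service account token (illustrative values)
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: my-app
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: app-token
          expirationSeconds: 3600
          audience: my-app-audience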
4. Deprecation of DisableCloudProviders and DisableKubeletCloudCredentialProviders Feature Gates:
Overview:
- Feature gates DisableCloudProviders and DisableKubeletCloudCredentialProviders are deprecated in v1.29.
Rationale:
- These feature gates were associated with the legacy in-tree cloud provider integrations.
Impact:
- Users relying on these feature gates should transition to alternative configurations.
Recommendation:
- Switch to external cloud controller managers and adjust configurations accordingly.
5. Deprecation of NodeLegacyController in Node Lifecycle:
Overview:
- NodeLegacyController, a part of the node lifecycle, is deprecated.
Rationale:
- This change is part of a broader initiative—to separate node lifecycle management from taint management. By doing so, Kubernetes becomes more modular, allowing each piece of the puzzle to evolve independently. The move toward modularity enhances adaptability and ensures that each component can be updated without disrupting the entire system.
Impact:
- Users relying on NodeLegacyController should transition to alternative controllers.
Recommendation:
- Embrace the new separation and adapt to changes in node lifecycle management.
Improvements that graduated to stable in Kubernetes v1.29
ReadWriteOncePod PersistentVolume Access Mode:
- ReadWriteOncePod was introduced in the v1.22 release as an Alpha feature for CSI drivers and has now graduated to stable. Before v1.22, Kubernetes had three volume access modes: ReadWriteOnce (RWO), ReadWriteMany (RWX), and ReadOnlyMany (ROX).
ReadWriteOncePod (RWOP) ensures that only a single pod in the entire cluster can read from or write to that PV/PVC.
# Example PersistentVolume using the ReadWriteOncePod access mode
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOncePod
  csi:
    driver: example.csi.k8s.io   # hypothetical CSI driver; RWOP is only supported for CSI volumes
    volumeHandle: vol-1
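A matching PersistentVolumeClaim requests the same access mode; the claim and storage class names here are assumptions for illustration:
# PVC requesting the ReadWriteOncePod access mode (illustrative names)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim
spec:
  accessModes:
  - ReadWriteOncePod
  resources:
    requests:
      storage: 5Gi
  storageClassName: example-csi-sc   # hypothetical storage class backed by a CSI driver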
Node volume expansion Secret support for CSI drivers:
In Kubernetes, when you need to expand the storage capacity of a volume on a node (like making a disk larger), some storage drivers require special credentials or secrets for certain scenarios. For example, if your storage is encrypted, you might need to provide a passphrase during expansion. Additionally, some drivers need credentials to communicate with the storage backend for validations.
To address this, Kubernetes introduced the CSI Node Expand Secret feature in version 1.25. This feature allows CSI drivers to include an optional secret when requesting volume expansion on a node. This secret is essential for operations like providing passphrases or credentials needed for secure communication with the storage system. As of Kubernetes version 1.29, this feature is generally available and can be used for secure and authenticated volume expansions.
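A hedged sketch of how a StorageClass can point a CSI driver at such a secret, assuming a hypothetical driver name and an existing Secret called expansion-secret; the csi.storage.k8s.io/node-expand-secret-name and -namespace parameter keys are the ones this feature consumes:
# StorageClass passing a node-expansion secret to a (hypothetical) CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-expandable
provisioner: example.csi.k8s.io            # hypothetical CSI driver
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/node-expand-secret-name: expansion-secret
  csi.storage.k8s.io/node-expand-secret-namespace: default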
KMS v2 Encryption at Rest Generally Available:
In Kubernetes, ensuring the security of your cluster involves encrypting stored API data to protect it. Kubernetes provides Key Management Service (KMS) as an interface for external key services to handle encryption using keys. With the release of Kubernetes version 1.29, KMS v2 has become a stable and reliable feature. KMS v2 brings improvements in performance, key rotation, health checks, status monitoring, and observability. These enhancements offer users a more robust solution for encrypting all resources in their Kubernetes clusters.
If you're using KMS, it's recommended to switch to KMS v2 as it is now the default stable version. The older KMS v1 feature gate is disabled by default, and if you still want to use it, you'll need to explicitly enable it. This ensures that users benefit from the latest advancements and security features provided by KMS v2.
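For reference, a minimal sketch of an EncryptionConfiguration using a KMS v2 provider; the plugin name and socket path are assumptions and must match whatever KMS plugin you actually deploy:
# kube-apiserver encryption configuration using a KMS v2 provider (illustrative values)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - kms:
      apiVersion: v2
      name: my-kms-plugin                       # hypothetical plugin name
      endpoint: unix:///var/run/kms-plugin.sock # hypothetical socket path
  - identity: {}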
New alpha features
Pod Affinity/Anti-Affinity Enhancement:
- Purpose: When updating applications in a Kubernetes cluster, the alpha enhancement to PodAffinity/PodAntiAffinity (the new matchLabelKeys field on affinity terms) aims to make placement during rolling updates more accurate. It lets affinity or anti-affinity rules consider only pods from the relevant rollout (for example, by keying on pod-template-hash), giving better control over how pods are distributed across your cluster. A sketch follows below.
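A minimal sketch, assuming the alpha MatchLabelKeysInPodAffinity feature gate is enabled; the labels and topology key are illustrative:
# Pod template excerpt using matchLabelKeys in pod anti-affinity (illustrative)
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app
      topologyKey: kubernetes.io/hostname
      matchLabelKeys:
      - pod-template-hash   # only pods from the same rollout are considered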
nftables Backend for kube-proxy:
- Purpose: The default kube-proxy implementation on Linux has long been based on iptables, but iptables has well-known performance and scalability problems, and some Linux distributions are phasing it out. Kubernetes v1.29 therefore introduces a new alpha kube-proxy backend built on nftables, a more modern and actively developed packet filtering system. A configuration sketch follows below.
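A hedged sketch of opting into the alpha backend, assuming kube-proxy is configured via a KubeProxyConfiguration file and the NFTablesProxyMode feature gate is enabled:
# kube-proxy configuration selecting the alpha nftables backend (illustrative)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: nftables
featureGates:
  NFTablesProxyMode: true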
APIs for Managing Service IP Ranges:
- Purpose: Kubernetes Services provides a way to expose applications, and these services have virtual IP addresses. With the new API objects, ServiceCIDR and IPAddress, users can now dynamically manage the IP ranges allocated for services without the need to restart the kube-apiserver. This brings flexibility and avoids disruptions caused by IP exhaustion or renumbering.
Image Pull per Runtime Class:
Purpose: In Kubernetes v1.29, a new feature allows more granular control over pulling container images based on the runtime class of the Pod. Instead of relying on default behavior, where the platform of the host dictates the pulled image, this enhancement lets users pull images based on a specified runtime class. This is particularly useful for scenarios like VM-based containers, such as Windows Hyper-V.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: windows-container
  containers:
  - name: my-container
    image: my-windows-image:latest
Here, runtimeClassName specifies the runtime class for the Pod, allowing the system to pull the appropriate image for that runtime class.
In-Place Updates for Windows Pod Resources:
Purpose: For Windows Pods, Kubernetes now supports the alpha feature of in-place updates for resource requests and limits. This means you can modify the desired compute resources for a running Windows Pod without having to restart it. This flexibility is beneficial for scenarios where dynamic resource adjustments are required without interrupting the application running in the container.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: my-container
    image: my-windows-app:latest
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
You can dynamically update the resources of the running Windows Pod without restarting it. For instance, adjust the memory or CPU limits without interrupting the application.
Some Other Features:
1. ServiceCIDR: Taking Control of Service IP Address Ranges
Feature Description: ServiceCIDR makes its debut in Kubernetes v1.29, offering administrators a powerful tool to manage IP address ranges assigned to services within the cluster.
Why It Matters:
Control Over IP Addresses:
- ServiceCIDR provides administrators with precise control over the IP addresses assigned to services. This granularity is invaluable for ensuring that IP allocations align with specific network requirements and policies.
Enhanced Network Management:
- By introducing ServiceCIDR, Kubernetes takes a stride toward enhanced network management capabilities. This feature streamlines the administration of service IP addresses, facilitating a more organized and efficient network architecture.
Illustration:
# Example ServiceCIDR object in Kubernetes (networking.k8s.io/v1alpha1, alpha in v1.29;
# requires the MultiCIDRServiceAllocator feature gate)
apiVersion: networking.k8s.io/v1alpha1
kind: ServiceCIDR
metadata:
  name: extra-service-range
spec:
  cidrs:
  - 10.0.0.0/24
2. PodLifecycle PreStop Sleep Action: A Graceful Farewell to Pods
Feature Description: Kubernetes v1.29 extends the Pod lifecycle with a new Sleep action for the PreStop hook (alpha, behind the PodLifecycleSleepAction feature gate). The PreStop hook itself is the long-standing mechanism that lets users execute custom logic just before a pod undergoes termination; the new action lets a container simply pause for a fixed duration instead of running a command.
Why It Matters:
Facilitates Graceful Termination:
- The PreStop hook facilitates the graceful termination of pods by enabling users to perform cleanup tasks just before shutdown. This ensures that critical processes or connections can gracefully conclude, preventing data loss or service disruptions.
Opportunity for Application Cleanup:
- By allowing custom logic execution before termination, PodLifecycle ensures that applications have the opportunity to shut down gracefully. This is particularly vital for applications that may need to save state, complete transactions, or release resources before exiting.
Illustration:
# Example PodLifecycle configuration with PreStop hook
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: my-container
    image: my-image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo 'Stopping gracefully'"]
Conclusion:
Kubernetes 1.29 is not just a release; it's a testament to the vibrancy of an open-source community dedicated to shaping the future of container orchestration. Whether you're a seasoned Kubernetes user or just stepping into the world of containers, version 1.29 invites you to partake in the journey, explore the features, and witness firsthand the evolution of a platform that continues to redefine the boundaries of possibility in the realm of cloud-native computing.