Hepapi Blog - hepapi.com

What’s New in Kubernetes v1.34: KYAML, Smarter Traffic Routing, and More

Written by Bora Fenari Köstem | Oct 19, 2025 4:03:52 PM

Kubernetes v1.34 is here, and it’s packed with enhancements that improve usability, observability, and flexibility for cluster operators and developers alike. While there are many updates, two stand out for everyday Kubernetes users — KYAML, a new safer YAML dialect for Kubernetes manifests, and PreferSameZone / PreferSameNode for smarter service traffic distribution. Let’s break down the key highlights.

Dynamic Resource Allocation (DRA) Goes Stable

Fully stable support for flexible GPU/custom hardware allocation using ResourceClaim and DeviceClass.
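As a rough sketch of the consumption side, a Pod references a pre-created ResourceClaim through `spec.resourceClaims` and per-container `resources.claims` (the names `inference-pod` and `single-gpu` below are hypothetical, and surrounding field names may vary slightly across resource.k8s.io API versions):

```yaml
# Hypothetical sketch: a Pod consuming a pre-created ResourceClaim via DRA.
# "single-gpu" is an assumed ResourceClaim name, not from the release docs.
apiVersion: v1
kind: Pod
metadata:
  name: inference-pod
spec:
  containers:
  - name: app
    image: inference:v1
    resources:
      claims:
      - name: gpu              # references the entry in spec.resourceClaims
  resourceClaims:
  - name: gpu
    resourceClaimName: single-gpu
```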

ServiceAccount Tokens for Image Pulls (Beta)

Short-lived, Pod-scoped ServiceAccount tokens can now be used for pulling images securely — no more long-lived Secrets.

KYAML — A Safer, Cleaner Kubernetes YAML

YAML has been both a blessing and a curse for Kubernetes users. It’s human-readable, but whitespace quirks, implicit type conversions, and inconsistent quoting can lead to frustrating bugs. JSON solves some of these issues, but lacks comments and flexibility.
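To make the implicit-typing pitfall concrete, here is a toy resolver (illustrative only, not a real YAML parser) that mimics YAML 1.1's scalar typing rules — the same rules that silently turn an unquoted `yes` into a boolean:

```python
# Toy illustration of YAML 1.1 implicit scalar typing (not a real parser).
# Unquoted scalars get "resolved" to bool/int when they match certain patterns.

YAML11_BOOLS = {
    "yes": True, "no": False, "true": True, "false": False,
    "on": True, "off": False,
}

def resolve_scalar(raw: str):
    """Mimic how a YAML 1.1 loader types an unquoted scalar."""
    lowered = raw.lower()
    if lowered in YAML11_BOOLS:
        return YAML11_BOOLS[lowered]      # "yes" silently becomes True
    try:
        return int(raw)                   # "3" silently becomes the int 3
    except ValueError:
        return raw                        # everything else stays a string

print(resolve_scalar("yes"))     # True -- a boolean, not the string "yes"
print(resolve_scalar("3"))       # 3 -- an int, not a string
print(resolve_scalar("apple"))   # apple
```

Quoting the value (`"yes"`) sidesteps the resolver entirely, which is exactly what KYAML enforces.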

Enter KYAML — a strict YAML subset designed specifically for Kubernetes manifests and Helm charts.

Key benefits of KYAML:

  • Always double-quotes strings → No accidental type confusion ("yes" becoming true).
  • Keys remain unquoted unless necessary → Keeps things readable while avoiding ambiguity.
  • Consistent use of {} for mappings and [] for lists → Clearer, less error-prone structure.
  • Supports comments and trailing commas → Developer-friendly without breaking JSON compatibility.

With kubectl v1.34, you can run:

kubectl get pods -o kyaml

to output manifests in KYAML format. All KYAML files are valid YAML, so you can mix and match the two without breaking compatibility.

 

Example: YAML vs KYAML

Traditional YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
data:
  enabled: yes
  retries: 3
  items:
    - apple
    - orange

KYAML

apiVersion: "v1"
kind: "ConfigMap"
metadata: {
  name: "test-config",
}
data: {
  enabled: "yes",
  retries: "3",
  items: ["apple", "orange"],
}

Notice how KYAML:

  • Quotes all strings ("yes" stays a string, not a boolean)
  • Uses {} for mappings and [] for lists
  • Allows trailing commas for easier editing

 

PreferSameZone & PreferSameNode — Smarter Service Traffic Distribution

Service traffic routing in Kubernetes gets a major usability boost in v1.34 with the spec.trafficDistribution field enhancements.

Previously, you could use PreferClose to send traffic to the nearest endpoint, but now two new options give you finer control:

  • PreferSameZone — Prioritizes routing traffic to endpoints in the same availability zone as the client (similar to the old PreferClose).
  • PreferSameNode — Routes traffic to endpoints on the same node as the client when possible, reducing latency and cross-node network traffic.

These are especially useful in multi-zone clusters and for latency-sensitive workloads like real-time apps, gaming servers, or AI inference pods.
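Opting a Service into node-local routing is a one-line change to the Service spec (the Service name and selector below are hypothetical):

```yaml
# Hypothetical Service preferring endpoints on the client's own node.
apiVersion: v1
kind: Service
metadata:
  name: inference-svc
spec:
  selector:
    app: inference
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferSameNode
```

If no same-node endpoint is available, traffic still falls back to other endpoints, so this is a preference, not a hard constraint.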

 

Deployment Pod Replacement Policy (Alpha)

A new spec.podReplacementPolicy field for Deployments gives you fine-grained control over rollout behavior.

  • TerminationStarted → Start new Pods as soon as old Pods begin shutting down (faster rollouts, higher peak resource usage).
  • TerminationComplete → Wait for old Pods to fully stop before creating new ones (slower rollouts, stable resource usage).

Example:

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 5
  podReplacementPolicy: TerminationStarted
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: app
        image: myapp:v2

Use TerminationStarted when you want zero downtime and can afford extra resource usage during rollout.

Production-Ready Tracing for Kubelet & API Server

Kubernetes now supports stable OpenTelemetry tracing in both the kubelet and API server, giving deep visibility into workload operations from control plane to node.

Example scenario:

  • You notice Pods taking 15s to start.
  • With tracing enabled, you can see:
    • API Server received the Pod create request at 0s
    • Scheduler assigned the Pod at 1s
    • Kubelet pulled the image from 2s to 13s
    • Container runtime started the container at 14s
  • This instantly identifies image pulling as the bottleneck.

Enable example (API Server):

 

kube-apiserver \
  --tracing-config-file=/etc/kubernetes/tracing.yaml

Example Config File:

 

apiVersion: apiserver.config.k8s.io/v1
kind: TracingConfiguration
endpoint: localhost:4317
samplingRatePerMillion: 100

 

Enable example (Kubelet):

kubelet \
  --config=/var/lib/kubelet/config.yaml

Unlike the API server, the kubelet has no dedicated tracing flag; it reads its tracing settings from the KubeletConfiguration file passed via --config.

 

Example Config File:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletTracing: true
tracing:
  endpoint: localhost:4317
  samplingRatePerMillion: 100
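Both components export spans over OTLP/gRPC to whatever is listening on port 4317. A minimal OpenTelemetry Collector pipeline that receives those spans and logs them for inspection might look like this (a sketch; the exporter choice is up to you):

```yaml
# Minimal OpenTelemetry Collector config (sketch) that receives the
# OTLP traces emitted above and prints them for inspection.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```

In production you would typically swap the debug exporter for a tracing backend such as Jaeger or Tempo.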

 

HPA Configurable Tolerance (Beta)

The Horizontal Pod Autoscaler now supports per-HPA tolerance settings instead of the fixed 10% default. This is useful for large-scale workloads where even small metric changes can cause massive scale adjustments.

Example:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 5
  maxReplicas: 50
  behavior:
    scaleUp:
      tolerance: 0.05   # 5% change needed before scaling up
    scaleDown:
      tolerance: 0.20   # 20% change needed before scaling down

Here, the scale-up tolerance is a tight 5% so traffic spikes trigger scaling quickly, while the scale-down tolerance is a looser 20% to prevent flapping.
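To see why tolerance matters, here is a sketch in Python of the HPA's core scaling rule with a configurable tolerance (a simplification of the real controller, which also applies stabilization windows and rate limits):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     tolerance: float) -> int:
    """Simplified HPA rule: scale by the metric ratio unless the
    deviation from the target is within the configured tolerance."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas          # within tolerance: no change
    return math.ceil(current_replicas * ratio)

# With 20 replicas at 73% CPU against a 70% target (ratio ~1.043):
print(desired_replicas(20, 73, 70, tolerance=0.10))  # 20 -- 4.3% deviation, ignored
print(desired_replicas(20, 73, 70, tolerance=0.02))  # 21 -- exceeds 2%, scales up
```

At large replica counts even a few percent of metric noise translates into many Pods, which is exactly when a wider tolerance pays off.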

Final Thoughts

Kubernetes v1.34 continues the project’s trend of giving operators more fine-grained control while improving developer experience. KYAML will be a game-changer for manifest authors tired of whitespace and quoting pitfalls, while PreferSameZone and PreferSameNode will help optimize performance in geographically or topologically diverse clusters.

Whether you’re a cluster admin managing GPUs or a developer fine-tuning deployments, there’s something in this release for you.