Place GKE Pods in specific zones


This page shows you how to tell Google Kubernetes Engine (GKE) to run your Pods on nodes in specific Google Cloud zones using zonal topology. This type of placement is useful in situations such as the following:

  • Pods must access data that's stored in a zonal Compute Engine persistent disk.
  • Pods must run alongside other zonal resources such as Cloud SQL instances.

You can also use zonal placement with topology-aware traffic routing to reduce latency between clients and workloads. For details about topology-aware traffic routing, see Topology aware routing.
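For example, on clusters running a Kubernetes version that supports the trafficDistribution field on Services, you can ask Kubernetes to prefer endpoints that are topologically close to the client. The following sketch assumes a workload whose Pods carry the illustrative label app: my-app; the Service name is also illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-zonal-service
    spec:
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 80
      # Prefer routing traffic to endpoints in the client's own zone
      # when such endpoints exist and are healthy.
      trafficDistribution: PreferClose
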

Using zonal topology to control Pod placement is an advanced Kubernetes mechanism that you should use only if your situation requires Pods to run in specific zones. In most production environments, we recommend that you use regional resources, which is the GKE default, when possible.

Zonal placement methods

Zonal topology is built into Kubernetes with the topology.kubernetes.io/zone: ZONE node label. To tell GKE to place a Pod in a specific zone, use one of the following methods:

  • nodeAffinity: Specify a nodeAffinity rule in your Pod specification for one or more Google Cloud zones. This method is more flexible than a nodeSelector because it lets you place Pods in multiple zones.
  • nodeSelector: Specify a nodeSelector in your Pod specification for a single Google Cloud zone.

  • Compute classes: Configure your Pod to use a GKE compute class. This approach lets you define a prioritized list of sets of Google Cloud zones. When node capacity becomes available in a more preferred set of zones, GKE can dynamically move the workload to those zones. For more information, see About custom compute classes.
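
Before you choose a method, you can check which zones your cluster's nodes currently run in by reading the zone label on each node. The following command assumes that kubectl is configured to communicate with your cluster:

    kubectl get nodes -L topology.kubernetes.io/zone

The -L flag adds a column to the output that shows the value of the topology.kubernetes.io/zone label for each node.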

Considerations

Zonal Pod placement using zonal topology has the following considerations:

  • The cluster must be in the same Google Cloud region as the requested zones.
  • In Standard clusters, you must use node auto-provisioning or create node pools with nodes in the requested zones. Autopilot clusters automatically manage this process for you.
  • Standard clusters must be regional clusters.

Pricing

Zonal topology is a Kubernetes scheduling capability and is offered at no extra cost in GKE.

For pricing details, see GKE pricing.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
  • Ensure that you have an existing GKE cluster in the same Google Cloud region as the zones in which you want to place your Pods. To create a new cluster, see Create an Autopilot cluster.

Place Pods in multiple zones using nodeAffinity

Kubernetes nodeAffinity provides a flexible scheduling control mechanism that supports multiple label selectors and logical operators. Use nodeAffinity if you want to let Pods run in one of a set of zones (for example, in either us-central1-a or us-central1-f).

  1. Save the following manifest as multi-zone-affinity.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-multi-zone
      template:
        metadata:
          labels:
            app: nginx-multi-zone
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                    - us-central1-a
                    - us-central1-f
    

    This manifest creates a Deployment with three replicas and places the Pods in us-central1-a or us-central1-f based on node availability.

    Ensure that your cluster is in the us-central1 region. If your cluster is in a different region, change the zones in the values field of the manifest to valid zones in your cluster region.

  2. Create the Deployment:

    kubectl create -f multi-zone-affinity.yaml
    

    GKE creates the Pods in nodes in one of the specified zones. Multiple Pods might run on the same node. You can optionally use Pod anti-affinity to tell GKE to place each Pod on a separate node.
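
    For example, the following sketch shows a preferred Pod anti-affinity rule that you could merge into the existing affinity field under spec.template.spec in the Deployment. The weight value is illustrative:

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: nginx-multi-zone
            topologyKey: kubernetes.io/hostname

    This rule asks the scheduler to avoid placing two Pods that have the app: nginx-multi-zone label on the same node, without blocking scheduling if no other node is available.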

Place Pods in a single zone using a nodeSelector

To place Pods in a single zone, use a nodeSelector in the Pod specification. A nodeSelector is equivalent to a requiredDuringSchedulingIgnoredDuringExecution nodeAffinity rule that has a single zone specified.

  1. Save the following manifest as single-zone-selector.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-singlezone
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-singlezone
      template:
        metadata:
          labels:
            app: nginx-singlezone
        spec:
          nodeSelector:
            topology.kubernetes.io/zone: "us-central1-a"
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
    

    This manifest tells GKE to place all replicas in the Deployment in the us-central1-a zone.

  2. Create the Deployment:

    kubectl create -f single-zone-selector.yaml
    

Prioritize Pod placement in selected zones using a compute class

GKE compute classes provide a control mechanism that lets you define a list of node configuration priorities. Zonal preferences let you define the zones that you want GKE to place Pods in. Defining zonal preferences in compute classes requires GKE version 1.33.1-gke.1545000 or later.

The following example creates a compute class that specifies a list of preferred zones for Pods.

These steps assume that your cluster is in the us-central1 region. If your cluster is in a different region, change the values of the zones in the manifest to valid zones in your cluster region.

  1. Save the following manifest as zones-custom-compute-class.yaml:

    apiVersion: cloud.google.com/v1
    kind: ComputeClass
    metadata:
      name: zones-custom-compute-class
    spec:
      priorities:
      - location:
          zones: [us-central1-a, us-central1-b]
      - location:
          zones: [us-central1-c]
      activeMigration:
        optimizeRulePriority: true
      nodePoolAutoCreation:
        enabled: true
      whenUnsatisfiable: ScaleUpAnyway
    

    This compute class manifest changes scaling behavior as follows:

    1. GKE tries to place Pods in us-central1-a or us-central1-b.
    2. If neither us-central1-a nor us-central1-b has available capacity, GKE tries to place Pods in us-central1-c.
    3. If us-central1-c also doesn't have available capacity, the whenUnsatisfiable: ScaleUpAnyway field tells GKE to place the Pods in any available zone in the region.
    4. If a zone that has higher priority in the compute class gains capacity later, the activeMigration.optimizeRulePriority: true field makes GKE move the Pods from lower-priority zones to that zone. This migration respects Pod Disruption Budgets to maintain service availability.
  2. Create the compute class:

    kubectl create -f zones-custom-compute-class.yaml
    

    GKE creates a custom compute class that your workloads can reference.

  3. Save the following manifest as custom-compute-class-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-zonal-preferences
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-zonal-preferences
      template:
        metadata:
          labels:
            app: nginx-zonal-preferences
        spec:
          nodeSelector:
            cloud.google.com/compute-class: "zones-custom-compute-class"
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
    
  4. Create the Deployment:

    kubectl create -f custom-compute-class-deployment.yaml
    

Verify Pod placement

To verify Pod placement, list the Pods and check the node labels. Multiple Pods might run in a single node, so you might not see Pods spread across multiple zones if you used nodeAffinity.

  1. List your Pods:

    kubectl get pods -o wide
    

    The output lists the running Pods and the GKE node that each Pod runs on.

  2. Describe the nodes:

    kubectl describe node NODE_NAME | grep "topology.kubernetes.io/zone"
    

    Replace NODE_NAME with the name of the node.

    The output is similar to the following:

    topology.kubernetes.io/zone=us-central1-a
    

If you want GKE to spread your Pods evenly across multiple zones for improved failover across multiple failure domains, use topologySpreadConstraints.
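
For example, the following sketch adds a topology spread constraint under spec.template.spec in a Pod template. The app: nginx-multi-zone label selector is taken from the earlier Deployment example:

    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: nginx-multi-zone

With maxSkew: 1, the number of matching Pods in any two zones can differ by at most one. The whenUnsatisfiable: DoNotSchedule field keeps a Pod pending rather than scheduling it in a way that violates the constraint.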

What's next