
🧲 Affinity & Anti-Affinity in Kubernetes

Affinity rules let you control where Pods run based on labels: they can attract Pods to specific nodes, or group Pods with (or spread them away from) other Pods.


📌 Types of Affinity

Type                Scope   Purpose
Node Affinity       Node    Run Pods on nodes with matching labels
Pod Affinity        Pod     Schedule Pods near other matching Pods
Pod Anti-Affinity   Pod     Avoid scheduling Pods together

🧱 Node Affinity (Hard or Soft)

Defined in pod.spec.affinity.nodeAffinity

Example: Require node with label

nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: disktype
        operator: In
        values:
        - ssd

Only schedule on nodes labeled disktype=ssd
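
For context, here is a minimal, complete Pod manifest with the same rule in place (the Pod name and image are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod            # placeholder name
spec:
  containers:
  - name: app
    image: nginx           # placeholder image
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd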

Example: Prefer nodes

preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
  preference:
    matchExpressions:
    - key: disktype
      operator: In
      values:
      - ssd

Prefer nodes labeled disktype=ssd, but schedule elsewhere if none are available. The weight (1 to 100) is added to a matching node's score, so higher weights express stronger preferences.

Alternative Operators

Besides In, matchExpressions supports NotIn, Exists, DoesNotExist, Gt, and Lt. For example, to avoid nodes labeled disktype=spinning:

operator: NotIn
values:
- spinning
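
Exists and DoesNotExist take no values list; they match purely on whether the label key is present. A small sketch (the same form is used for the control-plane rule later in this doc):

nodeSelectorTerms:
- matchExpressions:
  - key: disktype
    operator: Exists   # any node that has a disktype label, regardless of value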

📊 Node Affinity Types

Kubernetes currently supports these policies (IgnoredDuringExecution means a Pod that is already running is not evicted if the node's labels change later):

Type                                              DuringScheduling   DuringExecution
requiredDuringSchedulingIgnoredDuringExecution    Required           Ignored
preferredDuringSchedulingIgnoredDuringExecution   Preferred          Ignored

Planned (future support):

Type                                               DuringScheduling   DuringExecution
requiredDuringSchedulingRequiredDuringExecution    Required           Required
preferredDuringSchedulingRequiredDuringExecution   Preferred          Required

๐Ÿค Pod Affinity (Pods that must run together)

Used when Pods should run close to each other (e.g., same node or zone).

Example:

podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - frontend
    topologyKey: "kubernetes.io/hostname"

Schedule this Pod on a node that already has a frontend Pod.
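
The same pattern works at other topology levels. A sketch that co-locates Pods within the same zone instead of the same node (this assumes your nodes carry the standard topology.kubernetes.io/zone label):

podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchLabels:
        app: frontend
    topologyKey: "topology.kubernetes.io/zone"   # zone-level co-location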


🙅 Pod Anti-Affinity (Avoid scheduling together)

Useful for spreading out replicas for high availability.

Example:

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - frontend
    topologyKey: "kubernetes.io/hostname"

Avoid scheduling on nodes that already have frontend Pods.
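
A hard anti-affinity rule leaves extra replicas Pending once every node already hosts one. When that is too strict, a soft variant spreads Pods where possible but still schedules them otherwise; a sketch using the same frontend label:

podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      labelSelector:
        matchLabels:
          app: frontend
      topologyKey: "kubernetes.io/hostname"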


โš–๏ธ When to Use What

Scenario                                 Strategy
Dedicated hardware for an app            Node Affinity
Spread replicas for high availability    Pod Anti-Affinity
Group dependent services together        Pod Affinity
Flexible placement but prefer zones      Preferred Affinity

🔍 Check Labels on Nodes and Pods

kubectl get nodes --show-labels
kubectl get pods --show-labels
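
To see which nodes or Pods a given rule would match, filter by label selector (the labels below are the ones used in the examples above):

kubectl get nodes -l disktype=ssd
kubectl get pods -l app=frontend -o wide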

🧠 Tips

  • Use topologyKey: "kubernetes.io/hostname" for node-level rules
  • Avoid strict requiredDuringScheduling rules unless absolutely needed, since unsatisfiable rules leave Pods stuck in Pending
  • Prefer using preferredDuringScheduling when flexibility is acceptable
  • Use In/NotIn to write expressive match rules

🧪 Node Affinity Hands-On Walkthrough

This section covers a real-world lab using kubectl commands to:

  • Inspect node labels
  • Apply a custom label
  • Create deployments
  • Use Node Affinity for precise scheduling

🔍 How Many Labels on node01?

kubectl get node node01 --show-labels
kubectl get node node01 -o json | jq '.metadata.labels | length'

✅ Output:

5 labels found on node01

🏷️ Label node01 with color=blue

kubectl label nodes node01 color=blue

Confirm:

kubectl describe node node01 | grep color
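
If you mislabel the node, you can overwrite or remove the label:

kubectl label nodes node01 color=green --overwrite   # change the value (example value)
kubectl label nodes node01 color-                    # remove the label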

🚀 Create blue Deployment (3 replicas)

kubectl create deployment blue --image=nginx --replicas=3

Ensure both nodes have no taints:

kubectl describe node controlplane | grep -i taints
kubectl describe node node01 | grep -i taints
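
If the control plane still carries its default NoSchedule taint, the red Deployment later in this lab will stay Pending. In that case you can remove the taint (adjust the key to whatever the describe output actually shows):

kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-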

🎯 Apply Node Affinity to blue (Only schedule on node01)

kubectl patch deployment blue \
  --type='merge' \
  -p '{
    "spec": {
      "template": {
        "spec": {
          "affinity": {
            "nodeAffinity": {
              "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [
                  {
                    "matchExpressions": [
                      {
                        "key": "color",
                        "operator": "In",
                        "values": ["blue"]
                      }
                    ]
                  }
                ]
              }
            }
          }
        }
      }
    }
  }'
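
The patch triggers a rolling update, so you can wait for the new Pods before checking where they landed:

kubectl rollout status deployment/blue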

Check placement:

kubectl get pods -l app=blue -o wide

✅ All Pods should now run on node01.
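
An equivalent declarative form, if you would rather edit the Deployment manifest than patch it (only the affinity fragment under spec.template.spec is shown):

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue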


🔴 Create red Deployment for Control Plane Node

kubectl create deployment red --image=nginx --replicas=2

Apply Node Affinity to force scheduling on the control plane:

kubectl patch deployment red \
  --type='merge' \
  -p '{
    "spec": {
      "template": {
        "spec": {
          "affinity": {
            "nodeAffinity": {
              "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [
                  {
                    "matchExpressions": [
                      {
                        "key": "node-role.kubernetes.io/control-plane",
                        "operator": "Exists"
                      }
                    ]
                  }
                ]
              }
            }
          }
        }
      }
    }
  }'

Verify:

kubectl get pods -l app=red -o wide

✅ Pods should be running only on the controlplane node.
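
To confirm which nodes carry the label this rule matches on (on kubeadm clusters, control-plane nodes have node-role.kubernetes.io/control-plane with an empty value):

kubectl get nodes -l node-role.kubernetes.io/control-plane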


This workflow demonstrates:

  • Label inspection and manipulation
  • Pod scheduling control using nodeAffinity
  • How to constrain deployments to specific node types

Place this doc under: scheduling/affinity-lab.md if you wish to keep practical walkthroughs separate from theory.


✅ Summary

Affinity is about controlling pod placement:

  • Node Affinity targets specific node labels
  • Pod Affinity attracts Pods near others
  • Pod Anti-Affinity spreads Pods across nodes
  • Use them for performance, resilience, or compliance

Also see: Node Selectors