# Affinity & Anti-Affinity in Kubernetes

Affinity rules let you control where Pods run based on labels: they can attract Pods to specific nodes, or group/spread them relative to other Pods.
## Types of Affinity
| Type | Scope | Purpose |
|---|---|---|
| Node Affinity | Node | Run Pods on nodes with matching labels |
| Pod Affinity | Pod | Schedule Pods near matching Pods |
| Pod Anti-Affinity | Pod | Avoid scheduling Pods together |
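All three rule types live under the same `pod.spec.affinity` stanza. A minimal skeleton (the Pod name and image are placeholders, and the empty `{}` blocks stand in for the rules shown below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  affinity:
    nodeAffinity: {}      # rules about node labels
    podAffinity: {}       # attract to Pods matching a selector
    podAntiAffinity: {}   # repel from Pods matching a selector
  containers:
  - name: app
    image: nginx
```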
## Node Affinity (Hard or Soft)

Node affinity is defined in `pod.spec.affinity.nodeAffinity`.
Example: require a node with a specific label:

```yaml
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: disktype
        operator: In
        values:
        - ssd
```

This schedules the Pod only on nodes labeled `disktype=ssd`.
Example: prefer such nodes without requiring them:

```yaml
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
  preference:
    matchExpressions:
    - key: disktype
      operator: In
      values:
      - ssd
```

This prefers `ssd` nodes, but falls back to other nodes if none are available.
Alternative operators, such as `NotIn`, express exclusions:

```yaml
- key: disktype
  operator: NotIn
  values:
  - spinning
```
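The API also accepts `Exists`, `DoesNotExist`, `Gt`, and `Lt`. For example, to require any node that carries a `gpu` label at all (the label key here is illustrative):

```yaml
- key: gpu
  operator: Exists
```

`Exists` and `DoesNotExist` take no `values` list; they match on the presence or absence of the key alone.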
## Node Affinity Types

Kubernetes currently supports these scheduling policies:

| Type | DuringScheduling | DuringExecution |
|---|---|---|
| `requiredDuringSchedulingIgnoredDuringExecution` | Required | Ignored |
| `preferredDuringSchedulingIgnoredDuringExecution` | Preferred | Ignored |
Planned (future support):

| Type | DuringScheduling | DuringExecution |
|---|---|---|
| `requiredDuringSchedulingRequiredDuringExecution` | Required | Required |
| `preferredDuringSchedulingRequiredDuringExecution` | Preferred | Required |
## Pod Affinity (Pods that must run together)

Use Pod affinity when Pods should run close to each other (e.g., on the same node or in the same zone).

Example:
```yaml
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - frontend
    topologyKey: "kubernetes.io/hostname"
```

This schedules the Pod on a node that already runs a `frontend` Pod.
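For context, here is the same rule embedded in a complete Pod manifest (the `backend` name and `nginx` image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - frontend
        topologyKey: "kubernetes.io/hostname"
  containers:
  - name: app
    image: nginx
```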
## Pod Anti-Affinity (Avoid scheduling together)
Useful for spreading out replicas for high availability.
Example:
```yaml
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - frontend
    topologyKey: "kubernetes.io/hostname"
```

This avoids nodes that already run `frontend` Pods.
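A common high-availability pattern is a Deployment whose replicas repel each other, so no two land on the same node. A sketch (the `web` names are placeholders; a hard rule like this needs at least as many nodes as replicas):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          # each replica repels Pods carrying its own app=web label
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx
        image: nginx
```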
## When to Use What
| Scenario | Strategy |
|---|---|
| Dedicated hardware for an app | Node Affinity |
| Spread replicas for high availability | Pod Anti-Affinity |
| Group dependent services together | Pod Affinity |
| Flexible placement but prefer zones | Preferred Affinity |
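For the last row, a soft anti-affinity rule with a zone-level `topologyKey` spreads replicas across zones when possible but still schedules when it is not. A sketch assuming nodes carry the standard `topology.kubernetes.io/zone` label and the Pods are labeled `app=web`:

```yaml
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      labelSelector:
        matchLabels:
          app: web
      topologyKey: "topology.kubernetes.io/zone"
```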
## Check Labels on Nodes and Pods

```bash
kubectl get nodes --show-labels
kubectl get pods --show-labels
```
## Tips

- Use `topologyKey: "kubernetes.io/hostname"` for node-level rules
- Avoid strict `requiredDuringScheduling` rules unless absolutely needed; they may leave Pods stuck in `Pending` (see the diagnosis command after this list)
- Prefer `preferredDuringScheduling` when flexibility is acceptable
- Use `In`/`NotIn` to write expressive match rules
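If a Pod does get stuck in `Pending` because of an unsatisfiable rule, its events show the scheduler's reason (the exact wording varies by Kubernetes version):

```bash
kubectl describe pod <pending-pod-name> | grep -A 5 Events
```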
## Node Affinity Hands-On Walkthrough

This section covers a real-world lab using `kubectl` commands to:

- Inspect node labels
- Apply a custom label
- Create deployments
- Use Node Affinity for precise scheduling
### How Many Labels on node01?

```bash
kubectl get node node01 --show-labels
kubectl get node node01 -o json | jq '.metadata.labels | length'
```

Output: 5 labels found on node01.
### Label node01 with color=blue

```bash
kubectl label nodes node01 color=blue
```

Confirm:

```bash
kubectl describe node node01 | grep color
```
### Create the blue Deployment (3 replicas)

```bash
kubectl create deployment blue --image=nginx --replicas=3
```

Ensure both nodes have no taints:

```bash
kubectl describe node controlplane | grep -i taints
kubectl describe node node01 | grep -i taints
```
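If the control plane still carries its default kubeadm taint, you can remove it (assuming the node is named `controlplane`; the trailing `-` deletes the taint):

```bash
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-
```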
### Apply Node Affinity to blue (only schedule on node01)

```bash
kubectl patch deployment blue \
  --type='merge' \
  -p '{
  "spec": {
    "template": {
      "spec": {
        "affinity": {
          "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
              "nodeSelectorTerms": [
                {
                  "matchExpressions": [
                    {
                      "key": "color",
                      "operator": "In",
                      "values": ["blue"]
                    }
                  ]
                }
              ]
            }
          }
        }
      }
    }
  }
}'
```
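To confirm the patch landed, you can print the rendered affinity stanza straight from the Deployment:

```bash
kubectl get deployment blue \
  -o jsonpath='{.spec.template.spec.affinity.nodeAffinity}'
```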
Check placement:

```bash
kubectl get pods -l app=blue -o wide
```

All Pods should now run on node01.
### Create the red Deployment for the Control Plane Node

```bash
kubectl create deployment red --image=nginx --replicas=2
```

Apply Node Affinity to force scheduling on the control plane:
```bash
kubectl patch deployment red \
  --type='merge' \
  -p '{
  "spec": {
    "template": {
      "spec": {
        "affinity": {
          "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
              "nodeSelectorTerms": [
                {
                  "matchExpressions": [
                    {
                      "key": "node-role.kubernetes.io/control-plane",
                      "operator": "Exists"
                    }
                  ]
                }
              ]
            }
          }
        }
      }
    }
  }
}'
```
Verify:

```bash
kubectl get pods -l app=red -o wide
```

Pods should be running only on the controlplane node.

This workflow demonstrates:

- Label inspection and manipulation
- Pod scheduling control using `nodeAffinity`
- How to constrain Deployments to specific node types
Place this doc under `scheduling/affinity-lab.md` if you wish to keep practical walkthroughs separate from theory.
## Summary

Affinity is about controlling Pod placement:
- Node Affinity targets specific node labels
- Pod Affinity attracts Pods near others
- Pod Anti-Affinity spreads Pods across nodes
- Use them for performance, resilience, or compliance
Also see: Node Selectors
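For comparison, the simpler `nodeSelector` field covers only the exact-match subset of node affinity:

```yaml
spec:
  nodeSelector:
    disktype: ssd
```

`nodeSelector` supports plain key/value equality only; affinity adds operators, soft preferences, and Pod-relative rules.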