What is pod(Anti)Affinity?
When a pod is scheduled, Kubernetes first works out which nodes the pod can be placed on.
PodAffinity is a way to describe which nodes the scheduler can use.
PodAntiAffinity is a way to describe which nodes the scheduler should not use.
Both decisions are based on the pods that are already running on those nodes: with podAffinity and podAntiAffinity you describe which existing pods to check for.
How does it work?
Types of PodAffinity
There are 2 types of PodAffinity rules:

requiredDuringSchedulingIgnoredDuringExecution
The rules defined here have to be met. Otherwise the pod won't be deployed.

preferredDuringSchedulingIgnoredDuringExecution
This one is based on best effort. When no match can be found, Kubernetes will deploy the pod anyway.

Both variants are shown in the sketch below.
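A minimal sketch of where the two rule types live in a pod spec; the app: my-app label, the weight, and the topology key are placeholder values for this example:

affinity:
  podAffinity:
    # Hard rule: only schedule onto nodes where this term matches.
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app              # placeholder label
      topologyKey: kubernetes.io/hostname
    # Soft rule: the scheduler tries to match, but falls back if it cannot.
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 50                   # placeholder weight (1-100)
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: my-app            # placeholder label
        topologyKey: kubernetes.io/hostname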
Selectors
The selectors consist of either a labelSelector or a namespaceSelector. Read more about them in the Kubernetes docs about label selectors (linked below).
The selector is used to determine which pods to look for. In the following example we look for pods that have the label app with the value my-app.
- labelSelector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - my-app

This selector can be used, for example, to look for pods of the same type as the one you need to schedule.
This can be used for a highly available setup. You can read more about that in [[High availability in Kubernetes]].
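A namespaceSelector works the same way, but selects which namespaces are searched for matching pods, instead of matching the pods directly. A small sketch; the team: my-team namespace label is made up for this example:

- labelSelector:
    matchLabels:
      app: my-app
  namespaceSelector:
    matchLabels:
      team: my-team      # hypothetical label on the namespaces to search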
Topology
Topology is used to determine which nodes count as related. For example, topologyKey: topology.kubernetes.io/zone selects all nodes that are in the same zone as the pods found by the selectors.
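As a sketch, a required podAntiAffinity rule with the zone key keeps a new pod out of every zone that already runs a matching pod (the app: my-app label is a placeholder):

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchLabels:
        app: my-app        # placeholder label
    topologyKey: topology.kubernetes.io/zone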
Be aware though! Even though this looks like a way to create a highly available setup, it is not. Well, not when you have more pods than you have zones.
This is because Kubernetes does not spread the pods evenly. It just tries to select a node that adheres to the rules. When it cannot, it either doesn't deploy the pod at all (required rules), or all bets are off as to where the pod is put (preferred rules).
If you want to go for a highly available setup, you can read my post about that: [[High availability in Kubernetes]]
Weight
An optional weight can be added to preferredDuringSchedulingIgnoredDuringExecution rules.
This enables us to make one rule more important than another. However, the weight is added together with the other scoring the scheduler does, so it is not a complete prioritization. You can read more about that in the Kubernetes docs about assigning pods to nodes (linked below).
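A sketch with two preferred rules and placeholder weights: nodes running pods that match app: web-app score higher than nodes running pods that match app: cache, but both only nudge the final node score:

preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100              # counts heavily towards the node score
  podAffinityTerm:
    labelSelector:
      matchLabels:
        app: web-app       # placeholder label
    topologyKey: kubernetes.io/hostname
- weight: 10               # counts only lightly
  podAffinityTerm:
    labelSelector:
      matchLabels:
        app: cache         # placeholder label
    topologyKey: kubernetes.io/hostname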
Examples
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        # Hard rule: never place two my-app pods on the same node.
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: "kubernetes.io/hostname"
        # Soft rule: try to place the pods on nodes that run web-app pods.
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100    # required field for preferred rules
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - web-app
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: my-backend-app
        image: nginx:1.16-alpine
In this example we make sure that two pods with the label app: my-app never end up on the same node.
However, we do try to put those pods on the same nodes as pods with the label app: web-app.
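You can verify the placement with kubectl get pods -o wide, which shows the node each pod ended up on.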
Links
Kubernetes docs about assigning pods to nodes
Kubernetes docs about label selectors
High availability in Kubernetes