Describe the bug
Right now, if you set singleBinary.affinity in your values to something other than requiredDuringSchedulingIgnoredDuringExecution, for example to preferredDuringSchedulingIgnoredDuringExecution, the user-supplied value gets merged with the chart default instead of replacing it.
The result is something like this:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/component: single-binary
        topologyKey: topology.kubernetes.io/zone
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/component: single-binary
        topologyKey: kubernetes.io/hostname
Kubernetes rejects this manifest.
We would like to use preferredDuringSchedulingIgnoredDuringExecution to be more flexible. With requiredDuringSchedulingIgnoredDuringExecution, our cluster autoscaler would provision nodes in zones that are not actually needed. If nodes are already available across zones, however, we would like to spread Loki over them, and that is where preferredDuringSchedulingIgnoredDuringExecution comes in handy.
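The extra requiredDuringSchedulingIgnoredDuringExecution rule comes from the chart's default values: Helm coalesces a user-supplied map with the chart defaults key by key, so the default rule survives next to the user-supplied one. A sketch of what the relevant default presumably looks like, inferred from the rendered output above (not the verbatim values.yaml of the chart):

# Assumed chart default (sketch only, inferred from the merged output)
singleBinary:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: single-binary  # selector labels assumed
          topologyKey: kubernetes.io/hostname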
To Reproduce
Steps to reproduce the behavior:
- Run helm template with the following values:
singleBinary:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: single-binary
          topologyKey: topology.kubernetes.io/zone
- Inspect the rendered manifest and try to apply it (a helm template sketch follows below).
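A minimal way to reproduce the rendering locally, assuming the chart comes from the Grafana Helm repository and the values above are saved as loki-values.yaml (release name and file name are illustrative):

# Render the chart with the values above and inspect the affinity block
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm template loki grafana/loki -f loki-values.yaml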
Expected behavior
The result should be something like this:
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/component: single-binary
      topologyKey: topology.kubernetes.io/zone
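Until the chart avoids this merge, one possible workaround is Helm's documented behavior of deleting a default key by overriding it with null; whether this works here depends on the Helm version and on the chart defining its default affinity as a map rather than a template string. A sketch:

singleBinary:
  affinity:
    podAntiAffinity:
      # Drop the rule inherited from the chart default so only the preferred rule remains
      requiredDuringSchedulingIgnoredDuringExecution: null
      preferredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: single-binary
          topologyKey: topology.kubernetes.io/zone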
Environment:
- Kubernetes
- Helm
Screenshots, Promtail config, or terminal output
If applicable, add any output to help explain your problem.