Which component are you using?:
/area cluster-autoscaler
What version of the component are you using?:
Component version: 1.29.4
What k8s version are you using (kubectl version)?:
v1.29.12
What environment is this in?:
Amazon EKS
What did you expect to happen?:
DaemonSet pods should be evicted after pods with a lower priority class or no priority class.
What happened instead?:
All of the pods are evicted at the same time; the critical DaemonSet pods are then redeployed rather than being removed gracefully.
How to reproduce it (as minimally and precisely as possible):
CA arguments:
- ./cluster-autoscaler
- --v=4
- --stderrthreshold=info
- --cloud-provider=aws
- --skip-nodes-with-local-storage=false
- --expander=least-waste
- --node-group-auto-discovery=<relevant-tags>
- --scale-down-utilization-threshold=0.7
- --balance-similar-node-groups
- --skip-nodes-with-system-pods=false
- --startup-taint=node.cilium.io/agent-not-ready
- --drain-priority-config='2000001000:120,0:60'
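For context, these flags are passed as the container command in the cluster-autoscaler Deployment; a minimal sketch of that wiring is below (the image tag, labels, and namespace are illustrative, only the flags most relevant to this report are repeated, and the comment on --drain-priority-config reflects my reading of the flag rather than confirmed documentation):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - name: cluster-autoscaler
          image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.29.4  # illustrative tag
          command:
            - ./cluster-autoscaler
            - --cloud-provider=aws
            - --node-group-auto-discovery=<relevant-tags>
            - --startup-taint=node.cilium.io/agent-not-ready
            # My reading: pods at or above priority 2000001000 (system-node-critical)
            # should be drained last with a 120s grace period; everything else gets 60s.
            - --drain-priority-config='2000001000:120,0:60'
            # ...remaining flags as listed above
```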
Anything else we need to know?:
The DaemonSet pod specification contains a proper priority class, in case that's a requirement.
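To illustrate, the priority class is set on the DaemonSet pod template roughly like this (a minimal sketch; the DaemonSet name, namespace, and image are placeholders rather than the actual manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent        # placeholder name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: example-agent
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      # High-priority class (value 2000001000), matching the first
      # threshold in --drain-priority-config above.
      priorityClassName: system-node-critical
      containers:
        - name: agent
          image: example.com/agent:latest   # placeholder image
```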
That leads me to believe the drain-priority-config flag is supported in version 1.29 as well:
Let me know if anything else is needed. Thank you.