Which component are you using?:
/area cluster-autoscaler
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
When running cluster-autoscaler in a separate “management” cluster, the priority expander ConfigMap must still reside in the workload cluster’s kube-system namespace. This is inconvenient if you want to manage it exclusively from the management cluster. The status ConfigMap is likewise created in the workload cluster rather than alongside the autoscaler.
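For reference, this is roughly what the priority expander ConfigMap in question looks like (the name and the `priorities` key follow the priority expander documentation; the node group patterns are placeholders for our setup):

```yaml
# Minimal sketch of the priority expander ConfigMap.
# Today this must live in the workload cluster's kube-system namespace.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |
    10:
      - .*on-demand.*   # lower priority node groups
    50:
      - .*spot.*        # preferred node groups
```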
Describe the solution you'd like.:
Introduce a new flag (e.g., --namespace-configmap) or similar mechanism allowing users to define which namespace (and cluster context) holds the priority expander ConfigMap and cluster autoscaler status ConfigMap.
This provides flexibility for setups where cluster-autoscaler is not running directly in the workload cluster, enabling centralised configuration and management of the priority expander and the status ConfigMap.
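Purely as an illustration of the proposal (none of these flags exist today; the names are invented for this issue), the cluster-autoscaler container args could look like:

```yaml
# Fragment of the cluster-autoscaler pod template in the management cluster.
# --namespace-configmap is a proposed, not an existing, flag.
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.30.0
    args:
      - --expander=priority
      - --namespace-configmap=autoscaler-config   # proposed flag
```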
Describe any alternative solutions you've considered.:
We found a workaround for this, but the status ConfigMap can be hard to discover if you don't know where to look.
Additional context.:
If nothing else, I would like to update the docs to point this out, as I didn't find much information about it; reading the code confirmed this is the expected behaviour. 👀
Thank you!
I'm a bit worried about making Cluster Autoscaler aware of two different apiservers: this would make for a complicated setup, for instance from a permissions perspective. What I would suggest for your use case instead would be to simply allow passing the priority expander config as a file. That would allow you to mount a ConfigMap in your management cluster as a volume in the Cluster Autoscaler pod, effectively achieving what you want without making Cluster Autoscaler aware of another cluster.
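Something along these lines is what I have in mind (a sketch only, `--priority-expander-config-file` is a hypothetical flag name standing in for the file-based option that would need to be added):

```yaml
# Fragment of the cluster-autoscaler pod spec in the management cluster.
# The ConfigMap is local to the management cluster and mounted read-only.
spec:
  containers:
    - name: cluster-autoscaler
      args:
        - --expander=priority
        - --priority-expander-config-file=/etc/priority-expander/priorities  # hypothetical flag
      volumeMounts:
        - name: priority-expander
          mountPath: /etc/priority-expander
          readOnly: true
  volumes:
    - name: priority-expander
      configMap:
        name: cluster-autoscaler-priority-expander
```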
@x13n Thanks for your reply! I think that solves the configuration aspect, and it would work for our setup. But the status ConfigMap would still be created in the other cluster, right?
Again, for us it's not a blocker since we found a workaround, but I suspect there might be others with similar setups, so I wanted to open this issue.
Is it okay if I update some docs to reflect how this works in this setup? It took me a while to figure out why it's configured this way, so maybe it saves others some time.
The status ConfigMap is really meant for observability: if I have pods not getting scheduled in my cluster, I want to understand why, and checking the status ConfigMap is meant for that context. However, if you want to dump it in another cluster as well, I would suggest the same approach: pass a status filename via the command line and set up a volume that can be updated. It would no longer be a ConfigMap though, since ConfigMap volume mounts are read-only. Would that approach make more sense to you than the current workaround you're using?
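Roughly like this (again a sketch: `--status-file` is a hypothetical flag name, and the point is simply that the status would be written to a writable volume instead of a ConfigMap):

```yaml
# Fragment of the cluster-autoscaler pod spec.
# The status is written to a writable emptyDir volume rather than a ConfigMap.
spec:
  containers:
    - name: cluster-autoscaler
      args:
        - --status-file=/var/run/cluster-autoscaler/status.txt  # hypothetical flag
      volumeMounts:
        - name: status
          mountPath: /var/run/cluster-autoscaler
  volumes:
    - name: status
      emptyDir: {}
```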
As for updating docs: yes, this is always a good idea :)