
cluster-autoscaler ConfigMaps in multi cluster setup #7805

Open
lilic opened this issue Feb 5, 2025 · 3 comments
Labels
area/cluster-autoscaler kind/feature Categorizes issue or PR as related to a new feature.

Comments

@lilic
Member

lilic commented Feb 5, 2025

Which component are you using?:

/area cluster-autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

When running cluster-autoscaler in a separate “management” cluster, the priority expander ConfigMap must reside in the workload cluster’s kube-system namespace. This is inconvenient if you want to manage it exclusively from the management cluster. Also, the status ConfigMap ends up in the workload cluster rather than alongside the autoscaler.
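
For reference, the priority expander configuration in question is (per the priority expander README) a ConfigMap roughly like the sketch below, which today has to live in the workload cluster's kube-system namespace; the instance-type regexes are just placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system   # must currently be in the workload cluster
data:
  priorities: |-
    10:
      - .*t2\.large.*
    50:
      - .*m4\.4xlarge.*
```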

Describe the solution you'd like.:

Introduce a new flag (e.g., --namespace-configmap) or similar mechanism allowing users to define which namespace (and cluster context) holds the priority expander ConfigMap and cluster autoscaler status ConfigMap.

This provides flexibility for setups where cluster-autoscaler is not running directly in the workload cluster, enabling centralised configuration and management of the priority expander and the status ConfigMap.
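
A purely hypothetical sketch of how this could look on the cluster-autoscaler container running in the management cluster; the --namespace-configmap flag is only the proposal from this issue and does not exist today:

```yaml
# Pod spec fragment only; the proposed flag is hypothetical.
containers:
- name: cluster-autoscaler
  command:
  - ./cluster-autoscaler
  - --kubeconfig=/etc/workload-cluster/kubeconfig   # points at the workload cluster API server
  - --namespace-configmap=autoscaler-config         # proposed: where to read/write the expander and status ConfigMaps
```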

Describe any alternative solutions you've considered.:

We found a workaround for this, but the status ConfigMap can be hard to discover if you don't know where to look.
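
For anyone else hunting for it: as far as I can tell, the status ConfigMap is written to whichever cluster the autoscaler's kubeconfig points at (the workload cluster in this setup), in the namespace given by --namespace and under the name given by --status-config-map-name. A sketch of what to look for, assuming default flag values:

```yaml
# Written by the autoscaler in the workload cluster (names assume default flags).
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-status   # --status-config-map-name
  namespace: kube-system            # --namespace
data:
  status: |
    # human-readable scale-up / scale-down status, refreshed on each loop
```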

Additional context.:

If nothing else, I would like to update the docs to point this out, as I didn't find much written about it; reading the code confirmed this is the expected behaviour. 👀

Thank you!

lilic added the kind/feature label on Feb 5, 2025
@x13n
Member

x13n commented Feb 7, 2025

I'm a bit worried about making Cluster Autoscaler aware of two different apiservers: that is going to be a complicated setup, for instance from a permissions perspective. What I would suggest for your use case instead is to just allow passing the priority expander config as a file. That would let you mount a ConfigMap in your management cluster as a volume in the Cluster Autoscaler pod, effectively achieving what you want without making Cluster Autoscaler aware of another cluster.
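
A rough sketch of that suggestion, assuming a Deployment in the management cluster; the flag that would make the autoscaler read priorities from a file is exactly the missing piece being proposed, so it appears only as a placeholder comment:

```yaml
# Pod spec fragment only; unrelated Deployment fields omitted.
spec:
  containers:
  - name: cluster-autoscaler
    command:
    - ./cluster-autoscaler
    # - --priority-expander-config-file=/etc/priority-expander/priorities   # placeholder: the proposed, not-yet-existing flag
    volumeMounts:
    - name: priority-expander
      mountPath: /etc/priority-expander
      readOnly: true
  volumes:
  - name: priority-expander
    configMap:
      name: cluster-autoscaler-priority-expander   # lives in the management cluster, next to the autoscaler
```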

@lilic
Member Author

lilic commented Feb 7, 2025

@x13n Thanks for your reply! I think that solves the configuration aspect and would work for our setup. But the status ConfigMap would still be created in the other cluster, right?

Again, for us it's not a blocker since we found a workaround, but I suspect there might be others with similar setups, so I wanted to open this issue.

Is it okay if I update some docs to reflect how this works in such a setup? It took me a while to figure out why it's configured this way, so maybe that saves others some time.

@x13n
Member

x13n commented Feb 7, 2025

The status ConfigMap is really meant for observability: if pods are not getting scheduled in my cluster, I want to understand why, and checking the status ConfigMap is meant for that context. However, if you want to dump it in another cluster as well, I would suggest the same approach: pass a status filename via the command line and set up a volume that can be updated. It would no longer be a ConfigMap, though, since ConfigMap volumes are read-only. Would that approach make more sense to you than the workaround you're currently using?

As for updating docs: yes, this is always a good idea :)
