Increase max number of volumes on a node #84

Open
vxav opened this issue Sep 7, 2022 · 7 comments
Labels: enhancement (New feature or request)


vxav commented Sep 7, 2022

Is your feature request related to a problem? Please describe.

Nodes are limited to 15 attached disks because the driver uses a single SCSI controller.

https://github.com/vmware/cloud-director-named-disk-csi-driver/blob/main/pkg/csi/node.go#L24-L31

Describe the solution you'd like

Use more than one SCSI controller to increase the maximum number of disks.

Describe alternatives you've considered

No response

Additional context

No response
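For context on the arithmetic behind this request: each paravirtual SCSI controller exposes 16 unit numbers, one of which is reserved for the controller itself, leaving 15 usable disks, and a VM supports up to 4 SCSI controllers. A minimal sketch of the capacity math (constant names are illustrative, not taken from the driver source):

```go
package main

import "fmt"

// Illustrative constants, not copied from the driver:
// each SCSI controller has 16 slots, one reserved for
// the controller itself (SCSI ID 7).
const (
	disksPerController = 15 // 16 slots minus the reserved controller slot
	maxControllers     = 4  // vSphere VMs support up to 4 SCSI controllers
)

// maxAttachableDisks returns the theoretical disk limit for a node
// using the given number of SCSI controllers.
func maxAttachableDisks(controllers int) int {
	return controllers * disksPerController
}

func main() {
	fmt.Println(maxAttachableDisks(1))              // current driver: one controller → 15
	fmt.Println(maxAttachableDisks(maxControllers)) // theoretical maximum → 60
}
```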

@vxav vxav added the enhancement New feature or request label Sep 7, 2022
@arunmk arunmk self-assigned this Feb 19, 2023
@markuszeiter

Is there an update on when this will be added to the CSI driver?


penoux commented Nov 19, 2024

Hello, any update on this issue? On nodes that may run 90+ pods, this means only one pod in six can mount its own PVC.
We cannot build a production-grade cluster with this limitation.

@vitality411

Hello,
I have asked VMware support to open a feature request for this. When I tried to prioritize the request with the CSE engineering team via our TAM, I was told there is little hope this feature will be implemented, as VCD will not receive any new features. VCD will get one more release next year to make it compatible with VCF 9; after that, it will get three more years of support, but only bug fixes and security fixes.

I also learned that VMware Engineering apparently doesn't really use GitHub to track issues, so there is no point waiting for a response here. The only ways forward seem to be a support ticket or direct contact with VMware.


penoux commented Nov 19, 2024

@vitality411 Thanks a lot for these details. We also see that our previous issues and even PRs on GitHub get little to no response. We'll have to switch entirely to Longhorn storage for the remaining time we use VCD :(


vxav commented Nov 19, 2024

The maintainers have been moved to other projects; CAPVCD and related projects are mostly on life support.
I no longer hit this issue because we moved to NFS (for unrelated reasons), and I had actually forgotten about it.

Your best bet is to contribute upstream if you have the possibility; you can still get PRs merged.
Otherwise, if this is a blocker, then moving to another CSI is indeed the only option, I'm afraid 🙁.

Collaborator

arunmk commented Nov 19, 2024

I think in the latest versions of VCD (10.6.0 or 10.6.1) more than 16 disks are allowed. The CSI change is in the one location where the number of disks is defined. Can someone try it out, test it, and submit a patch, please? We will merge it to main.

However, please note that we are only looking at keeping the CSI driver alive for newer Kubernetes versions, or at fixing severe customer issues. I don't know if there will be a released version of the CSI driver with this fix; we are releasing only the 1.6.z branch at this time, and only based on business demand.

So we can help with the fix and merge the changes, but we may not be able to get the change into a released version.
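The "one location" referred to here is the per-node attachment limit the driver advertises to Kubernetes through the CSI NodeGetInfo call. A minimal sketch of that mechanism, with made-up names standing in for the real ones in pkg/csi/node.go (linked in the issue body):

```go
package main

import "fmt"

// maxVolumesPerNode mirrors the single constant being discussed;
// the name and value here are illustrative, not copied from node.go.
const maxVolumesPerNode = 15

// nodeInfo stands in for the CSI NodeGetInfoResponse, whose
// MaxVolumesPerNode field tells the Kubernetes scheduler how many
// volumes it may place on this node.
type nodeInfo struct {
	NodeID            string
	MaxVolumesPerNode int64
}

// nodeGetInfo sketches the handler: raising maxVolumesPerNode (once
// the platform supports more disks) is the whole CSI-side change.
func nodeGetInfo(nodeID string) nodeInfo {
	return nodeInfo{NodeID: nodeID, MaxVolumesPerNode: maxVolumesPerNode}
}

func main() {
	fmt.Println(nodeGetInfo("node-1").MaxVolumesPerNode)
}
```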


penoux commented Nov 19, 2024

@arunmk thanks, but I can't find any information beyond this: the number of disks per SCSI controller is 15, VMs support 4 SCSI controllers (so theoretically 60 attached disks), but the CSI driver only implements one SCSI controller. Do you have a document link?

5 participants