remove storageclaim api and corresponding controller along with references in rest of the controllers #302
base: main
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: leelavg. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Part of RHSTOR-6913
storageClass := &storagev1.StorageClass{}
if err := json.Unmarshal(eResource.Data, &storageClass); err != nil {
	return reconcile.Result{}, fmt.Errorf("failed to unmarshal storage configuration response: %v", err)
}
// TODO: there will be a clash if the storageclass is being reconciled by another entity
// ex: in internal mode the storageclass was created by the storagecluster controller
// and if we mutate anything other than params there'll be an unending reconcile loop
if err := utils.CreateOrReplace(r.ctx, r.Client, storageClass, func() error {
- There is no name set on the StorageClass, so CreateOrReplace will fail
- in the CreateOrReplace mutate function, the provisioner and the reclaimPolicy are missing
No, that data should be coming from the provider and is unmarshalled before mutation.
But we are unmarshalling eResource.Data, which contains the parameters, isn't it? Is there a provider-side PR too that makes these changes?
There is no PR yet; the expectation is that the provider should send all these details, and yes, for some time the main branch could be unstable. The idea is to land most of the changes before ocs-op uses this updated code, as that ensures any non-convergence PRs on ocs-op will continue to work until we mandate client-op, by which time client-op should be ready (or almost ready, with minor adjustments here and there).
The expectation is similar to this: https://github.com/red-hat-storage/ocs-operator/pull/2936/files#diff-54526f798a0113d02138ef0d19209ef48b13118976946542ec55b11119688a2bR716-R729
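For illustration, a rough sketch of the provider-side shape implied here (the linked ocs-operator PR is the authoritative reference; the class name, provisioner and parameter values below are placeholders): the provider marshals a fully formed StorageClass, including name, provisioner and reclaim policy, into the data it sends to the client.

```go
package provider

import (
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildStorageClassPayload is a hypothetical helper, not the actual
// ocs-operator code; values are placeholders.
func buildStorageClassPayload() ([]byte, error) {
	reclaimPolicy := corev1.PersistentVolumeReclaimDelete
	storageClass := &storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example-ceph-rbd",
		},
		Provisioner:   "openshift-storage.rbd.csi.ceph.com",
		ReclaimPolicy: &reclaimPolicy,
		Parameters: map[string]string{
			"clusterID": "example-cluster-id",
			"pool":      "example-rbd-pool",
		},
	}
	// the marshalled bytes become the Data field of the external resource
	// that the client-side reconciler unmarshals before mutation
	return json.Marshal(storageClass)
}
```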
Updated to be consistent: always respect the separate labels & annotations from the proto and take everything else from the unmarshalled data. Since storage & snapshot classes don't have a spec, we'll update whatever is required (namespace) and consume the rest of whatever the provider sends.
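A minimal sketch of that approach, reusing the names from the quoted diff (eResource, utils.CreateOrReplace, r.OperatorNamespace); the Labels/Annotations fields on the proto message are assumptions for illustration, not confirmed field names.

```go
storageClass := &storagev1.StorageClass{}
if err := json.Unmarshal(eResource.Data, storageClass); err != nil {
	return reconcile.Result{}, fmt.Errorf("failed to unmarshal storage configuration response: %v", err)
}
if err := utils.CreateOrReplace(r.ctx, r.Client, storageClass, func() error {
	// labels & annotations sent separately in the proto always win
	storageClass.Labels = eResource.Labels
	storageClass.Annotations = eResource.Annotations
	// name, provisioner, reclaimPolicy and most parameters come from the
	// provider payload; the secret namespaces are the only local knowledge
	if storageClass.Parameters == nil {
		storageClass.Parameters = map[string]string{}
	}
	storageClass.Parameters["csi.storage.k8s.io/provisioner-secret-namespace"] = r.OperatorNamespace
	storageClass.Parameters["csi.storage.k8s.io/node-stage-secret-namespace"] = r.OperatorNamespace
	storageClass.Parameters["csi.storage.k8s.io/controller-expand-secret-namespace"] = r.OperatorNamespace
	return nil
}); err != nil {
	return reconcile.Result{}, fmt.Errorf("failed to create or replace storage class %s: %v", storageClass.Name, err)
}
```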
scParams := storageClass.Parameters
scParams["csi.storage.k8s.io/provisioner-secret-namespace"] = r.OperatorNamespace
scParams["csi.storage.k8s.io/node-stage-secret-namespace"] = r.OperatorNamespace
scParams["csi.storage.k8s.io/controller-expand-secret-namespace"] = r.OperatorNamespace

if err := r.reconcileSharedfileStorageClaim(); err != nil {
	return reconcile.Result{}, err
}

storageClass.Parameters = scParams
We don't need a new variable here
fixed.
if err := json.Unmarshal(eResource.Data, snapshotClass); err != nil {
	return reconcile.Result{}, fmt.Errorf("failed to unmarshal storage configuration response: %v", err)
}
if err := utils.CreateOrReplace(r.ctx, r.Client, snapshotClass, func() error {
- the snapshotClass name is empty
- the driver and deletionPolicy fields are missing
No, that data should be coming from the provider and is unmarshalled before mutation.
updated.
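For completeness, a sketch of the analogous snapshot-class path under the same assumption, i.e. the provider payload already carries the class name, driver and deletionPolicy. Here snapapi stands for the external-snapshotter v1 API; whether the snapshotter secret namespace is patched at this point is an assumption, not something shown in the diff.

```go
snapshotClass := &snapapi.VolumeSnapshotClass{}
if err := json.Unmarshal(eResource.Data, snapshotClass); err != nil {
	return reconcile.Result{}, fmt.Errorf("failed to unmarshal storage configuration response: %v", err)
}
if err := utils.CreateOrReplace(r.ctx, r.Client, snapshotClass, func() error {
	// name, driver and deletionPolicy are expected to arrive pre-filled
	// from the provider; only the secret namespace is patched locally
	if snapshotClass.Parameters == nil {
		snapshotClass.Parameters = map[string]string{}
	}
	snapshotClass.Parameters["csi.storage.k8s.io/snapshotter-secret-namespace"] = r.OperatorNamespace
	return nil
}); err != nil {
	return reconcile.Result{}, fmt.Errorf("failed to create or replace volume snapshot class %s: %v", snapshotClass.Name, err)
}
```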
It's decided that the provider would continue sending the older secrets as well. Signed-off-by: Leela Venkaiah G <[email protected]>
clientsList := &v1alpha1.StorageClientList{}
if err := c.list(clientsList, client.Limit(1)); err != nil {
	c.log.Error(err, "unable to verify StorageClients presence prior to removal of CSI resources")
	return err
} else if len(claimsList.Items) != 0 {
	err = fmt.Errorf("failed to clean up resources: storage claims are present on the cluster")
	c.log.Error(err, "Waiting for all storageClaims to be deleted.")
} else if len(clientsList.Items) != 0 {
	err = fmt.Errorf("failed to clean up resources: storage clients are present on the cluster")
	c.log.Error(err, "Waiting for all storageClients to be deleted.")
Why is that needed? The reason we were looking into the existence of claims was that claims themselves were protected from deletion until all PV/PVCs are gone. Are we now giving similar guarantees with the clients? And if so, where is the change?
Yes, all the code was moved to the storageclient controller; maybe you need to click on Load Diff in GitHub. For example:
func (r *StorageClientReconciler) hasPersistentVolumes() (bool, error) {
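A rough sketch of what such a guard could look like; the real implementation is behind Load Diff and may additionally filter by this client's provisioners or storage classes, which is not shown here.

```go
// hasPersistentVolumes reports whether any PersistentVolume still exists;
// a sketch only - the actual check may be narrower than "any PV at all".
func (r *StorageClientReconciler) hasPersistentVolumes() (bool, error) {
	pvList := &corev1.PersistentVolumeList{}
	if err := r.Client.List(r.ctx, pvList, client.Limit(1)); err != nil {
		return false, fmt.Errorf("failed to list persistent volumes: %v", err)
	}
	return len(pvList.Items) != 0, nil
}
```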
Removal of the storageclaim controller, moving its work over to the storageclient controller. Existing resources that were owned by storageclaims will now be owned by the storageclient, but the storageclaims remain protected against deletion by their finalizer; the current expectation is that the finalizer is removed manually (or a follow-up PR will be raised to remove the finalizer on storageclaims).
No provider changes are done at the moment; the expectation is that the provider will now send, for the storageclient, the info it used to send for the storageclaim. Any opportunity to change the message struct will be taken, and correspondingly a new PR will be raised.
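A minimal sketch of that deletion-protection flow, with assumed names (storageClient, storageClientFinalizer): the StorageClient keeps its finalizer until no PersistentVolumes consuming it remain, mirroring what the storageclaim finalizer used to guarantee.

```go
if !storageClient.GetDeletionTimestamp().IsZero() {
	if exist, err := r.hasPersistentVolumes(); err != nil {
		return reconcile.Result{}, err
	} else if exist {
		// block deletion until consumers are gone
		return reconcile.Result{}, fmt.Errorf("waiting for PersistentVolumes to be deleted before removing StorageClient")
	}
	if controllerutil.RemoveFinalizer(storageClient, storageClientFinalizer) {
		if err := r.Client.Update(r.ctx, storageClient); err != nil {
			return reconcile.Result{}, fmt.Errorf("failed to remove finalizer: %v", err)
		}
	}
	return reconcile.Result{}, nil
}
```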