
remove storageclaim api and corresponding controller along with references in rest of the controllers #302

Open · wants to merge 9 commits into main

Conversation

@leelavg (Contributor) commented Feb 10, 2025

Removes the storageclaim controller and moves its work over to the storageclient controller. Existing resources that are owned by storageclaims will now be owned by the storageclient, but they remain protected against deletion by the finalizer; the current expectation is that the finalizer is removed manually (or a follow-up PR will be raised to remove the finalizer on storageclaims).

No provider changes are done at the moment. The expectation is that the provider will now send the info it used to send for a storageclaim to the storageclient instead; any opportunity to change the message struct will be taken, and a corresponding new PR will be raised.
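For reference, a minimal sketch of what the manual finalizer removal could look like with controller-runtime; the finalizer name below is a placeholder, not the actual constant from the storageclaim API:

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// removeClaimFinalizer is a hypothetical helper: it strips the protection
// finalizer from a StorageClaim so the object can be garbage collected once
// the storageclaim controller is gone.
func removeClaimFinalizer(ctx context.Context, c client.Client, claim client.Object) error {
	// "ocs.openshift.io/storageclaim-protection" is a placeholder finalizer name
	if controllerutil.RemoveFinalizer(claim, "ocs.openshift.io/storageclaim-protection") {
		return c.Update(ctx, claim)
	}
	return nil
}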

openshift-ci bot commented Feb 10, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: leelavg

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@leelavg (Contributor, Author) commented Feb 10, 2025

Part of RHSTOR-6913

Comment on lines 406 to 413
storageClass := &storagev1.StorageClass{}
if err := json.Unmarshal(eResource.Data, &storageClass); err != nil {
	return reconcile.Result{}, fmt.Errorf("failed to unmarshal storage configuration response: %v", err)
}
// TODO: there will be a clash if the storageclass is being reconciled by another entity,
// e.g. in internal mode the storageclass was created by the storagecluster controller,
// and if we mutate anything other than params there'll be an unending reconcile loop
if err := utils.CreateOrReplace(r.ctx, r.Client, storageClass, func() error {
Member commented:

  • There is no name set for the StorageClass, so CreateOrReplace will fail
  • in the CreateOrReplace mutate function, the provisioner and the reclaimPolicy are missing

Contributor Author commented:

No, that data should be coming from the provider and unmarshalled before mutation.

Member commented:

But we are unmarshalling eResource.Data, which contains the parameters, aren't we? Is there a provider-side PR too for these changes?

Contributor Author commented:

There is no PR; the expectation is that the provider should send all of these details, and yes, the main branch could be unstable for some time. The idea is to have most of the changes in before ocs-op uses this updated code, as that ensures any non-convergence PRs on ocs-op continue to work until we mandate client-op, by which time client-op should be ready, or almost ready, with minor adjustments here and there.

the expectation is similar to this https://github.com/red-hat-storage/ocs-operator/pull/2936/files#diff-54526f798a0113d02138ef0d19209ef48b13118976946542ec55b11119688a2bR716-R729

Contributor Author commented:

Updated to be consistent: always respect the separate labels & annotations from the proto and take everything else from the unmarshalled data. Since storage & snapshot classes don't have a spec, we'll update whatever is required (the namespace) and consume the rest of whatever the provider sends.
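To make that split of ownership concrete, here is a rough sketch of the client-owned part of the mutation under the assumptions above (name, provisioner, reclaimPolicy and the remaining parameters come verbatim from the unmarshalled provider data); overrideSecretNamespaces is an illustrative helper name, not code from this PR:

import (
	storagev1 "k8s.io/api/storage/v1"
)

// overrideSecretNamespaces points the CSI secret lookups of a
// provider-supplied StorageClass at the client operator's namespace and
// leaves every other field untouched.
func overrideSecretNamespaces(sc *storagev1.StorageClass, operatorNamespace string) {
	if sc.Parameters == nil {
		sc.Parameters = map[string]string{}
	}
	for _, key := range []string{
		"csi.storage.k8s.io/provisioner-secret-namespace",
		"csi.storage.k8s.io/node-stage-secret-namespace",
		"csi.storage.k8s.io/controller-expand-secret-namespace",
	} {
		sc.Parameters[key] = operatorNamespace
	}
}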

Comment on lines 423 to 428
scParams := storageClass.Parameters
scParams["csi.storage.k8s.io/provisioner-secret-namespace"] = r.OperatorNamespace
scParams["csi.storage.k8s.io/node-stage-secret-namespace"] = r.OperatorNamespace
scParams["csi.storage.k8s.io/controller-expand-secret-namespace"] = r.OperatorNamespace

if err := r.reconcileSharedfileStorageClaim(); err != nil {
	return reconcile.Result{}, err
}
storageClass.Parameters = scParams
Member commented:

We don't need a new variable here

Contributor Author commented:

fixed.

if err := json.Unmarshal(eResource.Data, snapshotClass); err != nil {
	return reconcile.Result{}, fmt.Errorf("failed to unmarshal storage configuration response: %v", err)
}
if err := utils.CreateOrReplace(r.ctx, r.Client, snapshotClass, func() error {
Member commented:

  • the snapshotClass name is empty
  • the driver and deletionPolicy fields are missing

Contributor Author commented:

No, that data should be coming from the provider and unmarshalled before mutation.

Contributor Author commented:

updated.
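For the snapshot class the same pattern would apply: the name, driver and deletionPolicy come from the provider payload, and the client only redirects the snapshotter secret lookup to its own namespace. A hedged sketch (the external-snapshotter import path may differ from what this repo vendors):

import (
	snapapi "github.com/kubernetes-csi/external-snapshotter/client/v6/apis/volumesnapshot/v1"
)

// overrideSnapshotterSecretNamespace is an illustrative helper mirroring the
// StorageClass case for a provider-supplied VolumeSnapshotClass.
func overrideSnapshotterSecretNamespace(vsc *snapapi.VolumeSnapshotClass, operatorNamespace string) {
	if vsc.Parameters == nil {
		vsc.Parameters = map[string]string{}
	}
	vsc.Parameters["csi.storage.k8s.io/snapshotter-secret-namespace"] = operatorNamespace
}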

Comment on lines +396 to +402
clientsList := &v1alpha1.StorageClientList{}
if err := c.list(clientsList, client.Limit(1)); err != nil {
	c.log.Error(err, "unable to verify StorageClients presence prior to removal of CSI resources")
	return err
} else if len(claimsList.Items) != 0 {
	err = fmt.Errorf("failed to clean up resources: storage claims are present on the cluster")
	c.log.Error(err, "Waiting for all storageClaims to be deleted.")
} else if len(clientsList.Items) != 0 {
	err = fmt.Errorf("failed to clean up resources: storage clients are present on the cluster")
	c.log.Error(err, "Waiting for all storageClients to be deleted.")
Contributor commented:

Why is that needed? The reason we were looking into the existence of claims was that claims themselves were protected from deletion until all PVs/PVCs are gone. Are we now giving similar guarantees with the clients? And if so, where is the change?

@leelavg (Contributor, Author) commented Feb 27, 2025:

Yes, all of that code was moved to the storageclient controller; maybe you need to click on Load Diff in GitHub. For example:

func (r *StorageClientReconciler) hasPersistentVolumes() (bool, error) {
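A hedged sketch of the kind of check being referenced, assuming the reconciler lists PersistentVolumes before allowing the StorageClient to go away (the real code may additionally scope the list to volumes provisioned for this client):

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// hasPersistentVolumes reports whether any PersistentVolume still exists;
// listing with Limit(1) is enough to answer the yes/no question.
func hasPersistentVolumes(ctx context.Context, c client.Client) (bool, error) {
	pvList := &corev1.PersistentVolumeList{}
	if err := c.List(ctx, pvList, client.Limit(1)); err != nil {
		return false, fmt.Errorf("failed to list persistent volumes: %v", err)
	}
	return len(pvList.Items) != 0, nil
}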
