
Fix: Correct persistence storage configuration in seata-server-cluster.yaml #24

Open
wants to merge 1 commit into master

Conversation

flypiggyyoyoyo

What is the purpose of the change?

The current storage configuration in seata-server-cluster.yaml is incorrectly placed under spec.store, which causes the deployment to fail. The error message shows: "error when creating deploy/seata-server-cluster.yaml: SeataServer in version "v1alpha1" cannot be handled as a SeataServer: strict decoding error: unknown field 'spec.store'". This change moves the storage configuration to its correct location under spec.persistence.spec, per the CRD definition.
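As a quick way to reproduce the failure without mutating the cluster, a server-side dry run should surface the same strict decoding error (standard kubectl flag; the manifest path is the one from the error message above):

$ kubectl apply --dry-run=server -f deploy/seata-server-cluster.yaml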

Changes

# Before
spec:
  persistence:
    volumeReclaimPolicy: Delete
  store:
    resources:
      requests:
        storage: 5Gi

# After
spec:
  persistence:
    volumeReclaimPolicy: Delete
    spec:
      resources:
        requests:
          storage: 5Gi
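For context, this is how the corrected persistence block sits in a complete manifest. This is only a sketch: the serviceName, replicas, and image values are illustrative assumptions and are not taken from this PR; the apiVersion and kind come from the error message above.

apiVersion: operator.seata.apache.org/v1alpha1
kind: SeataServer
metadata:
  name: seata-server
spec:
  serviceName: seata-server-cluster  # assumed value, for illustration only
  replicas: 1                        # assumed value
  image: apache/seata-server:latest  # assumed image, check the repo examples
  persistence:
    volumeReclaimPolicy: Delete
    spec:
      resources:
        requests:
          storage: 5Gi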

Testing Results

  1. SeataServer resource created successfully:
$ kubectl get seataserver seata-server
NAME           AGE
seata-server   42m
  2. PVC created and bound with the correct configuration:
$ kubectl get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
seata-store-seata-server-0   Bound    pvc-da97f099-2d75-4936-8b56-e0ffb7384dd1   5Gi        RWO            standard       42m

$ kubectl describe pvc seata-store-seata-server-0
Name:          seata-store-seata-server-0
Namespace:     default
StorageClass:  standard
Status:        Bound
Capacity:      5Gi
Access Modes:  RWO
VolumeMode:    Filesystem
  3. StatefulSet status:
$ kubectl get statefulsets -n default
NAME           READY   AGE
seata-server   0/1     3m38s
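Note that the StatefulSet reports 0/1 ready shortly after creation; the rollout can be followed with a standard kubectl command until the pod comes up:

$ kubectl rollout status statefulset/seata-server -n default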

Checklist

  • The change is well described and easy to understand
  • The change addresses a specific issue with the YAML configuration
  • Manual testing has been performed and results documented
  • Configuration aligns with CRD definition
  • Deployment verification completed

[中文]

Purpose of the change

The storage configuration in seata-server-cluster.yaml was incorrectly placed under the spec.store field, which caused the deployment to fail. Per the CRD definition, it has been moved to its correct location under persistence.spec.

Testing

Deployment testing has been completed, verifying that SeataServer resource creation and PVC creation and binding all work as expected. See the English description above for detailed results.


spec:
  resources:
    requests:
      storage: 5Gi
Member


Thank you very much for your feedback. I think this needs further modification: the semantics of spec.persistence.spec are unclear, and I would like to give this attribute a more explicit name. According to the API specification, the best way to fix this is to upgrade the CRD version.
https://github.com/apache/incubator-seata-k8s/blob/master/config/crd/bases/operator.seata.apache.org_seataservers.yaml#L153
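To make the rename concrete, here is one hypothetical shape it could take in a bumped CRD version (both the v1alpha2 version and the volumeClaimTemplate field name are invented purely for illustration):

apiVersion: operator.seata.apache.org/v1alpha2  # hypothetical new version
kind: SeataServer
spec:
  persistence:
    volumeReclaimPolicy: Delete
    volumeClaimTemplate:  # hypothetical clearer name for persistence.spec
      resources:
        requests:
          storage: 5Gi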

Author


Received. I'll upgrade the CRD version and give the spec.persistence.spec attribute a clearer name, as you suggested.
