Lingering 3rd party project dependency on prow.k8s.io #7709
@dims confirmed that cadvisor's prow CI is actually still not functional since the prow.k8s.io control plane migration from the google.com to the kubernetes.io (community) GCP project, so that leaves only github.com/containerd/containerd and Kubernetes' own projects/subprojects.
Do we want to include CRI-O, given that some tests use the Kubernetes infrastructure? https://testgrid.k8s.io/sig-node-cri-o
My understanding is that those are jobs that are not directly connected to the CRI-O repo: they run kubelet against a stable CRI-O release so that we don't test kubelet with only a single CRI implementation (other jobs mostly use stable containerd). That's different from jobs aimed at testing the development of a third-party project (or, worse, requiring additional over-permissioning of our CI accounts for presubmits, webhooks, etc.), unless I've missed something. If there are jobs testing CRI-O development, I think that would need to be discussed here. Similarly, to create a cluster to test Kubernetes we ultimately use other projects, but we are not operating CI for those projects. So we would not remove any of those jobs; compare, say, a job testing cilium @ HEAD: the difference is in which repo's changes are under test.
The containerd prow jobs are specifically testing compatibility with Kubernetes using node e2e tests. We have separate CI for core containerd and for CRI (using critest) that runs in GitHub Actions in the containerd org (funded by the CNCF). I think if we want to migrate containerd completely off prow jobs, we'd need guidance on how to best run the node e2e tests elsewhere.
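(For reference, the node e2e suite referred to here lives in kubernetes/kubernetes and is normally driven through a make target. A minimal sketch of a local invocation is below; the checkout path and the FOCUS/SKIP regexes are illustrative, not taken from the jobs in question.)

```sh
# Minimal sketch: run the Kubernetes node e2e suite against a node that
# already has containerd running. Run from a kubernetes/kubernetes checkout;
# the focus/skip regexes below are only examples.
cd "$GOPATH/src/k8s.io/kubernetes"   # assumed checkout location

make test-e2e-node \
  FOCUS="\[NodeConformance\]" \
  SKIP="\[Flaky\]|\[Serial\]"
```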
Sure, but every other landscape project, especially CNI and CSI implementations, could argue the same, and yet we're not hosting CI for those repos (unless they are a subproject).
SIG Node should know best how to run the node e2e tests, but I think you could run them against a vagrant VM in GitHub Actions, as suggested by @upodroid in the Slack thread.
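(A rough sketch of what that vagrant-in-Actions approach could look like, as a GitHub Actions step body. Everything here is hypothetical: the VM name, the presence of a Vagrantfile with containerd preinstalled, the checkout path inside the VM, and a runner with virtualization support are all assumptions, not details from this thread.)

```sh
# Hypothetical GitHub Actions step body: bring up a Vagrant VM and run the
# node e2e suite inside it. Assumes a Vagrantfile in the repo and a runner
# that supports virtualization; "node-e2e" is a made-up VM name.
vagrant up node-e2e
vagrant ssh node-e2e -c '
  cd ~/go/src/k8s.io/kubernetes &&   # assumed checkout location inside the VM
  make test-e2e-node FOCUS="\[NodeConformance\]" SKIP="\[Flaky\]"
'
vagrant destroy -f node-e2e          # tear the VM down when the job is done
```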
See also the umbrella issue at #7708 and previous discussion at kubernetes/test-infra#12863
Right now this represents an ambiguous policy gap: we only provide CI for Kubernetes subprojects ... except cadvisor and containerd. AFAICT all other third-party projects are on their own, non-Kubernetes-provided CI now.
While the cost is likely not high, it is difficult to reason about a consistent policy right now. With these two small exceptions resolved, we could pretty reasonably state that we provide CI for the Kubernetes project and its official subprojects, within reason (reserving some right to deal with budget-busting usage and/or abuse, e.g. crypto mining).
Otherwise we need a coherent rationale that does not open us up to hosting the entire landscape's CI. We already have a huge problem on our hands handling the hundreds of Kubernetes subprojects, scale testing, content distribution, etc.
/sig k8s-infra
/sig testing
cc @kubernetes/sig-k8s-infra-leads @kubernetes/sig-testing-leads