object not found when fetching git dependency #13555
Comments
From my understanding, under the same CI environment, a retry seems to work? That's odd. Is there any custom Git config in the CI environment?
I've noticed this happening in GitHub Actions, but notably on a variety of different runners, workflows, etc. You'll see in the original error, by the way, that the commit it says it can't find does indeed exist: aptos-labs/aptos-indexer-processors@d44b2d2. But it's an orphan; I'm not sure if that is relevant.
Indeed orphans can be a problem, but usually only when the […]

One thing to investigate is that https://github.com/aptos-labs/aptos-indexer-processors.git appears to be specified with three different commit revs. I don't recall how cargo handles fetching those separate revs. If it is randomly fetching one of them, and the others aren't "reachable", then that could be a problem. It could also depend on how GitHub's servers decide what to send, since they don't always send a minimal set, and it might change depending on which server is accessed or the phase of the moon.
Where can you see that? I can try to fix that to reduce the odds of this problem happening.
Here are the three commits:
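(An aside for anyone who wants to check this in their own project: the pinned source for every git dependency, including its exact rev, is recorded in the consuming workspace's Cargo.lock, so grepping for the repository URL lists each distinct pin. A minimal sketch using the URL from this issue; this is only an illustration, not how the commits above were found.)

```sh
# List the distinct pinned sources (and thus revs) for this git URL,
# run from the root of the workspace that depends on it.
grep 'git+https://github.com/aptos-labs/aptos-indexer-processors' Cargo.lock | sort -u
```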
I haven't had time to create a minimal reproducer with a similar layout, though. @ehuss Would it be the case that […]
Oh, I see what you mean. Unfortunately this is intentional; we have something of a... complicated versioning scheme at the moment.
Not directly, I don't think. The actions linked above seem to have some caching, but not of the cargo directory that I can see (and it would likely be too large for the cache anyway). Cargo won't […] GitHub runs […]
I believe this orphan has been this way for quite a while, so it seems strange that at this point it'd be in this partially available state. I guess some bad state sharding on their side or something? So I suppose if we depend on a non-orphan commit, that'll improve our odds. On the cargo side, does cargo retry in this situation?
OK, I think I see one possibility of what is happening. When fetching a repo, cargo doesn't know if a `rev` is a branch, a tag, or a commit hash. […]

However, if you have exceeded the API rate limit, that function gets a 403 HTTP response and returns […], so the fetch falls back to the default refspec, which won't pick up PR-only commits.

I have opened #13563 to include the GITHUB_TOKEN to avoid the API rate limit on CI.

Another option is that cargo could assume something that is 40 hexadecimal characters is a commit hash, not a tag, and use the single-commit refspec. I'm not entirely certain about that, but it seems relatively safe? @weihanglo WDYT?

Generally, though, I would strongly recommend against using commits from PRs.
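(To make the refspec point concrete, here is a rough git-CLI equivalent of the two fetch strategies being discussed. This is only a sketch, since cargo drives the fetch through libgit2; the default refspecs are the ones quoted in the PR description below.)

```sh
# Default strategy: fetch branch heads only. A commit reachable only from a
# PR (or an orphan commit) may not be included in what the server sends.
git fetch origin '+refs/heads/*:refs/remotes/origin/*' '+HEAD:refs/remotes/origin/HEAD'

# Single-commit strategy: ask for the exact object. FULL_COMMIT_SHA stands in
# for the full 40-character hash; this relies on the server allowing fetches
# of unadvertised commits, which GitHub generally does.
git fetch origin "$FULL_COMMIT_SHA"
```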
Nice finding!
In #10807, we expect Git will eventually move to support SHA-256 at some point, for future compatibility. Not sure if we want to go back to assuming it is always 40 characters.
Use GITHUB_TOKEN in github_fast_path if available.

This changes the GitHub fast-path to include an Authorization header if the GITHUB_TOKEN environment variable is set. This is intended to help with API rate limits, which are not too hard to hit in CI. If it hits the rate limit, then the fetch falls back to fetching all branches, which is significantly slower for some repositories.

This is also a partial solution for #13555, where the user has a `rev="<commit-hash>"` where that commit hash points to a PR. The default refspec of `["+refs/heads/*:refs/remotes/origin/*", "+HEAD:refs/remotes/origin/HEAD"]` will not fetch commits from PRs (unless they are reachable from some branch).

There is some risk that if the user has a GITHUB_TOKEN that is invalid or expired, then this will cause `github_fast_path` to fail where it previously wouldn't. I think that is probably unlikely to happen, though.

This also adds redirect support. Apparently GitHub now issues a redirect for `repos/{org}/{repo}/commits/{commit}` to `/repositories/{id}/commits/{commit}`. The test `https::github_works` exercises this code path, and will use the token on CI.
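(For illustration, the fast-path check boils down to an authenticated request against the commits endpoint mentioned above. This is a hedged sketch rather than cargo's exact request; the repo and short hash are the ones from this issue.)

```sh
# Query the GitHub API for the commit. Sending GITHUB_TOKEN avoids the
# anonymous rate limit that causes the 403 fallback described earlier;
# --location follows the redirect to /repositories/{id}/commits/{commit}.
curl -sS --location \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  "https://api.github.com/repos/aptos-labs/aptos-indexer-processors/commits/d44b2d2"
```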
I don't know if this is the same case/bug or not, but I wanted to comment here and see before opening a new issue. We have a fairly repeatable issue with the same "object not found - no match for id" error that presents under CI when attempting to resolve the `terminfo` dependency. It's only recently popped up, but it now happens more often than not, always for the same URL/SHA pair.

You can see the most recent CI failure here: https://github.com/fish-shell/fish-shell/actions/runs/8988940232/job/24690902288

The repo and revision in question is this one: https://github.com/meh/rust-terminfo/commits/7259f5aa5786a9d396162da0d993e268f6163fb2/

The error happens after several other git-resident dependencies have been fetched and used OK as part of the build process. Using `CARGO_NET_GIT_FETCH_WITH_CLI=true` works around it for us (see the commit below).

Do you have any pointers on what we can try, or any additional information we can provide?
CARGO_NET_GIT_FETCH_WITH_CLI uses the `git` executable instead of the rust git2 crate/lib, which speeds things up and is known to resolve some issues fetching the registry or individual crates.

This is to work around a specific issue with git-resident Cargo.toml dependencies (e.g. terminfo) that keep randomly failing to download under macOS CI.
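(For anyone who wants to apply the same workaround, it can be enabled per invocation via the environment variable, or persistently via cargo's `net.git-fetch-with-cli` config key. A minimal sketch; the build command is just an example.)

```sh
# Tell cargo to shell out to the `git` CLI for fetches instead of using
# libgit2 via the git2 crate.
export CARGO_NET_GIT_FETCH_WITH_CLI=true
cargo build --locked
```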
@mqudsi
It looks like this is the change that introduced this bug, because it changed the logic of whether to add […]
Problem
When building a crate that depends on other crates via git, sometimes you get an error like this:
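(Roughly, the relevant part of the message, as also quoted later in the thread, looks like the following, with the commit hash shown as a placeholder:)

```text
object not found - no match for id (<commit-sha>)
```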
You can see an instance of this failure here: https://github.com/Homebrew/homebrew-core/pull/165260/files, which comes from here, in case the error fails to show up on the first link: https://github.com/Homebrew/homebrew-core/actions/runs/8180427193/job/22368538918?pr=165260.
This only happens sometimes. It seems to happen more often in CI environments; I'm not sure I've seen it happen locally.
Steps
Repro is challenging because it occurs only sometimes. If you look at any of the past brew version bumps for the `aptos` formula, you'll see that one of the CI steps fails at least once every time due to this issue: https://github.com/Homebrew/homebrew-core/pulls?q=is%3Apr+aptos+is%3Aclosed. The dep in question comes from here: https://github.com/aptos-labs/aptos-indexer-processors. That repo itself has git submodules, so that could be a factor.
Possible Solution(s)
I'm not sure how to fix this; for now we've mostly just been adding retries to get around it.
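(As a rough illustration of that kind of retry; the command, attempt count, and delay here are arbitrary and not taken from any of the linked CI configs.)

```sh
# Retry the dependency fetch a few times before giving up; the transient
# "object not found" failures often succeed on a later attempt.
for attempt in 1 2 3; do
  if cargo fetch --locked; then
    break
  fi
  echo "cargo fetch failed (attempt $attempt), retrying..." >&2
  sleep 10
done
```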
Notes
No response
Version