
Running Github Action: openai.error.RateLimitError #104

Closed
ezzcodeezzlife opened this issue Jul 20, 2023 · 16 comments
Labels
enhancement New feature or request good first issue Good for newcomers

Comments

@ezzcodeezzlife

Hey, I get the following error when running PR-Agent as a GitHub Action. I followed the installation steps.

During the "PR Agent action step" I get the following error. Important to note that there is only one open PR at the time. I also checked running API calls with the same OpenAI key, and they work with no problems.

Sorry for the big stack trace, but maybe it helps:

Run Codium-ai/pr-agent@main
  env:
    OPENAI_KEY: ***
    GITHUB_TOKEN: ***
/usr/bin/docker run --name c9a4a5c25221dd1fdc40a4b361asde6834f697_1d7b19 --label c9a4a5 --workdir /github/workspace --rm -e "OPENAI_KEY" -e "GITHUB_TOKEN" -e "HOME" -e "GITHUB_JOB" -e "GITHUB_REF" -e "GITHUB_SHA" -e "GITHUB_REPOSITORY" -e "GITHUB_REPOSITORY_OWNER" -e "GITHUB_REPOSITORY_OWNER_ID" -e "GITHUB_RUN_ID" -e "GITHUB_RUN_NUMBER" -e "GITHUB_RETENTION_DAYS" -e "GITHUB_RUN_ATTEMPT" -e "GITHUB_REPOSITORY_ID" -e "GITHUB_ACTOR_ID" -e "GITHUB_ACTOR" -e "GITHUB_TRIGGERING_ACTOR" -e "GITHUB_WORKFLOW" -e "GITHUB_HEAD_REF" -e "GITHUB_BASE_REF" -e "GITHUB_EVENT_NAME" -e "GITHUB_SERVER_URL" -e "GITHUB_API_URL" -e "GITHUB_GRAPHQL_URL" -e "GITHUB_REF_NAME" -e "GITHUB_REF_PROTECTED" -e "GITHUB_REF_TYPE" -e "GITHUB_WORKFLOW_REF" -e "GITHUB_WORKFLOW_SHA" -e "GITHUB_WORKSPACE" -e "GITHUB_ACTION" -e "GITHUB_EVENT_PATH" -e "GITHUB_ACTION_REPOSITORY" -e "GITHUB_ACTION_REF" -e "GITHUB_PATH" -e "GITHUB_ENV" -e "GITHUB_STEP_SUMMARY" -e "GITHUB_STATE" -e "GITHUB_OUTPUT" -e "RUNNER_OS" -e "RUNNER_ARCH" -e "RUNNER_NAME" -e "RUNNER_ENVIRONMENT" -e "RUNNER_TOOL_CACHE" -e "RUNNER_TEMP" -e "RUNNER_WORKSPACE" -e "ACTIONS_RUNTIME_URL" -e "ACTIONS_RUNTIME_TOKEN" -e "ACTIONS_CACHE_URL" -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/DLL/DLL":"/github/workspace" c9a4a5:c25221dd1asd61b3de6834f697
Traceback (most recent call last):
  File "/app/pr_agent/servers/github_action_runner.py", line 57, in <module>
    asyncio.run(run_action())
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/app/pr_agent/servers/github_action_runner.py", line 43, in run_action
    await PRReviewer(pr_url).review()
  File "/app/pr_agent/tools/pr_reviewer.py", line 70, in review
    self.prediction = await self._get_prediction()
  File "/app/pr_agent/tools/pr_reviewer.py", line 92, in _get_prediction
    response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
  File "/app/pr_agent/algo/ai_handler.py", line 60, in chat_completion
    response = await openai.ChatCompletion.acreate(
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 382, in arequest
    resp, got_stream = await self._interpret_async_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 726, in _interpret_async_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.

Please let me know what is the issue here, thanks. Love the project!

@ezzcodeezzlife
Author

Just tested with another smaller repo, still no luck 😢

@okotek
Contributor

okotek commented Jul 20, 2023

Hi

Can you check your rate limits at https://platform.openai.com/account/rate-limits and look at the values for gpt-3.5-turbo and gpt-4?

We'll add handling for OpenAI rate limits soon, and will probably fall back to gpt-3.5-turbo in case of a small rate limit or unavailability of gpt-4.
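The retry handling described here can be sketched roughly as exponential backoff around the API call. This is a minimal, self-contained sketch, not PR-Agent's actual code: the stand-in `RateLimitError` class and `call_with_backoff` helper are illustrative, while the real handler would catch `openai.error.RateLimitError` from the legacy SDK.

```python
import time

class RateLimitError(Exception):
    """Stands in for openai.error.RateLimitError in this sketch."""

def call_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Retry fn() with exponential backoff when a rate limit is hit."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise
            # Wait 1s, 2s, 4s, ... before the next attempt.
            time.sleep(base_delay * (2 ** attempt))

# Example: a call that is rate-limited twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("try again")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # → ok
```

Backoff alone only helps with transient per-minute limits; the "exceeded your current quota" message in the OP can also mean the account's billing quota is exhausted, which no amount of retrying fixes.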

@okotek okotek added enhancement New feature or request good first issue Good for newcomers labels Jul 20, 2023
@okotek
Contributor

okotek commented Jul 20, 2023

#105

@ezzcodeezzlife
Author

ezzcodeezzlife commented Jul 20, 2023

(screenshot)
@okotek

@okotek
Contributor

okotek commented Jul 20, 2023

Could be that 40K is too small when the diff is moderate. I merged the PR; can you try again and see if the retry policy helps?

@ezzcodeezzlife
Author

Unfortunately still the same issue. What TPM do you have, and how did you get it? Any more ideas? @okotek

@KalleV

KalleV commented Jul 20, 2023

Did you also check your token usage at https://platform.openai.com/account/usage? I'm curious if an unexpectedly high number of tokens or a burst of requests was used.

@ezzcodeezzlife
Author

No real spike visible in the usage dashboard, but thanks for the info! @KalleV

@KalleV

KalleV commented Jul 21, 2023

Nice, that's good to know. What if you click through the language model usage metrics? On my page, I saw a few show up with quite a few tokens like this one:

gpt-4-0613, 1 request
7,052 prompt + 218 completion = 7,270 tokens

@okotek
Contributor

okotek commented Jul 23, 2023

#117 fallback models implementation
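Not the actual #117 implementation, but the fallback idea can be sketched as: try each configured model in order and move on to the next one when a call fails. The function signature and model names below are illustrative.

```python
def retry_with_fallback_models(run, models):
    """Try each model in order; fall back to the next on failure."""
    last_exc = None
    for model in models:
        try:
            return run(model)
        except Exception as e:  # the real code would catch the provider's rate-limit error
            last_exc = e
    # Every model failed; surface the last error.
    raise last_exc

# Example: "gpt-4" is unavailable, so the call falls back to "gpt-3.5-turbo".
def run(model):
    if model == "gpt-4":
        raise RuntimeError("rate limited")
    return f"answered with {model}"

print(retry_with_fallback_models(run, ["gpt-4", "gpt-3.5-turbo"]))
# → answered with gpt-3.5-turbo
```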

@shashank42

shashank42 commented Jul 24, 2023

Got the same error. Seems the error is from the GitHub API and not the OpenAI API. I could be wrong.


Traceback (most recent call last):
  File "/app/pr_agent/servers/github_action_runner.py", line 57, in <module>
    asyncio.run(run_action())
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/app/pr_agent/servers/github_action_runner.py", line 53, in run_action
    await PRAgent().handle_request(pr_url, body)
  File "/app/pr_agent/agent/pr_agent.py", line 25, in handle_request
    await PRDescription(pr_url).describe()
  File "/app/pr_agent/tools/pr_description.py", line 40, in describe
    await retry_with_fallback_models(self._prepare_prediction)
  File "/app/pr_agent/algo/pr_processing.py", line 208, in retry_with_fallback_models
    return await f(model)
  File "/app/pr_agent/tools/pr_description.py", line 55, in _prepare_prediction
    self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
  File "/app/pr_agent/algo/pr_processing.py", line 43, in get_pr_diff
    diff_files = list(git_provider.get_diff_files())
  File "/app/pr_agent/git_providers/github_provider.py", line 84, in get_diff_files
    for file in files:
  File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 69, in __iter__
    newElements = self._grow()
  File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 80, in _grow
    newElements = self._fetchNextPage()
  File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 213, in _fetchNextPage
    headers, data = self.__requester.requestJsonAndCheck(
  File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 442, in requestJsonAndCheck
    return self.__check(
  File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 487, in __check
    raise self.__createException(status, responseHeaders, data)
github.GithubException.RateLimitExceededException: 403 {"message": "API rate limit exceeded for installation ID 28441098.", "documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}

@ezzcodeezzlife
Author

(screenshot)

Some commands started working for me, but commenting commands still lead to a 403.

@ezzcodeezzlife
Author

@okotek @KalleV any more ideas around this? 🤔 thank you

@ilyadav

ilyadav commented Jul 25, 2023

Hello,
it looks like this is no longer OpenAI-related, but a GitHub token limitation.
When using GITHUB_TOKEN, the rate limit is 1,000 requests per hour per repository.
If you exceed the rate limit, the response will have a 403 status and the x-ratelimit-remaining header will be 0.

Please see https://docs.github.com/en/rest/overview/resources-in-the-rest-api?apiVersion=2022-11-28#rate-limits-for-requests-from-github-actions

As the quickest fix, I would suggest catching the exception and logging a message rather than failing,

in pr_processing.py:

import logging

from github import RateLimitExceededException

    try:
        diff_files = list(git_provider.get_diff_files())
    except RateLimitExceededException:
        logging.error('Rate limit exceeded for GitHub API.')

If you want, I can take this small patch, and I would love to work on a more robust solution to overcome this problem.

Best regards
Ilya
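For a more robust solution along those lines, one option (a sketch, not PR-Agent's actual code) is to read GitHub's rate-limit response headers and pause until the window resets. The header names follow the GitHub REST API docs linked above; the `backoff_seconds` helper name is made up for illustration.

```python
def backoff_seconds(headers, now):
    """Decide how long to pause based on GitHub's rate-limit headers.

    `headers` is a dict of response headers; `now` is the current Unix time.
    Returns 0 when requests remain, otherwise seconds until the window resets.
    """
    remaining = int(headers.get("x-ratelimit-remaining", 1))
    if remaining > 0:
        return 0
    # x-ratelimit-reset is the Unix timestamp when the quota refreshes.
    reset = int(headers.get("x-ratelimit-reset", now))
    return max(0, reset - now)

print(backoff_seconds({"x-ratelimit-remaining": "0",
                       "x-ratelimit-reset": "1690000060"}, now=1690000000))  # → 60
print(backoff_seconds({"x-ratelimit-remaining": "42"}, now=1690000000))      # → 0
```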

@KalleV

KalleV commented Jul 26, 2023

I think I managed to quickly replicate the error from the OP right in the https://platform.openai.com/playground while testing PR responses for a PR equivalent to ~4,800 tokens (GPT-4):

Rate limit reached for 10KTPM-200RPM in organization org-<id> on tokens per min. Limit: 10,000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.
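For intuition: under a 10K TPM budget, a single ~4,800-token request only fits if the minute's running total leaves enough room. A toy throttle to illustrate the arithmetic (the function is made up for this sketch, and it assumes a fixed per-minute window, which is coarser than OpenAI's actual sliding window):

```python
def tpm_delay(tokens_this_request, tokens_used_this_minute, tpm_limit):
    """Seconds to wait before sending, under a simple tokens-per-minute budget."""
    if tokens_used_this_minute + tokens_this_request <= tpm_limit:
        return 0
    return 60  # wait for the next window

# A ~4,800-token PR review on top of 6,000 tokens already used trips a 10K TPM limit.
print(tpm_delay(4800, 6000, 10_000))  # → 60
print(tpm_delay(4800, 4000, 10_000))  # → 0
```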

@ezzcodeezzlife
Author

It is fixed for me. Thank you for all your contributions ❤️

5 participants