
Timeout during /improve command #178

Closed
zmeir opened this issue Aug 7, 2023 · 13 comments

zmeir (Contributor) commented Aug 7, 2023

I'm getting a timeout error when calling /improve. This seems to happen when the PR diff is fairly large. For example: 11 files changed, ~1k additions, ~300 deletions. In my configuration.toml I set pr_code_suggestions.num_code_suggestions=5
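For reference, that key lives under its own table, so the relevant configuration.toml fragment would look something like this (a minimal sketch; all other defaults omitted):

```toml
[pr_code_suggestions]
num_code_suggestions = 5
```

The full traceback: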

```
Traceback (most recent call last):
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/timeout.py", line 44, in wrapper
    result = future.result(timeout=local_timeout_duration)
  File "/home/vcap/deps/0/python/lib/python3.10/concurrent/futures/_base.py", line 460, in result
    raise TimeoutError()
concurrent.futures._base.TimeoutError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/vcap/app/pr_agent/algo/ai_handler.py", line 71, in chat_completion
    response = await acompletion(
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/main.py", line 43, in acompletion
    return await loop.run_in_executor(None, func)
  File "/home/vcap/deps/0/python/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/utils.py", line 118, in wrapper
    raise e
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/utils.py", line 107, in wrapper
    result = original_function(*args, **kwargs)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/timeout.py", line 47, in wrapper
    raise exception_to_raise(f"A timeout error occurred. The function call took longer than {local_timeout_duration} second(s).")
openai.error.Timeout: A timeout error occurred. The function call took longer than 60 second(s).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/vcap/deps/0/python/lib/python3.10/logging/__init__.py", line 1100, in emit
    msg = self.format(record)
  File "/home/vcap/deps/0/python/lib/python3.10/logging/__init__.py", line 943, in format
    return fmt.format(record)
  File "/home/vcap/deps/0/python/lib/python3.10/logging/__init__.py", line 678, in format
    record.message = record.getMessage()
  File "/home/vcap/deps/0/python/lib/python3.10/logging/__init__.py", line 368, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/home/vcap/app/pr_agent/servers/github_app.py", line 154, in <module>
    start()
  File "/home/vcap/app/pr_agent/servers/github_app.py", line 150, in start
    uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", "3000")))
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/main.py", line 578, in run
    server.run()
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/home/vcap/deps/0/python/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/vcap/deps/0/python/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
    self.run_forever()
  File "/home/vcap/deps/0/python/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
    self._run_once()
  File "/home/vcap/deps/0/python/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
    handle._run()
  File "/home/vcap/deps/0/python/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/applications.py", line 290, in __call__
    await super().__call__(scope, receive, send)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette_context/middleware/raw_middleware.py", line 92, in __call__
    await self.app(scope, receive, send_wrapper)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/routing.py", line 241, in app
    raw_response = await run_endpoint_function(
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/routing.py", line 167, in run_endpoint_function
    return await dependant.call(**values)
  File "/home/vcap/app/pr_agent/servers/github_app.py", line 37, in handle_github_webhooks
    response = await handle_request(body, event=request.headers.get("X-GitHub-Event", None))
  File "/home/vcap/app/pr_agent/servers/github_app.py", line 110, in handle_request
    await agent.handle_request(api_url, comment_body)
  File "/home/vcap/app/pr_agent/agent/pr_agent.py", line 72, in handle_request
    await command2class[action](pr_url, args=args).run()
  File "/home/vcap/app/pr_agent/tools/pr_code_suggestions.py", line 50, in run
    await retry_with_fallback_models(self._prepare_prediction)
  File "/home/vcap/app/pr_agent/algo/pr_processing.py", line 218, in retry_with_fallback_models
    return await f(model)
  File "/home/vcap/app/pr_agent/tools/pr_code_suggestions.py", line 68, in _prepare_prediction
    self.prediction = await self._get_prediction(model)
  File "/home/vcap/app/pr_agent/tools/pr_code_suggestions.py", line 79, in _get_prediction
    response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
  File "/home/vcap/app/pr_agent/algo/ai_handler.py", line 82, in chat_completion
    logging.error("Error during OpenAI inference: ", e)
Message: 'Error during OpenAI inference: '
Arguments: (Timeout(message='A timeout error occurred. The function call took longer than 60 second(s).', http_status=None, request_id=None),)
WARNING:root:Failed to generate prediction with gpt-4: Traceback (most recent call last):
  File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/timeout.py", line 44, in wrapper
    result = future.result(timeout=local_timeout_duration)
```
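Side note: the TypeError in the middle of that log is a separate small bug. ai_handler.py passes the exception to logging.error as a bare positional argument, and the logging module treats positional args as %-format values for the message, so with no placeholder the format fails. A minimal sketch of the conventional fix (the do_inference stand-in is hypothetical):

```python
import logging

def do_inference():
    # Hypothetical stand-in for the litellm call that timed out.
    raise TimeoutError("A timeout error occurred.")

try:
    do_inference()
except Exception as e:
    # A %s placeholder (or exc_info=True) renders the exception into the
    # message; passing `e` bare with no placeholder is what triggered the
    # "not all arguments converted" TypeError inside the logging module.
    logging.error("Error during OpenAI inference: %s", e)
```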
okotek (Contributor) commented Aug 7, 2023

See #181.
@mrT23 can you review?

okotek added the fixed label on Aug 8, 2023
krrishdholakia (Contributor) commented

Hey @okotek, were y'all able to get this fixed? It's the force_timeout param.

Happy to make a PR if required.
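For reference, a minimal sketch of passing that parameter through litellm's acompletion as I understand the signature at the time (the model name and prompt are placeholders, and later litellm releases renamed the parameter):

```python
import asyncio
from litellm import acompletion

async def main():
    response = await acompletion(
        model="gpt-4",  # placeholder; PR-Agent picks the model from its config
        messages=[{"role": "user", "content": "Suggest improvements for this diff..."}],
        force_timeout=180,  # seconds before litellm raises openai.error.Timeout
    )
    print(response["choices"][0]["message"]["content"])

asyncio.run(main())
```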

zmeir (Contributor, Author) commented Aug 8, 2023

Was addressed in #181 as far as I can tell, although even with 180 seconds I'm still seeing timeouts with large PRs. I'll try creating a reproducible example.
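For anyone following along, raising the timeout should look roughly like this in configuration.toml, assuming #181 exposed it under [config] as ai_timeout (the key name is inferred, so verify against the shipped defaults):

```toml
[config]
ai_timeout = 180  # seconds allowed per model call before aborting
```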

krrishdholakia (Contributor) commented

Hmm... maybe we can remove timeouts from litellm / make them an optional parameter; this should prevent these scenarios from occurring.

This was primarily added to deal with Azure / Replicate related issues where the model calls would hang for an unusual amount of time.
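A minimal sketch of what "optional" could mean here, i.e. only enforce a deadline when the caller supplies one (a hypothetical helper, not litellm's actual internals):

```python
import asyncio
from typing import Awaitable, Callable, Optional, TypeVar

T = TypeVar("T")

async def with_optional_timeout(make_call: Callable[[], Awaitable[T]],
                                timeout: Optional[float] = None) -> T:
    if timeout is None:
        # No deadline: let slow providers (e.g. Azure) take as long as they need.
        return await make_call()
    # Deadline set: cancel the call and raise asyncio.TimeoutError once exceeded.
    return await asyncio.wait_for(make_call(), timeout=timeout)
```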

krrishdholakia (Contributor) commented

Ok, tracking this on our end - BerriAI/litellm#73

Will aim to have an updated version tested + pushed out by EOD. I'll make a PR for the change, which can then run through the normal testing y'all have for PR-Agent as well.

zmeir (Contributor, Author) commented Aug 8, 2023

> This was primarily added to deal with Azure / Replicate related issues where the model calls would hang for an unusual amount of time.

It's possible that this is the issue I'm facing since I'm using Azure. I've only ever noticed it happen with /improve though, never with /review, /describe, etc.

krrishdholakia (Contributor) commented

I'm guessing pre-litellm it didn't time out, so did it hang, or did it always complete successfully?

zmeir (Contributor, Author) commented Aug 8, 2023

From what I can tell it always completed successfully, but then again, this is all pretty new stuff, so it's possible I just didn't run into a large enough PR when using /improve in the pre-litellm agent.

When I get the timeout again I'll try to run the pre-litellm agent on the same PR and see if it works.

krrishdholakia (Contributor) commented

Sounds great! I'll make set_timeouts an optional param, which should remove any litellm-related blockers between you and your Azure calls.

krrishdholakia (Contributor) commented

Update: default timeouts are now set to 600 seconds (10 minutes).

zmeir (Contributor, Author) commented Aug 13, 2023

Update: tested /improve again today on a fairly large PR with the latest main version and it didn't time out. I'll close this issue for now, and if it resurfaces I'll reopen it / open a new one with more details.

zmeir closed this as completed on Aug 13, 2023
mrT23 (Collaborator) commented Sep 13, 2023

/similar_issue

github-actions (Contributor) commented
