
Unrecognized request argument supplied: azure on GitHub Actions #356

Closed
higebu opened this issue Oct 6, 2023 · 14 comments

Comments

@higebu

higebu commented Oct 6, 2023

On GitHub Actions, the error openai.error.InvalidRequestError: Unrecognized request argument supplied: azure is raised.

GitHub Actions yaml:

name: pr_agent
on:
  pull_request:
  issue_comment:
jobs:
  pr_agent_job:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
      contents: write
    name: Run pr agent on every pull request, respond to user comments
    steps:
      - name: PR Agent action step
        id: pragent
        uses: Codium-ai/pr-agent@main
        env:
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Full error log:

--- Logging error ---
Traceback (most recent call last):
  File "/app/pr_agent/algo/ai_handler.py", line 95, in chat_completion
    response = await acompletion(
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 97, in acompletion
    response =  await loop.run_in_executor(None, func_with_context)
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 718, in wrapper
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 677, in wrapper
    result = original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/timeout.py", line 53, in wrapper
    result = future.result(timeout=local_timeout_duration)
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/site-packages/litellm/timeout.py", line 42, in async_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 1196, in completion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2842, in exception_type
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2261, in exception_type
    raise original_exception
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 389, in completion
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 371, in completion
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Unrecognized request argument supplied: azure

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/logging/__init__.py", line 1100, in emit
    msg = self.format(record)
  File "/usr/local/lib/python3.10/logging/__init__.py", line 943, in format
    return fmt.format(record)
  File "/usr/local/lib/python3.10/logging/__init__.py", line 678, in format
    record.message = record.getMessage()
  File "/usr/local/lib/python3.10/logging/__init__.py", line 368, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/app/pr_agent/servers/github_action_runner.py", line 92, in <module>
    asyncio.run(run_action())
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
    self.run_forever()
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
    self._run_once()
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
    handle._run()
  File "/usr/local/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/app/pr_agent/servers/github_action_runner.py", line 86, in run_action
    await PRAgent().handle_request(url, body, notify=lambda: provider.add_eyes_reaction(comment_id))
  File "/app/pr_agent/agent/pr_agent.py", line 83, in handle_request
    await command2class[action](pr_url, args=args).run()
  File "/app/pr_agent/tools/pr_reviewer.py", line 109, in run
    await retry_with_fallback_models(self._prepare_prediction)
  File "/app/pr_agent/algo/pr_processing.py", line 216, in retry_with_fallback_models
    return await f(model)
  File "/app/pr_agent/tools/pr_reviewer.py", line 138, in _prepare_prediction
    self.prediction = await self._get_prediction(model)
  File "/app/pr_agent/tools/pr_reviewer.py", line 161, in _get_prediction
    response, finish_reason = await self.ai_handler.chat_completion(
  File "/app/pr_agent/algo/ai_handler.py", line 113, in chat_completion
    logging.error("Unknown error during OpenAI inference: ", e)
Message: 'Unknown error during OpenAI inference: '
Arguments: (InvalidRequestError(message='Unrecognized request argument supplied: azure', param=None, code=None, http_status=400, request_id=None),)
WARNING:root:Failed to generate prediction with gpt-4: Traceback (most recent call last):
  File "/app/pr_agent/algo/ai_handler.py", line 95, in chat_completion
    response = await acompletion(
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 97, in acompletion
    response =  await loop.run_in_executor(None, func_with_context)
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 718, in wrapper
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 677, in wrapper
    result = original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/timeout.py", line 53, in wrapper
    result = future.result(timeout=local_timeout_duration)
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/site-packages/litellm/timeout.py", line 42, in async_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 1196, in completion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2842, in exception_type
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2261, in exception_type
    raise original_exception
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 389, in completion
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 371, in completion
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Unrecognized request argument supplied: azure

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/pr_agent/algo/pr_processing.py", line 216, in retry_with_fallback_models
    return await f(model)
  File "/app/pr_agent/tools/pr_reviewer.py", line 138, in _prepare_prediction
    self.prediction = await self._get_prediction(model)
  File "/app/pr_agent/tools/pr_reviewer.py", line 161, in _get_prediction
    response, finish_reason = await self.ai_handler.chat_completion(
  File "/app/pr_agent/algo/ai_handler.py", line 114, in chat_completion
    raise TryAgain from e
openai.error.TryAgain: <empty message>

--- Logging error ---
Traceback (most recent call last):
  File "/app/pr_agent/algo/ai_handler.py", line 95, in chat_completion
    response = await acompletion(
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 97, in acompletion
    response =  await loop.run_in_executor(None, func_with_context)
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 718, in wrapper
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 677, in wrapper
    result = original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/timeout.py", line 53, in wrapper
    result = future.result(timeout=local_timeout_duration)
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/site-packages/litellm/timeout.py", line 42, in async_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 1196, in completion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2842, in exception_type
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2261, in exception_type
    raise original_exception
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 389, in completion
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 371, in completion
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Unrecognized request argument supplied: azure

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/logging/__init__.py", line 1100, in emit
    msg = self.format(record)
  File "/usr/local/lib/python3.10/logging/__init__.py", line 943, in format
    return fmt.format(record)
  File "/usr/local/lib/python3.10/logging/__init__.py", line 678, in format
    record.message = record.getMessage()
  File "/usr/local/lib/python3.10/logging/__init__.py", line 368, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/app/pr_agent/servers/github_action_runner.py", line 92, in <module>
    asyncio.run(run_action())
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
    self.run_forever()
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
    self._run_once()
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
    handle._run()
  File "/usr/local/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/app/pr_agent/servers/github_action_runner.py", line 86, in run_action
    await PRAgent().handle_request(url, body, notify=lambda: provider.add_eyes_reaction(comment_id))
  File "/app/pr_agent/agent/pr_agent.py", line 83, in handle_request
    await command2class[action](pr_url, args=args).run()
  File "/app/pr_agent/tools/pr_reviewer.py", line 109, in run
    await retry_with_fallback_models(self._prepare_prediction)
  File "/app/pr_agent/algo/pr_processing.py", line 216, in retry_with_fallback_models
    return await f(model)
  File "/app/pr_agent/tools/pr_reviewer.py", line 138, in _prepare_prediction
    self.prediction = await self._get_prediction(model)
  File "/app/pr_agent/tools/pr_reviewer.py", line 161, in _get_prediction
    response, finish_reason = await self.ai_handler.chat_completion(
  File "/app/pr_agent/algo/ai_handler.py", line 113, in chat_completion
    logging.error("Unknown error during OpenAI inference: ", e)
Message: 'Unknown error during OpenAI inference: '
Arguments: (InvalidRequestError(message='Unrecognized request argument supplied: azure', param=None, code=None, http_status=400, request_id=None),)
WARNING:root:Failed to generate prediction with gpt-3.5-turbo-16k: Traceback (most recent call last):
  File "/app/pr_agent/algo/ai_handler.py", line 95, in chat_completion
    response = await acompletion(
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 97, in acompletion
    response =  await loop.run_in_executor(None, func_with_context)
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 718, in wrapper
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 677, in wrapper
    result = original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/timeout.py", line 53, in wrapper
    result = future.result(timeout=local_timeout_duration)
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/site-packages/litellm/timeout.py", line 42, in async_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 1196, in completion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2842, in exception_type
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 2261, in exception_type
    raise original_exception
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 389, in completion
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 371, in completion
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Unrecognized request argument supplied: azure

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/pr_agent/algo/pr_processing.py", line 216, in retry_with_fallback_models
    return await f(model)
  File "/app/pr_agent/tools/pr_reviewer.py", line 138, in _prepare_prediction
    self.prediction = await self._get_prediction(model)
  File "/app/pr_agent/tools/pr_reviewer.py", line 161, in _get_prediction
    response, finish_reason = await self.ai_handler.chat_completion(
  File "/app/pr_agent/algo/ai_handler.py", line 114, in chat_completion
    raise TryAgain from e
openai.error.TryAgain: <empty message>

ERROR:root:Failed to review PR: <empty message>

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

@mrT23
Collaborator

mrT23 commented Oct 6, 2023

@krrishdholakia @zmeir

I was able to reproduce this with a new version of litellm.
Can you offer a stable solution? We have had several breaking issues with Azure support.


@krrishdholakia
Contributor

Hi @mrT23,

Re: improving package stability

  • We're introducing semantic versioning + migration guides to better communicate changes

Open to any additional ideas / suggestions you might have for how we can improve things.

Re: azure
We realized that most providers (huggingface, bedrock, vertex ai, etc.) had similar issues. Instead of creating individual flags for them, we moved to having them be passed in either via the custom_llm_provider arg or as part of the model name - e.g. azure/<your-deployment-name>.

I think that could have been better communicated, and some of the changes we're making (e.g. semver, migration guides) are aimed at exactly that. We don't anticipate or want to make changes to completion() params, and I think the provider/model-name solution is the one we'll stick with for a while.

Have you seen any issues beyond this?

@krrishdholakia
Contributor

I'm also happy to reintroduce the azure flag check, so we're backwards compatible.

@mrT23
Collaborator

mrT23 commented Oct 6, 2023

@krrishdholakia
Backward compatibility is very important for continuous, smooth usage of litellm.

I reverted the line to:

            if self.azure:
                model = self.azure + "/" + model

Is it correct now?


@mrT23
Collaborator

mrT23 commented Oct 6, 2023

Maybe it should be:

            if self.azure:
                model = 'azure' + "/" + model

?

@krrishdholakia
Contributor

Yes - it should be 'azure/' + model
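That is, the prefix must be the literal provider name "azure", not the value stored in the azure setting (which holds deployment/endpoint info). A minimal sketch of the corrected logic; the function name and flag are assumptions for illustration, not pr-agent's exact code:

```python
# Sketch of the fix discussed above: prepend the literal "azure/" provider
# prefix when Azure is enabled, leaving the model name untouched otherwise.
def build_model_name(model: str, use_azure: bool) -> str:
    return "azure/" + model if use_azure else model

print(build_model_name("my-gpt4-deployment", True))  # azure/my-gpt4-deployment
print(build_model_name("gpt-4", False))              # gpt-4
```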

@krrishdholakia
Contributor

I'm also going to reintroduce the flag check, in case others face similar issues.

@krrishdholakia
Contributor

Please tag me in any issues that come up due to issues on our end @mrT23

Want to make sure we're a stable, reliable solution for y'all.

@krrishdholakia
Contributor

Here are our Azure docs for context:

https://docs.litellm.ai/docs/providers/azure

@krrishdholakia
Contributor

Also adding testing for this scenario.

@mrT23
Collaborator

mrT23 commented Oct 6, 2023

@krrishdholakia thanks for the quick response. I updated the code.

@krrishdholakia
Contributor

Sounds good - let me know how we can better communicate changes (if any occur).

@mrT23
Collaborator

mrT23 commented Oct 6, 2023

@higebu
Thanks a lot for reporting. We applied a fix, and the 'latest' Docker image should work now.

Note that you can also pin a stable Docker release, to ensure no surprises:

uses: Codium-ai/[email protected]

https://github.com/Codium-ai/pr-agent/releases

@higebu
Author

higebu commented Oct 6, 2023

@mrT23 @krrishdholakia Thank you for the quick fix!
