Reusing AsyncOpenAI client results in openai.APIConnectionError #1059
Comments
Thanks for the detailed report! @RobertCraigie can you take a look?

I can report the exact same issue. I instantiate a global `AsyncOpenAI` client.

@DamianB-BitFlipper I'm relieved to hear I'm not the only one facing the hanging retry/timeout issue. Were you able to get a code snippet to reproduce the hanging retries? Unfortunately I was only able to repro the initial request failure outside of my application.

I'm also unable to replicate this reliably outside of my application... Are you calling the AsyncClient within a Docker container?

@DamianB-BitFlipper I see the timeout issue even when running a CLI version of my application outside of a Docker container.

Do you need anything to help with this issue? It's blocking us from moving to version 1.0 now; our unit tests fail after a few steps. I reproduce the issue systematically on OSX 14.3.

@arnaud-secondlayer (or others) can you share your repro?

Our suspicion is that this relates to encode/httpcore#830, which httpx maintainer @tomchristie is currently working on.
Here's a simple pytest-based repro from me and @arnaud-secondlayer:

```python
import pytest

from langchain_openai.embeddings import OpenAIEmbeddings


@pytest.fixture(scope="module")  # initialized only once in the module
def embeddings():
    return OpenAIEmbeddings(timeout=5)


TEST_STRINGS = [
    "This is a test string.",
    "This is another test string.",
    "This is a third test string.",
    "This is a fourth test string.",
    "This is a fifth test string.",
]


@pytest.mark.parametrize("string", TEST_STRINGS)
async def test_embedding(embeddings: OpenAIEmbeddings, string):
    embedding = await embeddings.aembed_query(string)
```
@dumbPy can you share a full stack trace?
Sure, here's what I could grab. Note that this happens only if I add a timeout; without a timeout it just waits forever, so there's no stack trace for that.

Stack trace:

```text
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/openai/_base_client.py:1437: self = <openai._base_client.AsyncHttpxClientWrapper object at 0x105c08a90>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpx/_client.py:1646: self = <openai._base_client.AsyncHttpxClientWrapper object at 0x105c08a90>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpx/_client.py:1674: self = <openai._base_client.AsyncHttpxClientWrapper object at 0x105c08a90>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpx/_client.py:1711: self = <openai._base_client.AsyncHttpxClientWrapper object at 0x105c08a90>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpx/_client.py:1748: self = <httpx.AsyncHTTPTransport object at 0x1100868d0>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpx/_transports/default.py:371: self = <httpcore.AsyncConnectionPool object at 0x110086890>, request = <Request [b'POST']>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpcore/_async/connection_pool.py:234: self = <httpcore.AsyncConnectionPool object at 0x110086890>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpcore/_async/connection_pool.py:195: self = <AsyncHTTPConnection ['https://api.openai.com:443', HTTP/1.1, CLOSED, Request Count: 1]>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpcore/_async/connection.py:173: self = <AsyncHTTP11Connection ['https://api.openai.com:443', CLOSED, Request Count: 1]>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpcore/_async/http11.py:253: self = <httpcore._backends.anyio.AnyIOStream object at 0x110057cd0>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/httpcore/_backends/anyio.py:54: self = TLSStream(transport_stream=<anyio._backends._asyncio.SocketStream object at 0x110a9a6d0>, standard_compatible=False, _...t at 0x1072c4590>, _read_bio=<_ssl.MemoryBIO object at 0x110a9dff0>, _write_bio=<_ssl.MemoryBIO object at 0x110a9dfc0>)
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/anyio/streams/tls.py:193: self = <anyio._backends._asyncio.SocketStream object at 0x110a9a6d0>
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/anyio/_backends/_asyncio.py:1257:
uvloop/handles/stream.pyx:699:
uvloop/handles/handle.pyx:159: RuntimeError

During handling of the above exception, another exception occurred:
self = <openai.AsyncOpenAI object at 0x1100688d0>

[the same frame chain repeats twice more for the retries, each ending in]

uvloop/handles/handle.pyx:159: RuntimeError

The above exception was the direct cause of the following exception:
embeddings = OpenAIEmbeddings(client=<openai.resources.embeddings.Embeddings object at 0x11006aa50>, async_client=<openai.resources...kip_empty=False, default_headers=None, default_query=None, retry_min_seconds=4, retry_max_seconds=20, http_client=None)

pkg_tests/test_openai_embeddings.py:21:
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/langchain_openai/embeddings/base.py:522: in aembed_query
self = <openai.AsyncOpenAI object at 0x1100688d0>
E   openai.APIConnectionError: Connection error.
../../../micromamba/envs/nlp-test/lib/python3.11/site-packages/openai/_base_client.py:1471: APIConnectionError
```
This appears to be because of your pytest setup; can you reproduce without using pytest?
@RobertCraigie maybe pytest expedites the underlying issue (the first reported issue isn't pytest-related). It works fine without pytest for my simple example above with just 5 concurrent requests:

```python
import asyncio

from openai import AsyncOpenAI

TEST_STRINGS = [
    "This is a test string.",
    "This is another test string.",
    "This is a third test string.",
    "This is a fourth test string.",
    "This is a fifth test string.",
]

client = AsyncOpenAI()


async def get_embedding(string):
    embedding = await client.embeddings.create(input=string, model="text-embedding-ada-002")
    print("Embedding received for string:", string)


async def main():
    return await asyncio.gather(*[get_embedding(string) for string in TEST_STRINGS])


asyncio.run(main())
```

But it fails when run inside pytest:

```python
import pytest

from openai import AsyncOpenAI

TEST_STRINGS = [
    "This is a test string.",
    "This is another test string.",
    "This is a third test string.",
    "This is a fourth test string.",
    "This is a fifth test string.",
]


@pytest.fixture(scope="module")  # initialized only once in the module
def client():
    return AsyncOpenAI()


@pytest.mark.parametrize("string", TEST_STRINGS)
async def test_embedding(client: AsyncOpenAI, string):
    return await client.embeddings.create(input=string, model="text-embedding-ada-002")
```

with the stack trace shared above.
From a user's perspective, though: since auth is simply a token without expiry, the error shouldn't happen either way as far as the client is concerned.
@dumbPy Async is sometimes tricky with pytest, so it would indeed be nice to have this reproduced without pytest, just to have fewer moving parts and a more straightforward failing case.
Can confirm that encode/httpcore#880 fixes the issue:

```shell
pip3 uninstall httpcore && \
pip3 install git+https://github.com/encode/httpcore.git@clean-state-cancellations
```

Then run the above pytest tests; they seem to pass.
Terrific – thank you so much for investigating and sharing your results, @dumbPy!
Please try installing
Confirm this is an issue with the Python library and not an underlying OpenAI API

Describe the bug

Reusing an instance of the `AsyncOpenAI` client for multiple calls of `asyncio.gather` results in an `openai.APIConnectionError`. Retried requests (either via the `openai` library directly or a `backoff` decorator) succeed, but the first try of the second use of the client always fails.

I suspect that this usage of `AsyncOpenAI` is not ideal, but the behavior nonetheless feels buggy. Even if the reuse of the client should fail, I'm confused why retries succeed.

Bizarrely, in my application all retries after the initial `openai.APIConnectionError` result in unending `openai.APITimeoutError`s instead of success, but I am unable to repro this outside of the application. However, I strongly suspect that the issues are related, as reusing the client solves both the initial error and the timeouts.

To Reproduce

1. Instantiate `AsyncOpenAI` with no retries enabled
2. Call `AsyncOpenAI().chat.completions.create` to create a list of `Future` objects (any number will do)
3. Use `asyncio.gather` to get the results of the API calls
to get the results of the API callsCode snippets
OS
macOS
Python version
Python 3.11.5
Library version
openai v1.6.1