
langgraph: seg fault #20

Open
bossjones opened this issue Jan 5, 2025 · 9 comments
Comments

@bossjones
Owner

Got the following message when I tried to talk to my graph:

langgraph-api-1       | warning | /api/langgraph_api/api/assistants.py:36: PydanticDeprecatedSince20: The `schema` method is deprecated; use `model_json_schema` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.10/migration/

langgraph-api-1       | warning | /api/langgraph_api/api/assistants.py:219: PydanticDeprecatedSince20: The `__fields__` attribute is deprecated, use `model_fields` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.10/migration/

langgraph-api-1       | warning | /api/langgraph_api/api/assistants.py:218: PydanticDeprecatedSince20: The `__fields__` attribute is deprecated, use `model_fields` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.10/migration/

langgraph-api-1       | warning | /api/langgraph_api/api/assistants.py:218: PydanticDeprecatedSince20: The `schema` method is deprecated; use `model_json_schema` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.10/migration/

langgraph-api-1       | info | GET /assistants/166c0d22-706c-5a5b-9027-ea37f7308a85/schemas 200 96ms
langgraph-api-1       | info | GET /assistants/166c0d22-706c-5a5b-9027-ea37f7308a85/subgraphs 200 96ms
langgraph-api-1       | info | GET /assistants/166c0d22-706c-5a5b-9027-ea37f7308a85/graph 200 10ms
langgraph-api-1       | info | HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
langgraph-api-1       | info | HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
langgraph-api-1       | info | HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
langgraph-api-1       | [1:writes] Finished step 1 with writes to 1 channel:
langgraph-api-1       | - messages -> [AIMessage(content="Hello! I'm here and ready to help you with your ToDo list. How can I assist you today?", additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_d28bcae782'}, id='run-cfd118f1-3f24-4cec-bf51-d3536e3324db')]
langgraph-api-1       | [1:checkpoint] State at the end of step 1:
langgraph-api-1       | {'messages': [HumanMessage(content='hi how are you doing today?\n', additional_kwargs={}, response_metadata={}, id='0ec25255-39e9-4180-a644-14fe80e0515b'),
langgraph-api-1       |               AIMessage(content="Hello! I'm here and ready to help you with your ToDo list. How can I assist you today?", additional_kwargs={}, response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_d28bcae782'}, id='run-cfd118f1-3f24-4cec-bf51-d3536e3324db')]}
langgraph-api-1       | info | Background run succeeded
langgraph-api-1       | Fatal Python error: Segmentation fault
langgraph-api-1       | 
langgraph-api-1       | Thread 0x0000ffff5740f180 (most recent call first):
langgraph-api-1       |   File "/usr/local/lib/python3.12/ssl.py", line 1103 in read
langgraph-api-1       |   File "/usr/local/lib/python3.12/ssl.py", line 1251 in recv_into
langgraph-api-1       |   File "/usr/local/lib/python3.12/socket.py", line 720 in readinto
langgraph-api-1       |   File "/usr/local/lib/python3.12/http/client.py", line 292 in _read_status
langgraph-api-1       |   File "/usr/local/lib/python3.12/http/client.py", line 331 in begin
langgraph-api-1       |   File "/usr/local/lib/python3.12/http/client.py", line 1428 in getresponse
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/urllib3/connection.py", line 516 in getresponse
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py", line 534 in _make_request
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py", line 787 in urlopen
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/requests/adapters.py", line 667 in send
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/requests/sessions.py", line 703 in send
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/requests/sessions.py", line 589 in request
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/langsmith/client.py", line 749 in request_with_retries
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/langsmith/client.py", line 1783 in _send_multipart_req
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/langsmith/client.py", line 1609 in _multipart_ingest_ops
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/langsmith/_internal/_background_thread.py", line 145 in _tracing_thread_handle_batch
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/langsmith/_internal/_background_thread.py", line 256 in tracing_control_thread_func
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1012 in run
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1032 in _bootstrap
langgraph-api-1       | 
langgraph-api-1       | Thread 0x0000ffff5d20f180 (most recent call first):
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 355 in wait
langgraph-api-1       |   File "/usr/local/lib/python3.12/queue.py", line 171 in get
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 994 in run
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1032 in _bootstrap
langgraph-api-1       | 
langgraph-api-1       | Thread 0x0000ffff57e0f180 (most recent call first):
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 355 in wait
langgraph-api-1       |   File "/usr/local/lib/python3.12/queue.py", line 171 in get
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 994 in run
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1032 in _bootstrap
langgraph-api-1       | 
langgraph-api-1       | Thread 0x0000ffff8340f180 (most recent call first):
langgraph-api-1       |   File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 90 in _worker
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1012 in run
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1032 in _bootstrap
langgraph-api-1       | 
langgraph-api-1       | Thread 0x0000ffff83e0f180 (most recent call first):
langgraph-api-1       |   File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 90 in _worker
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1012 in run
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
langgraph-api-1       |   File "/usr/local/lib/python3.12/threading.py", line 1032 in _bootstrap
langgraph-api-1       | 
langgraph-api-1       | Current thread 0x0000ffff92f6d020 (most recent call first):
langgraph-api-1       |   File "/usr/local/lib/python3.12/traceback.py", line 336 in walk_stack
langgraph-api-1       |   File "/usr/local/lib/python3.12/traceback.py", line 392 in extended_frame_gen
langgraph-api-1       |   File "/usr/local/lib/python3.12/traceback.py", line 418 in _extract_from_extended_frame_gen
langgraph-api-1       |   File "/usr/local/lib/python3.12/traceback.py", line 395 in extract
langgraph-api-1       |   File "<shim>", line ??? in <interpreter trampoline>
langgraph-api-1       |   File "/api/langgraph_api/sse.py", line 59 in stream_response
langgraph-api-1       |   File "/api/langgraph_api/sse.py", line 31 in wrap
langgraph-api-1       |   File "/usr/local/lib/python3.12/asyncio/runners.py", line 118 in run
langgraph-api-1       |   File "/usr/local/lib/python3.12/asyncio/runners.py", line 194 in run
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 66 in run
langgraph-api-1       |   File "/usr/local/lib/python3.12/site-packages/uvicorn/_subprocess.py", line 80 in subprocess_started
langgraph-api-1       |   File "/usr/local/lib/python3.12/multiprocessing/process.py", line 108 in run
langgraph-api-1       |   File "/usr/local/lib/python3.12/multiprocessing/process.py", line 314 in _bootstrap
langgraph-api-1       |   File "/usr/local/lib/python3.12/multiprocessing/spawn.py", line 135 in _main
langgraph-api-1       |   File "/usr/local/lib/python3.12/multiprocessing/spawn.py", line 122 in spawn_main
langgraph-api-1       |   File "<string>", line 1 in <module>
langgraph-api-1       | 
langgraph-api-1       | Extension modules: greenlet._greenlet, _brotli, tornado.speedups, msgpack._cmsgpack, charset_normalizer.md, requests.packages.charset_normalizer.md, requests.packages.chardet.md, _cffi_backend, uvloop.loop, httptools.parser.parser, httptools.parser.url_parser, websockets.speedups, psutil._psutil_linux, psutil._psutil_posix, psycopg_binary.pq, psycopg_binary._psycopg, regex._regex, yaml._yaml, multidict._multidict, yarl._quoting_c, propcache._helpers_c, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket.mask, aiohttp._websocket.reader_c, frozenlist._frozenlist, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, PIL._imaging (total: 40)
langgraph-postgres-1  | 2025-01-05 21:40:53.199 UTC [27] LOG:  checkpoint starting: time
langgraph-postgres-1  | 2025-01-05 21:40:55.621 UTC [27] LOG:  checkpoint complete: wrote 24 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=2.409 s, sync=0.006 s, total=2.423 s; sync files=21, longest=0.004 s, average=0.001 s; distance=43 kB, estimate=43 kB; lsn=0/173B458, redo lsn=0/173B420

Current thought process is the following (thx perplexity):

NOTE: Here is the potentially problematic file: https://github.com/langchain-ai/langsmith-sdk/blob/9a2e4961e389f4bdbe4b822c193e0b917e71d651/python/langsmith/_internal/_background_thread.py#L40C62-L40C71

Inside a Docker container, the behavior of os.cpu_count() (which is typically used to get the CPU count) can be misleading. Python's cpu_count() reports the number of CPUs online on the host machine, not the number of CPUs allocated to the container[5][8], because cgroup CPU limits do not change the kernel's online-CPU count. This means that using cpu_count() directly in your ThreadPoolExecutor configuration might not accurately reflect the resources available to your container.

To address this issue, you have a few options:

  1. Use container-aware methods:
    Instead of relying on cpu_count(), you can use container-aware methods to determine the number of CPUs allocated to your container. For example:

    import concurrent.futures as cf
    import os
    
    def get_container_cpu_count():
        """Estimate the CPUs allocated to the container from the cgroup v1 CPU quota."""
        try:
            with open('/sys/fs/cgroup/cpu/cpu.cfs_quota_us') as f:
                cfs_quota_us = int(f.read())
            with open('/sys/fs/cgroup/cpu/cpu.cfs_period_us') as f:
                cfs_period_us = int(f.read())
            if cfs_quota_us <= 0:  # -1 means no quota is set
                return os.cpu_count() or 1
            return max(cfs_quota_us // cfs_period_us, 1)
        except (OSError, ValueError):
            return os.cpu_count() or 1
    
    HTTP_REQUEST_THREAD_POOL = cf.ThreadPoolExecutor(max_workers=get_container_cpu_count() * 3)

    This method reads the CPU quota and period from the cgroup filesystem, which reflects the container's CPU allocation[5].

  2. Use environment variables:
    You can pass the number of CPUs as an environment variable when running your Docker container and use that in your code:

    import concurrent.futures as cf
    import os
    
    cpu_count = int(os.environ.get('CONTAINER_CPU_COUNT', '1'))
    HTTP_REQUEST_THREAD_POOL = cf.ThreadPoolExecutor(max_workers=cpu_count * 3)

    Then, when running your container:

    docker run -e CONTAINER_CPU_COUNT=2 your_image
    
  3. Hardcode the value:
    If you know the exact number of CPUs you'll allocate to your container, you can hardcode this value:

    HTTP_REQUEST_THREAD_POOL = cf.ThreadPoolExecutor(max_workers=2 * 3)  # Assuming 2 CPUs
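
Note that the paths in option 1 are cgroup v1. Many current hosts (recent Docker Desktop and systemd-based distros) mount cgroup v2 instead, where the quota and period live together in a single file, /sys/fs/cgroup/cpu.max, as "<quota> <period>" or "max <period>". A hedged sketch of the equivalent check under that layout (the helper names here are mine, not from the langsmith source):

```python
import os

def parse_cpu_max(text: str) -> int:
    """Parse a cgroup v2 cpu.max line: '<quota> <period>' or 'max <period>'."""
    quota, period = text.split()
    if quota == "max":  # "max" means no CPU limit is configured
        return os.cpu_count() or 1
    return max(int(quota) // int(period), 1)

def get_cgroup_v2_cpu_count() -> int:
    """Best-effort container CPU count on a cgroup v2 host."""
    try:
        with open("/sys/fs/cgroup/cpu.max") as f:
            return parse_cpu_max(f.read())
    except (OSError, ValueError):
        return os.cpu_count() or 1
```

A robust version would probe both layouts and fall back to os.cpu_count() when neither file exists.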

Remember that the optimal number of threads doesn't always directly correlate with the number of CPUs, especially for I/O-bound tasks. You might need to experiment to find the best configuration for your specific use case[7].
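As a quick illustration of why thread count and CPU count decouple for I/O-bound work, here is a small sketch where each task just blocks (sleep standing in for a network call):

```python
import concurrent.futures as cf
import time

def io_task(_: int) -> None:
    time.sleep(0.05)  # stand-in for a blocking network call

def timed(workers: int) -> float:
    """Run 20 blocking tasks on a pool and return the wall-clock time."""
    start = time.perf_counter()
    with cf.ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(io_task, range(20)))
    return time.perf_counter() - start

# Because the tasks block rather than compute, 20 workers finish far
# faster than 1 worker, regardless of how many CPUs are available.
serial = timed(1)
parallel = timed(20)
```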

Also, keep in mind that if you're using the --cpuset-cpus flag when running your Docker container, this will limit which CPUs your container can use, but it won't change what os.cpu_count() reports inside the container[9].
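--cpuset-cpus does show up in the process's CPU affinity mask, though, so on Linux os.sched_getaffinity(0) yields a count that respects cpusets (but not CFS quotas like --cpus). A small sketch:

```python
import os

def available_cpu_count() -> int:
    """CPU count honouring cpuset restrictions where the platform exposes them."""
    if hasattr(os, "sched_getaffinity"):  # Linux-only API
        return len(os.sched_getaffinity(0))
    return os.cpu_count() or 1
```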

Citations:
[1] https://ppl-ai-file-upload.s3.amazonaws.com/web/direct-files/39473573/54ebe82e-61c3-4079-ae0b-c388be238b27/paste.txt
[2] https://stackoverflow.com/questions/76415968/docker-container-using-only-one-cpu-core/76416074
[3] moby/moby#43587
[4] https://stackoverflow.com/questions/56195679/why-is-threadpoolexecutors-default-max-workers-decided-based-on-the-number-of-c
[5] python/cpython#80235
[6] https://www.tutorialspoint.com/how-many-cpus-does-a-docker-container-use
[7] https://superfastpython.com/threadpoolexecutor-number-of-threads/
[8] https://www.geeksforgeeks.org/python-os-cpu_count-method/
[9] https://stackoverflow.com/questions/47545960/how-to-check-the-number-of-cores-used-by-docker-container/51998044

@bossjones
Copy link
Owner Author

OK, it's definitely loguru mixed with the enqueue setting. I just need to find a way to disable the intercept logger.
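
One way to disable an intercept logger that a framework installed on the root logger, as a sketch; matching by the class name "InterceptHandler" is an assumption based on loguru's conventional recipe:

```python
import logging

def remove_intercept_handlers(name: str = "InterceptHandler") -> int:
    """Detach every handler whose class name matches `name` from the
    root logger; returns how many handlers were removed."""
    root = logging.getLogger()
    removed = 0
    for handler in list(root.handlers):
        if type(handler).__name__ == name:
            root.removeHandler(handler)
            removed += 1
    return removed
```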

@bossjones

Delgan/loguru#916

@bossjones

OK, it looks like langsmith introduced use of the multiprocessing module in this merge: langchain-ai/langsmith-sdk@0243f79. It's possible that reverting to a version before this avoids everything.

@bossjones

Delgan/loguru#1264: logger.reinstall() will probably fix all of this for me, as this change is happening in a different module altogether; I'll play with some of the examples listed. Or I could just try removing enqueue?

@bossjones

image

This is a very good point to realize: Delgan/loguru#108 (comment)

@bossjones

image

Great insight; I'm new to the multiprocessing module: Delgan/loguru#108 (comment)

@bossjones

Also good info re: understanding the langgraph_api module:

https://langchain-ai.github.io/langgraph/concepts/langgraph_server/#cron-jobs

old rendered Dockerfile source from Docker Desktop:


FROM langchain/langgraph-api:3.12

# Install system dependencies

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3-dev python3 ca-certificates python3-numpy python3-setuptools python3-wheel python3-pip \
        g++ gcc ninja-build cmake build-essential autoconf automake libtool pkg-config \
        libmagic-dev poppler-utils libreoffice libomp-dev tesseract-ocr tesseract-ocr-por \
        libyaml-dev ffmpeg libssl-dev openssl zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev \
        wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev \
        python3-openssl git libpq5 libpq-dev libxml2-dev libxslt1-dev libcairo2-dev \
        libgirepository1.0-dev libgraphviz-dev libjpeg-dev libopencv-dev libpango1.0-dev \
        libprotobuf-dev protobuf-compiler rustc cargo libwebp-dev libzbar0 libzbar-dev \
        imagemagick ghostscript pandoc aria2 zsh bash-completion unzip gzip vim tree less sqlite3 \
    && rm -rf /var/lib/apt/lists/*

ENV PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# debugging, show the current directory and the contents of the deps directory, look for .venv which should not exist.

RUN ls -lta && echo `pwd` && ls -lta && tree && echo "PATH='/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'" >> ~/.bashrc && echo "PATH='/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'" >> ~/.profile

# Hopefully this will fix the path issue with langgraph studio grabbing the env vars from my host machine.

ENV TAPLO_VERSION=0.9.3

COPY ./install_taplo.sh .


RUN chmod +x install_taplo.sh && bash -x ./install_taplo.sh && mv taplo /usr/local/bin/taplo && rm install_taplo.sh

RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | env PATH='/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' bash -x -s -- -y

ENV PATH='/root/.cargo/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

    
RUN curl --proto '=https' --tlsv1.2 -sSf https://just.systems/install.sh | bash -x -s -- --to /usr/bin

ADD https://astral.sh/uv/0.5.14/install.sh /uv-installer.sh

RUN env PATH='/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' bash -x /uv-installer.sh && rm /uv-installer.sh


ENV PATH='/root/.local/bin:/root/.cargo/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'


ENV UV_SYSTEM_PYTHON=1 \
    UV_PIP_DEFAULT_PYTHON=/usr/bin/python3 \
    UV_LINK_MODE=copy \
    UV_CACHE_DIR=/root/.cache/uv/ \
    PYTHONASYNCIODEBUG=1 \
    PYTHONFAULTHANDLER=1


WORKDIR /deps/democracy-exe

COPY pyproject.toml uv.lock ./
RUN --mount=type=cache,target=/root/.cache/uv --mount=type=bind,source=uv.lock,target=uv.lock --mount=type=bind,source=pyproject.toml,target=pyproject.toml --mount=type=bind,source=democracy_exe/requirements.txt,target=requirements.txt uv sync --frozen --no-install-project --verbose --no-dev && uv pip install --system -r requirements.txt -e . --no-deps --verbose


COPY . /deps/democracy-exe

RUN --mount=type=cache,target=/root/.cache/uv uv sync --verbose --no-dev --frozen && uv tool dir --bin && ls -lta && pwd && ls -lta /deps && tree /deps && cat ~/.bashrc && env && cat ~/.cargo/env && cat ~/.profile && echo "alias pip='uv pip'" >> ~/.bashrc && echo "alias pip='uv pip'" >> ~/.profile


ADD . /deps/democracy-exe

RUN --mount=type=cache,target=/root/.cache/pip PYTHONDONTWRITEBYTECODE=1 pip install -c /api/constraints.txt -e /deps/*

ENV LANGSERVE_GRAPHS='{"react":"/deps/democracy-exe/democracy_exe/agentic/workflows/react/graph.py:graph"}'

WORKDIR /deps/democracy-exe

CMD exec uvicorn langgraph_api.server:app --log-config /api/logging.json --no-access-log --host 0.0.0.0 --port 8000 --reload --reload-dir /deps/democracy-exe

@bossjones

bossjones commented Jan 7, 2025

OK, uvicorn might be the culprit, since this doesn't happen when I run it regularly via uv. It's mainly because the logging config is being overwritten, and we can't disable it.

https://medium.com/@muh.bazm/how-i-unified-logging-in-fastapi-with-uvicorn-and-loguru-6813058c48fc
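
The pattern in that article boils down to stripping uvicorn's own handlers and letting records propagate to a sink you control. A stdlib-only sketch (the loguru sink itself is omitted; attaching it to the root logger is left as the next step):

```python
import logging

def unify_uvicorn_logging() -> None:
    """Clear the handlers that uvicorn's log config installs on its
    loggers and let records propagate to the root logger, where a single
    sink (e.g. loguru's InterceptHandler) can be attached instead."""
    for name in ("uvicorn", "uvicorn.error", "uvicorn.access"):
        lg = logging.getLogger(name)
        lg.handlers.clear()
        lg.propagate = True
```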
