Dear ellmer team,

I am experimenting with various local Ollama models and have encountered an intermittent timeout error when using the ellmer package. The error message looks like this:
Error in `httr2::req_perform()`:
! Failed to perform HTTP request.
Caused by error in `curl::curl_fetch_memory()`:
! Timeout was reached [localhost]: Operation timed out after 60003 milliseconds with 0 bytes received
The exact value (60003 here) varies between runs but is always slightly greater than 60000 ms, which suggests a fixed 60-second limit. The underlying cause appears to be that my machine has no GPU inference support, so responses from larger models are slow.
Interestingly, the issue does not occur consistently: smaller models run without any timeout errors. I suspect the behaviour is linked to the curl package, which does not seem to honour getOption('timeout'), as noted in this GitHub issue.
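For what it's worth, here is a minimal sketch of the distinction (assuming a local Ollama server on the default port 11434): options(timeout =) affects base-R connections such as url()/download.file(), but not libcurl-based requests made through curl/httr2, where the timeout has to be set per request.

```r
library(httr2)

# Base-R timeout option: affects connections such as url()/download.file(),
# but NOT libcurl-based requests made through curl/httr2.
options(timeout = 600)

# For httr2, the per-request timeout has to be set explicitly:
req <- request("http://localhost:11434/api/version") |>  # lightweight Ollama endpoint
  req_timeout(5)                                          # seconds, passed to libcurl

resp <- req_perform(req)
resp_body_json(resp)
```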
I have also seen a similar (closed) issue regarding the OpenAI API: Issue #213. Since the curl and httr2 calls are embedded within the ellmer functions, I would like to request that you consider giving end users a way to control the timeout. Alternatively, simply increasing the internal timeout would also be a viable solution; a rough sketch of the workaround I have in mind follows below.
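As an illustration of the kind of control being requested (not ellmer's internal code), the following sketch calls the Ollama generate endpoint directly through httr2 with a longer per-request timeout; the model name and prompt are placeholders.

```r
library(httr2)

# Placeholder model name and prompt; adjust to a model that is pulled locally.
resp <- request("http://localhost:11434/api/generate") |>
  req_body_json(list(
    model  = "llama3.1:8b",
    prompt = "Summarise the plot of Hamlet in one sentence.",
    stream = FALSE
  )) |>
  req_timeout(300) |>   # 5 minutes instead of the ~60 s limit hit above
  req_perform()

resp_body_json(resp)$response
```

With the longer timeout, the same CPU-only models that previously hit the 60-second limit are able to finish, which is why exposing this setting (or raising the default) would resolve the problem on my end.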
Thank you for your attention to this matter.