Thread safety #294
Comments
We can wait for the authors to confirm, but from my understanding the result
There are some related discussions in #224
@GianlucaFicarelli Thanks for stepping in, and apologies to @brucehoff - I was kind of busy with other (paid) work...

As @GianlucaFicarelli already pointed out, this is not a "safety" issue, but mainly a performance issue. The cache and internal data structures should be kept intact, but in a "thundering herd" scenario like this, calls will bypass the cache until the first result is ready.

There have been several proposals to solve this, but so far these would have either added complexity or simply serialized calls to the underlying function. In all cases, there would have been some performance impact on the general case, i.e. calling a cached function with different arguments simultaneously. Please note that the same holds for the standard library's caches.

If this is a common use case, I'd recommend synchronizing/serializing the calling function.
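A minimal sketch of that suggestion, using the standard library's `lru_cache` plus an outer lock (the names `expensive` and `expensive_serialized` are illustrative, not from this issue):

```python
import threading
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=None)
def expensive(key):
    # Stand-in for a slow computation; counts how often it actually runs.
    global call_count
    call_count += 1
    return key * 2

_lock = threading.Lock()

def expensive_serialized(key):
    # Holding the lock across the whole call means concurrent callers
    # wait for the first computation instead of all missing the cache
    # at once. The cost: calls with *different* keys are serialized too,
    # which is the performance impact mentioned above.
    with _lock:
        return expensive(key)

threads = [threading.Thread(target=expensive_serialized, args=(42,))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(call_count)  # 1: only the first caller computed the value
```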
Until/unless the issue is fixed, I recommend updating or removing the statement, which is misleading.
Describe the bug
I need an in-memory cache to be shared across threads. The docs say
If so, the code below should result in one cache "miss" followed by 999 cache hits. However, it prints:
It’s like all 1000 threads got to enter the cache at the same time.
Expected result
Actual result
Reproduction steps
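The reproduction code didn't survive the copy here, but a self-contained sketch of the scenario reproduces the thundering herd. It uses a plain dict plus a lock standing in for a cachetools cache used with `lock=`; `cached_slow` and the thread count are illustrative, not the original snippet:

```python
import threading
import time

cache = {}
cache_lock = threading.Lock()
misses = 0

def cached_slow(key):
    # The lock protects the dict itself, but is released during the slow
    # computation, mirroring how a cache-level lock only guards cache
    # access, not the wrapped function.
    global misses
    with cache_lock:
        if key in cache:
            return cache[key]
    with cache_lock:
        misses += 1
    time.sleep(0.1)  # simulate the expensive call
    with cache_lock:
        cache[key] = key * 2
    return cache[key]

barrier = threading.Barrier(50)

def worker():
    barrier.wait()  # release all threads at once
    cached_slow("x")

threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(misses > 1)  # True: every thread arriving before the first
                   # result is stored bypasses the cache
```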