Decimal hashing very slow, could be cached #58683
Comments
Tested on 3.2. Note that I noticed Decimal is supposed to be faster in 3.3, but I thought I would bring this to light just in case it's still relevant.

Decimal hashing is very slow, even for simple numbers. I found that by caching the hash result, there is a significant speed-up whenever a Decimal value is reused. I created a class that inherits from Decimal and changed the __hash__ function to try/except a stored hash value, like so:

    def __hash__(self):
        try:
            return self.myHash
        except:
            self.myHash = super().__hash__()
            return self.myHash

Example code:

    import time
    from decimal import Decimal

    d = dict()
    start = time.time()
    i = int(1)
    j = int(2)
    k = int(3)
    for i in range(100000):   # note: the loop rebinds i, so the first key element is the loop index
        d[i, j, k] = 5
    print(time.time() - start)

    d = dict()
    start = time.time()
    i = Decimal(1)
    j = Decimal(2)
    k = Decimal(3)
    for i in range(100000):
        d[i, j, k] = 5
    print(time.time() - start)

    d = dict()
    start = time.time()
    i = CachingDecimal(1)
    j = CachingDecimal(2)
    k = CachingDecimal(3)
    for i in range(100000):
        d[i, j, k] = 5
    print(time.time() - start)

Output:

In a real-life example, I changed some of the values in my code from int to Decimal because non-whole numbers needed to be supported (and this seemed like the easiest way to do so without having to worry about my == comparisons breaking), and it slowed my code down massively. Changing to a CachingDecimal type sped up one code block from 92 seconds with Decimal to 7 seconds, which was much closer to the original int speed.

Note that all of the relevant operations have to be overloaded to return the CachingDecimal type, which is a pain, so this makes a lot of sense to implement in the decimal module.

My understanding is that altering a Decimal value will always create a new Decimal object, so there shouldn't be any issues with the cached hash desyncing from the correct hash. Someone may want to check that, though.

Thanks,
James |
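Since the report mentions that every relevant operation has to be overloaded to return the CachingDecimal type, here is a minimal sketch of what that looks like (the class name comes from the report, but the attribute name and the particular operators shown are illustrative, not the poster's actual code):

    from decimal import Decimal

    class CachingDecimal(Decimal):
        def __hash__(self):
            try:
                return self.myHash                 # reuse the stored hash if present
            except AttributeError:
                self.myHash = super().__hash__()
                return self.myHash

        # Each arithmetic result is a plain Decimal unless it is re-wrapped,
        # so every operator of interest has to be overridden like this:
        def __add__(self, other):
            return CachingDecimal(super().__add__(other))

        def __mul__(self, other):
            return CachingDecimal(super().__mul__(other))
        # ... and likewise for __sub__, __truediv__, __radd__ and friends.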
It would be better if you provided a working script that demonstrates the issue. I have completed your example (attached).

I ran the example on Python 3.1 and got: 0.249999046326

Then I ran it on Python 3.3: 0.2100839614868164

As you can see, the new implementation is much faster, so the benefit from caching has decreased. I suppose that if we implement caching in C, the difference will be greater.

Then I increased the size of the loops by 10x, and the times are almost equal (on Python 3): 1.8386573791503906

That I can't explain: the time of the cached version increased disproportionately, by more than 10x. |
If I increase the cycles 10x with 3.2, I get:

int: 0.4219999313354492

The sample you have provided is basically what I'm using; see attached.

What about the worst-case hash calculation time for Decimal? How does the C code handle that? This is what I get if I increase the values themselves by 100x, same number of loops as above:

int: 0.5309998989105225 |
100x should be e100 |
I agree that caching the hash would be useful for 3.2, but let's focus on the C version in 3.3. These are the timings:

int: 0.537806510925293

These are the timings with a hacked C version that caches the hash:

int: 0.5755119323730469

The hash calculation time depends on the size of the coefficient:

    >>> from decimal import Decimal, getcontext
    >>> Decimal(1e100)
    Decimal('10000000000000000159028911097599180468360808563945281389781327557747838772170381060813469985856815104')

So the numbers you are using have an unusually high precision. If you want well-defined limits, I suggest using either:

    >>> Decimal('1e100')
    Decimal('1E+100')

Or, if the input really must be a float:

    >>> c = getcontext()
    >>> c.create_decimal(1e100)
    Decimal('1.000000000000000015902891110E+100')

In the latter case the conversion is of course inexact and the result is rounded to the context precision. |
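A quick way to see the coefficient-size effect described above is to time hashing a Decimal with a one-digit coefficient against one built from the float 1e100 (a sketch; the loop count and any resulting numbers are illustrative, not from the thread):

    import timeit
    from decimal import Decimal

    small = Decimal('1e100')   # coefficient is just '1'
    big = Decimal(1e100)       # float conversion yields a coefficient of roughly 100 digits

    # On an implementation that recomputes the hash every call, the second timing is
    # much larger; once the hash is cached, only the first call pays the cost.
    print(timeit.timeit('hash(small)', globals={'small': small}, number=1000000))
    print(timeit.timeit('hash(big)', globals={'big': big}, number=1000000))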
I retract that. The exponent actually makes things worse. |
I recommend that __hash__ should use functools.lru_cache for caching. |
I presume you mean in 3.2? Have you looked at the source code for that decorator? It's fundamentally a try/except, but with a lot more bloat than is needed for caching a single int result from a function with no arguments. It's actually a lot slower.

If this is likely going to see use in 3.3, then it would probably just be a long int, since 3.3 uses C; 0 would indicate "uncalculated", and the hash function would have to be set up to never return 0. Also, every function would need to be checked to make sure there isn't any "in-place" modification of the Decimal object that could alter the hash value.

I like how the cached hash in 3.3 is faster than int for hashing. Shouldn't an int just return itself? Why would it be slower? |
@jimbofbx: You are correct that a value has to be reserved to indicate that the hash hasn't been (or can't be) computed, but Python uses -1 instead of zero. An integer can't return itself because a hash is an unboxed integer; that said, there is a fast path for small enough integers. You can see the actual code at ...

Since it is too late for 3.2, I suggest closing this and opening a separate 3.3 issue if the existing improvements are not sufficient. |
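For illustration, the reserved-value idea described above looks roughly like this when written out in Python (a sketch of the general pattern, not the actual CPython code; the class and attribute names are made up):

    from decimal import Decimal

    class SentinelCachedDecimal(Decimal):
        _cached_hash = -1                  # class-level default: "not computed yet"

        def __hash__(self):
            if self._cached_hash == -1:    # sentinel check, mirroring the C-level convention
                h = super().__hash__()
                # A real hash of -1 must be remapped so it can never be mistaken
                # for the sentinel (CPython itself maps -1 to -2).
                self._cached_hash = -2 if h == -1 else h
            return self._cached_hash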
Why would you do such a thing? A hash value is a single 64-bit slot; there's no need to add the memory consumption of a whole dictionary and the runtime cost of an LRU eviction policy when you can simply cache the hash in the object itself (as we already do for strings)... |
It was a joke (I think), taking into account the fact that an LRU cache ... |
Damn. Shame on me for not understanding Raymond's humour :-) |
On Saturday, 7 April 2012 at 17:22 +0000, Raymond Hettinger wrote:
I think patching the C version of Decimal would be more useful :) |
I think it would be a good idea to fix this in the Python version. |
Stefan Krah has a good point. Since the only cost is an extra slot, and this is for users who have already chosen to use Decimal instead of a more efficient (but possibly less accurate) representation, even without the native speedups to Decimal ... I guess I'm now +1. It is clearly an implementation detail, so I don't think the docs need to be changed. I'm not sure the tests do, nor am I sure *how* to test for it in a black-box fashion. I have reviewed the patch and have no objections. So unless a new objection comes up in the next few days, I would say this is ready to be checked in. |
New changeset a012d5df2c73 by Stefan Krah in branch 'default': |
I changed the C version to cache the hash as well. For the submitted script: ...

BTW, the tests already call both hash() and __hash__() on the same objects. |
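The test pattern referred to here amounts to checking both entry points against each other, roughly like this (an illustrative snippet, not the literal test-suite code):

    from decimal import Decimal

    d = Decimal('123.456')
    # hash() goes through the type slot, __hash__() calls the method directly;
    # with a cached value both must still agree.
    assert hash(d) == d.__hash__()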
The patch for the Python version looks good to me. |
Oh, but the approach used by James Hutchison is faster. When I disable the ...

Unpatched: ...
With rhettinger's LBYL patch: ...
With the EAFP patch: ... |
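For reference, the two caching styles being compared are roughly these (hedged sketches with made-up class names; neither is the literal text of either patch):

    from decimal import Decimal

    class LBYLDecimal(Decimal):
        """Look-before-you-leap: test a pre-set default before computing."""
        _cached_hash = None                # class-level default

        def __hash__(self):
            if self._cached_hash is None:
                self._cached_hash = super().__hash__()
            return self._cached_hash

    class EAFPDecimal(Decimal):
        """Easier-to-ask-forgiveness: just try the attribute, as in the original report."""
        def __hash__(self):
            try:
                return self._cached_hash
            except AttributeError:
                self._cached_hash = super().__hash__()
                return self._cached_hash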
In the patch, this:

    ...

should be:

    ...

Checking for the AttributeError is very slightly slower. Not by a lot, but if we're going for speed we might as well be as fast as possible, and I can't imagine any other exception coming from that try statement. Using except: pass, as opposed to sticking everything inside the except block, is also very slightly slower.

Simple test case, 10 million loops:

Exception code: ...

KeyboardInterrupt |
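The sort of micro-benchmark being described could be reproduced along these lines (a sketch: the 10-million loop count matches the comment, but the class, the names, and any resulting numbers are illustrative):

    import timeit

    class O:
        pass

    obj = O()

    def named_except():
        try:
            obj.missing          # always raises AttributeError
        except AttributeError:
            pass

    def bare_except():
        try:
            obj.missing
        except:
            pass

    # Time the exception path 10 million times for each variant.
    print(timeit.timeit(named_except, number=10000000))
    print(timeit.timeit(bare_except, number=10000000))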
I ran the test several times and didn't see a difference between the "except: pass" variant and the other one. |
I don't see a need to cache the hash value in the pure Python version. |
The C version of decimal may not always be available; in particular, it is not compatible with C89. Therefore, the efficiency of the pure-Python version of decimal is important. Any chance of getting this into Python 3.3? |
I agree with Raymond: I don't see a real need to patch the Python version here. If we do want to patch the Python version, I'd go with Raymond's simple patch. |
Closing as fixed (for the C version, at least). |