gh-115999: Specialize LOAD_ATTR for instance and class receivers in free-threaded builds #128164
Conversation
Look up a unicode key in an all-unicode-keys object along with the keys version, assigning one if not present. We need a keys version that is consistent with the presence of the key for use in the guards.
Reading the shared keys version and looking up the key need to be performed atomically. Otherwise, a key inserted after the lookup could race with reading the version, causing us to incorrectly specialize that no key shadows the descriptor.
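Reduced to a self-contained sketch, the requirement looks like this (the struct and names are invented for illustration; the real code operates on `PyDictKeysObject` and its shared-keys version):

```c
#include <pthread.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for a dict's shared keys: an entry array plus a
 * version counter bumped on every insertion. */
typedef struct {
    pthread_mutex_t mutex;
    uint32_t version;
    int nentries;
    const char *entries[8];
} keys_table;

/* Look up `key` and read the version under the same lock, so the returned
 * version is consistent with the presence (or absence) of the key.
 * Reading the version in a separate step could pair a stale "not found"
 * result with a version assigned after the key was inserted -- exactly
 * the race described above. */
static uint32_t
lookup_and_version(keys_table *keys, const char *key, int *found)
{
    pthread_mutex_lock(&keys->mutex);
    *found = 0;
    for (int i = 0; i < keys->nentries; i++) {
        if (strcmp(keys->entries[i], key) == 0) {
            *found = 1;
            break;
        }
    }
    uint32_t version = keys->version;  /* consistent with *found */
    pthread_mutex_unlock(&keys->mutex);
    return version;
}
```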
Everything starts out disabled
- Check that the type hasn't changed across lookups
- Descr is now an owned ref
…_INST_ATTR_FROM_DICT
- Use atomic load for value
- Use _Py_TryIncrefCompareStackRef for incref
- Use atomics and _Py_TryIncrefCompareStackRef in _LOAD_ATTR_SLOT
- Pass type version during specialization
- Check that fget is deferred (see the sketch after this list of commits)
- Pass tp_version
Macros should be treated as terminators when searching for the assignment target of an expression involving `PyStackRef_FromPyObjectNew`
All instance loads are complete!
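For context on the deferred check mentioned above: "deferred" refers to deferred reference counting, and only such objects can be safely cached as borrowed pointers in an inline cache. A specialization-time guard might look roughly like the sketch below; `SPEC_FAIL_ATTR_NOT_DEFERRED` is a hypothetical failure kind invented for the example, and the snippet assumes a specialize.c-style helper with `fget` in scope.

```c
#ifdef Py_GIL_DISABLED
    /* Only specialize if fget can be cached as a borrowed pointer: objects
     * using deferred reference counting remain valid without an incref. */
    if (!_PyObject_HasDeferredRefcount(fget)) {
        /* hypothetical failure kind, for illustration only */
        SPECIALIZATION_FAIL(LOAD_ATTR, SPEC_FAIL_ATTR_NOT_DEFERRED);
        return -1;
    }
#endif
```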
A lot of effort in this PR seems to go into `LOAD_ATTR_WITH_HINT`. There is a lot of locking and unlocking and extra machinery to support having a value on the stack between uops. Is it worth it?

Generally, I think any effort spent on `LOAD_ATTR_WITH_HINT` is better spent on increasing the fraction of objects that use inlined values and can be handled by `LOAD_ATTR_INSTANCE_VALUE`. It is not just that `LOAD_ATTR_INSTANCE_VALUE` is faster; it also means that the object is created faster and uses less memory.
Objects/dictobject.c (Outdated)

```diff
@@ -1129,6 +1129,21 @@ dictkeys_generic_lookup(PyDictObject *mp, PyDictKeysObject* dk, PyObject *key, P
     return do_lookup(mp, dk, key, hash, compare_generic);
 }

+static Py_hash_t
+check_keys_and_hash(PyDictKeysObject *dk, PyObject *key)
```
I can't tell from the name what this is checking. Maybe `check_dict_is_unicode_only_and_key_has_hash_defined`?

Which leads me to the question: why check both of these things in a single function? The two checks appear unrelated.
> Which leads me to the question: why check both of these things in a single function?

This is factoring out some common code into a helper. I'll split it into two helpers with more descriptive names.
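For illustration, the split might look something like this (hypothetical helper names; `DK_IS_UNICODE` and `PyObject_Hash` are existing CPython APIs, the rest is invented for the sketch):

```c
/* Hypothetical single-purpose helpers replacing check_keys_and_hash. */

/* True if the dict's keys object holds only unicode keys. */
static int
keys_are_all_unicode(PyDictKeysObject *dk)
{
    return DK_IS_UNICODE(dk);
}

/* Return the key's hash, computing it if necessary; -1 on error. */
static Py_hash_t
hash_of_key(PyObject *key)
{
    return PyObject_Hash(key);
}
```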
```diff
 }

 split op(_LOAD_ATTR_INSTANCE_VALUE, (offset/1, owner -- attr, null if (oparg & 1))) {
     PyObject *owner_o = PyStackRef_AsPyObjectBorrow(owner);
     PyObject **value_ptr = (PyObject**)(((char *)owner_o) + offset);
-    PyObject *attr_o = *value_ptr;
+    PyObject *attr_o = FT_ATOMIC_LOAD_PTR_ACQUIRE(*value_ptr);
```
How do we know that the values array is still valid at this point?
There's a case that we are not handling correctly here: if, after loading `value_ptr`, another thread invalidates the inline values, modifies an attribute, and reallocates another object at the same memory address as the previous attribute, we can incorrectly return a value that wasn't the attribute at that offset.

We need to either:

1. Re-check `_PyObject_InlineValues(owner_o)->valid` at the end of this handler and clean up and deopt if it's not still true.
2. Set the contents of the inline values array to NULL when we mark it as invalid, so that `_Py_TryIncrefCompareStackRef` handles this case and correctly deopts.

I think the second option is probably simpler to implement and more efficient, given that `_LOAD_ATTR_INSTANCE_VALUE` is common and invalidating inline values is relatively rare.
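A rough sketch of what option 2 could look like (hypothetical function; the real invalidation path lives in dictobject.c, runs with the owning object's lock held, and first copies the values into the materialized dict):

```c
/* Hypothetical sketch of option 2: clear the slots when invalidating the
 * inline values. A concurrent _Py_TryIncrefCompareStackRef(value_ptr,
 * attr_o, &attr) then fails its compare step (the slot no longer holds
 * attr_o) and the specialized instruction deopts instead of using a
 * pointer whose memory may have been reused. */
static void
invalidate_inline_values(PyDictValues *values, Py_ssize_t nslots)
{
    values->valid = 0;  /* the real code may need an ordered store here */
    for (Py_ssize_t i = 0; i < nslots; i++) {
        /* Release store: a reader that already loaded the old non-NULL
         * value will fail the compare-and-incref and deopt. */
        FT_ATOMIC_STORE_PTR_RELEASE(values->values[i], NULL);
    }
}
```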
Invalidating inline values might not be that rare.
I suggest gathering some stats before deciding how to handle this.
`LOAD_ATTR_INSTANCE_VALUE`: 4,922,474,147

| Stat | Count | Ratio |
| --- | --- | --- |
| Materialize dict (on request) | 4,444,396 | 2.4% |
| Materialize dict (new key) | 476,375 | 0.3% |
| Materialize dict (too big) | 4,884 | 0.0% |
| Materialize dict (str subclass) | 0 | 0.0% |

So, LOAD_ATTR_INSTANCE_VALUE is somewhere between 1,000x-10,000x more frequent than invalidating inline values. (Invalidating inline values requires first materializing the dictionary, but materializing the dictionary doesn't always invalidate the inline values.)
```c
    DEOPT_IF(attr_o == NULL);
#ifdef Py_GIL_DISABLED
    if (!_Py_TryIncrefCompareStackRef(value_ptr, attr_o, &attr)) {
```
What does `_Py_TryIncrefCompareStackRef` do?

Function names need to be self-explanatory. The result of this function does not depend on a comparison, but on whether something has been modified. Maybe `_Py_TryIncrefIfPointersConsistentStackRef`, as it only works if the `PyObject **` and `PyObject *` pointers are consistent.

I know that `_Py_TryIncrefCompareStackRef` was not introduced in this PR, but it is critical to the understanding of this code.
It is an atomic compare-and-incref function, similar to how `_Py_atomic_compare_exchange` is an atomic compare-and-swap function. If the comparison succeeds, the value is incref'd and 1 is returned; 0 is returned on failure.

"Compare" is a more appropriate term than "consistent" here.
I still think "Compare" is misleading. The name should say what the function does, not how it does it.
Python/bytecodes.c (Outdated)

```c
    PyObject *attr_o;
    if (!LOCK_OBJECT(dict)) {
        DEAD(dict);
        POP_DEAD_INPUTS();
```
This seems rather obscure. I see why you need to pop `dict`, but `DEAD(dict); POP_DEAD_INPUTS()` seems a rather convoluted way to do this. Since you are adding a new code generator macro, why not `POP(dict)`?

I think that most people would expect that an explicitly popped value is no longer live, so there should be no need for both a kill and a pop.
Sure, I can look at that. `POP` is already taken by a macro in `ceval_macros.h`, but we could use `POP_INPUT`.
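Hypothetically, the handler would then read as below (a sketch; the exact semantics of `POP_INPUT` would be defined by the code generator, and `DEOPT_IF(true)` follows the existing bytecodes.c idiom):

```c
    /* Sketch: POP_INPUT(dict) would both mark `dict` dead for the code
     * generator and pop it from the stack, replacing the
     * DEAD(dict); POP_DEAD_INPUTS(); pair. */
    if (!LOCK_OBJECT(dict)) {
        POP_INPUT(dict);
        DEOPT_IF(true);
    }
```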
Aside: we shouldn't be using `POP`, as the code generator should handle pops. We can remove `POP` in another PR.
These are two logically separate operations.
This avoids a race where another thread invalidates the values, overwrites an attribute stored in the values, and allocates a new object at the address present in the values.
The extra machinery to support having a value on the stack is also necessary for
This PR finishes specialization for `LOAD_ATTR` in the free-threaded build by adding support for class and instance receivers.

The bulk of it is dedicated to making the specialized instructions and the specialization logic thread-safe. This consists of using atomics / locks in the appropriate places, avoiding races in specialization related to reloading versions, and ensuring that the objects stored in inline caches remain valid when accessed (by only storing deferred objects). See the section on "Thread Safety" below for more details.

Additionally, making this work required a few unrelated changes to fix existing bugs or work around differences between the two builds that result from only storing deferred values (which causes specialization failures in the free-threaded build when a value that would be stored in the cache is not deferred):

- Treat macros as terminators when searching for the assignment target of an expression involving `PyStackRef_FromPyObjectNew`.
- Fix `test_descr.MiscTests.test_type_lookup_mro_reference` to work when specialization fails (and also be a behavioral test).
- Skip `test_capi.test_type.TypeTests.test_freeze_meta` when running refleaks tests on free-threaded builds; specialization failure triggers an existing bug.

Single-threaded Performance
We're leaving a bit of perf on the table by only storing deferred objects: we can't specialize attribute lookups that resolve to class attributes (e.g. counters, settings). I haven't measured how much perf we're giving up, but I'd like to address that in a separate PR.
Scalability

The `object_cfunction` and `pymethod` benchmarks are improved (from 1.4x slower to 14.3x faster and from 1.8x slower to 13.0x faster, respectively). Other benchmarks appear unchanged.

I would expect that `cmodule_function` would also improve, but it looks like the benchmark is bottlenecked on increfing the `int.__floor__` method that is returned from the call to `_PyObject_LookupSpecial` in `math_floor` (the incref happens in `_PyType_LookupRef`, which is called by `_PyObject_LookupSpecial`):

cpython/Modules/mathmodule.c, line 1178 at 3879ca0

Raw numbers:
Thread Safety

Thread safety of specialized instructions is addressed in a few different ways: atomics are used where possible, and locking is used where they are not sufficient (for example, the dictionary is locked in `_LOAD_ATTR_WITH_HINT`).

Thread safety of specialization is addressed using similar techniques.
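As one concrete example of the guard style this relies on, a type-version check in a specialized instruction can use a relaxed atomic load (a sketch modeled on the guard pattern in Python/bytecodes.c; `type_version` is assumed to be the value captured in the inline cache):

```c
    /* Deopt if the receiver's type changed since specialization. The
     * FT_ATOMIC_* wrappers compile to plain loads in the default build
     * and to relaxed atomic loads in free-threaded builds. */
    PyTypeObject *tp = Py_TYPE(owner_o);
    DEOPT_IF(FT_ATOMIC_LOAD_UINT_RELAXED(tp->tp_version_tag) != type_version);
```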
Stats

Following the instructions in the comment preceding `specialize_attr_loadclassattr`, I collected stats for the default build for both this PR and its merge base using `./python -m test_typing test_re test_dis test_zlib`, and compared them using `Tools/scripts/summarize_stats.py`. The results for `LOAD_ATTR` are nearly identical and are consistent with results from comparing the merge base against itself.

Related issue: --disable-gil builds #115999