Type inference failure involving binary operators, traits, references, and integer defaulting #36549
Comments
This is due to the fact that we "short-circuit" primitive operations when the primitive types involved are known. In fact, this case is mentioned in a comment in the compiler explaining why we do this. For the middle two cases, the compiler effectively supplies the type hint for you, since it knows the primitive operand types. I'm not sure how to deal with this, or whether we even can: we need to know the types to select the trait implementation, but in this case the implementation is required to select the correct type.
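To illustrate the circularity described here, a minimal sketch (mine, not from the comment):

fn main() {
    let c: &u32 = &5;

    // Both operand types are known, so `impl Shr<u32> for &u32` is selected
    // directly and its Output (`u32`) follows.
    let a: u32 = c >> 8u32;

    // With an unsuffixed `8`, many `impl Shr<_> for &u32` candidates exist,
    // so the literal has to be defaulted (to `i32`) before an impl can be
    // picked: the defaulting/short-circuiting interaction described above.
    let b: u32 = c >> 8;

    let _ = (a, b);
}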
@Aatch I feel like the types applied by the integer-type default should not be forced. Namely, we could assign …
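For context, a minimal sketch (mine, not nagisa's) of the integer-literal fallback in question; the suggestion, as I read it, is that this i32 default act more like a hint than a forced assignment during operator trait selection:

fn main() {
    let a = 5;        // no other constraint: the literal falls back to i32
    let b: u64 = 5;   // constrained by the annotation: u64
    let c = 5u8;      // constrained by its suffix: u8
    println!("{} {} {}", a, b, c);
}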
Edit: I have moved this concern to a new issue: #57447

Original post:

A much simpler demonstration (relevant URLO thread):

let _: f32 = 1. - 1.;   // allowed
let _: f32 = 1. - &1.;  // type error
let _: f32 = &1. - 1.;  // type error
let _: f32 = &1. - &1.; // type error

To a mortal like me, it seems that the only reason the first line works is because the compiler must have a special case for binary operations between two unconstrained "floating-point flavored" type inference variables. Can the compiler not just special-case the latter three examples in the same way it special-cases the first?
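As a point of comparison (my own sketch, not from the thread), pinning the literal types with suffixes lets the reference-taking impls in core::ops resolve, and all three rejected forms compile:

fn main() {
    // With suffixed literals the operand types are known up front, so the
    // forwarding impls such as `impl Sub<&f32> for f32` and
    // `impl Sub<f32> for &f32` are selected without any special casing.
    let _: f32 = 1f32 - &1f32;
    let _: f32 = &1f32 - 1f32;
    let _: f32 = &1f32 - &1f32;
}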
// Finally, and most oddly, using an identity cast or type ascription
// from `u32` to `u32` also convinces the inference engine:
let c: &u32 = &5;
((c >> 8) as u32 & 0xff) as u8;

I confess that this example is surprising. It's hard to picture what the current implementation actually looks like. :/
Never mind, I see now. These may be different issues. In my understanding, that last example works because the type of …
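My hedged reading of why the cast helps (a sketch, not the commenter's words): the as u32 fixes the left operand of &, so 0xff is inferred as u32 and the u32: BitAnd<u32> impl applies, whereas without the cast the literal used to fall back to i32 first:

fn main() {
    let c: &u32 = &5;

    // With the identity cast, the left operand of `&` is known to be `u32`,
    // so `0xff` infers to `u32` and `u32: BitAnd<u32>` is used.
    let with_cast = ((c >> 8) as u32 & 0xff) as u8;

    // Without the cast: this failed when the issue was filed; per the next
    // comment it compiles on newer compilers.
    let without_cast = ((c >> 8) & 0xff) as u8;

    let _ = (with_cast, without_cast);
}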
The original example now compiles on beta (I'm guessing due to the changes made to fix #57447?)
We expect this to compile without error (reasoning below):

Compile error:

Problem: The `0xff` is getting inferred to `i32` instead of `u32`, which would work.

Explanation of my understanding of the code:

- `c` has type `&u32`
- `8` is defaulted to `i32` (perhaps `0xff` is also unhelpfully defaulted to `i32` at this stage?)
- `c >> 8` has type `u32` via `impl<'a> Shr<i32> for &'a u32` (to the best of my knowledge)
- `0xff` needs to infer to `u32` to use the `u32: BitAnd<u32>` impl, but it fails.

Working examples (each with a slight twist):
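Judging from the rest of the thread, the "slight twists" that make it compile look roughly like this (a hedged sketch, not necessarily the original examples):

fn main() {
    let c: &u32 = &5;

    // Twist 1: an identity cast pins the left operand of `&` to `u32`.
    let _ = ((c >> 8) as u32 & 0xff) as u8;

    // Twist 2: a suffix on the mask selects `u32: BitAnd<u32>` directly.
    let _ = ((c >> 8) & 0xffu32) as u8;

    // Twist 3: dereference `c` so the shift starts from a plain `u32`.
    let _ = ((*c >> 8) & 0xff) as u8;
}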
Who thought identity casts were useless?
cc @retep998 @nagisa