
Type inference failure involving binary operators, traits, references, and integer defaulting #36549

Closed
solson opened this issue Sep 16, 2016 · 6 comments
Labels: A-inference (Area: Type inference), C-bug (Category: This is a bug.)

Comments

@solson
Member

solson commented Sep 16, 2016

We expect this to compile without error (reasoning below):

let c: &u32 = &5; ((c >> 8) & 0xff) as u8

Compile error:

error: the trait bound `u32: std::ops::BitAnd<i32>` is not satisfied [--explain E0277]
  --> <anon>:10:27
   |>
10 |>         let c: &u32 = &5; ((c >> 8) & 0xff) as u8
   |>                           ^^^^^^^^^^^^^^^^^

Problem: the `0xff` literal is inferred as `i32` instead of `u32`, which would work.

Explanation of my understanding of the code:

  1. c has type &u32
  2. 8 is defaulted to i32 (perhaps 0xff is also unhelpfully defaulted to i32 at this stage?)
  3. c >> 8 has type u32 via impl<'a> Shr<i32> for &'a u32 (to the best of my knowledge)
  4. Next we expect 0xff to infer to u32 to use the u32: BitAnd<u32> impl, but it fails.
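
A minimal sketch of that expected path, with each step pinned down explicitly, assuming the standard library's forwarding impls `impl<'a> Shr<i32> for &'a u32` and `impl BitAnd<u32> for u32` (both with `Output = u32`):

fn expected(c: &u32) -> u8 {
    // Step 3: `&u32 >> i32` resolves through the forwarding impl to a plain `u32`.
    let shifted: u32 = c >> 8i32;
    // Step 4: with the left-hand side known to be `u32`, the literal `0xff`
    // is inferred as `u32` and the `u32: BitAnd<u32>` impl applies.
    (shifted & 0xff) as u8
}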

Working examples (each with a slight twist):

// The most obvious fix is to force the type of 0xff:
let c: &u32 = &5; ((c >> 8) & 0xffu32) as u8

// But that shouldn't be necessary! It works without that if `c: u32` or `*c` is used:
let c: u32 = 5; ((c >> 8) & 0xff) as u8
let c: &u32 = &5; ((*c >> 8) & 0xff) as u8

// Finally, and most oddly, using an identity cast or type ascription
// from `u32` to `u32` also convinces the inference engine:
let c: &u32 = &5; ((c >> 8) as u32 & 0xff) as u8
let c: &u32 = &5; ((c >> 8): u32 & 0xff) as u8

Who thought identity casts were useless?

cc @retep998 @nagisa

@Aatch
Contributor

Aatch commented Sep 17, 2016

This is due to the fact that we "short-circuit" primitive operations. Since we know that i32 + i32 is always i32, we don't have to route through the trait system to determine the type of the result. In simple terms, the expression 1i32 + 2i32 has the type i32, whereas the expression a + b has the type <A as Add<B>>::Output (assuming that A and B aren't primitives), which we resolve later.

In fact, this case is mentioned in a comment in the compiler explaining why we do this. For the middle two cases, the compiler effectively supplies the type hint for you, since it knows that u32 >> {integer} is always u32.

I'm not sure how to deal with this, or whether we even can. We need to know the types to select the trait implementation, but in this case the implementation is required to select the correct type.
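
A rough illustration of the two paths described above, using made-up types `A`, `B`, and `C` for the non-primitive case:

use std::ops::Add;

struct A;
struct B;
struct C;

// Non-primitive operands: the expression's type is the projection
// `<A as Add<B>>::Output`, which is only known once this impl is selected.
impl Add<B> for A {
    type Output = C;
    fn add(self, _rhs: B) -> C { C }
}

fn via_trait(a: A, b: B) -> <A as Add<B>>::Output {
    a + b
}

// Primitive operands: the compiler "short-circuits" and types the
// expression as `i32` directly, without going through the trait system.
fn short_circuited(x: i32, y: i32) -> i32 {
    x + y
}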

@nagisa
Member

nagisa commented Sep 17, 2016

@Aatch I feel like the types applied by integer defaulting should not be forced. Namely, we could assign `8` and `0xff` the type `i32 {defaulted}` (i.e. with a defaulted flag or something), which would then allow the inference engine to figure out that `<&u32 as Shr<i32 {defaulted}>>::Output == i32 {defaulted}` is in fact supposed to be `<&u32 as Shr<i32 {defaulted}>>::Output == u32`, and replace the second defaulted type with the type that could have been inferred.

@Mark-Simulacrum added the A-inference (Area: Type inference) label Jun 23, 2017
@Mark-Simulacrum added the C-bug (Category: This is a bug.) label Jul 26, 2017
@ExpHP
Contributor

ExpHP commented Jan 8, 2019

Edit: I have moved this concern to a new issue: #57447


Original post

A much simpler demonstration (from a relevant URLO thread):

let _: f32 = 1. - 1.;   // allowed
let _: f32 = 1. - &1.;  // type error
let _: f32 = &1. - 1.;  // type error
let _: f32 = &1. - &1.; // type error

To a mortal like me, it seems that the only reason the first line works is that the compiler has a special case for binary operations between two unconstrained "floating-point flavored" type inference variables. Can the compiler not special-case the latter three examples in the same way it special-cases the first?
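
For comparison, once the literals are given concrete types, the reference-taking forwarding impls in the standard library (e.g. `impl<'a> Sub<&'a f32> for f32`) are selected and all four forms compile:

let _: f32 = 1f32 - 1f32;
let _: f32 = 1f32 - &1f32;
let _: f32 = &1f32 - 1f32;
let _: f32 = &1f32 - &1f32;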

@ExpHP
Contributor

ExpHP commented Jan 8, 2019

// Finally, and most oddly, using an identity cast or type ascription
// from `u32` to `u32` also convinces the inference engine:
let c: &u32 = &5; ((c >> 8) as u32 & 0xff) as u8;

I confess that this example is surprising. It's hard to picture what the current implementation actually looks like. :/

@ExpHP
Contributor

ExpHP commented Jan 8, 2019

Never mind, I see now. These may be different issues. In my understanding, that last example works because the type of `c` is annotated (which means the issue in the OP is something even more bizarre!).

@Globidev

The original example now compiles on beta (I'm guessing due to the changes made to fix #57447?).
Maybe we can close this issue?

@varkor closed this as completed Mar 27, 2020