add patch to fix quantization failure in PyTorch 1.11.0 on POWER #18489
(created using `eb --new-pr`)

The patch PyTorch-1.10.0_fix-fp16-quantization-without-fbgemm.patch is missing because it no longer applied, even though pytorch/pytorch#84750 was merged. That merge, however, only landed in PyTorch 2.0. The patch failed to apply because the code was reformatted in pytorch/pytorch@e60fd10. This PR adds an updated version of that patch which applies to all PyTorch 1.11-1.13 versions so far.
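
For context, a minimal sketch of the kind of operation the patch keeps working (an assumed reproducer, not taken from this PR): float16 dynamic quantization of an `nn.Linear` module, which on POWER runs without the FBGEMM backend and failed in unpatched PyTorch 1.11-1.13.

```python
import torch

# A toy model containing a Linear layer, the module type targeted by
# dynamic quantization.
model = torch.nn.Sequential(torch.nn.Linear(8, 4))

# float16 dynamic quantization: weights are converted to fp16 at load time.
# On platforms without FBGEMM (e.g. POWER), this code path broke in
# PyTorch 1.11-1.13; the upstream fix (pytorch/pytorch#84750) only landed
# in 2.0, hence the backported patch in this PR.
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.float16
)

# Run the quantized model on dummy input to exercise the quantized path.
print(qmodel(torch.randn(2, 8)))
```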