[PATCH] D141129: [InstCombine] Use KnownBits for lshr/add -> (a + b < a)

Sanjay Patel via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Jan 17 05:42:45 PST 2023


spatel added inline comments.


================
Comment at: llvm/test/Transforms/InstCombine/shift-add.ll:510
+
+define i32 @lshr_16_add_known_17_leading_zeroes(i32 %a, i32 %b) {
+; CHECK-LABEL: @lshr_16_add_known_17_leading_zeroes(
----------------
Pierre-vh wrote:
> lebedev.ri wrote:
> > spatel wrote:
> > > Demanded bits should always zap cases like this.
> > > A more interesting test would have just one side of the add with extra leading zeros:
> > > https://alive2.llvm.org/ce/z/c4_YAE
> > > 
> > > So we'd be back to creating 2 extra instructions vs. the original sequence. That doesn't seem like a good trade-off in IR.
> > Presumably we can simply avoid that unprofitable case,
> > since e.g. `lshr_16_add_known_16_leading_zeroes()` looks like an improvement?
> It does remove it; I'm not sure why it wasn't removed there. Maybe I forgot to rebase?
Yes - that seems like it would be ok; just restrict the ShAmt precondition to equality (it was like that in an earlier draft?).


================
Comment at: llvm/test/Transforms/InstCombine/shift-add.ll:510
+
+define i32 @lshr_16_add_known_17_leading_zeroes(i32 %a, i32 %b) {
+; CHECK-LABEL: @lshr_16_add_known_17_leading_zeroes(
----------------
spatel wrote:
> Pierre-vh wrote:
> > lebedev.ri wrote:
> > > spatel wrote:
> > > > Demanded bits should always zap cases like this.
> > > > A more interesting test would have just one side of the add with extra leading zeros:
> > > > https://alive2.llvm.org/ce/z/c4_YAE
> > > > 
> > > > So we'd be back to creating 2 extra instructions vs. the original sequence. That doesn't seem like a good trade-off in IR.
> > > Presumably we can simply avoid that unprofitable case,
> > > since e.g. `lshr_16_add_known_16_leading_zeroes()` looks like an improvement?
> > It does remove it; I'm not sure why it wasn't removed there. Maybe I forgot to rebase?
> Yes - that seems like it would be ok; just restrict the ShAmt precondition to equality (it was like that in an earlier draft?).
My comment might not have been clear:
1. If both sides of the "add" have more leading zeros than the shift amount (the test as shown in this patch, `@lshr_16_add_known_17_leading_zeroes`), we already reduce that to "0" via SimplifyDemandedBits(). AFAIK, this patch does not change anything for an example like that.
2. If only one side of the "add" has more leading zeros than the shift amount, then we probably can't get rid of the "and" mask by creating a "trunc". That's what I am seeing in the example I posted (https://alive2.llvm.org/ce/z/c4_YAE), and that's the test I'd like to see included with this patch.
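
For concreteness, a minimal sketch of the kind of test asked for in point 2, assuming the existing tests establish known leading zeros with an "and" mask; the function name and mask constants are illustrative, not taken from the patch:

  define i32 @lshr_16_add_one_op_extra_leading_zeroes(i32 %a, i32 %b) {
    ; %lhs has at least 17 known leading zeros, %rhs only 16,
    ; so the masks cannot both be folded away by a trunc.
    %lhs = and i32 %a, 32767
    %rhs = and i32 %b, 65535
    %add = add i32 %lhs, %rhs
    %shr = lshr i32 %add, 16
    ret i32 %shr
  }

If the fold fired here, the (a + b < a) form (trunc + add + icmp ult + zext) would presumably be created on top of masks that still have to stay, which is the "2 extra instructions" concern above.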


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D141129/new/

https://reviews.llvm.org/D141129


