[llvm-commits] [llvm] r122801 - /llvm/trunk/lib/Transforms/Scalar/CodeGenPrepare.cpp

Cameron Zwarich zwarich at apple.com
Wed Jan 5 18:16:26 PST 2011


On Jan 5, 2011, at 3:44 PM, Jakob Stoklund Olesen wrote:

> On Jan 5, 2011, at 2:53 PM, Cameron Zwarich wrote:
>> I tried implementing this, re-iterating over only the containing basic block after optimizing a memory instruction, instead of rechecking the whole function. I got the following results on test-suite + SPEC2000 & SPEC2006. These results seem to disagree with my previous printf-based approach to counting the extra improvements.
> 
> [... almost, but not quite identical results ..]
> 
>> I guess I'll still have to work at getting the code quality closer to what rechecking the whole function gives. Maybe I'll need to make all of the optimizations feed into a worklist, which could be tricky.
> 
> I think an instruction-based work list should give you even better performance than anything based on basic blocks.
> 
> Whenever an instruction is sunk, add all its operands to the work list. You should probably use a SetVector for stable results.
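
Roughly, I picture the worklist driver itself looking something like the
sketch below. OptimizeInst() is just a placeholder for a per-instruction
entry point that the existing optimizations would feed into (it doesn't
exist today), and I'm ignoring the bookkeeping needed if an optimization
erases an instruction that is still queued:

  #include "llvm/ADT/SetVector.h"
  #include "llvm/ADT/SmallVector.h"

  bool OptimizeBlockWithWorklist(BasicBlock &BB) {
    bool MadeChange = false;
    SetVector<Instruction*> Worklist;

    // Seed the worklist with every instruction in the block.
    for (BasicBlock::iterator I = BB.begin(), E = BB.end(); I != E; ++I)
      Worklist.insert(&*I);

    while (!Worklist.empty()) {
      Instruction *I = Worklist.back();
      Worklist.pop_back();

      // Remember the instruction operands up front, since OptimizeInst()
      // may erase or sink I.
      SmallVector<Instruction*, 8> Ops;
      for (User::op_iterator OI = I->op_begin(), OE = I->op_end(); OI != OE; ++OI)
        if (Instruction *OpI = dyn_cast<Instruction>(*OI))
          Ops.push_back(OpI);

      if (!OptimizeInst(I))   // placeholder: returns true if it changed I
        continue;
      MadeChange = true;

      // Whenever an instruction is sunk or rewritten, revisit its operands.
      // Using a SetVector keeps the visitation order deterministic.
      for (unsigned i = 0, e = Ops.size(); i != e; ++i)
        Worklist.insert(Ops[i]);
    }
    return MadeChange;
  }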

At first glance, though, the existing CallInst handling looked a bit iffy to fit into a worklist:

    } else if (CallInst *CI = dyn_cast<CallInst>(I)) {
      // If we found an inline asm expression, and if the target knows how to
      // lower it to normal LLVM code, do so now.
      if (TLI && isa<InlineAsm>(CI->getCalledValue())) {
        if (TLI->ExpandInlineAsm(CI)) {
          BBI = BB.begin();
          // Avoid processing instructions out of order, which could cause
          // reuse before a value is defined.
          SunkAddrs.clear();
        } else
          // Sink address computing for memory operands into the block.
          MadeChange |= OptimizeInlineAsmInst(I, &(*CI), SunkAddrs);
      } else {
        // Other CallInst optimizations that don't need to muck with the
        // enclosing iterator here.
        MadeChange |= OptimizeCallInst(CI);
      }
    }

However, we could expand the inline assembly on the first pass and then do OptimizeInlineAsmInst and OptimizeCallInst on subsequent visits via the worklist.
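
A rough sketch of that split, with the surrounding worklist driver omitted
(the FirstVisit flag is hypothetical; ExpandInlineAsm, OptimizeInlineAsmInst,
OptimizeCallInst, and SunkAddrs are the ones from the excerpt above):

  if (CallInst *CI = dyn_cast<CallInst>(I)) {
    if (TLI && isa<InlineAsm>(CI->getCalledValue())) {
      // Only attempt the expansion on the initial linear scan of the block,
      // where it is still safe to restart the iterator and clear SunkAddrs.
      if (FirstVisit && TLI->ExpandInlineAsm(CI)) {
        BBI = BB.begin();
        SunkAddrs.clear();
        MadeChange = true;
      } else {
        // On worklist-driven revisits, only sink the address computations
        // for the asm's memory operands; nothing here touches the iterator.
        MadeChange |= OptimizeInlineAsmInst(I, &(*CI), SunkAddrs);
      }
    } else {
      // Other CallInst optimizations that don't muck with the enclosing
      // iterator.
      MadeChange |= OptimizeCallInst(CI);
    }
  }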

Cameron


