[llvm-commits] [llvm] r111850 - /llvm/trunk/lib/Target/ARM/ARMFastISel.cpp

Jim Grosbach grosbach at apple.com
Mon Aug 23 15:52:36 PDT 2010


Hey Eric,

Glad to see this making progress. Loads and stores will likely be some of the more obnoxious pieces to deal with.

I was going to ask, "Is there a check somewhere to make sure we don't try to get here in Thumb or Thumb2 mode?" but you've already checked in exactly that. So moving on... 

getFrameIndexReference() doesn't check whether the reg/offset will be in range for an instruction; it just returns the appropriate base register and the offset from it for a frame index reference. The LDR generated here can thus end up with an out-of-range offset for large (> 4K) stack frames, since the addressing-mode-2 immediate only covers +/- 4095. At least for now, it may be reasonable to just punt on offsets that don't fit in the immediate.
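Something like this (untested sketch, using the Offset local from ARMSelectLoad in the patch) would cover it:

    // ARM addressing mode 2 only encodes a 12-bit immediate (+/-4095),
    // so bail back to SelectionDAG for anything larger.
    if (Offset < -4095 || Offset > 4095)
      return false;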

Similarly, is there a check above this to make sure we're only dealing with GPR register class loads? Even in ARM mode there are a number of other register classes (see loadRegFromStackSlot() in ARMBaseInstrInfo.cpp for examples) that may need to be handled. Like the above, though, it's not unreasonable to just detect them and punt for now.
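A minimal version of that punt, assuming you only want plain i32 loads into a GPR for now (TLI here is the member FastISel already provides), might be:

    // Only handle values that fit in a single GPR; punt on floating
    // point, vectors, i64, etc. until they get real support.
    EVT VT = TLI.getValueType(I->getType(), /*AllowUnknown=*/true);
    if (VT != MVT::i32)
      return false;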

You're resolving frame index references directly. Is this happening after register allocation? I don't see how it can be, but I could be wrong. If it's pre-regalloc, those offsets will be incorrect, since at that point we don't yet know how many callee-saved register slots or spill slots the function will need. Perhaps you can use loadRegFromStackSlot() and storeRegToStackSlot() directly and then let the normal PEI code resolve the frame indices?
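Roughly what I have in mind, as an untested sketch (SI->second is the frame index from your StaticAllocaMap lookup, and the TRI argument is just the TM.getRegisterInfo() the patch already uses):

    // Emit the load against the frame index itself and let PEI's frame
    // index elimination fill in the real base register and offset once
    // the final stack layout is known.
    unsigned ResultReg = createResultReg(ARM::GPRRegisterClass);
    TII.loadRegFromStackSlot(*FuncInfo.MBB, FuncInfo.InsertPt, ResultReg,
                             SI->second, ARM::GPRRegisterClass,
                             TM.getRegisterInfo());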

-Jim





On Aug 23, 2010, at 2:44 PM, Eric Christopher wrote:

> Author: echristo
> Date: Mon Aug 23 16:44:12 2010
> New Revision: 111850
> 
> URL: http://llvm.org/viewvc/llvm-project?rev=111850&view=rev
> Log:
> Start getting ARM loads/address computation going.
> 
> Modified:
>    llvm/trunk/lib/Target/ARM/ARMFastISel.cpp
> 
> Modified: llvm/trunk/lib/Target/ARM/ARMFastISel.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/ARM/ARMFastISel.cpp?rev=111850&r1=111849&r2=111850&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM/ARMFastISel.cpp (original)
> +++ llvm/trunk/lib/Target/ARM/ARMFastISel.cpp Mon Aug 23 16:44:12 2010
> @@ -101,8 +101,14 @@
>     virtual bool TargetSelectInstruction(const Instruction *I);
> 
>   #include "ARMGenFastISel.inc"
> +  
> +    // Instruction selection routines.
> +    virtual bool ARMSelectLoad(const Instruction *I);
> 
> +    // Utility routines.
>   private:
> +    bool ARMComputeRegOffset(const Instruction *I, unsigned &Reg, int &Offset);
> +    
>     bool DefinesOptionalPredicate(MachineInstr *MI, bool *CPSR);
>     const MachineInstrBuilder &AddOptionalDefs(const MachineInstrBuilder &MIB);
> };
> @@ -301,8 +307,75 @@
>   return ResultReg;
> }
> 
> +bool ARMFastISel::ARMComputeRegOffset(const Instruction *I, unsigned &Reg,
> +                                      int &Offset) {
> +  // Some boilerplate from the X86 FastISel.
> +  const User *U = NULL;
> +  Value *Op1 = I->getOperand(0);
> +  unsigned Opcode = Instruction::UserOp1;
> +  if (const Instruction *I = dyn_cast<Instruction>(Op1)) {
> +    // Don't walk into other basic blocks; it's possible we haven't
> +    // visited them yet, so the instructions may not yet be assigned
> +    // virtual registers.
> +    if (FuncInfo.MBBMap[I->getParent()] != FuncInfo.MBB)
> +      return false;
> +
> +    Opcode = I->getOpcode();
> +    U = I;
> +  } else if (const ConstantExpr *C = dyn_cast<ConstantExpr>(Op1)) {
> +    Opcode = C->getOpcode();
> +    U = C;
> +  }
> +
> +  if (const PointerType *Ty = dyn_cast<PointerType>(Op1->getType()))
> +    if (Ty->getAddressSpace() > 255)
> +      // Fast instruction selection doesn't support the special
> +      // address spaces.
> +      return false;
> +  
> +  switch (Opcode) {
> +    default: 
> +    //errs() << "Failing Opcode is: " << *Op1 << "\n";
> +    break;
> +    case Instruction::Alloca: {
> +      // Do static allocas.
> +      const AllocaInst *A = cast<AllocaInst>(Op1);
> +      DenseMap<const AllocaInst*, int>::iterator SI =
> +        FuncInfo.StaticAllocaMap.find(A);
> +      if (SI != FuncInfo.StaticAllocaMap.end())
> +        Offset =
> +          TM.getRegisterInfo()->getFrameIndexReference(*FuncInfo.MF,
> +                                                       SI->second, Reg);
> +      else
> +        return false;
> +      return true;
> +    }
> +  }
> +  return false;
> +}
> +
> +bool ARMFastISel::ARMSelectLoad(const Instruction *I) {
> +  
> +  unsigned Reg;
> +  int Offset;
> +  
> +  // See if we can handle this as Reg + Offset
> +  if (!ARMComputeRegOffset(I, Reg, Offset))
> +    return false;
> +    
> +    
> +  unsigned ResultReg = createResultReg(ARM::GPRRegisterClass);
> +  AddOptionalDefs(BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DL,
> +                          TII.get(ARM::LDR), ResultReg)
> +                  .addImm(0).addReg(Reg).addImm(Offset));
> +        
> +  return true;
> +}
> +
> bool ARMFastISel::TargetSelectInstruction(const Instruction *I) {
>   switch (I->getOpcode()) {
> +    case Instruction::Load:
> +      return ARMSelectLoad(I);
>     default: break;
>   }
>   return false;
> 
> 
> _______________________________________________
> llvm-commits mailing list
> llvm-commits at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits




