[LLVMdev] TLI.getSetCCResultType() and/or MVT broken by design?

Villmow, Micah Micah.Villmow at amd.com
Fri Jul 27 11:51:32 PDT 2012


I'm running into lots of problems with this callback. Most of the problems occur because the callback is used before types are legalized, and the code generator does not have a 1-1 correspondence between all LLVM types and the codegen types. This leads to trouble when getSetCCResultType is passed an EVT whose MVT is invalid but which still has a valid LLVM type attached to it. An example is <3 x float>. getSetCCResultType can return any type, and in the AMDIL backend's case it returns the corresponding integer version of the vector for <3 x float>. The problem shows up in code like the following (comments removed):
This is from DAGCombiner.cpp:visitSIGN_EXTEND.
   EVT N0VT = N0.getOperand(0).getValueType();
...
      EVT SVT = TLI.getSetCCResultType(N0VT);
...
      if (VT.getSizeInBits() == SVT.getSizeInBits())
        return DAG.getSetCC(N->getDebugLoc(), VT, N0.getOperand(0),
                             N0.getOperand(1),
                             cast<CondCodeSDNode>(N0.getOperand(2))->get());

SVT.getSizeInBits() crashes because TLI.getSetCCResultType returns an EVT whose MVT is invalid and whose LLVMTy is NULL. Since there is no way to specify the LLVMTy manually, there is no way to fix this short of finding every location that uses this callback and disabling it.
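Roughly, the override ends up looking like this (a simplified sketch, not the exact AMDIL code), because the hook is declared as "virtual EVT getSetCCResultType(EVT VT) const;" and so has no LLVMContext to build a proper extended EVT with:

   // Simplified sketch, not the exact AMDIL implementation.
   EVT AMDILTargetLowering::getSetCCResultType(EVT VT) const {
     if (!VT.isVector())
       return MVT::i32;
     // Integer vector with the same shape, e.g. v3i32 for <3 x float>.
     MVT EltVT = MVT::getIntegerVT(VT.getVectorElementType().getSizeInBits());
     // There is no simple v3i32, so getVectorVT yields an MVT whose SimpleTy
     // is INVALID_SIMPLE_VALUE_TYPE; the EVT it converts to has neither a
     // simple type nor an LLVMTy, so EVT::getSizeInBits() on it crashes.
     return MVT::getVectorVT(EltVT, VT.getVectorNumElements());
   }

If the hook were given an LLVMContext, the backend could instead build the result with EVT::getVectorVT(Context, ...), which is backed by a real IR type and whose getSizeInBits() works; with the current signature there is nothing to build it from.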

I'm currently disabling these paths with an extra VT.isPow2VectorType() check, but that hardly seems preferable. So should I conditionalize the pre-legalization check, or should a backend be allowed to set the LLVMTy of newly created EVTs?
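For reference, the kind of guard I mean looks roughly like this (a sketch only; the exact check and its placement may differ):

      // Sketch of the workaround: skip the transformation for vectors whose
      // element count is not a power of two (e.g. <3 x float>), so that the
      // result of getSetCCResultType is never queried for them here.
      if (N0VT.isPow2VectorType()) {
        EVT SVT = TLI.getSetCCResultType(N0VT);
        if (VT.getSizeInBits() == SVT.getSizeInBits())
          return DAG.getSetCC(N->getDebugLoc(), VT, N0.getOperand(0),
                              N0.getOperand(1),
                              cast<CondCodeSDNode>(N0.getOperand(2))->get());
      }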

So, is the design that prevents backends from setting this simply broken, or is this the expected behavior? Let me know what you think is the best way forward.

Thanks,
Micah