[LLVMdev] Transforming wide integer computations back to vector computations

Matt Pharr matt.pharr at gmail.com
Mon Jan 2 09:12:08 PST 2012


One of the optimization passes (it appears to be SROA) sometimes transforms computations on vectors of integers into computations on wide integer types; for example, I'm seeing code like the following after optimization(*):

  ; reinterpret the <16 x i8> vector as a single 128-bit integer
  %0 = bitcast <16 x i8> %float2uint to i128
  ; shift left one byte: each lane moves up one position (little-endian)
  %1 = shl i128 %0, 8
  ; set the low byte (lane 0) to 255
  %ins = or i128 %1, 255
  ; reinterpret the result as a vector again
  %2 = bitcast i128 %ins to <16 x i8>
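
For reference, and assuming little-endian lane numbering, I believe this sequence is equivalent to vector IR along the following lines (the %shift name and the explicit shuffle mask are mine, purely for illustration):

  ; move each byte up one lane; lane 0 of the result comes from the
  ; undef operand and is overwritten by the insertelement below
  %shift = shufflevector <16 x i8> %float2uint, <16 x i8> undef,
             <16 x i32> <i32 16, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5,
                         i32 6, i32 7, i32 8, i32 9, i32 10, i32 11,
                         i32 12, i32 13, i32 14>
  ; set lane 0 to 255, matching the 'or i128 %1, 255' above
  %ins2 = insertelement <16 x i8> %shift, i8 255, i32 0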

The back end I'm trying to get this code through (a hacked-up version of the LLVM C backend(**)) doesn't support wide integer types, but it handles the original vectors of integers fine. Is there a straightforward way to avoid having these wide-integer computations generated in the first place, or is there pre-existing code that would transform them back to the original vector types?

Thanks,
-matt

(*) It seems that this is happening with vectors of i8 and i16, but not i32 and i64; in some cases it actually leads to better code for the i8/i16 vectors, in that an unnecessary store/load round-trip is optimized out.  I can provide a test case/submit a bug if that would be useful.
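
To illustrate the round-trip (with hypothetical value names), the i8/i16 output previously contained something along these lines, which the wide-integer form now folds away:

  ; the vector takes a detour through a stack slot before its next use
  %tmp = alloca <16 x i8>
  store <16 x i8> %ins, <16 x i8>* %tmp
  %reload = load <16 x i8>* %tmp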

(**) Additional CBE patches will follow from this effort, once the aforementioned hacks are turned into something a little cleaner.




