[LLVMdev] Adding masked vector load and store intrinsics

Hal Finkel hfinkel at anl.gov
Fri Oct 24 12:41:09 PDT 2014


----- Original Message -----
> From: "Adam Nemet" <anemet at apple.com>
> To: "Nadav Rotem" <nrotem at apple.com>
> Cc: dag at cray.com, llvmdev at cs.uiuc.edu
> Sent: Friday, October 24, 2014 2:03:24 PM
> Subject: Re: [LLVMdev] Adding masked vector load and store intrinsics
> 
> On Oct 24, 2014, at 11:38 AM, Nadav Rotem <nrotem at apple.com> wrote:
> 
> 
> On Oct 24, 2014, at 10:57 AM, Adam Nemet <anemet at apple.com> wrote:
> 
> On Oct 24, 2014, at 4:24 AM, Demikhovsky, Elena <elena.demikhovsky at intel.com> wrote:
> 
> Hi,
> 
> We would like to add support for masked vector loads and stores by
> introducing new target-independent intrinsics. The loop vectorizer
> will then be enhanced to optimize loops containing conditional
> memory accesses by generating these intrinsics for existing targets
> such as AVX2 and AVX-512. The vectorizer will first ask the target
> about availability of masked vector loads and stores. The SLP
> vectorizer can potentially be enhanced to use these intrinsics as
> well.
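For context, a hedged C sketch of the kind of loop with conditional memory accesses the proposal targets (names here are invented for illustration, not taken from the patch):

```c
#include <stddef.h>

/* Illustrative only: a loop with a conditional store. A vectorizer with
 * masked load/store support could vectorize this by storing under a
 * per-lane mask instead of branching per element. */
void conditional_add(const int *trigger, const int *a, int *out, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        if (trigger[i] > 0)   /* lanes failing this test would be masked off */
            out[i] = a[i] + 1;
    }
}
```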
> 
> 
> 
> I am happy to hear that you are working on this because it means that
> in the future we would be able to teach the SLP Vectorizer to
> vectorize types such as <3 x float>.
> 
> The intrinsics would be legal for all targets; targets that do not
> support masked vector loads or stores will scalarize them.
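As a hedged sketch of what that scalarization might look like (illustrative names, not the actual lowering code): each lane becomes a guarded scalar store, so masked-off addresses are never touched:

```c
#include <stdbool.h>

/* Sketch: per-lane scalarization of a masked store for a target without
 * native support. The branch guarantees that masked-off lanes never
 * access memory, matching the proposed intrinsic semantics. */
static void scalarized_masked_store(int *addr, const int *data,
                                    const bool *mask, int lanes) {
    for (int i = 0; i < lanes; ++i) {
        if (mask[i])          /* skip the store entirely when masked off */
            addr[i] = data[i];
    }
}
```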
> 
> +1. I think that this is an important requirement.
> 
> 
> I do agree that we would like to have one IR node to capture these so
> that they survive until ISel and that their specific semantics can
> be expressed. However, can you discuss the other options (new IR
> instructions, target-specific intrinsics) and why you went with
> target-independent intrinsics?
> 
> I agree with the approach of adding target-independent masked memory
> intrinsics. One reason is that I would like to keep the vectorizers
> target independent (and use the target transform info to query the
> backends). I oppose adding new first-level instructions because we
> would need to teach all of the existing optimizations about the new
> instructions, and considering the limited usefulness of masked
> operations it is not worth the effort.
> 
> 
> Thanks, Nadav, that makes sense. Do you foresee any potential issues
> due to the limitation of what information can be attached to an
> intrinsic call vs. a store, e.g. alignment or alias info? I do
> remember from trying to optimize from-memory-broadcast intrinsics
> that the optimizers were pretty limited dealing with intrinsics
> accessing memory.

This is, hopefully, a bit better now than it was in the past. Nevertheless, our handling of these things is certainly something to improve in general. Alignment is covered (the proposed intrinsics take an alignment argument), and alias metadata should just work (except perhaps for TBAA, but that should be easy to fix).

 -Hal

> 
> 
> Adam
> 
> 
> My intuition would have been to go with target-specific intrinsics
> until we have something solid implemented and then potentially turn
> this into native IR instructions as the next step (for other
> targets, etc.). I am particularly worried whether we really want to
> generate these for targets that don’t have vector predication
> support.
> 
> 
> Probably not, but this is a cost-benefit decision that the
> vectorizers would need to make.
> 
> There is also the related question of vector-predicating instructions
> beyond just loads and stores, which AVX-512 supports. This is probably
> a smaller gain but should be part of the plan as well.
> 
> 
> Adam
> 
> The addressed memory will not be touched for masked-off lanes. In
> particular, if all lanes are masked off no address will be accessed.
> 
> call void @llvm.masked.store(i32* %addr, <16 x i32> %data, i32 4, <16 x i1> %mask)
> 
> %data = call <8 x i32> @llvm.masked.load(i32* %addr, <8 x i32> %passthru, i32 4, <8 x i1> %mask)
> 
> where %passthru is used to fill the elements of %data that are
> masked-off (if any; can be zeroinitializer or undef).
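A hedged C model of the proposed load semantics (function and variable names invented for this sketch): masked-off lanes come from the pass-through vector and the corresponding addresses are not read:

```c
#include <stdbool.h>

/* Sketch of the proposed @llvm.masked.load semantics: for each lane,
 * read memory only when the mask bit is set; otherwise take the
 * corresponding passthru element. */
static void masked_load_model(int *result, const int *addr,
                              const int *passthru, const bool *mask,
                              int lanes) {
    for (int i = 0; i < lanes; ++i)
        result[i] = mask[i] ? addr[i] : passthru[i];
}
```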
> 
> Comments so far, before we dive into more details?
> 
> Thank you.
> 
> - Elena and Ayal
> 
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
> 

-- 
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory



