[all-commits] [llvm/llvm-project] b06a2a: [LoopVectorizer] Lower uniform loads as a single load (instead of relying on CSE)

Philip Reames via All-commits all-commits at lists.llvm.org
Mon Nov 23 15:32:46 PST 2020


  Branch: refs/heads/master
  Home:   https://github.com/llvm/llvm-project
  Commit: b06a2ad94f45abc18970ecc3cec93d140d036d8f
      https://github.com/llvm/llvm-project/commit/b06a2ad94f45abc18970ecc3cec93d140d036d8f
  Author: Philip Reames <listmail at philipreames.com>
  Date:   2020-11-23 (Mon, 23 Nov 2020)

  Changed paths:
    M llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
    M llvm/test/Transforms/LoopVectorize/X86/cost-model-assert.ll
    M llvm/test/Transforms/LoopVectorize/X86/uniform_mem_op.ll
    M llvm/test/Transforms/LoopVectorize/multiple-strides-vectorization.ll

  Log Message:
  -----------
  [LoopVectorizer] Lower uniform loads as a single load (instead of relying on CSE)

A uniform load is one that loads from an address which is uniform across all lanes, i.e. every lane reads the same location. As currently implemented, we cost model such loads as a single scalar load plus a broadcast, but the actual lowering replicates the load once per lane.
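To make the terminology concrete, here is a minimal C++ sketch of a loop containing a uniform load; the function and names are illustrative only and are not taken from the patch or its tests.

```cpp
#include <cstddef>

// Hypothetical example: bias[0] is read from the same address on every
// iteration, so after vectorization every lane wants the same value. That
// makes it a uniform load. The sketch assumes the load stays inside the
// loop, e.g. because 'bias' may alias 'dst'.
void add_bias(int *dst, int *bias, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i)
    dst[i] += bias[0];
}
```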

This change tweaks the lowering to use the REPLICATE strategy by marking such loads (and the computation leading to their memory operand) as uniform after vectorization. This is a useful change in itself, but its real purpose is to pave the way for a follow-up change that will generalize our uniformity logic.
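For intuition, the difference between the two lowerings can be sketched in plain scalar C++ for a vectorization factor of 4. This is only an illustration of the strategy described in the log message, not the vectorizer's actual output.

```cpp
#include <array>

// Illustrative only: the shape of the vectorized loop body for VF = 4.

// Prior lowering: the uniform load is replicated once per lane, and later
// passes (such as CSE) were relied on to clean up the duplicates.
std::array<int, 4> replicated_per_lane(const int *bias) {
  return {bias[0], bias[0], bias[0], bias[0]};
}

// With the load marked uniform after vectorization: a single scalar load
// whose result is broadcast to every lane, matching the cost model's
// "scalar load + broadcast" assumption.
std::array<int, 4> single_load_broadcast(const int *bias) {
  int v = bias[0];
  return {v, v, v, v};
}
```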

In review discussion, an issue was raised about coupling the cost model to the lowering strategy for uniform inputs. That discussion remains unsettled and is pending a larger architectural discussion. We decided to move forward with this patch as-is, and to revise as warranted once the bigger-picture design questions are settled.

Differential Revision: https://reviews.llvm.org/D91398



