[PATCH] D24833: [LoopDataPrefetch/AArch64] Allow selective prefetching of symbolic strided accesses

Renato Golin via llvm-commits llvm-commits at lists.llvm.org
Mon Sep 26 10:11:18 PDT 2016


rengolin added a comment.

Hi Balaram,

This looks like a well-made patch: it correctly enables the feature, applies the prefetch only when it's profitable, and comes with good tests.

I'll leave the rest of the review and the approval to Adam et al., but from my side, the change looks good.

cheers,
--renato


================
Comment at: test/Transforms/LoopDataPrefetch/AArch64/kryo-large-stride.ll:7
@@ -4,1 +6,3 @@
 ; RUN: opt -mcpu=kryo -mtriple=aarch64-gnu-linux -passes=loop-data-prefetch -S < %s | FileCheck %s --check-prefix=NO_LARGE_PREFETCH --check-prefix=ALL
+; RUN: opt -mcpu=kryo -mtriple=aarch64-gnu-linux -passes=loop-data-prefetch -S < %s | FileCheck %s --check-prefix=SYMBOLIC_PREFETCH --check-prefix=ALL
+; RUN: opt -mcpu=kryo -mtriple=aarch64-gnu-linux -passes=loop-data-prefetch -prefetch-degree=0 -S < %s | FileCheck %s --check-prefix=NO_SYMBOLIC_PREFETCH --check-prefix=ALL
----------------
Don't force the CPU here; we have -prefetch-degree for that. Once we have a CPU for which prefetches aren't profitable, we can use Kryo versus that one as an example, *in addition* to the flag-based ones.
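
A flag-based RUN line along these lines would exercise the same behavior without pinning a CPU (a sketch only, assuming -prefetch-degree is the relevant knob, as in the RUN lines quoted above):

```llvm
; Rely on the flag rather than -mcpu to disable symbolic-stride prefetching:
; RUN: opt -mtriple=aarch64-gnu-linux -passes=loop-data-prefetch -prefetch-degree=0 -S < %s \
; RUN:   | FileCheck %s --check-prefix=NO_SYMBOLIC_PREFETCH --check-prefix=ALL
```

This keeps the test independent of any particular subtarget's prefetch tuning.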


https://reviews.llvm.org/D24833
