[PATCH] D35146: AMDGPU : Widen extending scalar loads to 32-bits

Matt Arsenault via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Jul 25 09:45:40 PDT 2017


arsenm added inline comments.


================
Comment at: lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp:130-135
+  /// \details For uniform, small-type loads from constant memory, we need to
+  ///  check that the load's alignment is 4 bytes or larger, the load is not
+  ///  volatile, and the data type's bit width is less than 32 bits. If those
+  ///  conditions are satisfied, widen the scalar load.
+  ///
+  /// \returns True.
----------------
The comment explains too specifically what the code is doing; it should describe the intent and the why.
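For example, an intent-focused version might read something like this (the wording below is only a suggestion, paraphrasing the patch's stated purpose of widening extending scalar loads, not required text):

```cpp
/// Widen a sub-32-bit scalar load from constant memory to a 32-bit load
/// when it is safe to do so, so that the load can stay on the scalar unit
/// instead of being lowered to a vector memory operation.
///
/// \returns True.
```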


================
Comment at: test/CodeGen/AMDGPU/widen_extending_scalar_loads.ll:1
+; RUN: opt -S -mtriple=amdgcn-- -amdgpu-codegenprepare < %s | FileCheck -check-prefix=GCN -check-prefix=HSA %s
+
----------------
These are IR check lines, so they shouldn't use GCN. You also don't need or use the HSA check prefix.
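Something along these lines would avoid both issues (dropping the explicit prefixes and using the default CHECK is just one option):

```llvm
; RUN: opt -S -mtriple=amdgcn-- -amdgpu-codegenprepare < %s | FileCheck %s
```

The test body would then use plain `; CHECK:` / `; CHECK-NEXT:` lines instead of `; GCN-NEXT:`.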


================
Comment at: test/CodeGen/AMDGPU/widen_extending_scalar_loads.ll:88-90
+; GCN-NEXT: trunc
+; GCN-NEXT: bitcast
+; GCN-NEXT: store
----------------
This should check the integer type/operands.
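For instance, the checks could pin down the types along these lines (the exact types and value names here are illustrative, not taken from the test):

```llvm
; CHECK-NEXT: %trunc = trunc i32 %{{[0-9]+}} to i16
; CHECK-NEXT: %cast = bitcast i16 %trunc to half
; CHECK-NEXT: store half %cast,
```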


================
Comment at: test/CodeGen/AMDGPU/widen_extending_scalar_loads.ll:172
+; GCN-NEXT: store
+define amdgpu_kernel void @constant_load_v2i8_addrespace1(<2 x i8> addrspace(1)* %out, <2 x i8> addrspace(1)* %in) #0 {
+  %ld = load <2 x i8>, <2 x i8> addrspace(1)* %in
----------------
Typo: _addrespace1 should be _addrspace1.


Repository:
  rL LLVM

https://reviews.llvm.org/D35146




