[PATCH] D126085: [RISCV] Add a subtarget feature to enable unaligned scalar loads and stores

Alex Bradbury via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu May 26 06:05:28 PDT 2022


asb added a comment.

This is all that's needed to hook up the appropriate attribute:

  diff --git a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVTargetStreamer.cpp b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVTargetStreamer.cpp
  index 5f9ed77d07cf..ac0c8113135a 100644
  --- a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVTargetStreamer.cpp
  +++ b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVTargetStreamer.cpp
  @@ -50,6 +50,9 @@ void RISCVTargetStreamer::emitTargetAttributes(const MCSubtargetInfo &STI) {
     else
       emitAttribute(RISCVAttrs::STACK_ALIGN, RISCVAttrs::ALIGN_16);
   
  +  if (STI.hasFeature(RISCV::FeatureUnalignedScalarMem))
  +    emitAttribute(RISCVAttrs::UNALIGNED_ACCESS, RISCVAttrs::ALLOWED);
  +
     auto ParseResult = RISCVFeatures::parseFeatureBits(
         STI.hasFeature(RISCV::Feature64Bit), STI.getFeatureBits());
     if (!ParseResult) {
  diff --git a/llvm/test/CodeGen/RISCV/attributes.ll b/llvm/test/CodeGen/RISCV/attributes.ll
  index 9d9f02ce52cb..0734b5a01b45 100644
  --- a/llvm/test/CodeGen/RISCV/attributes.ll
  +++ b/llvm/test/CodeGen/RISCV/attributes.ll
  @@ -147,6 +147,14 @@
   ; RV64COMBINEINTOZKN: .attribute 5, "rv64i2p0_zbkb1p0_zbkc1p0_zbkx1p0_zkn1p0_zknd1p0_zkne1p0_zknh1p0"
   ; RV64COMBINEINTOZKS: .attribute 5, "rv64i2p0_zbkb1p0_zbkc1p0_zbkx1p0_zks1p0_zksed1p0_zksh1p0"
   
  +; RUN: llc -mtriple=riscv32 %s -o - | FileCheck --check-prefix=ALIGNED %s
  +; RUN: llc -mtriple=riscv64 %s -o - | FileCheck --check-prefix=ALIGNED %s
  +; RUN: llc -mtriple=riscv32 -mattr=+unaligned-scalar-mem %s -o - | FileCheck --check-prefix=UNALIGNED %s
  +; RUN: llc -mtriple=riscv64 -mattr=+unaligned-scalar-mem %s -o - | FileCheck --check-prefix=UNALIGNED %s
  +
  +; ALIGNED-NOT: .attribute 6
  +; UNALIGNED: .attribute 6, 1
  +
   define i32 @addi(i32 %a) {
     %1 = add i32 %a, 1
     ret i32 %1

I think not emitting `Tag_RISCV_unaligned_access` in the absence of `+unaligned-scalar-mem` (as the above patch does) is probably the best we can do, as inline assembly or handwritten .s files may well include misaligned accesses even without `+unaligned-scalar-mem`.
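
To make the inline assembly point concrete, here's a purely illustrative IR snippet (hypothetical function name and asm string) where the backend has no way of knowing a misaligned access happens, whatever the subtarget features say:

  ; Illustrative only: the offset of 1 from %p makes the load misaligned when
  ; %p is word-aligned, but to the compiler this is just an opaque asm blob,
  ; so it can't be used to decide whether to emit the attribute.
  define i32 @misaligned_via_inline_asm(ptr %p) {
    %v = call i32 asm "lw $0, 1($1)", "=r,r"(ptr %p)
    ret i32 %v
  }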

For the patch under review, ALIGN and NOALIGN are reversed in meaning, i.e. I'd expect `NOALIGN` to cover the RUN invocations with `-mattr=+unaligned-scalar-mem`.
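
For illustration, the mapping I'd expect looks roughly like this (a sketch only; the actual RUN lines, function bodies, and check lines in D126085 may well differ):

  ; RUN: llc -mtriple=riscv64 < %s | FileCheck --check-prefix=ALIGN %s
  ; RUN: llc -mtriple=riscv64 -mattr=+unaligned-scalar-mem < %s | FileCheck --check-prefix=NOALIGN %s
  
  ; Without the feature, a misaligned i32 load is expanded into byte loads;
  ; with +unaligned-scalar-mem it stays a single lw (function name hypothetical).
  define i32 @load_i32_align1(ptr %p) {
  ; ALIGN: lbu
  ; NOALIGN: lw
    %v = load i32, ptr %p, align 1
    ret i32 %v
  }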


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D126085/new/

https://reviews.llvm.org/D126085


