[PATCH] D105264: [X86] AVX512FP16 instructions enabling 2/6

Pengfei Wang via Phabricator via llvm-commits <llvm-commits at lists.llvm.org>
Wed Aug 11 07:01:24 PDT 2021


pengfei updated this revision to Diff 365738.
pengfei marked 6 inline comments as done.
pengfei added a comment.

1. Rebase onto the first merged FP16 patch.
2. Address Yuanke's comments.
3. Add parentheses around casts (see the sketch after this list).
4. Remove the OptForSize predicate for vmovsh.
5. Add more immediate values to the encoding tests for vcmpph/vcmpsh.
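
On item 3, the parenthesization matters for the macro-style intrinsics in the
headers: each cast and each macro argument gets its own parentheses so that a
compound expression passed by the caller still parses as a single operand.
Below is a minimal sketch of the pattern only; the macro and builtin names are
placeholders for illustration, not the actual contents of this patch.

  /* Hypothetical illustration: _mm_hypothetical_op and
   * __builtin_ia32_hypothetical are placeholder names, not part of the
   * patch. Assumes a compiler providing __m128h/__v8hf (AVX512FP16). */
  #include <immintrin.h>

  #define _mm_hypothetical_op(A, B)                                     \
    ((__m128h)__builtin_ia32_hypothetical((__v8hf)(__m128h)(A),         \
                                          (__v8hf)(__m128h)(B)))

Without the inner parentheses around (A), a call such as
_mm_hypothetical_op(x + y, z) would expand so that the casts bind only to x
and y is added afterwards, which is not what the caller wrote.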


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D105264/new/

https://reviews.llvm.org/D105264

Files:
  clang/include/clang/Basic/BuiltinsX86.def
  clang/lib/CodeGen/CGBuiltin.cpp
  clang/lib/Headers/avx512fp16intrin.h
  clang/lib/Headers/avx512vlfp16intrin.h
  clang/lib/Sema/SemaChecking.cpp
  clang/test/CodeGen/X86/avx512fp16-builtins.c
  clang/test/CodeGen/X86/avx512vlfp16-builtins.c
  llvm/include/llvm/IR/IntrinsicsX86.td
  llvm/lib/Target/X86/AsmParser/X86AsmParser.cpp
  llvm/lib/Target/X86/MCTargetDesc/X86ATTInstPrinter.cpp
  llvm/lib/Target/X86/MCTargetDesc/X86InstPrinterCommon.cpp
  llvm/lib/Target/X86/X86ISelLowering.cpp
  llvm/lib/Target/X86/X86InstrAVX512.td
  llvm/lib/Target/X86/X86InstrFoldTables.cpp
  llvm/lib/Target/X86/X86InstrInfo.cpp
  llvm/lib/Target/X86/X86IntrinsicsInfo.h
  llvm/test/CodeGen/X86/avx512fp16-arith-intrinsics.ll
  llvm/test/CodeGen/X86/avx512fp16-arith-vl-intrinsics.ll
  llvm/test/CodeGen/X86/avx512fp16-arith.ll
  llvm/test/CodeGen/X86/avx512fp16-fmaxnum.ll
  llvm/test/CodeGen/X86/avx512fp16-fminnum.ll
  llvm/test/CodeGen/X86/avx512fp16-fold-load-binops.ll
  llvm/test/CodeGen/X86/avx512fp16-fold-xmm-zero.ll
  llvm/test/CodeGen/X86/avx512fp16-fp-logic.ll
  llvm/test/CodeGen/X86/avx512fp16-intrinsics.ll
  llvm/test/CodeGen/X86/avx512fp16-machine-combiner.ll
  llvm/test/CodeGen/X86/avx512fp16-mov.ll
  llvm/test/CodeGen/X86/avx512fp16-unsafe-fp-math.ll
  llvm/test/CodeGen/X86/fp-strict-scalar-cmp-fp16.ll
  llvm/test/CodeGen/X86/fp-strict-scalar-fp16.ll
  llvm/test/CodeGen/X86/pseudo_cmov_lower-fp16.ll
  llvm/test/CodeGen/X86/stack-folding-fp-avx512fp16.ll
  llvm/test/CodeGen/X86/stack-folding-fp-avx512fp16vl.ll
  llvm/test/CodeGen/X86/vec-strict-128-fp16.ll
  llvm/test/CodeGen/X86/vec-strict-256-fp16.ll
  llvm/test/CodeGen/X86/vec-strict-512-fp16.ll
  llvm/test/CodeGen/X86/vec-strict-cmp-128-fp16.ll
  llvm/test/CodeGen/X86/vec-strict-cmp-256-fp16.ll
  llvm/test/CodeGen/X86/vec-strict-cmp-512-fp16.ll
  llvm/test/CodeGen/X86/vector-reduce-fmax-nnan.ll
  llvm/test/CodeGen/X86/vector-reduce-fmin-nnan.ll
  llvm/test/MC/Disassembler/X86/avx512fp16.txt
  llvm/test/MC/Disassembler/X86/avx512fp16vl.txt
  llvm/test/MC/X86/avx512fp16.s
  llvm/test/MC/X86/avx512fp16vl.s
  llvm/test/MC/X86/intel-syntax-avx512fp16.s
  llvm/test/MC/X86/intel-syntax-avx512fp16vl.s


