[llvm] [IR] Add llvm `clmul` intrinsic (PR #140301)

Oscar Smith via llvm-commits llvm-commits at lists.llvm.org
Sun May 18 03:13:24 PDT 2025


https://github.com/oscardssmith updated https://github.com/llvm/llvm-project/pull/140301

>From 33bff4722463481e39b71f0090a46ed8c2111e16 Mon Sep 17 00:00:00 2001
From: Oscar Smith <oscardssmith at gmail.com>
Date: Fri, 16 May 2025 12:15:08 -0400
Subject: [PATCH 1/6] add clmul docs

---
 llvm/docs/LangRef.rst | 91 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index a1ae6611acd3c..6ca35ff700817 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -18054,6 +18054,97 @@ Example:
       %r = call i8 @llvm.fshr.i8(i8 255, i8 0, i8 15)  ; %r = i8: 254 (0b11111110)
       %r = call i8 @llvm.fshr.i8(i8 15, i8 15, i8 11)  ; %r = i8: 225 (0b11100001)
       %r = call i8 @llvm.fshr.i8(i8 0, i8 255, i8 8)   ; %r = i8: 255 (0b11111111)
+Syntax:
+"""""""
+
+This is an overloaded intrinsic. You can use ``llvm.fshr`` on any
+integer bit width or any vector of integer elements. Not all targets
+support all bit widths or vector types, however.
+
+::
+
+      declare i8  @llvm.fshr.i8 (i8 %a, i8 %b, i8 %c)
+      declare i64 @llvm.fshr.i64(i64 %a, i64 %b, i64 %c)
+      declare <2 x i32> @llvm.fshr.v2i32(<2 x i32> %a, <2 x i32> %b, <2 x i32> %c)
+
+Overview:
+"""""""""
+
+The '``llvm.fshr``' family of intrinsic functions performs a funnel shift right:
+the first two values are concatenated as { %a : %b } (%a is the most significant
+bits of the wide value), the combined value is shifted right, and the least
+significant bits are extracted to produce a result that is the same size as the
+original arguments. If the first 2 arguments are identical, this is equivalent
+to a rotate right operation. For vector types, the operation occurs for each
+element of the vector. The shift argument is treated as an unsigned amount
+modulo the element size of the arguments.
+
+Arguments:
+""""""""""
+
+The first two arguments are the values to be concatenated. The third
+argument is the shift amount. The arguments may be any integer type or a
+vector with integer element type. All arguments and the return value must
+have the same type.
+
+Example:
+""""""""
+
+.. code-block:: text
+
+      %r = call i8 @llvm.fshr.i8(i8 %x, i8 %y, i8 %z)  ; %r = i8: lsb_extract((concat(x, y) >> (z % 8)), 8)
+      %r = call i8 @llvm.fshr.i8(i8 255, i8 0, i8 15)  ; %r = i8: 254 (0b11111110)
+      %r = call i8 @llvm.fshr.i8(i8 15, i8 15, i8 11)  ; %r = i8: 225 (0b11100001)
+      %r = call i8 @llvm.fshr.i8(i8 0, i8 255, i8 8)   ; %r = i8: 255 (0b11111111)
+
+
+.. _clmul:
+
+'``llvm.clmul.*``' Intrinsic
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+
+This is an overloaded intrinsic. You can use ``llvm.clmul``
+on any integer bit width or vectors of integers.
+
+::
+
+      declare i16 @llvm.clmul.i16(i16 %a, i16 %b)
+      declare i32 @llvm.clmul.i32(i32 %a, i32 %b)
+      declare i64 @llvm.clmul.i64(i64 %a, i64 %b)
+      declare <4 x i32> @llvm.clmult.v4i32(<4 x i32> %a, <4 x i32> %b)
+
+Overview:
+"""""""""
+
+The '``llvm.clmul``' family of intrinsic functions performs carryless
+multiplication (also known as XOR multiplication) on its two arguments.
+
+Arguments:
+""""""""""
+
+The arguments (``%a`` and ``%b``) may be any integer type or a vector with an
+integer element type. Both arguments and the result must have the same type.
+``%a`` and ``%b`` are the two values that undergo carryless multiplication.
+
+Semantics:
+""""""""""
+
+The '``llvm.clmul``' intrinsic computes the carryless multiplication of ``%a`` and ``%b``,
+i.e. the result of the standard multiplication algorithm with every addition replaced by
+an exclusive or. The vector intrinsics operate element-wise; the element order is not affected.
+
+Examples:
+"""""""""
+
+.. code-block:: llvm
+
+      %res = call i4 @llvm.clmul.i4(i4 1, i4 2)  ; %res = 2
+      %res = call i4 @llvm.clmul.i4(i4 5, i4 6)  ; %res = 14
+      %res = call i4 @llvm.clmul.i4(i4 -4, i4 2)  ; %res = -8
+      %res = call i4 @llvm.clmul.i4(i4 -4, i4 -5)  ; %res = 4
 
 Arithmetic with Overflow Intrinsics
 -----------------------------------

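As a cross-check of the semantics documented above, the operation can be
modelled with a small standalone C++ reference (illustrative only, not part
of the patch; the name clmulRef and the fixed uint64_t working type are
assumptions of the sketch):

    #include <cstdint>

    // Reference model of carryless (XOR) multiplication: the schoolbook
    // multiply with every addition replaced by exclusive-or, truncated to
    // the operand width.
    constexpr uint64_t clmulRef(uint64_t A, uint64_t B, unsigned Bits) {
      uint64_t Mask = Bits >= 64 ? ~0ULL : (1ULL << Bits) - 1;
      A &= Mask;
      B &= Mask;
      uint64_t R = 0;
      for (unsigned I = 0; I < Bits; ++I)
        if ((A >> I) & 1) // for every set bit of A ...
          R ^= B << I;    // ... XOR in B shifted to that bit position
      return R & Mask;
    }

    // The i4 examples from the proposed LangRef text, as 4-bit patterns.
    static_assert(clmulRef(1, 2, 4) == 2, "");
    static_assert(clmulRef(5, 6, 4) == 14, "");
    static_assert(clmulRef(0xC /* -4 */, 2, 4) == 0x8 /* -8 */, "");
    static_assert(clmulRef(0xC /* -4 */, 0xB /* -5 */, 4) == 0x4, "");
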
>From 5b4cbc50c3079902b5ade266568b4c5335a28bec Mon Sep 17 00:00:00 2001
From: oscarddssmith <oscar.smith at juliacomputing.com>
Date: Fri, 16 May 2025 13:38:04 -0400
Subject: [PATCH 2/6] fix

---
 llvm/docs/LangRef.rst | 77 ++++++++++---------------------------------
 1 file changed, 17 insertions(+), 60 deletions(-)

diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 6ca35ff700817..636f18f28610b 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -10471,8 +10471,8 @@ its two operands.
 
 .. note::
 
-	The instruction is implemented as a call to libm's '``fmod``'
-	for some targets, and using the instruction may thus require linking libm.
+    The instruction is implemented as a call to libm's '``fmod``'
+    for some targets, and using the instruction may thus require linking libm.
 
 
 Arguments:
@@ -18054,49 +18054,6 @@ Example:
       %r = call i8 @llvm.fshr.i8(i8 255, i8 0, i8 15)  ; %r = i8: 254 (0b11111110)
       %r = call i8 @llvm.fshr.i8(i8 15, i8 15, i8 11)  ; %r = i8: 225 (0b11100001)
       %r = call i8 @llvm.fshr.i8(i8 0, i8 255, i8 8)   ; %r = i8: 255 (0b11111111)
-Syntax:
-"""""""
-
-This is an overloaded intrinsic. You can use ``llvm.fshr`` on any
-integer bit width or any vector of integer elements. Not all targets
-support all bit widths or vector types, however.
-
-::
-
-      declare i8  @llvm.fshr.i8 (i8 %a, i8 %b, i8 %c)
-      declare i64 @llvm.fshr.i64(i64 %a, i64 %b, i64 %c)
-      declare <2 x i32> @llvm.fshr.v2i32(<2 x i32> %a, <2 x i32> %b, <2 x i32> %c)
-
-Overview:
-"""""""""
-
-The '``llvm.fshr``' family of intrinsic functions performs a funnel shift right:
-the first two values are concatenated as { %a : %b } (%a is the most significant
-bits of the wide value), the combined value is shifted right, and the least
-significant bits are extracted to produce a result that is the same size as the
-original arguments. If the first 2 arguments are identical, this is equivalent
-to a rotate right operation. For vector types, the operation occurs for each
-element of the vector. The shift argument is treated as an unsigned amount
-modulo the element size of the arguments.
-
-Arguments:
-""""""""""
-
-The first two arguments are the values to be concatenated. The third
-argument is the shift amount. The arguments may be any integer type or a
-vector with integer element type. All arguments and the return value must
-have the same type.
-
-Example:
-""""""""
-
-.. code-block:: text
-
-      %r = call i8 @llvm.fshr.i8(i8 %x, i8 %y, i8 %z)  ; %r = i8: lsb_extract((concat(x, y) >> (z % 8)), 8)
-      %r = call i8 @llvm.fshr.i8(i8 255, i8 0, i8 15)  ; %r = i8: 254 (0b11111110)
-      %r = call i8 @llvm.fshr.i8(i8 15, i8 15, i8 11)  ; %r = i8: 225 (0b11100001)
-      %r = call i8 @llvm.fshr.i8(i8 0, i8 255, i8 8)   ; %r = i8: 255 (0b11111111)
-
 
 .. _clmul:
 
@@ -24335,14 +24292,14 @@ Examples:
 
 .. code-block:: text
 
-	 %r = call <8 x i64> @llvm.experimental.vp.strided.load.v8i64.i64(i64* %ptr, i64 %stride, <8 x i64> %mask, i32 %evl)
-	 ;; The operation can also be expressed like this:
+     %r = call <8 x i64> @llvm.experimental.vp.strided.load.v8i64.i64(i64* %ptr, i64 %stride, <8 x i64> %mask, i32 %evl)
+     ;; The operation can also be expressed like this:
 
-	 %addr = bitcast i64* %ptr to i8*
-	 ;; Create a vector of pointers %addrs in the form:
-	 ;; %addrs = <%addr, %addr + %stride, %addr + 2 * %stride, ...>
-	 %ptrs = bitcast <8 x i8* > %addrs to <8 x i64* >
-	 %also.r = call <8 x i64> @llvm.vp.gather.v8i64.v8p0i64(<8 x i64* > %ptrs, <8 x i64> %mask, i32 %evl)
+     %addr = bitcast i64* %ptr to i8*
+     ;; Create a vector of pointers %addrs in the form:
+     ;; %addrs = <%addr, %addr + %stride, %addr + 2 * %stride, ...>
+     %ptrs = bitcast <8 x i8* > %addrs to <8 x i64* >
+     %also.r = call <8 x i64> @llvm.vp.gather.v8i64.v8p0i64(<8 x i64* > %ptrs, <8 x i64> %mask, i32 %evl)
 
 
 .. _int_experimental_vp_strided_store:
@@ -24386,7 +24343,7 @@ The '``llvm.experimental.vp.strided.store``' intrinsic stores the elements of
 '``val``' in the same way as the :ref:`llvm.vp.scatter <int_vp_scatter>` intrinsic,
 where the vector of pointers is in the form:
 
-	``%ptrs = <%ptr, %ptr + %stride, %ptr + 2 * %stride, ... >``,
+    ``%ptrs = <%ptr, %ptr + %stride, %ptr + 2 * %stride, ... >``,
 
 with '``ptr``' previously casted to a pointer '``i8``', '``stride``' always interpreted as a signed
 integer and all arithmetic occurring in the pointer type.
@@ -24396,14 +24353,14 @@ Examples:
 
 .. code-block:: text
 
-	 call void @llvm.experimental.vp.strided.store.v8i64.i64(<8 x i64> %val, i64* %ptr, i64 %stride, <8 x i1> %mask, i32 %evl)
-	 ;; The operation can also be expressed like this:
+     call void @llvm.experimental.vp.strided.store.v8i64.i64(<8 x i64> %val, i64* %ptr, i64 %stride, <8 x i1> %mask, i32 %evl)
+     ;; The operation can also be expressed like this:
 
-	 %addr = bitcast i64* %ptr to i8*
-	 ;; Create a vector of pointers %addrs in the form:
-	 ;; %addrs = <%addr, %addr + %stride, %addr + 2 * %stride, ...>
-	 %ptrs = bitcast <8 x i8* > %addrs to <8 x i64* >
-	 call void @llvm.vp.scatter.v8i64.v8p0i64(<8 x i64> %val, <8 x i64*> %ptrs, <8 x i1> %mask, i32 %evl)
+     %addr = bitcast i64* %ptr to i8*
+     ;; Create a vector of pointers %addrs in the form:
+     ;; %addrs = <%addr, %addr + %stride, %addr + 2 * %stride, ...>
+     %ptrs = bitcast <8 x i8* > %addrs to <8 x i64* >
+     call void @llvm.vp.scatter.v8i64.v8p0i64(<8 x i64> %val, <8 x i64*> %ptrs, <8 x i1> %mask, i32 %evl)
 
 
 .. _int_vp_gather:

>From 7fba2378f39b85661953998db23acb94299e592d Mon Sep 17 00:00:00 2001
From: oscarddssmith <oscar.smith at juliacomputing.com>
Date: Fri, 16 May 2025 14:10:49 -0400
Subject: [PATCH 3/6] hook up to RISCV backend

---
 llvm/include/llvm/IR/Intrinsics.td                | 8 ++++++++
 llvm/lib/Target/RISCV/RISCVISelLowering.cpp       | 2 ++
 llvm/test/CodeGen/RISCV/rv64zbc-zbkc-intrinsic.ll | 6 +++---
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index e1a135a5ad48e..1857829910340 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -1431,6 +1431,8 @@ let IntrProperties = [IntrNoMem, IntrSpeculatable, IntrWillReturn] in {
       [LLVMMatchType<0>, LLVMMatchType<0>, LLVMMatchType<0>]>;
   def int_fshr : DefaultAttrsIntrinsic<[llvm_anyint_ty],
       [LLVMMatchType<0>, LLVMMatchType<0>, LLVMMatchType<0>]>;
+  def int_clmul : DefaultAttrsIntrinsic<[llvm_anyint_ty],
+      [LLVMMatchType<0>, LLVMMatchType<0>]>;
 }
 
 let IntrProperties = [IntrNoMem, IntrSpeculatable, IntrWillReturn,
@@ -2103,6 +2105,12 @@ let IntrProperties = [IntrNoMem, IntrNoSync, IntrWillReturn] in {
                                LLVMMatchType<0>,
                                LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
                                llvm_i32_ty]>;
+  def int_vp_clmul : DefaultAttrsIntrinsic<
+                             [ llvm_anyvector_ty ],
+                             [ LLVMMatchType<0>,
+                               LLVMMatchType<0>,
+                               LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
+                               llvm_i32_ty]>;
   def int_vp_sadd_sat : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ],
                              [ LLVMMatchType<0>,
                                LLVMMatchType<0>,
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index fae2cda13863d..6167c375755fd 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -10348,6 +10348,7 @@ SDValue RISCVTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
     return DAG.getNode(RISCVISD::MOPRR, DL, XLenVT, Op.getOperand(1),
                        Op.getOperand(2), Op.getOperand(3));
   }
+  case Intrinsic::clmul:
   case Intrinsic::riscv_clmul:
     return DAG.getNode(RISCVISD::CLMUL, DL, XLenVT, Op.getOperand(1),
                        Op.getOperand(2));
@@ -14284,6 +14285,7 @@ void RISCVTargetLowering::ReplaceNodeResults(SDNode *N,
       Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res));
       return;
     }
+    case Intrinsic::clmul:
     case Intrinsic::riscv_clmul: {
       if (!Subtarget.is64Bit() || N->getValueType(0) != MVT::i32)
         return;
diff --git a/llvm/test/CodeGen/RISCV/rv64zbc-zbkc-intrinsic.ll b/llvm/test/CodeGen/RISCV/rv64zbc-zbkc-intrinsic.ll
index aa9e89bc20953..5017f9f4853b5 100644
--- a/llvm/test/CodeGen/RISCV/rv64zbc-zbkc-intrinsic.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zbc-zbkc-intrinsic.ll
@@ -4,7 +4,7 @@
 ; RUN: llc -mtriple=riscv64 -mattr=+zbkc -verify-machineinstrs < %s \
 ; RUN:   | FileCheck %s -check-prefix=RV64ZBC-ZBKC
 
-declare i64 @llvm.riscv.clmul.i64(i64 %a, i64 %b)
+declare i64 @llvm.clmul.i64(i64 %a, i64 %b)
 
 define i64 @clmul64(i64 %a, i64 %b) nounwind {
 ; RV64ZBC-ZBKC-LABEL: clmul64:
@@ -26,7 +26,7 @@ define i64 @clmul64h(i64 %a, i64 %b) nounwind {
   ret i64 %tmp
 }
 
-declare i32 @llvm.riscv.clmul.i32(i32 %a, i32 %b)
+declare i32 @llvm.clmul.i32(i32 %a, i32 %b)
 
 define signext i32 @clmul32(i32 signext %a, i32 signext %b) nounwind {
 ; RV64ZBC-ZBKC-LABEL: clmul32:
@@ -34,7 +34,7 @@ define signext i32 @clmul32(i32 signext %a, i32 signext %b) nounwind {
 ; RV64ZBC-ZBKC-NEXT:    clmul a0, a0, a1
 ; RV64ZBC-ZBKC-NEXT:    sext.w a0, a0
 ; RV64ZBC-ZBKC-NEXT:    ret
-  %tmp = call i32 @llvm.riscv.clmul.i32(i32 %a, i32 %b)
+  %tmp = call i32 @llvm.clmul.i32(i32 %a, i32 %b)
   ret i32 %tmp
 }
 

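With the intrinsic made target-independent, a front end or pass can emit it
directly through IRBuilder. A minimal sketch, assuming the ID is
Intrinsic::clmul as introduced by the Intrinsics.td hunk above (the helper
name emitClmul is illustrative):

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Intrinsics.h"
    using namespace llvm;

    // Emit a call to the proposed llvm.clmul intrinsic, overloaded on the
    // operand type. Both operands must already have the same integer (or
    // integer vector) type.
    static Value *emitClmul(IRBuilder<> &Builder, Value *A, Value *B) {
      return Builder.CreateIntrinsic(Intrinsic::clmul, {A->getType()}, {A, B});
    }
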
>From cdb3b9c6b118d1e98d77c94fdcaf1c4394a9ced8 Mon Sep 17 00:00:00 2001
From: Oscar Smith <oscardssmith at gmail.com>
Date: Sun, 18 May 2025 01:01:55 -0400
Subject: [PATCH 4/6] add lowering

---
 llvm/include/llvm/CodeGen/ISDOpcodes.h        |  3 +++
 llvm/lib/CodeGen/IntrinsicLowering.cpp        | 23 +++++++++++++++++++
 .../SelectionDAG/SelectionDAGBuilder.cpp      |  6 +++++
 3 files changed, 32 insertions(+)

diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index 9f66402e4c820..9c32e7717979c 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -751,6 +751,9 @@ enum NodeType {
   ROTR,
   FSHL,
   FSHR,
+  
+  /// CLMUL operator
+  CLMUL,
 
   /// Byte Swap and Counting operators.
   BSWAP,
diff --git a/llvm/lib/CodeGen/IntrinsicLowering.cpp b/llvm/lib/CodeGen/IntrinsicLowering.cpp
index 1518ead7698be..71fb29f33a468 100644
--- a/llvm/lib/CodeGen/IntrinsicLowering.cpp
+++ b/llvm/lib/CodeGen/IntrinsicLowering.cpp
@@ -199,6 +199,25 @@ static Value *LowerCTLZ(LLVMContext &Context, Value *V, Instruction *IP) {
   return LowerCTPOP(Context, V, IP);
 }
 
+/// Emit code to lower clmul of V1, V2 before the specified instruction IP.
+static Value *LowerCLMUL(LLVMContext &Context, Value *V1, Value *V2,
+                         Instruction *IP) {
+  IRBuilder<> Builder(IP);
+  unsigned BitSize = V1->getType()->getPrimitiveSizeInBits();
+  Value *Res = ConstantInt::get(V1->getType(), 0);
+  Value *Zero = ConstantInt::get(V1->getType(), 0);
+  Value *One = ConstantInt::get(V1->getType(), 1);
+  for (unsigned i = 0; i != BitSize; ++i) {
+    Value *IsOdd = Builder.CreateIsNotNull(Builder.CreateAnd(V1, One),
+                                           "clmul.isodd");
+    Value *Pred = Builder.CreateSelect(IsOdd, V2, Zero, "clmul.V2_or_zero");
+    Res = Builder.CreateXor(Res, Pred, "clmul.Res");
+    V1 = Builder.CreateLShr(V1, One, "clmul.V1");
+    V2 = Builder.CreateShl(V2, One, "clmul.V2");
+  }
+  return Res;
+}
+
 static void ReplaceFPIntrinsicWithCall(CallInst *CI, const char *Fname,
                                        const char *Dname,
                                        const char *LDname) {
@@ -262,6 +281,10 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
     CI->replaceAllUsesWith(LowerCTLZ(Context, CI->getArgOperand(0), CI));
     break;
 
+  case Intrinsic::clmul:
+    CI->replaceAllUsesWith(LowerCLMUL(Context, CI->getArgOperand(0),
+                                      CI->getArgOperand(1), CI));
+    break;
   case Intrinsic::cttz: {
     // cttz(x) -> ctpop(~X & (X-1))
     Value *Src = CI->getArgOperand(0);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 3ebd3a4b88097..a4e350702ebc8 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -7188,6 +7188,12 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     }
     return;
   }
+  case Intrinsic::clmul: {
+    SDValue Op1 = getValue(I.getArgOperand(0));
+    SDValue Op2 = getValue(I.getArgOperand(1));
+    setValue(&I, DAG.getNode(ISD::CLMUL, sdl, Op1.getValueType(), Op1, Op2));
+    return;
+  }
   case Intrinsic::sadd_sat: {
     SDValue Op1 = getValue(I.getArgOperand(0));
     SDValue Op2 = getValue(I.getArgOperand(1));

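The LowerCLMUL expansion above is the usual shift-and-xor fallback: one
iteration per bit of the operand width, conditionally XOR-ing the shifted
multiplicand into an accumulator. The same loop as a self-contained C++ model
(illustrative only; the variable names mirror the value names used in the
IRBuilder code):

    #include <cstdint>

    // One iteration per bit: if the lowest remaining bit of V1 is set, XOR
    // the current V2 into the accumulator, then shift V1 right and V2 left.
    uint64_t expandClmul(uint64_t V1, uint64_t V2, unsigned BitSize) {
      uint64_t Res = 0;
      for (unsigned I = 0; I != BitSize; ++I) {
        uint64_t IsOdd = V1 & 1;        // clmul.isodd
        uint64_t Pred = IsOdd ? V2 : 0; // clmul.V2_or_zero
        Res ^= Pred;                    // clmul.Res
        V1 >>= 1;                       // clmul.V1
        V2 <<= 1;                       // clmul.V2
      }
      return Res;
    }

A target-specific lowering of ISD::CLMUL for machines without a native
instruction would take the same shape at the DAG level.
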
>From c36897d63d32048f4d9f55bc2873f6adfd2a2315 Mon Sep 17 00:00:00 2001
From: Oscar Smith <oscardssmith at gmail.com>
Date: Sun, 18 May 2025 01:03:08 -0400
Subject: [PATCH 5/6] typo fix

---
 llvm/docs/LangRef.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 636f18f28610b..8b4ceaa06275b 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -18071,7 +18071,7 @@ on any integer bit width or vectors of integers.
       declare i16 @llvm.clmul.i16(i16 %a, i16 %b)
       declare i32 @llvm.clmul.i32(i32 %a, i32 %b)
       declare i64 @llvm.clmul.i64(i64 %a, i64 %b)
-      declare <4 x i32> @llvm.clmult.v4i32(<4 x i32> %a, <4 x i32> %b)
+      declare <4 x i32> @llvm.clmul.v4i32(<4 x i32> %a, <4 x i32> %b)
 
 Overview:
 """""""""

>From 312cc2300cb2cc3a171fcfe2c58af4b3f52ef7fd Mon Sep 17 00:00:00 2001
From: Oscar Smith <oscardssmith at gmail.com>
Date: Sun, 18 May 2025 06:13:15 -0400
Subject: [PATCH 6/6] Update llvm/include/llvm/CodeGen/ISDOpcodes.h

Co-authored-by: Jay Foad <jay.foad at gmail.com>
---
 llvm/include/llvm/CodeGen/ISDOpcodes.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index 9c32e7717979c..fc3b3b26cbe5e 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -752,7 +752,7 @@ enum NodeType {
   FSHL,
   FSHR,
   
-  /// CLMUL operator
+  /// Carryless multiplication operator
   CLMUL,
 
   /// Byte Swap and Counting operators.


