[Mlir-commits] [llvm] [mlir] [SPIR-V] Add support for the SPIR-V extension SPV_INTEL_tensor_float32_conversion (PR #150090)
llvmlistbot at llvm.org
Wed Jul 30 08:40:54 PDT 2025
================
@@ -311,6 +311,27 @@ LogicalResult INTELConvertFToBF16Op::verify() {
return success();
}
+//===----------------------------------------------------------------------===//
+// spirv.INTELRoundFToTF32Op
+//===----------------------------------------------------------------------===//
+
+LogicalResult INTELRoundFToTF32Op::verify() {
+ auto operandType = getOperand().getType();
+ auto resultType = getResult().getType();
+ // ODS checks that vector result type and vector operand type have the same
+ // shape.
+ if (auto vectorType = llvm::dyn_cast<VectorType>(operandType)) {
+ unsigned operandNumElements = vectorType.getNumElements();
+ unsigned resultNumElements =
+ llvm::cast<VectorType>(resultType).getNumElements();
----------------
YixingZhang007 wrote:
Thanks for the suggestion! I have added the non-scalable vector check and updated the code as follows:
```cpp
auto operandVectorType = dyn_cast<VectorType>(operandType);
auto resultVectorType = dyn_cast<VectorType>(resultType);
if (operandVectorType && resultVectorType) {
  if (operandVectorType.isScalable() || resultVectorType.isScalable()) {
    return emitOpError("scalable vectors are not supported");
  }
  if (operandVectorType.getNumElements() !=
      resultVectorType.getNumElements()) {
    return emitOpError(
        "operand and result must have same number of elements");
  }
}
```
Similar changes were also made for `INTELConvertFToBF16Op` and `INTELConvertBF16ToFOp`, as they have the same problem. The MLIR part of this PR has been moved to a new PR, https://github.com/llvm/llvm-project/pull/151337, and these changes are made there. :)
https://github.com/llvm/llvm-project/pull/150090
More information about the Mlir-commits mailing list