[llvm] Ensure that soft float targets don't use float/vector code for memops. (PR #107022)
Alex Rønne Petersen via llvm-commits
llvm-commits at lists.llvm.org
Mon May 5 05:11:45 PDT 2025
================
@@ -2023,6 +2023,11 @@ class TargetLoweringBase {
     return LLT();
   }
+  bool useIntScalarMemOps(const AttributeList &FuncAttributes) const {
----------------
alexrp wrote:
> "useIntScalarMemOps" Why int? Why scalar? What mem ops? In what context? The core change has nothing to do with any of these things
The `IntScalar` term is used elsewhere in the repo and seems to simply mean non-vector integers. Granted, the `Mem` part is unnecessary, because the function could apply equally well to areas other than `getOptimalMemOpType()`, which is what I'm modifying in this PR.
Essentially, the question being asked is: are we able/allowed to use float/vector instructions? If `useSoftFloat()` is true (usually a result of CPU feature flags), then the answer is *technically* yes, but it would be pointless work: the instructions would just be converted to soft float equivalents immediately afterwards, likely resulting in worse code than if we had used the int-scalar path to begin with. If `noimplicitfloat` is set on the function, then the answer is a definite no, presumably because we're compiling kernel code or similar.
So, perhaps `avoidFloatOrVectorOps()` or something like that?
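For concreteness, the check boils down to something like the following. This is a hedged sketch based on the semantics described above, not the actual patch body (which is elided in the quoted diff):

```cpp
bool useIntScalarMemOps(const AttributeList &FuncAttributes) const {
  // Soft-float subtargets *could* emit float/vector instructions here, but
  // legalization would immediately turn them into soft float equivalents,
  // so prefer the int-scalar path up front.
  // `noimplicitfloat` forbids implicitly introducing float/vector code at
  // all, e.g. for kernel code.
  return useSoftFloat() ||
         FuncAttributes.hasFnAttr(Attribute::NoImplicitFloat);
}
```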
> useSoftFloat is better but also vague. use "soft" float for what?
Well, indeed, that's the problem I was alluding to above: backends are inconsistent about what they take it to mean.
---
I also forgot to mention the `use-soft-float` function attribute, which each backend typically transforms into a subtarget CPU feature flag (like `soft-float` or `hard-float`), which in turn informs `useSoftFloat()`. It's all quite confusing and messy.
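To illustrate the first step of that chain, backends usually read the attribute when constructing the per-function subtarget, roughly like this (a hedged sketch; `wantsSoftFloat` is an illustrative name, and the exact plumbing varies per target):

```cpp
#include "llvm/IR/Function.h"
using namespace llvm;

// Illustrative sketch: fold the IR-level `use-soft-float` string attribute
// into a boolean that the target then turns into a subtarget feature flag,
// which is what useSoftFloat() ultimately consults.
static bool wantsSoftFloat(const Function &F) {
  return F.getFnAttribute("use-soft-float").getValueAsString() == "true";
}
```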
https://github.com/llvm/llvm-project/pull/107022