[llvm-dev] [cfe-dev] Handling of FP denormal values

Cameron McInally via llvm-dev llvm-dev at lists.llvm.org
Tue Sep 17 10:30:03 PDT 2019


On Tue, Sep 17, 2019 at 12:27 PM Kaylor, Andrew <andrew.kaylor at intel.com> wrote:
>
> >> At the command-line level, I don't see a lot of value in separating the two flags. At the Function/Loop/Block/Instruction level, separating the two would be more useful though. E.g. normalizing input/output; or sacrificing accuracy to speed up a hot loop.
>
> > EDIT: 'accuracy' should be 'precision'.
>
> How do you imagine that being specified in the local scope? Two ways that come to mind would be a pragma or an intrinsic. The pragma would probably be the cleanest, though more work for the front end. I suspect most architectures already have intrinsics to control this locally, but we could possibly add a target-independent intrinsic that would be better for the optimizer.

Good question. I haven't given it much thought, so I don't have a
strong opinion yet. It's pretty clear that something will be needed,
though, since expecting the optimizer to track bits being flipped in
the control register is dubious.
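
To make that concrete (my own sketch, not something from Andrew's
mail): on x86 the local control he mentions already exists as the
MXCSR helpers in the SSE headers, and it typically gets used around a
hot loop roughly like this (the function itself is made up for
illustration):

#include <pmmintrin.h>   // _MM_SET_DENORMALS_ZERO_MODE
#include <xmmintrin.h>   // _MM_SET_FLUSH_ZERO_MODE

void scale(float *out, const float *in, float k, int n) {
  unsigned ftz = _MM_GET_FLUSH_ZERO_MODE();
  unsigned daz = _MM_GET_DENORMALS_ZERO_MODE();

  // Flush denormal results to zero and treat denormal inputs as zero
  // for the duration of this loop only.
  _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
  _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

  for (int i = 0; i < n; ++i)
    out[i] = in[i] * k;

  // Restore the caller's settings.
  _MM_SET_FLUSH_ZERO_MODE(ftz);
  _MM_SET_DENORMALS_ZERO_MODE(daz);
}

The optimizer has no practical way of knowing which FP operations
execute under which MXCSR state in code like that, which is why a
scoped pragma or a per-instruction flag seems like the way to go.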

It's probably a question for the CFE experts. Assuming that we add
FTZ/DAZ fast math flags, what would be the best way to attach one to
an individual IR instruction? Is that currently done for other FMFs?
Or are they just toggled by the higher-level -ffast-math and friends?
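
For what it's worth, the existing FMFs (nnan, ninf, reassoc, and so
on) already live on individual instructions, so mechanically an
FTZ/DAZ bit would slot in next to them. A rough sketch of how the C++
API attaches them today (the module and function names are just made
up for illustration):

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  LLVMContext Ctx;
  Module M("fmf-demo", Ctx);
  auto *FltTy = Type::getFloatTy(Ctx);
  auto *FnTy = FunctionType::get(FltTy, {FltTy, FltTy}, false);
  Function *F =
      Function::Create(FnTy, Function::ExternalLinkage, "scaled_add", &M);
  BasicBlock *BB = BasicBlock::Create(Ctx, "entry", F);
  IRBuilder<> B(BB);

  // Fast math flags are per-instruction bits; the builder stamps them
  // onto every FP operation it creates after this point.
  FastMathFlags FMF;
  FMF.setNoNaNs();
  FMF.setAllowReassoc();
  B.setFastMathFlags(FMF);

  auto AI = F->arg_begin();
  Value *X = &*AI++;
  Value *Y = &*AI;
  Value *Sum = B.CreateFAdd(X, Y, "sum");
  B.CreateRet(Sum);

  // Prints something like: %sum = fadd reassoc nnan float %0, %1
  M.print(outs(), nullptr);
  return 0;
}

So the open question is less the IR mechanics and more how the front
end decides when to set such a flag.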

> But I think you want this to set or clear a flag on individual operations to help with instruction selection, right?

I think that would be useful, or at least I can imagine it being
useful. My personal experience, though, is that users want
all-or-nothing behavior for DAZ+FTZ, so a command-line switch would
be sufficient for them.

