[llvm-dev] SROA and volatile memcpy/memset

Chandler Carruth via llvm-dev llvm-dev at lists.llvm.org
Wed Nov 11 07:40:19 PST 2015


I'm pretty sure volatile access voids your performance warranty....

I assume the issue is that the loads and stores aren't combined late in the
back end because we propagate the volatile? I think the fix for performance
is "don't use volatile". I'm sure you've looked at that option, but we'll
need a lot more context on what problem you're actually hitting to provide
more realistic options.

I think TTI is a very bad fit here -- target customization would really
hurt the entire canonicalization efforts of the middle end....

On Wed, Nov 11, 2015, 10:34 Krzysztof Parzyszek via llvm-dev <
llvm-dev at lists.llvm.org> wrote:

> On 11/11/2015 9:28 AM, Chandler Carruth wrote:
> > So, here is the model that LLVM is using: a volatile memcpy is lowered
> > to a loop of loads and stores of indeterminate width. As such, splitting
> > a memcpy is always valid.
> >
> > If we want a very specific load and store width for volatile accesses, I
> > think that the frontend should generate concrete loads and stores of a
> > type with that width. Ultimately, memcpy is a pretty bad model for
> > *specific* width accesses, it is best at handling indeterminate sized
> > accesses, which is exactly what doesn't make sense for device backed
> > volatile accesses.
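> 
> [A small C sketch of the contrast Chandler describes. The struct and
> function names here are hypothetical, not taken from the attached
> testcase; the point is only that a volatile memcpy leaves the access
> width up to the compiler, while explicit volatile loads and stores pin
> it down.]

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical 8-byte register block; names are illustrative only. */
struct regs {
    uint32_t lo;
    uint32_t hi;
};

/* Indeterminate width: the compiler may lower this volatile copy as
 * loads/stores of any size it likes, so passes such as SROA are free
 * to split it. (The cast discards the volatile qualifier, which is
 * exactly why memcpy is a poor model for device access.) */
void copy_indeterminate(volatile struct regs *dst, const struct regs *src) {
    memcpy((void *)dst, src, sizeof *dst); /* width not guaranteed */
}

/* Specific width: each explicit volatile access is exactly one 32-bit
 * store, which is what a device-backed mapping usually requires. */
void copy_32bit(volatile struct regs *dst, const struct regs *src) {
    dst->lo = src->lo; /* one 32-bit store */
    dst->hi = src->hi; /* one 32-bit store */
}
```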
> >
>
> Yeah, the remark about devices I made in my post was a result of a
> "last-minute" thought to add some rationale.  It doesn't actually apply
> to SROA, since there are no devices that are mapped to the stack, which
> is what SROA is interested in.
>
> The concern with the testcase I attached is really about performance.
> Would it be reasonable to control the splitting in SROA via TTI?
>
> -Krzysztof
>
> --
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> hosted by The Linux Foundation
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>