[llvm-dev] Why x86_64 divq is not used for 128-bit by 64-bit division?
Paweł Bylica via llvm-dev
llvm-dev at lists.llvm.org
Thu Oct 19 13:58:35 PDT 2017
In my example, I'm truncating the result.
But anyway, are you saying that when the result of divq does not fit in
64 bits, the value stored in the register is undefined?
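
For illustration, here is roughly the kind of code I had in mind (a
hand-written sketch using GCC/Clang inline asm; the helper name is made
up for this example, and it assumes the quotient fits in 64 bits):

/* Hypothetical helper: 128-bit by 64-bit division via divq.
 * Only valid when the quotient fits in 64 bits. */
static inline unsigned long div_128_by_64(unsigned __int128 n, unsigned long d)
{
    unsigned long lo = (unsigned long)n;          /* low 64 bits  -> RAX */
    unsigned long hi = (unsigned long)(n >> 64);  /* high 64 bits -> RDX */
    unsigned long quot, rem;
    __asm__("divq %[d]"
            : "=a"(quot), "=d"(rem)
            : [d] "r"(d), "a"(lo), "d"(hi)
            : "cc");
    return quot;  /* quotient in RAX, remainder in RDX */
}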
On Thu, Oct 19, 2017 at 10:54 PM, Craig Topper <craig.topper at gmail.com>
wrote:
> divq only produces a 64-bit result. There's no way for the compiler to know
> you didn't divide a greater-than-64-bit number by 1, or by some other value
> that would require a larger result.
>
> ~Craig
>
> On Thu, Oct 19, 2017 at 1:47 PM, Paweł Bylica via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
>> Hi there,
>>
>> Let's have this C code:
>>
>> unsigned long div(unsigned __int128 n, unsigned long d)
>> {
>>     return n / d;
>> }
>>
>> I would assume that divq is a perfect match here. But the compiler
>> generates code that calls the __udivti3 routine, which performs a
>> 128-bit by 128-bit division.
>>
>> Why is divq not used here?
>>
>> - Paweł
>>
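
For reference, the __udivti3 routine mentioned above is the
compiler-rt/libgcc software-division helper; its C-level shape is
roughly the following (shown only as a sketch), so the generated code
widens the 64-bit divisor to 128 bits and performs a full 128-bit by
128-bit division:

/* Rough shape of the runtime helper the compiler calls instead of divq. */
unsigned __int128 __udivti3(unsigned __int128 a, unsigned __int128 b);

/* The div() function above then lowers to, effectively: */
unsigned long div_via_libcall(unsigned __int128 n, unsigned long d)
{
    return (unsigned long)__udivti3(n, (unsigned __int128)d);
}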