[LLVMdev] Loss of precision with very large branch weights

Xinliang David Li davidxl at google.com
Fri Apr 24 13:23:26 PDT 2015


On Fri, Apr 24, 2015 at 1:10 PM, Diego Novillo <dnovillo at google.com> wrote:
>
>
> On Fri, Apr 24, 2015 at 3:44 PM, Xinliang David Li <davidxl at google.com>
> wrote:
>>
>> On Fri, Apr 24, 2015 at 12:29 PM, Diego Novillo <dnovillo at google.com>
>> wrote:
>> >
>> >
>> > On Fri, Apr 24, 2015 at 3:28 PM, Xinliang David Li <davidxl at google.com>
>> > wrote:
>> >>
>> >> yes -- for count representation, 64 bit is needed. The branch weight
>> >> here is different and does not need to be 64bit to represent branch
>> >> probability precisely.
>> >
>> >
>> > Actually, the branch weights are really counts.
>>
>> No -- I think that is our original proposal (including changing the
>> meaning of MD_prof meta data) :).
>
>
> Sure.  Though they kind of are. They get massaged and smoothed when
> branch_weights are computed from the raw counts, but for sufficiently
> small values they are very close to counts.

right.

>
>>
>>
>> >They get converted to
>> > frequencies.  For frequencies, we don't really need 64bits, as they're
>> > just
>> > comparative values that can be squished into 32bits.  It's the branch
>> > weights being 32 bit quantities that are throwing off the calculations.
>>
>> Do you still see the issue after fixing the bug (limiting without scaling)
>> in BranchProbabilityInfo::calcMetadataWeights ?
>
>
> That's the fix I was contemplating initially. I was curious at whether
> moving to 64bit would make this easier.


It is certainly easier but with a cost :)

david

>
>
> Diego.
