[LLVMdev] Loss of precision with very large branch weights
Xinliang David Li
davidxl at google.com
Fri Apr 24 12:44:40 PDT 2015
On Fri, Apr 24, 2015 at 12:29 PM, Diego Novillo <dnovillo at google.com> wrote:
>
>
> On Fri, Apr 24, 2015 at 3:28 PM, Xinliang David Li <davidxl at google.com>
> wrote:
>>
>> yes -- for count representation, 64 bits are needed. The branch weights
>> here are different and do not need to be 64 bit to represent branch
>> probabilities precisely.
>
>
> Actually, the branch weights are really counts.
No -- I think that was our original proposal (including changing the
meaning of the MD_prof metadata) :). After many rounds of discussion, I
think what we eventually settled on is to
1) use a 64-bit value to represent the function entry count
2) keep the branch weight representation and meaning as they are
Changing weights to 64 bit can slightly increase memory usage. In
fact, what we want longer term is to get rid of 'weights' and just use a
fixed-point representation for branch probability. For blocks with 2
targets, such info can be attached at the block (source) level, thus
further saving memory.
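A minimal sketch of the fixed-point probability idea mentioned above (the class name, denominator choice, and methods are illustrative assumptions, not the actual LLVM implementation):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical fixed-point branch probability: store a 32-bit numerator N
// over a fixed power-of-two denominator D, so the probability N/D needs no
// 64-bit per-edge weight. For a block with two targets, only one such value
// has to be attached at the source block.
class FixedPointProb {
  static constexpr uint32_t D = 1u << 31; // fixed denominator
  uint32_t N;                             // numerator; probability = N / D

public:
  // Build the probability num/den, scaled into the fixed-point range.
  FixedPointProb(uint64_t num, uint64_t den)
      : N((uint32_t)(((__uint128_t)num * D) / den)) {}

  uint32_t numerator() const { return N; }

  // Scale a 64-bit execution count by this probability without overflow.
  uint64_t scale(uint64_t count) const {
    return (uint64_t)(((__uint128_t)count * N) / D);
  }
};
```

With this layout a 50% branch scales a count of 100 down to 50 while keeping all storage at 32 bits per edge probability.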
>They get converted to
> frequencies. For frequencies, we don't really need 64bits, as they're just
> comparative values that can be squished into 32bits. It's the branch
> weights being 32 bit quantities that are throwing off the calculations.
Do you still see the issue after fixing the bug (limit without scaling) in
BranchProbabilityInfo::calcMetadataWeights?
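A sketch of the fix being referred to, under the assumption that the bug was clamping oversized counts at the 32-bit limit rather than scaling; the function name and shape here are illustrative, not the actual code in BranchProbabilityInfo:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Largest value a 32-bit branch weight can hold.
static const uint64_t kMaxWeight = UINT32_MAX;

// Convert 64-bit raw counts to 32-bit branch weights. Instead of clamping
// each count at kMaxWeight (which destroys the relative probabilities when
// several counts exceed the limit), divide every count by a common scale
// factor chosen so the largest one just fits.
std::vector<uint32_t> scaleWeights(const std::vector<uint64_t> &Counts) {
  uint64_t Max = 1; // avoid divide-by-zero for all-zero counts
  for (uint64_t C : Counts)
    if (C > Max)
      Max = C;
  // Round the scale factor up so Max / Scale <= kMaxWeight.
  uint64_t Scale = Max > kMaxWeight ? (Max + kMaxWeight - 1) / kMaxWeight : 1;
  std::vector<uint32_t> Weights;
  for (uint64_t C : Counts)
    Weights.push_back((uint32_t)(C / Scale));
  return Weights;
}
```

For counts {2 * UINT32_MAX, UINT32_MAX} this yields weights in a 2:1 ratio, whereas clamping would have produced 1:1 and lost the branch bias.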
David
>
>
> Diego.