[llvm-commits] [llvm] r132613 - in /llvm/trunk: include/llvm/Analysis/BranchProbabilityInfo.h include/llvm/InitializePasses.h lib/Analysis/Analysis.cpp lib/Analysis/BranchProbabilityInfo.cpp lib/Analysis/CMakeLists.txt

Frits van Bommel fvbommel at gmail.com
Sat Jun 4 00:56:22 PDT 2011


On 4 June 2011 03:16, Andrew Trick <atrick at apple.com> wrote:
> +  // Default weight value. Used when we don't have information about the edge.
> +  static const unsigned int DEFAULT_WEIGHT = 16;

> +  DenseMap<Edge, unsigned> Weights;

> +  unsigned getMaxWeightFor(BasicBlock *BB) const {
> +    return UINT_MAX / BB->getTerminator()->getNumSuccessors();
> +  }

(And many more instances...)

Given that your design doc mailed to LLVMDev[1] talks about

> Representation: Map of 32-bit unsigned int "edge weight" per CFG edge

and

> One of the goals of the branch profile framework is to keep the final output of compilation independent of floating point imprecision. In other words, we want LLVM to be a deterministic cross-compiler.

Shouldn't you use 'uint32_t' instead of 'unsigned (int)' throughout
(and UINT32_MAX instead of UINT_MAX, etc.)?
As it stands, the edge weights could come out differently when LLVM is
compiled on a host where 'unsigned int' is, say, 16 or 64 bits wide,
because the results then depend on the width of the host's unsigned
integers rather than on floating-point imprecision. That doesn't seem
to fit the goal of keeping the output identical across all platforms.


[1]: http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-June/040491.html



