[PATCH] D36864: [Profile] backward propagate profile data in jump-threading

David Li via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Aug 18 10:32:53 PDT 2017


davidxl marked 6 inline comments as done.
davidxl added inline comments.


================
Comment at: lib/Transforms/Scalar/JumpThreading.cpp:159
+//  can not be greater than 1%.
+
+static void UpdatePredecessorProfileMetadata(PHINode *PN, BasicBlock *BB) {
----------------
chandlerc wrote:
> Would be good to connect this comment to the arguments and behavior of this function.
> 
> (Don't get me wrong, this comment is *excellent* for explaining why we can do this back propagation and how to compute it! I just also want comments that explain how this routine fits into the model.)
Annotated the comments with variable names in the code.


================
Comment at: lib/Transforms/Scalar/JumpThreading.cpp:194
+
+    BP = (CI->getZExtValue() ? BranchProbability::getBranchProbability(
+                                   TrueWeight, TrueWeight + FalseWeight)
----------------
chandlerc wrote:
> This code only handles i1 PHIs that directly feed conditional branches, right?
> 
> Provided that's so, I would assert that CI is a 1-bit integer here, and then test whether it is the null value or not rather than zexting and extracting it.
There is an early check for the one-bit type. I am not clear what the suggestion is about (regarding extracting the value).


================
Comment at: lib/Transforms/Scalar/JumpThreading.cpp:207
+    uint64_t PredTrueWeight, PredFalseWeight;
+    // FIXME: we currently only set the profile data when it is missing.
+    // With PGO, this can be turned on too to refine the profile with
----------------
chandlerc wrote:
> "we" -> "We"
> 
> I wonder, since what we have is fundamentally an upper bound, would it make sense to go ahead and apply this even when there is existing profile info?
> 
> Even for static hints, I can see this happening. For example, the pre-inlining code might have some relatively weak hint that is, after inlining, bounded by a powerful hint.
> 
> Or is your concern doing this without measuring the impact for PGO? If that's why, I would make that more clear -- we *intend* to always use this to refine weights, but PGO requires more careful tuning to make sure this doesn't regress anything.
> 
> Otherwise, I'd just go ahead and implement this as clamping any existing weights.
Yes, the latter. I have seen issues with PGO, so without more testing I will leave this for the non-PGO case for now.


https://reviews.llvm.org/D36864
