[PATCH] D92334: [CSSPGO][llvm-profgen] Pseudo probe decoding and disassembling

Lei Wang via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Dec 10 12:15:33 PST 2020


wlei added inline comments.


================
Comment at: llvm/tools/llvm-profgen/PseudoProbe.h:127
+  // \p ContextStack is populated in root to leaf order
+  void getInlineContext(std::list<std::string> &ContextStack,
+                        GUIDProbeFunctionMap &GUID2FuncMAP, bool ShowName);
----------------
wlei wrote:
> hoy wrote:
> > wlei wrote:
> > > hoy wrote:
> > > > Nit: consider using `SmallVector` for fewer allocations and better locality.
> > > Thanks for the suggestion. I chose `std::list` here so I can use `push_front()` and avoid reversing the string.
> > > Since our input is the tree leaf, we need to walk backwards (leaf to root) to extract the context.
> > > e.g. we have two probes, each with an inlined context, like foo (main -> foo) and baz (bar -> baz).
> > > We first process `foo` and extract the context "main @ foo", where "main" is pushed to the front of the context.
> > > Then we process `baz` and extract the context "bar @ baz"; putting them together gives the full context "main @ foo @ bar @ baz".
> > Thanks for the explanation. Can we use vector `push_back` to build the context in reversed order first and then return a new vector by reversing it? We tend to avoid `std::list` when possible because every node added to the list triggers an underlying `malloc` call. Also, list nodes are not contiguous in memory, so iteration has poor locality and cannot be vectorized.
> Yeah, that's a good idea. We can iterate the `vector` in reverse order, so there is no need to actually reverse it. Let me change this! Thanks!
@hoy I just realized we can fill in the vector recursively with a post-order traversal, which produces the context in caller-callee order with no reversing needed. I also realized that we need to do the compression on the vector rather than on the string, and caller-callee order makes that easy. The inlined context for one probe should be small, so there is no stack overflow risk from the recursion. I've updated the patch accordingly; what do you think?
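
A minimal sketch of the two alternatives discussed above, using a hypothetical `InlineTreeNode` type and a plain `std::vector` standing in for `SmallVector`; this is not the actual llvm-profgen code, just an illustration of the leaf-to-root vs. root-to-leaf fill order:

  #include <string>
  #include <vector>

  // Hypothetical stand-in for a node in the probe inline tree; the real
  // llvm-profgen types differ.
  struct InlineTreeNode {
    InlineTreeNode *Parent = nullptr; // nullptr at the root
    std::string FuncName;
  };

  // Alternative 1: walk leaf-to-root with push_back, then have consumers
  // read the vector in reverse (rbegin()..rend()) to get caller-callee
  // order. No std::list and no explicit reversal needed.
  void getInlineContextBottomUp(const InlineTreeNode *Leaf,
                                std::vector<std::string> &ContextStack) {
    for (const InlineTreeNode *N = Leaf; N; N = N->Parent)
      ContextStack.push_back(N->FuncName); // leaf first, root last
  }

  // Alternative 2: recurse to the root first, then append on the way back,
  // so the vector comes out directly in caller-callee (root-to-leaf) order.
  // The inline depth of a single probe is small, so recursion depth is not
  // a concern.
  void getInlineContextTopDown(const InlineTreeNode *Node,
                               std::vector<std::string> &ContextStack) {
    if (!Node)
      return;
    getInlineContextTopDown(Node->Parent, ContextStack); // ancestors first
    ContextStack.push_back(Node->FuncName);              // then this frame
  }

With alternative 2, compression can then operate directly on the caller-callee-ordered vector entries instead of on a concatenated context string.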


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D92334/new/

https://reviews.llvm.org/D92334


