[Lldb-commits] [PATCH] D33035: Tool for using Intel(R) Processor Trace hardware feature
Abhishek via Phabricator via lldb-commits
lldb-commits at lists.llvm.org
Thu Jul 6 08:16:52 PDT 2017
abhishek.aggarwal added a comment.
In https://reviews.llvm.org/D33035#799656, @clayborg wrote:
> In https://reviews.llvm.org/D33035#799404, @abhishek.aggarwal wrote:
>
> > In https://reviews.llvm.org/D33035#799058, @clayborg wrote:
> >
> > > So std::vector shouldn't be used in a public API. You should make a class that wraps your objects; LLDB's public API has lldb::SBInstruction and lldb::SBInstructionList as examples. On some systems std::vector compiles differently for debug and release builds, and passing one across the API boundary will cause crashes.
> >
> >
> > Hi Greg. Does that apply to tools built on top of LLDB (e.g., in my case, where the feature library is not part of the liblldb shared library but is built on top of it)? If yes, then I will proceed to remove std::vector from the tool's C++ public API and create a new class for it.
>
>
> It really comes down to how stable you want this API to be. If the expectation is "you are expected to rebuild any tools you make against this API every time we release a new version", then you can really do anything you want. If you want a stable API to build against, you could choose to follow the LLDB public API rules: no inheritance, a fixed number of member variables that never change (usually a single std::shared_ptr<T>, std::unique_ptr<T>, or T*), and no virtual functions. This effectively reduces the C++ API to a C API, but it forces each public class to have an internal implementation (T), with the exported class acting only as the public-facing wrapper that everyone uses externally. The nice thing about this is that swig has a really easy time creating all of the script bindings for you if you do it this way. So it really depends on what the future use case of this API is.
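(Side note, to make these rules concrete for myself: a minimal sketch of the class layout they seem to imply. The names below are purely illustrative and not taken from the actual patch.)

  #include <cstdint>
  #include <memory>

  namespace ptdecoder_private {
  class Instruction;   // internal implementation; never exposed in the API
  }

  namespace ptdecoder {

  // Public-facing class: no base class, no virtual functions, and exactly
  // one member (an opaque pointer to the implementation), so the size and
  // layout of the class never change across releases.
  class PTInstruction {
  public:
    PTInstruction();
    PTInstruction(const PTInstruction &other);
    ~PTInstruction();

    uint64_t GetInsnAddress() const;   // forwards to the implementation

  private:
    std::shared_ptr<ptdecoder_private::Instruction> m_opaque_sp;
  };

  } // namespace ptdecoder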
>
> >> If you need this via swig for internal kind of stuff, then use a typemap that maps std::vector<T> to a Python list() of T instances.
> >
> > I want to provide a Python interface for the tool's C++ public API so that it can also be used from Python modules. Therefore, I think typemapping to list() will not solve the problem. Am I right?
>
> I believe it would fix your issue for you. You write the glue code that translates an argument such as std::vector<ptdecoder::PTInstruction> into something more Pythonic, like a list of PTInstruction Python objects. We do this in a lot of places in LLDB.
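(For anyone following along, a rough sketch of the kind of typemap Greg is describing, assuming a hypothetical ptdecoder.i interface file and a copyable ptdecoder::PTInstruction; the SWIGTYPE_* descriptor name shown is illustrative and may not match exactly what SWIG generates.)

  %typemap(out) std::vector<ptdecoder::PTInstruction> {
    $result = PyList_New($1.size());
    for (size_t i = 0; i < $1.size(); ++i) {
      // Copy each element to the heap and hand ownership to Python.
      ptdecoder::PTInstruction *inst = new ptdecoder::PTInstruction($1[i]);
      PyObject *obj = SWIG_NewPointerObj(
          inst, SWIGTYPE_p_ptdecoder__PTInstruction, SWIG_POINTER_OWN);
      PyList_SetItem($result, i, obj);  // PyList_SetItem steals the reference
    }
  }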
>
> >> I am a bit confused here as well. Are we exporting plug-in-specific Python bindings for the Intel PT data? It seems like it would be nicer to wrap this API into SBTrace or another lldb interface. Am I not understanding this correctly?
> >
> > I probably didn't understand this comment of yours. We are not exporting a Python binding for the Intel PT data; it is a Python binding for the tool's C++ API, and the tool is built on top of LLDB. Did I answer your question? Please let me know if I misunderstood your comment.
>
> Now I understand. I didn't realize this was a standalone tool. The main thing to figure out is how stable you want the C++ API that you are exporting to be. If that isn't an issue, feel free to use anything you want in that API. If stability is something you want to guarantee and you want to vend a C++ API, you might think about adopting LLDB's public API rules to help ensure it.
Hi Greg, thanks a lot for your feedback and detailed explanation. I am choosing to follow the same rules the LLDB SB API follows: removing std::vector<> from the interface and submitting a new patch.
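The shape I have in mind for the replacement is the SBInstructionList pattern suggested above. Again, this is only a sketch with illustrative names; the new patch contains the real interface.

  #include <cstddef>
  #include <memory>

  namespace ptdecoder_private {
  class InstructionList;   // owns the std::vector internally, out of the API
  }

  namespace ptdecoder {

  class PTInstruction;     // as sketched earlier in this thread

  // List wrapper that replaces std::vector<PTInstruction> in the public
  // signatures, mirroring lldb::SBInstructionList: callers iterate by index
  // instead of receiving a container type across the API boundary.
  class PTInstructionList {
  public:
    PTInstructionList();
    PTInstructionList(const PTInstructionList &other);
    ~PTInstructionList();

    size_t GetSize() const;
    PTInstruction GetInstructionAtIndex(size_t idx) const;

  private:
    std::shared_ptr<ptdecoder_private::InstructionList> m_opaque_sp;
  };

  } // namespace ptdecoder

With that shape, SWIG can wrap the list class directly, and Python callers can loop over GetSize()/GetInstructionAtIndex() without needing any vector typemap at all.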
https://reviews.llvm.org/D33035