[lldb-dev] bindings as service idea

Todd Fiala via lldb-dev lldb-dev at lists.llvm.org
Thu Nov 19 08:00:20 PST 2015


For the benefit of continuity in conversation, here is what you had to say
about it before:

> One possibility (which I mentioned to you offline, but I'll put it here for
others to see) is that we make a swig bot which is hosted in the cloud much
like our public build bots.  We provide a Python script that can be run on
your machine, which sends requests over to the swig bot to run swig and
send back the results.  Availability of the service would be governed by
the SLA of Google Compute Engine, viewable
here: https://cloud.google.com/compute/sla?hl=en

> If we do something like this, it would allow us to raise the SWIG version
in the upstream, and at that point I can see some benefit in checking the
bindings in.  Short of that, I still don't see the value proposition in
checking bindings into the repo.  [bits deleted]

> If it means we can get off of SWIG 1.x in the upstream, I will do the work
to make remote swig generation service and get it up and running.


I'd like feedback from others on this.  Is this something we want to
consider doing?

From my perspective, this seems reasonable to look into if we:

(a) have the service code available, and

(b) if we so choose, we can readily have the script hit another server
(so that a consumer can have the entire setup on an internal network),
and

(c: option 1) be able to fall back to generate with swig locally as we
do now in the event that we can't hit the server

(c: option 2) rather than fall back to swig generation, use swig
generation as primary (as it is now) but, if a swig is not found, then
do the get-bindings-as-a-service flow.

This does open up multiple ways to do something, but I think we need
to avoid a failure mode that says "Oops, you can't hit the network.
Sorry, no lldb build for you."


Reasoning:

For (a): just so we all know what we're using.

For (b): I can envision production build flows that will not want to
be hitting a third-party server.  We shouldn't require that.

For (c): we don't want to prevent building in scenarios that can't hit
a network during the build.
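The flow in (c: option 2) could be sketched roughly as below. This is a minimal sketch only; the service URL, the `choose_binding_strategy` helper, and the remote upload flow are all hypothetical, not an existing endpoint or script:

```python
import shutil
import subprocess

# Hypothetical default service URL. Per point (b), a real setup would make
# this configurable so an internal network can point at its own server.
DEFAULT_SERVICE_URL = "https://swig-bot.example.org/generate"

def choose_binding_strategy(swig_path=None, service_url=DEFAULT_SERVICE_URL):
    """Return ("local", path) if swig is available, else ("remote", url).

    Sketches (c: option 2): prefer local swig generation as the build does
    today, and fall back to the bindings-as-a-service flow only when no
    swig binary can be found.
    """
    path = swig_path if swig_path is not None else shutil.which("swig")
    if path:
        return ("local", path)
    return ("remote", service_url)

def generate_bindings(interface_file, strategy):
    kind, target = strategy
    if kind == "local":
        # Run swig locally, exactly as the current build does.
        subprocess.check_call([target, "-python", "-c++", interface_file])
    else:
        # Hypothetical remote flow: upload the .i interface file to the
        # swig bot and save the returned LLDBWrapPython.cpp and lldb.py.
        raise NotImplementedError("remote service flow not sketched here")
```

Either way, the build never dies solely because the network is unreachable: the remote path is only attempted when local generation is impossible.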


-Todd


On Wed, Nov 18, 2015 at 10:58 PM, Todd Fiala <todd.fiala at gmail.com> wrote:

>
>
> On Wed, Nov 18, 2015 at 10:06 PM, Todd Fiala <todd.fiala at gmail.com> wrote:
>
>> Hey Zachary,
>>
>> I think the time pressure has gotten the better of me, so I want to
>> apologize for getting snippy about the static bindings of late.  I am
>> confident we will get to a good solution for removing that dependency, but
>> I can certainly wait for a solution (using an alternate approach in our
>> branch) until we arrive at something more palatable to everyone.
>>
>> Regarding the bindings as service idea:
>>
>> How quickly do you think you could flesh out the bindings as a service
>> idea?  With a relatively strong dislike for the static approach I'm taking,
>> I can back off that and just use my current code here in a downstream
>> branch for now.  Ultimately I want to remove the requirement for swig, but
>> I can probably achieve that without doing it in upstream if we're going to
>> have some solution there at some point ideally sooner than later.
>>
>> Also - I think you were going to send me a swig 3.x binding to try out
>> (I'd need the LLDBWrapPython.cpp and the lldb.py, and you'd just need to
>> let me know whether it still needs to be post-processed or whether that has
>> already been done).  Can we shoot for trying that out maybe tomorrow?
>>
>>
> Hey I got these, Zachary.  They just didn't go in my inbox.
>
>
>> Thanks!
>> --
>> -Todd
>>
>
>
>
> --
> -Todd
>



-- 
-Todd


More information about the lldb-dev mailing list