[lldb-dev] bindings as service idea
Todd Fiala via lldb-dev
lldb-dev at lists.llvm.org
Thu Nov 19 10:17:37 PST 2015
I'm out next week, but I can help if needed after that.
Related to all this, you have mentioned a few times that there are newer
swig features you want to use.
Can you enumerate the features not present in 1.x but present in 3.x that
you want to take advantage of, and what benefits they will bring us? (I'm
not referring to bug fixes in bindings, but actual features that bring
something new that we didn't have before).
Thanks!
-Todd
On Thu, Nov 19, 2015 at 10:14 AM, Zachary Turner <zturner at google.com> wrote:
> I wasn't planning on working on this immediately, but given the outcome of
> the recent static bindings work, I can re-prioritize. I don't know how
> long it will take, because honestly writing this kind of thing in Python is
> new to me... to make an understatement. But I'll get it done. Give me
> until mid next week and I'll post an update.
>
> On Thu, Nov 19, 2015 at 10:12 AM Todd Fiala <todd.fiala at gmail.com> wrote:
>
>> On Thu, Nov 19, 2015 at 9:44 AM, Zachary Turner <zturner at google.com>
>> wrote:
>>
>>> Just to reiterate, if we use the bindings as a service, then I envision
>>> checking the bindings in. This addresses a lot of the potential pitfalls
>>> you point out, such as the "oops, you can't hit the network, no build for
>>> you" and the issue of production build flows not wanting to hit a third
>>> party server, etc.
>>>
>>> So if we do that, then I don't think falling back to local generation
>>> will be an issue (or important) in practice; i.e., it won't matter if you
>>> can't hit the network. The reason I say this is that if you can't hit the
>>> network you can't check in code either. So, sure, there might be a short
>>> window where you can't do a local build, but that would only affect you if
>>> you were actively modifying a swig interface file AND you were actively
>>> without a network connection. The service claims 99.95% uptime, and it's
>>> safe to say we are looking at significantly less than 100% usage of the
>>> server (given checked-in bindings), so I think it would be maybe once a year
>>> -- if that -- that anyone anywhere has trouble reaching the service.
>>>
>>>
>> That seems fine.
>>
>>
>>> And, as you said, the option can be provided to change the host that the
>>> service runs on, so someone could run one internally.
>>>
>>> But do note that if the goal here is to get the SWIG version bumped in
>>> the upstream, then we will probably take advantage of some of these new
>>> SWIG features, which may not work in earlier versions of SWIG. So you
>>> should consider how useful it will be to be able to run this server
>>> internally, because if you can't run a new version of swig locally, then
>>> can you run it internally anywhere? I don't know; I'll leave that for you
>>> to figure out.
>>>
>>>
>> That also seems fine. And yes, we can work it out on our end.
>>
>> We'd need to make sure that developer flows would pick up the need to
>> generate the bindings again if binding surface area changed, but that is no
>> different than now.
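>>
>> (As a rough illustration only -- the file layout and helper names below
>> are made up -- a build step could stamp the checked-in bindings with a
>> hash of the swig interface files and regenerate whenever that changes:)
>>
>>     # Hypothetical staleness check for checked-in bindings: hash the
>>     # swig interface files and compare against a recorded stamp.
>>     import hashlib
>>     import pathlib
>>
>>     def interfaces_digest(interface_dir):
>>         """Hash every .i file under interface_dir in a stable order."""
>>         digest = hashlib.sha256()
>>         for path in sorted(pathlib.Path(interface_dir).rglob("*.i")):
>>             digest.update(path.read_bytes())
>>         return digest.hexdigest()
>>
>>     def bindings_are_current(interface_dir, stamp_file):
>>         """True if the stamp recorded at generation time still matches."""
>>         stamp = pathlib.Path(stamp_file)
>>         return (stamp.exists()
>>                 and stamp.read_text().strip() == interfaces_digest(interface_dir))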
>>
>>
>>> Either way, it will definitely have the ability to use a different host,
>>> because that's the easiest way to debug the client and server (i.e. run them
>>> on the same machine with 127.0.0.1).
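>>>
>>> (Very roughly, and with the endpoint and option names as placeholders
>>> rather than a design, the client script could be as simple as: tar up
>>> the interface files, POST them to the service, and write back the
>>> archive of generated bindings.)
>>>
>>>     # Sketch of a possible client; the /generate endpoint is hypothetical.
>>>     import argparse
>>>     import io
>>>     import tarfile
>>>     import urllib.request
>>>
>>>     def request_bindings(host, port, interface_paths):
>>>         buf = io.BytesIO()
>>>         with tarfile.open(fileobj=buf, mode="w:gz") as tar:
>>>             for path in interface_paths:
>>>                 tar.add(path)
>>>         req = urllib.request.Request(
>>>             "http://%s:%d/generate" % (host, port), data=buf.getvalue(),
>>>             headers={"Content-Type": "application/gzip"})
>>>         with urllib.request.urlopen(req) as resp:
>>>             return resp.read()  # archive of LLDBWrapPython.cpp, lldb.py, ...
>>>
>>>     if __name__ == "__main__":
>>>         parser = argparse.ArgumentParser()
>>>         parser.add_argument("--host", default="127.0.0.1")  # or an internal server
>>>         parser.add_argument("--port", type=int, default=8080)
>>>         parser.add_argument("interfaces", nargs="+")
>>>         args = parser.parse_args()
>>>         with open("bindings.tar.gz", "wb") as out:
>>>             out.write(request_bindings(args.host, args.port, args.interfaces))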
>>>
>>>
>> Yep, sounds right.
>>
>>
>>> On Thu, Nov 19, 2015 at 8:00 AM Todd Fiala <todd.fiala at gmail.com> wrote:
>>>
>>>> For the benefit of continuity in conversation, here is what you had to
>>>> say about it before:
>>>>
>>>> > One possibility (which I mentioned to you offline, but I'll put it here for
>>>> others to see) is that we make a swig bot which is hosted in the cloud much
>>>> like our public build bots. We provide a Python script that can be run on
>>>> your machine, which sends requests over to the swig bot to run swig and
>>>> send back the results. Availability of the service would be governed by
>>>> the SLA of Google Compute Engine, viewable here: https://cloud.google.com/compute/sla?hl=en
>>>>
>>>> > If we do something like this, it would allow us to raise the SWIG version
>>>> in the upstream, and at that point I can see some benefit in checking the
>>>> bindings in. Short of that, I still don't see the value proposition in
>>>> checking bindings in to the repo. [bits deleted]
>>>>
>>>> > If it means we can get off of SWIG 1.x in the upstream, I will do the work
>>>> to make remote swig generation service and get it up and running.
>>>>
>>>>
>>>> I'd like feedback from others on this. Is this something we want to consider doing?
>>>>
>>>> From my perspective, this seems reasonable to look into doing if we:
>>>>
>>>> (a) have the service code available, and
>>>>
>>>> (b) if we so choose, we can readily have the script hit another server (so that a consumer can have the entire setup on an internal network), and
>>>>
>>>> (c: option 1) can fall back to generating with swig locally, as we do now, in the event that we can't hit the server, or
>>>>
>>>> (c: option 2) rather than falling back to local swig generation, keep swig generation as the primary path (as it is now) but, if swig is not found, do the get-bindings-as-a-service flow (a rough sketch of this appears after the reasoning below).
>>>>
>>>> This does open up multiple ways to do something, but I think we need to avoid a failure mode that says "Oops, you can't hit the network. Sorry, no lldb build for you."
>>>>
>>>>
>>>> Reasoning:
>>>>
>>>> For (a): just so we all know what we're using.
>>>>
>>>> For (b): I can envision production build flows that will not want to be hitting a third-party server. We shouldn't require that.
>>>>
>>>> For (c): we don't want to prevent building in scenarios that can't hit a network during the build.
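>>>>
>>>> (In pseudo-Python, the (c: option 2) flow amounts to something like the
>>>> following; the helper names are placeholders for whatever the real build
>>>> scripts end up calling them:)
>>>>
>>>>     # Prefer a usable local swig, fall back to the remote service, and
>>>>     # only fail the build if neither is available.
>>>>     import shutil
>>>>
>>>>     def generate_bindings(interfaces, service_host=None):
>>>>         swig = shutil.which("swig")
>>>>         if swig and local_swig_is_new_enough(swig):   # hypothetical helper
>>>>             return run_local_swig(swig, interfaces)   # today's path
>>>>         if service_host:
>>>>             return request_bindings(service_host, interfaces)  # hypothetical service client
>>>>         raise RuntimeError("no usable swig and no binding service configured")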
>>>>
>>>>
>>>> -Todd
>>>>
>>>>
>>>> On Wed, Nov 18, 2015 at 10:58 PM, Todd Fiala <todd.fiala at gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Nov 18, 2015 at 10:06 PM, Todd Fiala <todd.fiala at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hey Zachary,
>>>>>>
>>>>>> I think the time pressure has gotten the better of me, so I want to
>>>>>> apologize for getting snippy about the static bindings of late. I am
>>>>>> confident we will get to a good solution for removing that dependency, but
>>>>>> I can certainly wait for a solution (using an alternate approach in our
>>>>>> branch) until we arrive at something more palatable to everyone.
>>>>>>
>>>>>> Regarding the bindings as service idea:
>>>>>>
>>>>>> How quickly do you think you could flesh out the bindings as a
>>>>>> service idea? Given the relatively strong dislike for the static approach I'm
>>>>>> taking, I can back off of it and just use my current code in a downstream
>>>>>> branch for now. Ultimately I want to remove the requirement for swig, but I
>>>>>> can probably achieve that without doing it upstream if we're going to have
>>>>>> some solution there at some point, ideally sooner rather than later.
>>>>>>
>>>>>> Also, I think you were going to send me a swig 3.x binding to try
>>>>>> out (I'd need the LLDBWrapPython.cpp and the lldb.py, and you'd just need
>>>>>> to let me know whether it still needs to be post-processed or whether that
>>>>>> has already been done). Can we shoot for trying that out maybe tomorrow?
>>>>>>
>>>>>>
>>>>> Hey, I got these, Zachary. They just didn't land in my inbox.
>>>>>
>>>>>
>>>>>> Thanks!
>>>>>> --
>>>>>> -Todd
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> -Todd
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> -Todd
>>>>
>>>
>>
>>
>> --
>> -Todd
>>
>
--
-Todd