[llvm-dev] [RFC] LLVM Security Group and Process
JF Bastien via llvm-dev
llvm-dev at lists.llvm.org
Mon Nov 18 13:12:30 PST 2019
> On Nov 18, 2019, at 11:42 AM, Chris Bieneman <chris.bieneman at me.com> wrote:
> Hey JF,
> Thanks for putting this RFC together. LLVM security issues are very important, and I'm really glad someone is focusing attention here.
> I'm generally in agreement with much of what you have proposed. I do have a few thoughts I'd like to bring up.
> Having the group appointed by the board seems a bit odd to me. Historically the board has not involved itself in technical processes. I'm curious what the board's thoughts are relating to this level of involvement in project direction (I know you wanted proposal feedback on Phabricator, but I think the role of the board is something worth discussing here).
I consulted the board before sending out the RFC, and they didn’t express concerns about this specific point. I’m happy to have another method to find the right seed group, but that’s the best I could come up with :)
> My other meta thought is about focus and direction for the group. How do you define security issues?
> To give you a sense of where I'm coming from: one of the big concerns I have at the moment is about running LLVM in secure execution contexts, where we care about bugs in the compiler that could influence code generation, not just the code generation itself. Historically, I believe, the security focus of LLVM has primarily been on generated code. Do you see this group tackling both sides of the problem?
That’s indeed a difficult one!
I think there are two aspects to this. For a non-LLVM contributor, it doesn’t really matter: if they think it’s a security issue, we should go through this process. They shouldn’t need to figure out what LLVM thinks its security boundary is; that’s the project’s job. So I want to be fairly accepting, and let people file things that aren’t actually security related as security issues, because that carries lower risk than folks doing the wrong thing (filing security issues as not security related).
On the flip side, I think it’s up to the project to figure it out. I used to care about what you allude to when working on PNaCl, but nowadays I mostly just care about the code generation. If we have people with that type of concern in the security group, then we’re in a good position to handle those problems. If nobody represents that concern, then I don’t think we can address it, even if we nominally care. In other words: it’s security related if an LLVM contributor signs up to shepherd that kind of issue through the security process. If nobody volunteers, then it’s not something the security process can handle. That might point out a hole in our coverage, one we should address.
Does that make sense?
>> On Nov 15, 2019, at 10:58 AM, JF Bastien via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>> Hello compiler enthusiasts,
>> The Apple LLVM team would like to propose that a new security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.
>> A draft proposal for how we could organize such a group and what its process could be is available on Phabricator <https://reviews.llvm.org/D70326>. The proposal starts with a list of goals for the process and Security Group, repeated here:
>> The LLVM Security Group has the following goals:
>> - Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
>> - Organize fixes, code reviews, and release management for said issues.
>> - Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
>> - Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
>> - Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process <https://cve.mitre.org/>.
>> We’re looking for answers to the following questions:
>> - On this list: Should we create a security group and process?
>> - On this list: Do you agree with the goals listed in the proposal?
>> - On this list: At a high level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
>> - On the Phabricator code review: Going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
>> - On this list: To help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
>>   - Are you an LLVM contributor (individual or representing a company)?
>>   - Are you involved with security aspects of LLVM (if so, which)?
>>   - Do you maintain significant downstream LLVM changes?
>>   - Do you package and deploy LLVM for others to use (if so, to how many people)?
>>   - Is your LLVM distribution based on the open-source releases?
>>   - How often do you usually deploy LLVM?
>>   - How fast can you deploy an update?
>>   - Does your LLVM distribution handle untrusted inputs, and what kind?
>>   - What’s the threat model for your LLVM distribution?
>> Other open-source projects have security-related groups and processes, and they structure their groups very differently from one another. This proposal borrows from some of those projects’ processes. A few examples:
>> https://webkit.org/security-policy/
>> https://chromium.googlesource.com/chromium/src/+/lkgr/docs/security/faq.md
>> https://wiki.mozilla.org/Security
>> https://www.openbsd.org/security.html
>> https://security-team.debian.org/security_tracker.html
>> https://www.python.org/news/security/
>> When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.
>> I’ll go first in answering my own questions above:
>> - Yes! We should create a security group and process.
>> - We agree with the goals listed.
>> - We think the proposal is exactly right, but would like to hear the community’s opinions.
>> Here’s how we approach the security of LLVM:
>> - I contribute to LLVM as an Apple employee.
>> - I’ve been involved in a variety of LLVM security issues, from automatic variable initialization to security-related diagnostics, as well as deploying these mitigations to internal codebases.
>> - We maintain significant downstream changes.
>> - We package and deploy LLVM, both internally and externally, for a variety of purposes, including the clang, Swift, and mobile GPU shader compilers.
>> - Our LLVM distribution is not directly derived from the open-source release. In all cases, all non-upstream public patches for our releases are available in repository branches at https://github.com/apple.
>> - We have many deployments of LLVM whose release schedules vary significantly. The LLVM build deployed as part of Xcode historically has one major release per year, followed by roughly one minor release every 2 months. Other releases of LLVM are also security-sensitive and don’t follow the same schedule.
>> - How fast we can deploy an update depends on which release of LLVM is affected.
>> - Yes, our distribution sometimes handles untrusted input.
>> - The threat model is highly variable depending on the particular language front-ends being considered.
>> Apple is involved with a variety of open-source projects and their disclosures. For example, we frequently work with the WebKit community to handle security issues through their process.