[llvm-dev] [RFC] LLVM Security Group and Process

JF Bastien via llvm-dev llvm-dev at lists.llvm.org
Wed Dec 4 15:36:54 PST 2019


> On Nov 26, 2019, at 6:31 PM, Kostya Serebryany <kcc at google.com> wrote:
> 
> On this list: Should we create a security group and process?
> 
> Yes, as long as it is a funded mandate by several major contributors. 
> We can't run it as a volunteer group. 

I expect that major corporate contributors will want some of their employees involved. Is that the kind of funding you’re looking for? Or something additional?


> Also, someone (this group, or another) should do proactive work on hardening the 
> sensitive parts of LLVM, otherwise it will be a whack-a-mole. 
> Of course, will need to decide what are those sensitive parts first. 
>  
> On this list: Do you agree with the goals listed in the proposal?
> 
> In general - yes. 
> Although some details worry me. 
> E.g. I would try to be stricter with disclosure dates. 
> > public within approximately fourteen weeks of the fix landing in the LLVM repository
> is too slow imho. it hurts the attackers less than it hurts the project. 
> oss-fuzz will adhere to the 90/30 policy

This specific bullet followed the Chromium policy:
https://chromium.googlesource.com/chromium/src/+/lkgr/docs/security/faq.md#Can-you-please-un_hide-old-security-bugs

Quoting it:
Our goal is to open security bugs to the public once the bug is fixed and the fix has been shipped to a majority of users. However, many vulnerabilities affect products besides Chromium, and we don’t want to put users of those products unnecessarily at risk by opening the bug before fixes for the other affected products have shipped.

Therefore, we make all security bugs public within approximately 14 weeks of the fix landing in the Chromium repository. The exception to this is in the event of the bug reporter or some other responsible party explicitly requesting anonymity or protection against disclosing other particularly sensitive data included in the vulnerability report (e.g. username and password pairs).

I think the same rationale applies to LLVM.


> On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
> 
> The process seems to be too complicated, but no strong opinion here. 
> Do we have another example from a project of similar scale? 

Yes, the email lists some. WebKit’s process resembles the one I propose, but I’ve expanded on points it leaves unsaid; in many cases the content is the same, just spelled out more explicitly.


> On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
> 
> commented on GitHub vs crbug
>  
> On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
> Are you an LLVM contributor (individual or representing a company)?
> Yes,  representing Google. 
> Are you involved with security aspects of LLVM (if so, which)?
> 
> To some extent:
> * my team owns tools that tend to find security bugs (sanitizers, libFuzzer)
> * my team co-owns oss-fuzz, which automatically sends security bugs to LLVM 
>  
> Do you maintain significant downstream LLVM changes?
> 
> no
>  
> Do you package and deploy LLVM for others to use (if so, to how many people)?
> 
> not my team
>  
> Is your LLVM distribution based on the open-source releases?
> 
> no
>  
> How often do you usually deploy LLVM?
> 
> In some ecosystems LLVM is deployed ~ every two-three weeks. 
> In others it takes months. 
>  
> How fast can you deploy an update?
> 
> For some ecosystems we can turn around in several days. 
> For others I don't know.  
>  
> Does your LLVM distribution handle untrusted inputs, and what kind?
> 
> Third party OSS code that is often pulled automatically. 
>  
> What’s the threat model for your LLVM distribution?
> 
> Speculating here. I am not a real security expert myself.
> * A developer getting a bug report and running clang/llvm on the "buggy" input, compromising the developer's desktop. 
> * A major open-source project is compromised and its code is changed in a subtle way that triggers a vulnerability in Clang/LLVM.
>   The open-source code is pulled into an internal repo and is compiled by clang, compromising a machine on the build farm. 
> * A vulnerability in a run-time library, e.g. crbug.com/606626 or crbug.com/994957
> * (???) A vulnerability in an LLVM-based JIT triggered by untrusted bitcode. (second-hand knowledge)
> * (???) an optimizer introducing a vulnerability into otherwise memory-safe code (we've seen a couple of these in load & store widening)
> * (???) a deficiency in a hardening pass (CFI, stack protector, shadow call stack) making the hardening ineffective.
> 
> My 2c on the policies: if we actually treat some area of LLVM as security-critical, 
> we must not only ensure that a reported bug is fixed, but also that the affected component gets
> additional testing, fuzzing, and hardening afterwards. 
> E.g. for crbug.com/994957 I'd really like to see a fuzz target as a form of regression testing.

Thanks, this is great stuff!


> --kcc 
>  
> 
> On Sat, Nov 16, 2019 at 8:23 AM JF Bastien via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> Hello compiler enthusiasts,
> 
> 
> The Apple LLVM team would like to propose that a new security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.
> 
> A draft proposal for how we could organize such a group and what its process could be is available on Phabricator: https://reviews.llvm.org/D70326. The proposal starts with a list of goals for the process and Security Group, repeated here:
> 
> The LLVM Security Group has the following goals:
> Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
> Organize fixes, code reviews, and release management for said issues.
> Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
> Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
> Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process (https://cve.mitre.org/).
> 
> We’re looking for answers to the following questions:
> On this list: Should we create a security group and process?
> On this list: Do you agree with the goals listed in the proposal?
> On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
> On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
> On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
> Are you an LLVM contributor (individual or representing a company)?
> Are you involved with security aspects of LLVM (if so, which)?
> Do you maintain significant downstream LLVM changes?
> Do you package and deploy LLVM for others to use (if so, to how many people)?
> Is your LLVM distribution based on the open-source releases?
> How often do you usually deploy LLVM?
> How fast can you deploy an update?
> Does your LLVM distribution handle untrusted inputs, and what kind?
> What’s the threat model for your LLVM distribution?
> 
> Other open-source projects have security-related groups and processes, each structured very differently from the others. This proposal borrows from some of those projects’ processes. A few examples:
> https://webkit.org/security-policy/
> https://chromium.googlesource.com/chromium/src/+/lkgr/docs/security/faq.md
> https://wiki.mozilla.org/Security
> https://www.openbsd.org/security.html
> https://security-team.debian.org/security_tracker.html
> https://www.python.org/news/security/
> 
> When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.
> 
> 
> I’ll go first in answering my own questions above:
> Yes! We should create a security group and process.
> We agree with the goals listed.
> We think the proposal is exactly right, but would like to hear the community’s opinions.
> Here’s how we approach the security of LLVM:
> I contribute to LLVM as an Apple employee.
> I’ve been involved in a variety of LLVM security issues, from automatic variable initialization to security-related diagnostics, as well as deploying these mitigations to internal codebases.
> We maintain significant downstream changes.
> We package and deploy LLVM, both internally and externally, for a variety of purposes, including the clang, Swift, and mobile GPU shader compilers.
> Our LLVM distribution is not directly derived from the open-source release. In all cases, all non-upstream public patches for our releases are available in repository branches at https://github.com/apple.
> We have many deployments of LLVM whose release schedules vary significantly. The LLVM build deployed as part of Xcode historically has one major release per year, followed by roughly one minor release every 2 months. Other releases of LLVM are also security-sensitive and don’t follow the same schedule.
> This depends on which release of LLVM is affected.
> Yes, our distribution sometimes handles untrusted input.
> The threat model is highly variable depending on the particular language front-ends being considered.
> Apple is involved with a variety of open-source projects and their disclosures. For example, we frequently work with the WebKit community to handle security issues through their process.
> 
> 
> Thanks,
> 
> JF
> 
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
