[llvm-dev] [RFC] LLVM Security Group and Process
James Y Knight via llvm-dev
llvm-dev at lists.llvm.org
Mon Nov 18 13:51:10 PST 2019
I think it's great to make a policy for reporting security bugs.
But first, yes, we need to be clear about what sorts of things we consider
"security bugs" and what we do not -- both so that users know what they can
depend on, and so that LLVM contributors know when to raise a flag if they
discover or fix something themselves.
We could just keep on doing our usual development process, and respond
only to *externally-reported* issues with the security-response routine.
But I don't think that would be a good idea. Creating a process whereby
anyone outside the project can report security issues, for which we'll
coordinate disclosure, create backports, and so on, is all well and
good... but if we don't also do (or at least *try* to do!) the same for
issues discovered and fixed within the community, is there even a point?
So, if we're going to expand what we consider a security bug beyond the
present "effectively nothing", I think it is really important to be a bit
more precise about what it's being expanded to.
For example, I think it should generally be agreed that a bug in Clang
which allows arbitrary-code-execution in the compiler, given a specially
crafted source-file, should not be considered a security issue. A bug, yes,
but not a security issue, because we do not consider running the compiler
in a privileged context to be a supported use-case. The same goes for
parsing a source-file into a Clang AST -- which might even happen
automatically with editor integration. That seems less obviously correct,
but it is still the reality today. And, IMO, the same stance should
also apply to feeding arbitrary bitcode into LLVM. (And I get the
unfortunate feeling that last statement might not find universal agreement.)
Given the current architecture and state of the project, I think it would
be rather unwise to pretend that any of those are secure operations, or to
try to support them as such with a security response process. Many compiler
crashes could likely be turned into security bugs by someone trying hard
enough. If every such bug fix triggered a full security response, with
embargoes, CVEs, backports, etc., that just seems unsustainable. Maybe it
would be *nice* to support this, but I think we're
a long way from there currently.
However, all that said -- based on timing and recent events, perhaps your
primary goal here is to establish a process for discussing LLVM patches to
work around not-yet-public CPU errata, and issues of that nature. In that
case, the need for the security response group is primarily to allow
developing quality LLVM patches based on not-yet-public information about
other people's products. That seems like a very useful thing to formalize,
indeed, and doesn't need any changes in LLVM developers' thinking. So if
that's what we're talking about, let's be clear about it.
On Mon, Nov 18, 2019 at 2:43 PM Chris Bieneman via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> Hey JF,
>
> Thanks for putting this RFC together. LLVM security issues are very
> important, and I'm really glad someone is focusing attention here.
>
> I'm generally in agreement with much of what you have proposed. I do have
> a few thoughts I'd like to bring up.
>
> Having the group appointed by the board seems a bit odd to me.
> Historically the board has not involved itself in technical processes. I'm
> curious what the board's thoughts are relating to this level of involvement
> in project direction (I know you wanted proposal feedback on Phabricator,
> but I think the role of the board is something worth discussing here).
>
> My other meta thought is about focus and direction for the group. How do
> you define security issues?
>
> To give you a sense of where I'm coming from: one of the big concerns I have at the
> moment is about running LLVM in secure execution contexts, where we care
> about bugs in the compiler that could influence code generation, not just
> the code generation itself. Historically, I believe, the security focus of
> LLVM has primarily been on generated code; do you see this group tackling
> both sides of the problem?
>
> Thanks,
> -Chris
>
> On Nov 15, 2019, at 10:58 AM, JF Bastien via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
> Hello compiler enthusiasts,
>
> The Apple LLVM team would like to propose that a new security process
> and an associated private LLVM Security Group be created under the umbrella
> of the LLVM project.
>
> A draft proposal for how we could organize such a group and what its
> process could be is available on Phabricator
> <https://reviews.llvm.org/D70326>. The proposal starts with a list of
> goals for the process and Security Group, repeated here:
>
> The LLVM Security Group has the following goals:
>
> 1. Allow LLVM contributors and security researchers to disclose
> security-related issues affecting the LLVM project to members of the LLVM
> community.
> 2. Organize fixes, code reviews, and release management for said
> issues.
> 3. Allow distributors time to investigate and deploy fixes before wide
> dissemination of vulnerabilities or mitigation shortcomings.
> 4. Ensure timely notification and release to vendors who package and
> distribute LLVM-based toolchains and projects.
> 5. Ensure timely notification to users of LLVM-based toolchains whose
> compiled code is security-sensitive, through the CVE process
> <https://cve.mitre.org/>.
>
>
> We’re looking for answers to the following questions:
>
> 1. *On this list*: Should we create a security group and process?
> 2. *On this list*: Do you agree with the goals listed in the proposal?
>    3. *On this list*: at a high level, what do you think should be done
> differently, and what do you think is exactly right in the draft proposal?
> 4. *On the Phabricator code review*: going into specific details, what
> do you think should be done differently, and what do you think is exactly
> right in the draft proposal?
> 5. *On this list*: to help understand where you’re coming from with
> your feedback, it would be helpful to state how you personally approach
> this issue:
> 1. Are you an LLVM contributor (individual or representing a
> company)?
> 2. Are you involved with security aspects of LLVM (if so, which)?
> 3. Do you maintain significant downstream LLVM changes?
> 4. Do you package and deploy LLVM for others to use (if so, to how
> many people)?
> 5. Is your LLVM distribution based on the open-source releases?
> 6. How often do you usually deploy LLVM?
> 7. How fast can you deploy an update?
> 8. Does your LLVM distribution handle untrusted inputs, and what
> kind?
> 9. What’s the threat model for your LLVM distribution?
>
>
> Other open-source projects have security-related groups and processes.
> They structure their groups very differently from one another. This proposal
> borrows from some of these projects’ processes. A few examples:
>
> - https://webkit.org/security-policy/
> -
> https://chromium.googlesource.com/chromium/src/+/lkgr/docs/security/faq.md
> - https://wiki.mozilla.org/Security
> - https://www.openbsd.org/security.html
> - https://security-team.debian.org/security_tracker.html
> - https://www.python.org/news/security/
>
> When providing feedback, it would be great to hear if you’ve dealt with
> these or other projects’ processes, what works well, and what can be done
> better.
>
>
> I’ll go first in answering my own questions above:
>
> 1. Yes! We should create a security group and process.
> 2. We agree with the goals listed.
> 3. We think the proposal is exactly right, but would like to hear the
> community’s opinions.
> 4. Here’s how we approach the security of LLVM:
> 1. I contribute to LLVM as an Apple employee.
> 2. I’ve been involved in a variety of LLVM security issues, from
> automatic variable initialization to security-related diagnostics, as well
> as deploying these mitigations to internal codebases.
> 3. We maintain significant downstream changes.
> 4. We package and deploy LLVM, both internally and externally, for
> a variety of purposes, including the clang, Swift, and mobile GPU shader
> compilers.
> 5. Our LLVM distribution is not directly derived from the
> open-source release. In all cases, all non-upstream public patches for our
> releases are available in repository branches at
> https://github.com/apple.
> 6. We have many deployments of LLVM whose release schedules vary
> significantly. The LLVM build deployed as part of Xcode historically has
> one major release per year, followed by roughly one minor release every 2
> months. Other releases of LLVM are also security-sensitive and don’t follow
> the same schedule.
> 7. This depends on which release of LLVM is affected.
> 8. Yes, our distribution sometimes handles untrusted input.
> 9. The threat model is highly variable depending on the particular
> language front-ends being considered.
>
> Apple is involved with a variety of open-source projects and their
> disclosures. For example, we frequently work with the WebKit community to
> handle security issues through their process.
>
>
> Thanks,
>
> JF
>