[llvm-dev] [RFC] LLVM Security Group and Process

Robinson, Paul via llvm-dev llvm-dev at lists.llvm.org
Mon Nov 18 14:31:27 PST 2019


One problem with defining away “arbitrary code execution in Clang” as “not security relevant” is that you are inevitably making probably-wrong assumptions about the set of all possible execution contexts.

Case in point:  Sony, being on the security-sensitive side these days, has an internal mandate that we incorporate CVE fixes into open-source products that we deliver.  As it happens, we deliver some GNU Binutils tools with our PS4 toolchain.  There are CVEs against Binutils, so we were mandated to incorporate these patches.  “?” I said, wondering how some simple command-line tool could have a CVE.  Well, it turns out, lots of the Binutils code is packaged in libraries, and some of those libraries can be used by (apparently) web servers, so through some chain of events it would be possible for a web client to induce Bad Stuff on a server (hopefully no worse than a DoS, but that’s still a security issue).  Ergo, security-relevant patch in GNU Binutils.
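To make the shape of that concrete: the pattern is roughly the sketch below, a hypothetical service that inspects user-uploaded object files through libbfd (the library form of much of Binutils). The handle_upload hook and the file path are invented for illustration; the BFD calls themselves (bfd_init, bfd_openr, bfd_check_format) are the real libbfd API. Every malformed-file bug reachable from that parse is reachable by whoever controls the uploads.

    /* Hypothetical sketch: a server-side hook feeding an untrusted,
       user-supplied object file to libbfd.  Build with: cc demo.c -lbfd */
    #include <bfd.h>
    #include <stdio.h>

    static void handle_upload(const char *path)  /* hypothetical server hook */
    {
        bfd *abfd = bfd_openr(path, NULL);       /* NULL: let BFD guess the format */
        if (!abfd)
            return;
        if (bfd_check_format(abfd, bfd_object))  /* parses headers of the untrusted file */
            printf("arch: %s\n", bfd_printable_name(abfd));
        bfd_close(abfd);
    }

    int main(void)
    {
        bfd_init();                              /* required before any other BFD call */
        handle_upload("/tmp/untrusted-upload.o"); /* illustrative path */
        return 0;
    }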

For *my* product’s delivery, the CVEs would be irrelevant.  (Who cares if some command-line tool can crash if you feed it a bogus PE file; clearly not a security issue.)  But, for someone else’s product, it would be a security issue.  You can be sure that the people responsible for Binutils dealt with it as a security issue.

So, yeah, arbitrary code-execution in Clang, or more obviously in the JIT, is a potential security issue.  Clangd probably should worry about this kind of stuff too.  And we should be ready to handle it that way.
--paulr

From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of James Y Knight via llvm-dev
Sent: Monday, November 18, 2019 4:51 PM
To: Chris Bieneman <chris.bieneman at me.com>
Cc: llvm-dev <llvm-dev at lists.llvm.org>
Subject: Re: [llvm-dev] [RFC] LLVM Security Group and Process

I think it's great to make a policy for reporting security bugs.

But first, yes, we need to be clear as to what sorts of things we consider as "security bugs", and what we do not. We need to be clear on this, both for users to know what they should depend on, and for LLVM contributors to know when they should be raising a flag, if they discover or fix something themselves.

We could just keep on with our usual development process, and reserve the security-response routine for externally-reported issues. But I don't think that would be a good idea. Creating a process whereby anyone outside the project can report security issues, for which we'll coordinate disclosure and create backports and such, is all well and good... but if we don't then also do (or at least try to do!) the same for issues discovered and fixed within the community, is there even a point?

So, if we're going to expand what we consider a security bug beyond the present "effectively nothing", I think it is really important to be a bit more precise about what it's being expanded to.

For example, I think it should generally be agreed that a bug in Clang which allows arbitrary-code-execution in the compiler, given a specially crafted source-file, should not be considered a security issue. A bug, yes, but not a security issue, because we do not consider the use-case of running the compiler in privileged context to be a supported operation. But also the same for parsing a source-file into a clang AST -- which might even happen automatically with editor integration. Seems less obviously correct, but, still, the reality today. And, IMO, the same stance should also apply to feeding arbitrary bitcode into LLVM. (And I get the unfortunate feeling that last statement might not find universal agreement.)
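To be concrete about how little ceremony the AST case involves, here is a minimal sketch using the stable libclang C API -- roughly what an editor plugin does on your behalf every time a file is opened. The file name is illustrative; the libclang calls are real. Any crash or memory-safety bug reachable from clang_parseTranslationUnit is reachable simply by opening a hostile file in an IDE.

    /* Minimal sketch: parse an untrusted source file into a clang AST via
       libclang, as editor integrations do automatically on open/edit. */
    #include <clang-c/Index.h>
    #include <stdio.h>

    int main(void)
    {
        CXIndex index = clang_createIndex(0, 0);
        CXTranslationUnit tu = clang_parseTranslationUnit(
            index, "untrusted.c", /*command-line args*/ NULL, 0,
            /*unsaved files*/ NULL, 0, CXTranslationUnit_None);
        if (tu) {
            printf("parsed with %u diagnostics\n", clang_getNumDiagnostics(tu));
            clang_disposeTranslationUnit(tu);
        }
        clang_disposeIndex(index);
        return 0;
    }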

Given the current architecture and state of the project, I think it would be rather unwise to pretend that any of those are secure operations, or to try to support them as such with a security response process. Many compiler crashes are likely to be exploitable security bugs, if someone tries hard enough. If every fix for such a bug triggered a full security response, with embargoes, CVEs, backports, etc., that just seems unsustainable. Maybe it would be nice to support this, but I think we're a long way from there currently.
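The bitcode reader is an easy illustration of that. Here is a minimal sketch with the LLVM-C API (again, the file name is illustrative; the calls are real); fuzzing this entry point with mutated .bc files is exactly the kind of exercise that turns up crash-on-malformed-input bugs in bulk.

    /* Minimal sketch: feed arbitrary (possibly malformed) bitcode into LLVM
       through the C API.  The bitcode reader must defend against any bytes. */
    #include <llvm-c/BitReader.h>
    #include <llvm-c/Core.h>
    #include <stdio.h>

    int main(void)
    {
        LLVMMemoryBufferRef buf;
        char *err = NULL;
        if (LLVMCreateMemoryBufferWithContentsOfFile("untrusted.bc", &buf, &err)) {
            fprintf(stderr, "read failed: %s\n", err);
            LLVMDisposeMessage(err);
            return 1;
        }
        LLVMModuleRef mod;
        if (LLVMParseBitcode2(buf, &mod) == 0) {  /* zero return means success */
            printf("parsed module OK\n");
            LLVMDisposeModule(mod);
        } else {
            fprintf(stderr, "malformed bitcode rejected\n");
        }
        LLVMDisposeMemoryBuffer(buf);
        return 0;
    }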


However, all that said -- based on timing and recent events, perhaps your primary goal here is to establish a process for discussing LLVM patches that work around not-yet-public CPU errata, and issues of that nature. In that case, the need for the security response group is primarily to allow developing quality LLVM patches based on not-yet-public information about other people's products. That seems like a very useful thing to formalize, indeed, and doesn't require any change in LLVM developers' thinking. So if that's what we're talking about, let's be clear about it.

On Mon, Nov 18, 2019 at 2:43 PM Chris Bieneman via llvm-dev <llvm-dev at lists.llvm.org> wrote:
Hey JF,

Thanks for putting this RFC together. LLVM security issues are very important, and I'm really glad someone is focusing attention here.

I'm generally in agreement with much of what you have proposed. I do have a few thoughts I'd like to bring up.

Having the group appointed by the board seems a bit odd to me. Historically, the board has not involved itself in technical processes. I'm curious what the board's thoughts are regarding this level of involvement in project direction (I know you wanted proposal feedback on Phabricator, but I think the role of the board is something worth discussing here).

My other meta thought is about focus and direction for the group. How do you define security issues?

To give you a sense of where I'm coming from: one of the big concerns I have at the moment is about running LLVM in secure execution contexts, where we care about bugs in the compiler that could influence code generation, not just about the generated code itself. Historically, I believe, the security focus of LLVM has primarily been on the code it generates; do you see this group tackling both sides of the problem?

Thanks,
-Chris


On Nov 15, 2019, at 10:58 AM, JF Bastien via llvm-dev <llvm-dev at lists.llvm.org> wrote:

Hello compiler enthusiasts,

The Apple LLVM team would like to propose that a new security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.

A draft proposal for how we could organize such a group and what its process could be is available on Phabricator <https://reviews.llvm.org/D70326>. The proposal starts with a list of goals for the process and Security Group, repeated here:

The LLVM Security Group has the following goals:

  1.  Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
  2.  Organize fixes, code reviews, and release management for said issues.
  3.  Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
  4.  Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
  5.  Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process <https://cve.mitre.org/>.

We’re looking for answers to the following questions:

  1.  On this list: Should we create a security group and process?
  2.  On this list: Do you agree with the goals listed in the proposal?
  3.  On this list: at a high level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  4.  On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  5.  On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:

     *   Are you an LLVM contributor (individual or representing a company)?
     *   Are you involved with security aspects of LLVM (if so, which)?
     *   Do you maintain significant downstream LLVM changes?
     *   Do you package and deploy LLVM for others to use (if so, to how many people)?
     *   Is your LLVM distribution based on the open-source releases?
     *   How often do you usually deploy LLVM?
     *   How fast can you deploy an update?
     *   Does your LLVM distribution handle untrusted inputs, and what kind?
     *   What’s the threat model for your LLVM distribution?

Other open-source projects have security-related groups and processes, structured very differently from one another. This proposal borrows from some of those projects' processes. A few examples:

  *   https://webkit.org/security-policy/
  *   https://chromium.googlesource.com/chromium/src/+/lkgr/docs/security/faq.md
  *   https://wiki.mozilla.org/Security
  *   https://www.openbsd.org/security.html
  *   https://security-team.debian.org/security_tracker.html
  *   https://www.python.org/news/security/
When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.


I’ll go first in answering my own questions above:

  1.  Yes! We should create a security group and process.
  2.  We agree with the goals listed.
  3.  We think the proposal is exactly right, but would like to hear the community’s opinions.
  4.  Here’s how we approach the security of LLVM:

     *   I contribute to LLVM as an Apple employee.
     *   I’ve been involved in a variety of LLVM security issues, from automatic variable initialization to security-related diagnostics, as well as deploying these mitigations to internal codebases.
     *   We maintain significant downstream changes.
     *   We package and deploy LLVM, both internally and externally, for a variety of purposes, including the clang, Swift, and mobile GPU shader compilers.
     *   Our LLVM distribution is not directly derived from the open-source release. In all cases, all non-upstream public patches for our releases are available in repository branches at https://github.com/apple.
     *   We have many deployments of LLVM whose release schedules vary significantly. The LLVM build deployed as part of Xcode historically has one major release per year, followed by roughly one minor release every 2 months. Other releases of LLVM are also security-sensitive and don’t follow the same schedule.
     *   This depends on which release of LLVM is affected.
     *   Yes, our distribution sometimes handles untrusted input.
     *   The threat model is highly variable depending on the particular language front-ends being considered.
Apple is involved with a variety of open-source projects and their disclosures. For example, we frequently work with the WebKit community to handle security issues through their process.


Thanks,

JF

_______________________________________________
LLVM Developers mailing list
llvm-dev at lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

