[llvm-dev] [RFC] Speculative Execution Side Effect Suppression for Mitigating Load Value Injection
Constable, Scott D via llvm-dev
llvm-dev at lists.llvm.org
Thu Mar 26 10:29:17 PDT 2020
I’m not in a position to provide concrete use cases, but there are at least some users for whom manual mitigation of inline assembly is too much of a burden. I think a prudent approach would be to provide an opt-in flag to enable automated mitigation of inline assembly — a yes-I-know-what-I-am-doing feature — where we make it clear that the feature carries at least two caveats:
1. Bytecode may be left unmitigated.
2. If the correctness of the inline assembly code depends on the number of bytes in any contiguous code sequence (e.g., a manually computed jump table), then the mitigation may break the code.
The GNU features that Matt pointed to have these same caveats.
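The second caveat can be made concrete with a toy byte-count model. This is an illustrative sketch only — the instruction names and sizes below are made up, except that LFENCE really is a 3-byte instruction (0F AE E8) on x86 — but it shows why any manually computed offset goes stale once the mitigation inserts fences:

```python
# Toy model: why automated LFENCE insertion can break code that depends
# on instruction byte counts (e.g., a hand-built jump table).
# Sizes are hypothetical, except LFENCE, which is 3 bytes on x86.
insts = [("mov", 3), ("load", 4), ("jmp_table_entry", 5)]

def offset_of(instructions, index):
    """Byte offset of instruction `index` from the start of the sequence."""
    return sum(size for _, size in instructions[:index])

def mitigate(instructions):
    """Insert a 3-byte LFENCE before each load (simplified rule)."""
    out = []
    for name, size in instructions:
        if name == "load":
            out.append(("lfence", 3))
        out.append((name, size))
    return out

before = offset_of(insts, 2)           # offset a hand-built table may assume
after = offset_of(mitigate(insts), 3)  # same instruction, post-mitigation
# before == 7, after == 10: any manually computed offset is now wrong.
```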
From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Matthew Riley via llvm-dev
Sent: Wednesday, March 25, 2020 3:21 PM
To: James Y Knight <jyknight at google.com>
Cc: llvm-dev <llvm-dev at lists.llvm.org>; Topper, Craig <craig.topper at intel.com>
Subject: Re: [llvm-dev] [RFC] Speculative Execution Side Effect Suppression for Mitigating Load Value Injection
I'm also a bit unclear on that point. I think one input here has to be: what are some example, existing codebases we want to mitigate, and what should the user experience be to mitigate them? I don't think we can make good engineering tradeoffs without having concrete use cases to evaluate.
Another point: it seems some mitigation options have already been added to the GNU toolchain<https://www.phoronix.com/scan.php?page=news_item&px=GNU-Assembler-LVI-Options>. We should try very hard to make sure the experience doesn't diverge unnecessarily between users of gcc and clang.
On Fri, Mar 20, 2020 at 6:02 PM James Y Knight <jyknight at google.com<mailto:jyknight at google.com>> wrote:
One question I have is regarding the mitigation for inline or standalone assembly files. Generally, I dislike having the assembler mangle code -- it should just emit exactly what you ask it to, and not be "smart", and such mitigations are really best done in the compiler.
But, if there is going to be an implementation of these mitigations added to assembly (which there's some movement towards doing, although I'm not clear as to the outcome) it's not clear to me that doing it in both places is important. Do we really need both?
On Fri, Mar 20, 2020 at 6:14 PM Zola Bridges via llvm-dev <llvm-dev at lists.llvm.org<mailto:llvm-dev at lists.llvm.org>> wrote:
I want to clarify the purpose and design of SESES. Thus far, I've characterized it as an LVI mitigation which is somewhat incorrect.
SESES was built as a "big hammer." It is intended to protect against many side channel vulnerabilities (Spectre v1, Spectre v4, LVI, etc, etc) even though it was built in response to LVI.
For folks protecting against LVI, this is an option for mitigation. This is also an option for folks who want to try to mitigate against speculative execution vulnerabilities as a whole and who don't have high performance needs. As mentioned in the documentation this is not necessarily foolproof, but it's as close as we can get to closing all side channels.
On Wed, Mar 18, 2020 at 2:03 PM Zola Bridges <zbrid at google.com<mailto:zbrid at google.com>> wrote:
Scott and I have been working to make our patches upstreamable. I'd like to hear more feedback.
I would only upstream my patches if the community felt it would be beneficial/desirable. It would be nice to have more discussion to make it easier to make a decision within the next week or two.
What are your thoughts on the following topics?
* Should SESES be upstreamed? Are there any concerns about upstreaming it?
* Should Scott's approach be upstreamed? Are there any concerns about upstreaming it?
* Are there reasons to upstream both approaches?
* Are there reasons against upstreaming both approaches?
I'm particularly interested in hearing from folks who may use one of these mitigations.
For example, Jethro from Fortanix provided feedback (in the #backends Discord channel) that he would be most interested in seeing Scott's approach upstreamed due to the performance advantage.
On Tue, Mar 10, 2020 at 10:23 AM Zola Bridges <zbrid at google.com<mailto:zbrid at google.com>> wrote:
Some Intel processors have a newly disclosed vulnerability named Load Value Injection.
One pager on Load Value Injection:
Deep dive on Load Value Injection:
I wrote this compiler pass that can be used as a last resort mitigation. This pass is based on ideas from Chandler Carruth and Intel. It is primarily intended to be shared with the community as a basis for experimentation and may not be production ready. We are open to upstreaming this pass if there is interest from the community. It can be removed if it becomes a maintenance burden.
Intel has also created a mitigation that they have shared: http://lists.llvm.org/pipermail/llvm-dev/2020-March/139842.html
We look forward to sharing information and ideas about both.
The documentation in this email lists the performance I saw for variants of the mitigation that are potential optimizations for Load Value Injection. The flags can be used to turn on optimization techniques for different builds. They are turned off by default. Each variant is not guaranteed to be as secure as the full mitigation.
Here is a link to the first patch: https://reviews.llvm.org/D75939
Here is a link to the documentation patch: https://reviews.llvm.org/D75940
Links to other related patches
I'd like to request comments, feedback, and discussion.
Beyond that, we would also like guidance on whether to upstream this pass.
From the documentation: Overview of the mitigation
As the name suggests, the "speculative execution side effect suppression" mitigation aims to prevent any effects of speculative execution from escaping into the microarchitectural domain where they could be observed, thereby closing off side channel information leaks.
In the case of Load Value Injection, we assume that speculative loads from memory (due to explicit memory access instructions or control flow instructions like RET) may receive injected data due to address aliasing, and we ensure these injected values are not allowed to steer later speculative memory accesses to impact cache contents.
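The steering pattern described above can be sketched as a toy model. This is illustrative only, not an exploit: the function and names are hypothetical, and `loaded_value` stands in for data the CPU may transiently forward from an attacker instead of the true memory contents:

```python
# Toy model of the dependent-load pattern LVI abuses (illustrative only).
CACHE_LINE = 64  # bytes per cache line on typical x86 parts

def dependent_load(probe_array, loaded_value):
    # In the transient window, `loaded_value` may be attacker-injected
    # rather than the true result of the first load. Using it to index a
    # second load means the cache line touched -- and hence what a timing
    # probe later observes -- is steered by the injected value.
    return probe_array[loaded_value * CACHE_LINE]
```

An LFENCE between the two loads forces the first load to retire with its true value before the dependent access can execute, which is exactly the property the mitigation restores.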
The mitigation is implemented as a compiler pass that inserts a speculation barrier (LFENCE) just before:
* Each memory read instruction
* Each memory write instruction
* The first branch instruction in a group of terminators at the end of a basic block
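The insertion rule above can be sketched on a toy instruction stream. This is a simplified model, not the real LLVM pass (which operates on Machine IR); the opcode strings are illustrative:

```python
# Sketch of the SESES insertion rule: LFENCE before every load, every
# store, and the first branch among a basic block's terminators.
def harden_block(instructions):
    """instructions: list of (opcode, kind) pairs, where kind is one of
    'load', 'store', 'branch', or 'other'. Returns the hardened list."""
    out = []
    fenced_branch = False  # only the first terminator branch gets a fence
    for op, kind in instructions:
        if kind in ("load", "store"):
            out.append(("lfence", "barrier"))
        elif kind == "branch" and not fenced_branch:
            out.append(("lfence", "barrier"))
            fenced_branch = True
        out.append((op, kind))
    return out

block = [("mov rax, [rdi]", "load"),
         ("add rax, 1", "other"),
         ("mov [rsi], rax", "store"),
         ("je .L1", "branch"),
         ("jmp .L2", "branch")]
hardened = harden_block(block)
# Fences land before the load, the store, and the je -- but not the jmp.
```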
This is something of a last-resort mitigation: it is expected to have extreme performance implications, and it may not be a complete mitigation because it relies on enumerating specific side channel mechanisms. However, it is applicable to more variants and styles of gadgets that can reach speculative execution side channels than the traditional Spectre Variant 1 gadgets, which speculative load hardening (SLH) targets much more narrowly but more efficiently.
While there is a slight risk that this mitigation will be ineffective against future side channels, we believe there is still significant value in closing two side channel classes that are most actively exploited today: control-flow based (branch predictor or icache) and cache timing. Control flow side channels are closed by preventing speculative execution into conditionals and indirect branches. Cache timing side channels are closed by preventing speculative execution of reads and writes.
We believe this mitigation will be most useful in situations where code is handling extremely sensitive secrets that must not leak, and where a hit to performance is tolerable in service of that overriding goal. As we've mentioned, the original target of this mitigation was the threat of LVI against SGX enclaves processing critically important secrets.