[llvm-dev] RFC: Speculative Load Hardening (a Spectre variant #1 mitigation)

Chandler Carruth via llvm-dev llvm-dev at lists.llvm.org
Thu Apr 5 02:17:54 PDT 2018


On Thu, Apr 5, 2018 at 2:07 AM Kristof Beyls <Kristof.Beyls at arm.com> wrote:

> Hi Chandler,
>
> Thank you very much for sharing this!
>
> The RFC is pretty lengthy, but the vast majority of it makes sense to me.
> I’m sure I’m forgetting to react to some aspects below, but I thought I’d
> summarize some initial thoughts and questions I had after reading the RFC
> end-to-end.
>
> * I believe the same high-level principles you outline can also be used to
> implement the same protection on the Arm instruction sets (AArch64
> and AArch32). The technique you describe is dependent on being able to do
> an “unpredicted conditional update of a register's value". For the Arm
> architecture, the guarantee for the conditional update to be unpredicted
> can come from using the new CSDB instruction – see documentation at
> https://developer.arm.com/support/security-update/download-the-whitepaper.
>

I think even without this the practical guarantee can be met. But let's
discuss that offline and in more depth, as it doesn't have *too* much to do
with the compiler side of this and is really just an ARM architecture
question.
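
For anyone who wants to see the shape of the "unpredicted conditional update"
in source form, here is a rough, illustrative C sketch (the helper and the
names are made up; the real transformation happens in the backend, where the
redundant compare below can't simply be folded away). A predicate mask is
all-ones on the architecturally correct path and all-zeros when we only got
to the load by misprediction, and it is merged into the pointer through a
data dependency. On x86 the conditional update of the mask would be a CMOV;
on Arm, per the whitepaper linked above, a CSEL/CSET followed by CSDB gives
the required non-prediction guarantee.

  #include <stdint.h>

  /* Illustrative only: a C-level picture of the hardening, not the codegen
   * the prototype patch emits.  'predicate' is all-ones when the bounds
   * check truly held and all-zeros when the branch was mispredicted. */
  static inline const uint8_t *harden_addr(const uint8_t *p,
                                           uintptr_t predicate) {
      /* In real codegen this AND is fed by a branchless conditional update
       * (CMOVcc on x86, CSEL/CSET plus CSDB on Arm) so the CPU cannot
       * predict the mask itself. */
      return (const uint8_t *)((uintptr_t)p & predicate);
  }

  uint8_t read_element(const uint8_t *table, uint64_t idx, uint64_t size) {
      if (idx < size) {
          uintptr_t predicate = (idx < size) ? UINTPTR_MAX : (uintptr_t)0;
          return *harden_addr(table + idx, predicate);
      }
      return 0;
  }

On a misspeculated entry into the 'then' block the masked address collapses
toward zero, which is where the OS-protected low range discussed further
down comes in.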


>
> * It seems you suggest 2 ways to protect against side-channel attacks
> leaking speculatively-loaded secret data: either protect by zero-ing
> out address bits that may represent secret data, or zero-ing out loaded
> data. In the first case (zero-ing out address bits) – wouldn’t you have
> to apply that to addresses used in stores too, in addition to addresses
> used in loads?
>

Stores don't intrinsically leak data. Specifically, if you have never
loaded secret data, nothing you store can leak secret data:
1) You can't be storing the secret data itself, because you never loaded it.
2) You can't be storing to an address influenced by the secret data, because
that too would require loading it.

So once loads are hardened, stores shouldn't matter.
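
To make that concrete with the canonical variant #1 gadget (the array names
follow the usual published example; this is an illustration, not code from
the patch):

  #include <stdint.h>

  extern uint8_t array1[16];
  extern uint64_t array1_size;
  extern uint8_t array2[256 * 512];

  void victim(uint64_t idx) {
      if (idx < array1_size) {
          /* The only way secret data can enter speculative execution here
           * is this load, and SLH masks its address (or its result) under
           * misprediction. */
          uint8_t secret = array1[idx];
          /* The store's address depends on the secret only via that load,
           * so hardening the load also keeps the store from encoding the
           * secret into the cache. */
          array2[secret * 512] = 1;
      }
  }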


>
> * IIUC, you state that constant-offset stack locations and global
> variables don’t need protection. For option 1 (zero-ing out the address bits
> that may represent secret data) – I can understand the rationale for why
> constant offset stack locations and global variables don’t need
> protection. But I’m wondering what the detailed rationale is for not
> needing protection on option 2 (zero-ing out the value loaded): what
> guarantees that no secret info can be located on the stack or in a global
> variable? Or did I misunderstand the proposal?
>

It isn't that we don't *need* protection. It is that we don't *provide*
protection and insist that programs keep their secret data elsewhere. This
will certainly require changes to software to achieve. But without it, the
performance becomes much worse because reloads of spilled registers and the
like get impacted. Plus, it would preclude the address-based approach.
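
As a purely illustrative sketch of what "keep their secret data elsewhere"
can mean in practice (the names below are invented): copying a secret into a
fixed-offset stack slot or a global moves it outside the hardened domain,
because loads from those locations are deliberately left alone for
performance, while loads through an ordinary pointer are hardened.

  #include <string.h>

  static char cached_key[32];              /* global: reloads not hardened   */

  void process(const char *secret_key) {   /* loads through this pointer are
                                            * hardened by the pass           */
      char tmp[32];                        /* fixed stack slot: not hardened */
      memcpy(tmp, secret_key, sizeof(tmp));        /* moves the secret out of
                                                    * the protected domain   */
      memcpy(cached_key, tmp, sizeof(cached_key)); /* same problem, global   */
      /* Prefer to keep working through secret_key directly instead. */
  }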


>
> * For x86 specifically, you explain how the low 2GB and high 2GB of
> address space should be protected by the OS. I wonder if this ±2GB range
> could be reduced sharply by having the compiler generate no 32-bit
> constant offsets in address calculations, but at most a much smaller
> constant offset? I assume limiting that may have only a very small effect
> on code quality – and might potentially ease the requirements on the OS?
>

Yes. Specifically, this seems like the only viable path for 32-bit
architectures. I freely admit my only focus was on a 64-bit architecture
and so I didn't really spend any time on this.

For a 64-bit architecture, 2GB is as easy to protect as a smaller region,
so it seems worth keeping the performance.
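
To spell out where the 2GB figure comes from (a hedged illustration with a
made-up struct): x86-64 addressing modes can fold a signed 32-bit
displacement into a load, so even after the base register has been masked to
zero (or to all-ones) on a misspeculated path, the effective address can
still land up to roughly 2GB to either side of it. That is the region the OS
has to keep inaccessible, and Kristof's suggestion of keeping only small
constants in the final memory operand would shrink it.

  #include <stdint.h>

  struct big {
      char pad[0x40000000];   /* ~1GB of padding, just to force a big offset */
      uint8_t field;
  };

  uint8_t read_field(const struct big *p) {
      /* Compiles to roughly: movzbl 0x40000000(%rdi), %eax.  With %rdi
       * forced to zero under misspeculation, the load hits absolute address
       * 0x40000000, inside the low 2GB the OS must protect.  Emitting large
       * offsets with a separate add, keeping only small displacements in
       * the memory operand, shrinks that window. */
      return p->field;
  }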


>
>
> Thanks!
>
>
> Kristof
>
>
> On 23 Mar 2018, at 11:56, Chandler Carruth via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
> Hello all,
>
> I've been working for the last month or so on a comprehensive mitigation
> approach to variant #1 of Spectre. There are a bunch of reasons why this is
> desirable:
> - Critical software that is unlikely to be easily hand-mitigated (or where
> the performance tradeoff isn't worth it) will have a compelling option.
> - It gives us a baseline on performance for hand-mitigation.
> - Combined with opt-in or opt-out, it may give simpler hand-mitigation.
> - It is instructive to see *how* to mitigate code patterns.
>
> A detailed design document is available for commenting here:
>
> https://docs.google.com/document/d/1wwcfv3UV9ZnZVcGiGuoITT_61e_Ko3TmoCS3uXLcJR0/edit
> (I pasted this in markdown format at the bottom of the email as well.)
>
> I have also published a very early prototype patch that implements this
> design:
> https://reviews.llvm.org/D44824
> This is the patch I've used to collect the performance data on the
> approach. It should be fairly functional but is a long way from being ready
> to review in detail, much less land. I'm posting it so folks can start
> seeing the overall approach and can play with it if they want.
>
> Comments are very welcome! I'd like to keep the doc and this thread
> focused on discussion of the high-level technique for hardening, and the
> code review thread for discussion of the techniques used to implement this
> in LLVM.
>
> Thanks all!
> -Chandler
>
> ….
>
>
>