[cfe-dev] Zero'ing Registers on Function Return

Russell Harmon eatnumber1 at google.com
Fri Sep 12 10:02:16 PDT 2014


I'm somewhat of a fan of Paul's solution: disallowing calls to
non-annotated functions.
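
To sketch what that might look like (hypothetical: clang implements
neither the attribute proposed below nor this call checking today):

#include <stddef.h>
#include <stdio.h>

__attribute__((clear_regs_on_return))
static void derive_key(unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = (unsigned char)i; /* stand-in for real derivation */
}

__attribute__((clear_regs_on_return))
void do_crypto(void)
{
    unsigned char key[32];
    derive_key(key, sizeof key); /* ok: callee carries the attribute */
    printf("done\n");            /* would be rejected: printf is not
                                    annotated, so it may leave secret
                                    material in registers or spill slots */
}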

Would clearing the stack implicitly help all that much if the programmer
has already properly cleared the sensitive data via a call to memset_s?
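
For reference, I mean something like this (a minimal sketch; memset_s is
from C11 Annex K, which is optional, so availability has to be checked):

#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>

void use_key(void)
{
    unsigned char key[32];
    /* ... derive and use key ... */
#ifdef __STDC_LIB_EXT1__
    /* Annex K forbids the implementation from optimizing this out */
    memset_s(key, sizeof key, 0, sizeof key);
#endif
}

Where Annex K isn't available, the usual workaround (the volatile
function pointer Szabolcs mentions below) looks roughly like:

#include <string.h>

static void *(*const volatile memset_v)(void *, int, size_t) = memset;

void clear_secret(void *p, size_t n)
{
    /* the volatile-qualified pointer forces a real load and an indirect
       call, so the compiler cannot prove the store is dead and drop it */
    memset_v(p, 0, n);
}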

I was wrong in saying we should clear the caller-owned registers,
although we should also clear all the argument registers on return.

Comments inline.

On Fri Sep 12 2014 at 1:11:42 AM Szabolcs Nagy <nsz at port70.net> wrote:

> * Russell Harmon <eatnumber1 at google.com> [2014-09-12 02:30:39 +0000]:
> > I've been thinking about the issues with securely zero'ing buffers that
> > Colin Percival discusses in his blog article
> > <http://www.daemonology.net/blog/2014-09-06-zeroing-buffers-is-insufficient.html>,
> > and I think I'd like to take a stab at fixing it in clang. Here's my
> > proposal:
> >
> > Add a function attribute, say __attribute__((clear_regs_on_return)),
> > which, when a thus-annotated function returns, will zero all
> > callee-owned registers and spill slots. Then, all unused caller-owned
> > registers will be immediately cleared by the caller after return.
>
> while true that the abstract machine of c cannot make sure that there are
> no lower level leaks (the lower layers are always allowed to hold on to
> state at the wrong time and copy it somewhere that the abstract machine
> cannot observe) there is a way to avoid information leaks in practice
>
> instead of trying to figure out what the possible leaks are and using
> workarounds for them (like a volatile function pointer to memset), just
> reexecute the same code path the secret computation has taken. this is
> also useful for verifying that the cryptographic computation was not
> miscompiled (which happens and can have catastrophic consequences).
> this is the "self test trick" the author seems to be unaware of,
> although it is used in practice:
>
> http://git.musl-libc.org/cgit/musl/tree/src/crypt/crypt_blowfish.c#n760
>
> eg. this is the crypt code in musl libc contributed by Solar Designer
>

I also wasn't aware of this technique, though it makes quite a few
assumptions about the behavior of the compiler. While it's an
interesting technique, I'd prefer stronger guarantees around clearing of
hidden state.
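
For my own notes, the trick seems to amount to something like this (a
rough sketch; the function and the test vectors are made up):

#include <string.h>

/* hypothetical crypto primitive holding secrets in its temporaries */
extern void crypt_core(unsigned char out[16], const unsigned char in[16],
                       const unsigned char key[16]);

int crypt_checked(unsigned char out[16], const unsigned char in[16],
                  const unsigned char key[16])
{
    static const unsigned char test_in[16];        /* known input */
    static const unsigned char test_key[16];       /* known key */
    static const unsigned char test_out[16] = {0}; /* expected answer
                                                      (placeholder) */
    unsigned char tmp[16];

    crypt_core(out, in, key); /* the real computation, with the secret */

    /* rerun the exact same code path on non-secret data: the test call
       clobbers the same registers and spill slots the secret run used,
       and comparing against a known answer doubles as a miscompilation
       check. one must ensure the compiler cannot elide this call or
       specialize it separately from the secret run. */
    crypt_core(tmp, test_in, test_key);
    return memcmp(tmp, test_out, sizeof tmp) == 0 ? 0 : -1;
}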

Without getting into a philosophical debate: I understand this is a
difficult problem, but I don't think that means it isn't worthwhile to
attempt.

> there are ways in which this can still break in theory, but it works well
> when the language is compiled ahead of time and there is no heavy runtime
> (so the exact same code path is taken and the exact same state is
> clobbered; one just has to make sure that the "test" cannot be optimized
> away by the compiler or inlined with a different choice of temporaries).
>
> > As for why, I'm concerned with the case where a memory disclosure
> > vulnerability exposes all or a portion of sensitive data via either
> > spilled registers or infrequently used registers (xmm). If an attacker
> > is able to analyze a binary for situations wherein sensitive data will
> > be spilled, leveraging a memory disclosure vulnerability it's likely
> > one could craft an exploit that reveals sensitive data.
>
> in general a 'no info leak' attribute is hard to do (the proposed
> semantics in the article are grossly underspecified)
>
> the compiler cannot give strong guarantees: the state it is aware of
> might not be everything (eg on a high level backend target where state
> is handled dynamically, or timing related leaks) and it is hard to apply
> recursively: if a function with such attr calls other functions which
> also spill registers, then even the proposed "zero all used registers"
> is problematic
>

I'm not trying to deal with every case; I'm specifically trying to
provide hardening in the case of memory disclosure bugs. An attacker
reading from the swap device directly, for example, is outside the scope
of this protection, as that requires more than just a memory disclosure
to exploit.


> what is probably doable is a non-recursive version (which still can be a
> help to crypto code, but harder to use correctly). however i suspect even
> that's non-trivial to specify in terms of the abstract machine of c
>
> for recursive things i think the type system has to be invoked: eg a
> sensitive type qualifier that marks state which the compiler has to
> cleanup after.
>

I'm not clear on why disallowing calls to non-annotated functions from
within an annotated function wouldn't handle these issues.

> however this whole issue is hard because it only matters if code already
> invoked ub (otherwise the state left around is not observable by the
> attacker), so probably this kind of hardening is entirely the wrong
> approach and anything that deals with sensitive data should just be
> completely isolated (priv sep, different process etc)
>

Agreed, privilege separation is another important feature when dealing
with sensitive data, but I see this as one component of a
defense-in-depth approach, and in my opinion saying that a program
shouldn't invoke UB isn't really a sound argument to begin with.