[llvm] r211705 - Random Number Generator (llvm)

Nick Lewycky nlewycky at google.com
Thu Jul 10 13:39:24 PDT 2014


On 10 July 2014 13:02, Geremy Condra <gcondra at google.com> wrote:

> On Wed, Jul 9, 2014 at 12:17 AM, Nick Lewycky <nicholas at mxc.ca> wrote:
>
>> Stephen Crane wrote:
>>
>>> On Tue, Jul 8, 2014 at 11:26 AM, Geremy Condra <gcondra at google.com>
>>> wrote:
>>>
>>>> It seems much better to me to use a CPRNG than to rely on something
>>>> like MT, which has significant weaknesses despite its long period. As
>>>> long as I can hand it /dev/urandom or an equivalent seed file *and
>>>> actually use that* I don't 1000% care, but using a non-cryptographic
>>>> RNG on top of that is very smelly.
>>>>
>>>
>>> I completely agree with you that a CSPRNG would be best. However, we
>>> got so much pushback from the mailing list that I felt it was better
>>> to start small. Keeping the current interface and adding an optional
>>> better implementation underneath seems like the way to go here.
>>>
>>
>> I'm not opposed to a CSPRNG here, but I am concerned. Firstly, I don't
>> see why we should need it, and I'd like the consumers of the random
>> stream to ensure that they aren't relying on any particular strength of
>> the random stream. If they want to do a hash on the RNG output to
>> prevent correlation, the caller should do that. Second, I'm not sure I
>> trust us LLVMers to maintain a cryptographically strong RNG. I don't
>> know that we have the skill set for that.
>>
>
> Thus my suggestion to use an external stream of randomness, which requires
> essentially zero cryptographic skill to audit and reduces the amount of
> code to boot.
>

That's a very good point.
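
A minimal sketch of what consuming such a pre-generated randomness file
could look like (every name here is hypothetical, nothing from the tree):

    #include <cstdint>
    #include <fstream>
    #include <stdexcept>
    #include <vector>

    // Serves bytes from a pre-generated file (e.g. dd if=/dev/urandom ...).
    // Re-running the build against the same file reproduces every "random"
    // decision, and auditing it needs no cryptographic expertise.
    class FileRandomSource {
      std::ifstream In;

    public:
      explicit FileRandomSource(const char *Path)
          : In(Path, std::ios::binary) {
        if (!In)
          throw std::runtime_error("cannot open randomness file");
      }

      // Hand out the next Buf.size() bytes of the stream.
      void fill(std::vector<uint8_t> &Buf) {
        In.read(reinterpret_cast<char *>(Buf.data()),
                static_cast<std::streamsize>(Buf.size()));
        if (In.gcount() != static_cast<std::streamsize>(Buf.size()))
          throw std::runtime_error("randomness file exhausted");
      }
    };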

>> If it's critical to have a CSPRNG to make your feature useful then you
>> should argue for it. As it is, the plan is to permit upgrading to a
>> newer RNG by using a different NamedMDNode name which includes the
>> algorithm name.
>>
>>
>>> At least for our use cases, we couldn't use /dev/{u}random directly
>>> because we needed reproducibility. However, the workflow I plan to use
>>> with this is grab a seed from /dev/random at the beginning of the
>>> build process, note that down somewhere, and use that seed for the
>>> rest of the build. We could certainly do something similar with a
>>> slightly modified RNG impl class which uses a random buffer or
>>> separate process to generate better randomness with a larger seed.
>>>
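
For reference, that record-a-seed workflow is tiny in code. An
illustrative sketch, assuming the std::mt19937_64-style engine under
discussion, not the patch's actual API:

    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <random>

    int main() {
      // Draw one 64-bit seed from the OS at the start of the build...
      uint64_t Seed = 0;
      std::ifstream("/dev/random", std::ios::binary)
          .read(reinterpret_cast<char *>(&Seed), sizeof(Seed));

      // ...note it down somewhere so the build can be replayed...
      std::cerr << "build seed: " << Seed << "\n";

      // ...and use that one seed for all randomized decisions afterwards.
      std::mt19937_64 RNG(Seed);
      std::cout << RNG() << "\n";
    }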
>>>> It also simplifies the code (since you don't need to add in a new
>>>> RNG, just read off of a stream) and makes it more testable (since
>>>> RNGs are notoriously easy to get wrong and hard to prove right).
>>>>
>>>
>>> Yes, as long as that stream is reproducible somehow. I think we should
>>> preserve the option to recreate all random choices made by LLVM when
>>> bugs crop up or for generating patches.
>>>
>>
>> The ability to reproduce the same decisions when debugging the compiler
>> is critical. Even the proposal of re-keying our RNG on a per-pass basis
>> is far from perfect: it allows us to narrow down the passes, but not
>> the actual input source code. If we remove a few lines from the middle
>> of a function then the RNG stream will get out of sync, and that may
>> mask the bug. Solving that too would be fantastic. :) Realistically I'm
>> relying on random chance to allow us to reduce the code down to a
>> reasonably sized testcase.
>
>
> Thus my suggestion of relying on an external stream of randomness.
> Something as simple as:
>
> dd if=/dev/urandom of=/my/totes/random/data bs=1M count=100
>
> gets you a totally reproducible build.
>

Fine, but how do we turn off the middle pieces of the compiler and still
have reproducible behaviour for the later parts? The obvious answer is to
have each LLVM pass restart from the beginning of the stream, but that
introduces a new problem: random choices will be correlated across the
compiler. Can we solve that? Would it be enough to run the strong but
correlated randomness, together with a seeded but non-cryptographic PRNG,
through a cryptographically strong hash?
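
Concretely, something like the sketch below: every pass restarts from the
same external bytes, but a per-pass salt pushed through a hash
decorrelates what each pass sees. The mixer here is a SplitMix64-style
stand-in so the sketch compiles on its own; it is NOT cryptographically
strong, and a real version would substitute an actual cryptographic hash:

    #include <cstdint>
    #include <functional>
    #include <random>
    #include <string>

    // Stand-in mixer (the SplitMix64 finalizer). Illustrates the
    // structure only; not a cryptographic hash.
    static uint64_t mix(uint64_t X) {
      X += 0x9e3779b97f4a7c15ULL;
      X = (X ^ (X >> 30)) * 0xbf58476d1ce4e5b9ULL;
      X = (X ^ (X >> 27)) * 0x94d049bb133111ebULL;
      return X ^ (X >> 31);
    }

    // Derive a pass-local engine from the shared (hence correlated)
    // external randomness plus a per-pass salt.
    std::mt19937_64 makePassRNG(uint64_t ExternalBytes,
                                const std::string &PassName) {
      uint64_t Salt = std::hash<std::string>()(PassName);
      return std::mt19937_64(mix(ExternalBytes ^ Salt));
    }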

> As an added bonus, maintaining counters for RNG bytes consumed during
> the process would allow you to chop up the process simply by
> adding/removing/moving the corresponding bits in the randomness source.
>

That means that tools like bugpoint would have to learn that there is a
random number generator, and would have to query and manipulate its state
appropriately. At a high level I don't see any reason this wouldn't work,
but implementing it is going to be a royal pain. Bugpoint's operations
must be representable as command-line runs of opt. We could add "tell us
how many bytes have been consumed at this point" and "consume X bytes of
RNG" passes, but we need one of each type of pass (module, function, and
so on) in order to avoid perturbing the pass structure (or perhaps add it
into the PassManager itself?), and we need some way to pass the "bytes to
skip" value into each of these passes independently, which we don't have
today through the command line. This is, at least, a solvable problem
with regular non-cryptographic engineering, and probably smarter than
trying to design our own restartable CSPRNG system.
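
To make that concrete, the state those hypothetical passes would have to
expose is just a byte counter plus a skip operation. A sketch (nothing
here exists in the tree today):

    #include <cstdint>
    #include <fstream>

    // Wraps the external randomness file and tracks how many bytes have
    // been handed out. A "report position" pass would print
    // bytesConsumed(); a "consume X bytes" pass would call skip(X) so a
    // reduced pipeline stays in sync with the original run's stream.
    class CountingRandomSource {
      std::ifstream In;
      uint64_t Consumed = 0;

    public:
      explicit CountingRandomSource(const char *Path)
          : In(Path, std::ios::binary) {}

      uint8_t nextByte() {
        char C = 0;
        In.read(&C, 1);
        ++Consumed;
        return static_cast<uint8_t>(C);
      }

      void skip(uint64_t N) {
        In.seekg(static_cast<std::streamoff>(N), std::ios::cur);
        Consumed += N;
      }

      uint64_t bytesConsumed() const { return Consumed; }
    };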

Have I missed any reason this doesn't work?

Nick