[LLVMdev] Help adding the Bullet physics sdk benchmark to the LLVM test suite?

Erwin Coumans erwin.coumans at gmail.com
Tue Jan 5 12:53:57 PST 2010


How do other benchmarks deal with unstable algorithms or differences in
floating-point results?

>> I haven't been following this thread, but this sounds like a typical
>> unstable algorithm problem.  Are you always operating that close to
>> the tolerance level of the algorithm, or are there some sets of inputs
>> that will behave reasonably?

What do you mean by "reasonably" or "affect codes so horribly"?

The accumulation of algorithms in a physics pipeline is unstable, and unless
the compiler/platform guarantees 100% identical floating-point results, the
outcome will diverge.

Do you think LLVM can be forced to produce identical floating-point results,
even across different optimization levels or different CPUs?

Some CPUs use 80-bit FPU precision for intermediate results (on-chip, in
registers), while variables in memory only have 32-bit or 64-bit precision.
In combination with cancellation and other re-ordering, this can give
slightly different results.
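
To make the cancellation/re-ordering point concrete, here is a minimal
standalone C++ sketch (an illustration I made up, not code from Bullet);
the values are chosen so that b falls below one ulp of a in 32-bit
precision:

#include <cstdio>

int main() {
    float a = 1.0e8f;  // large magnitude
    float b = 1.0f;    // small magnitude, below one ulp of a

    // Mathematically both expressions equal b, but with strict 32-bit
    // arithmetic a + b rounds back to a, so b is lost to cancellation.
    float lost = (a + b) - a;  // 0.0f in strict 32-bit arithmetic
    float kept = (a - a) + b;  // 1.0f: re-association changes the answer

    // On x87, if (a + b) is held in an 80-bit register instead of being
    // stored to a 32-bit variable, 'lost' can come out as 1.0f as well.
    printf("lost = %g, kept = %g\n", lost, kept);
    return 0;
}

A compiler that re-associates floating-point math, or that spills
intermediates differently at different optimization levels, can silently
turn one form into the other, which is exactly the divergence described
above.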

>> If not, the code doesn't seem very useful to me.  How could anyone rely
>> on the results, ever?

The code has proven to be useful for games and special effects in film,
but this particular benchmark might indeed not suit LLVM testing.

I suggest working on a better benchmark that tests independent parts of the
pipeline, so that we don't accumulate results over several frames but instead
test a single algorithm at a time, with known input and expected output. This
avoids the instability, and we can measure the error of the output.
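
For illustration, a per-algorithm test could look like the following C++
sketch. Everything here is hypothetical: closestPointDistance is a stand-in
for a single pipeline stage (e.g. a closest point query), not the actual
Bullet API, and the reference value and tolerance are made up:

#include <cassert>
#include <cmath>
#include <cstdio>

// Hypothetical stand-in for one pipeline stage under test.
float closestPointDistance() {
    return std::sqrt(2.0f);  // pretend result for a known test input
}

int main() {
    const float expected  = 1.41421356f;  // known-good reference output
    const float tolerance = 1.0e-5f;      // acceptable numerical error

    float actual = closestPointDistance();
    float error  = std::fabs(actual - expected);
    printf("error = %g\n", error);

    // Compare against a tolerance instead of requiring bit-exact output,
    // so the test stays meaningful across compilers, optimization levels
    // and CPUs.
    assert(error <= tolerance);
    return 0;
}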

Anton, are you interested in working together on such an improved benchmark?
Thanks,
Erwin

> Date: Mon, 4 Jan 2010 20:24:23 -0600
> From: David Greene <dag at cray.com>
> Subject: Re: [LLVMdev] Help adding the Bullet physics sdk benchmark to
>        the LLVM test suite?
> To: llvmdev at cs.uiuc.edu
>
> On Monday 04 January 2010 20:11, Erwin Coumans wrote:
> > Hi Anton, and happy new year all,
> >
> > >>One question though: is it possible to "verify" the results of all
> > >>the computations somehow?
> >
> > Good point, and there is no automated way currently, but we can work on
> > that.
> > Note that simulation suffers from the 'butterfly effect': the smallest
> > change anywhere (CPU, compiler, etc.) diverges into totally different
> > results after a while.
>
> I haven't been following this thread, but this sounds like a typical
> unstable algorithm problem.  Are you always operating that close to
> the tolerance level of the algorithm or are there some sets of inputs
> that will behave reasonably?
>
> If not, the code doesn't seem very useful to me.  How could anyone rely
> on the results, ever?
>
> In the worst case, you could experiment with different optimization levels
> and/or Pass combinations to find something that is reasonably stable.
>
> Perhaps LLVM needs a flag to disable sometimes-undesirable transformations,
> like anything involving floating-point calculations.  Compiler changes
> should not affect codes so horribly unless the user tells them to.  :)
> The Cray compiler provides various -Ofp (-Ofp0, -Ofp1, etc.) levels for
> this very reason.
>
> > There are a few ways of verification I can think of:
> >
> > 1) verifying by adding unit tests for all stages in the physics pipeline
> > (broadphase acceleration structures, closest point computation,
> > constraint solver).
> > Given known input and output we can check if the solution is within a
> > certain tolerance.
>
> At each stage?  That's reasonable.  It could also help identify the parts
> of the pipeline that are unstable (if not already known).
>
> > 2) using the benchmark simulation, verifying the results frame by
> > frame and checking for unusual behaviour
>
> Sounds expensive.
>
> > 3) modify the benchmark so that it is easier to test the end result,
> > even though it might be different.
>
> We really don't want to do this.  Either LLVM needs to be fixed to respect
> floating-point evaluation in unstable cases, or the benchmark and upstream
> code needs to be fixed to be more stable.
>
>                                       -Dave
>

