[LLVMdev] Help adding the Bullet physics sdk benchmark to the LLVM test suite?

David Greene dag at cray.com
Mon Jan 4 18:24:23 PST 2010


On Monday 04 January 2010 20:11, Erwin Coumans wrote:
> Hi Anton, and happy new year all,
>
> >>One question though: is it possible to "verify" the results of all
> >>the computations somehow?
>
> Good point, and there is no automated way currently, but we can work on
> that.
> Note that simulation suffers from the 'butterfly effect': the smallest
> change anywhere (CPU, compiler, etc.) diverges into totally different
> results after a while.

I haven't been following this thread, but this sounds like a typical
unstable-algorithm problem.  Are you always operating that close to
the tolerance level of the algorithm, or are there sets of inputs
that behave reasonably?

If not, the code doesn't seem very useful to me.  How could anyone rely
on the results, ever?
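
For the curious, here's a toy C++ illustration of the sensitivity Erwin
describes -- not Bullet code; the logistic map just stands in for one step
of a chaotic simulation.  The two states start one ulp apart and end up
unrelated:

  #include <cmath>
  #include <cstdio>

  int main() {
    double a = 0.5;
    double b = std::nextafter(0.5, 1.0);  // one ulp away from a
    for (int step = 0; step < 100; ++step) {
      a = 3.9 * a * (1.0 - a);  // chaotic regime of the logistic map
      b = 3.9 * b * (1.0 - b);
    }
    std::printf("a = %.17g\nb = %.17g\n", a, b);  // wildly different
    return 0;
  }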

In the worst case, you could experiment with different optimization levels
and/or pass combinations to find something that is reasonably stable.

Perhaps LLVM needs a flag to disable sometimes-undesirable transformations,
like anything involving floating-point calculations.  Compiler changes should
not affect codes so horribly unless the user tells them to.  :)  The Cray
compiler provides various -Ofp levels (-Ofp0, -Ofp1, etc.) for this very
reason.
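
As a concrete example of why such a flag matters: reassociating a sum,
which an optimizer is otherwise free to do for integers, changes a
floating-point result outright:

  #include <cstdio>

  int main() {
    // Floating-point addition is not associative, so a pass that
    // reassociates operands can legitimately change the answer.
    float big = 1.0e8f, tiny = 1.0f;
    float left  = (big + tiny) - big;  // tiny is absorbed: prints 0
    float right = (big - big) + tiny;  // prints 1
    std::printf("left = %g, right = %g\n", left, right);
    return 0;
  }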

> There are a few ways of verification I can think of:
>
> 1) verifying by adding unit tests for all stages in the physics pipeline
> (broadphase acceleration structures, closest point computation, constraint
> solver)
> Given known input and output we can check if the solution is within a
> certain tolerance.

At each stage?  That's reasonable.  It could also help identify the parts of
the pipeline that are unstable (if not already known).
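
A minimal sketch of what a per-stage tolerance check could look like --
the geometry routine here is a self-contained stand-in, not Bullet's
actual API: known input, known reference output, compared within a
tolerance instead of bit-exact equality.

  #include <algorithm>
  #include <cassert>
  #include <cmath>

  struct Vec3 { double x, y, z; };

  // Stand-in for one pipeline stage (closest-point computation).
  Vec3 closestPointOnSegment(Vec3 p, Vec3 a, Vec3 b) {
    Vec3 ab{b.x - a.x, b.y - a.y, b.z - a.z};
    Vec3 ap{p.x - a.x, p.y - a.y, p.z - a.z};
    double t = (ap.x * ab.x + ap.y * ab.y + ap.z * ab.z) /
               (ab.x * ab.x + ab.y * ab.y + ab.z * ab.z);
    t = std::min(1.0, std::max(0.0, t));  // clamp to the segment
    return Vec3{a.x + t * ab.x, a.y + t * ab.y, a.z + t * ab.z};
  }

  bool nearlyEqual(double u, double v, double tol) {
    return std::fabs(u - v) <= tol;
  }

  int main() {
    Vec3 q = closestPointOnSegment({0, 1, 0}, {-1, 0, 0}, {1, 0, 0});
    // Reference answer is the origin; accept anything within 1e-9.
    assert(nearlyEqual(q.x, 0.0, 1e-9) &&
           nearlyEqual(q.y, 0.0, 1e-9) &&
           nearlyEqual(q.z, 0.0, 1e-9));
    return 0;
  }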

> 2) using the benchmark simulation and verifying the results frame by frame
> and check for unusual behaviour

Sounds expensive.
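
A cheaper variant might be to assert per-frame physical invariants rather
than compare full results against a golden trajectory, which the butterfly
effect makes fragile anyway.  A hypothetical sketch (these names are made
up, not Bullet's API):

  #include <cmath>
  #include <vector>

  struct Body { double vx, vy, vz, mass; };

  // Cheap per-frame sanity check: finite state, bounded kinetic energy.
  bool frameLooksSane(const std::vector<Body>& bodies, double maxKE) {
    double ke = 0.0;
    for (const Body& b : bodies) {
      double v2 = b.vx * b.vx + b.vy * b.vy + b.vz * b.vz;
      if (!std::isfinite(v2)) return false;  // NaN/Inf: blow-up
      ke += 0.5 * b.mass * v2;
    }
    return ke <= maxKE;  // energy should stay bounded in a stable sim
  }

  int main() {
    std::vector<Body> bodies{{1.0, 0.0, 0.0, 2.0}};
    return frameLooksSane(bodies, /*maxKE=*/100.0) ? 0 : 1;
  }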

> 3) modify the benchmark so that it is easier to test the end result, even
> though it might be different.

We really don't want to do this.  Either LLVM needs to be fixed to respect
floating-point evaluation in unstable cases, or the benchmark and upstream
code need to be fixed to be more stable.

                                       -Dave


