[LLVMdev] PTX target for LLVM?

David Greene dag at cray.com
Thu Apr 1 09:25:00 PDT 2010

On Thursday 01 April 2010 06:30:46 Torvald Riegel wrote:

> >  No! Just put a repository up (or make a tarball)! This is open source.
> >  Code is never perfect, so just put it out there with the same BSD-style
> >  license as LLVM. Every programmer can read code, even bad code,
> > especially when there's a research paper or thesis to go along with it.
> > Every delay in releasing code just slows down the progress of the the
> > world. That's the only "benefit": slowing the progress of other
> > researchers. If the code is needed to replicate your research, for the
> > love of Turing *publish it alongside the research*.
> It's not that easy, unfortunately. To get funding for research,
> publications still count more than whether your code is used by others or
> not. 

[NOTE: This is MY opinion and not reflective of what people at Cray may
or may not think nor does it in any way imply a company-wide position.]

This is actually part of a larger problem within the computing research
community.  Unlike the hard sciences, we don't require reproducibility
of results.  This leads to endless hours of researchers fruitlessly trying
to improve upon results that they can't even verify in the first place.
The result is haphazard guesses and lower quality publications which
are of little use in the real world.

I've talked to high-up engineers at IBM who refuse to believe any research
publication until they've verified it themselves with their own simulators,
etc.  Something like 99% of the time they find results to be meaningless
or not reproducible.

Now, the ideas are the most valuable part of a publication, but it is
important to be able to validate the idea.  I disagree with the current
practice of not accepting papers that don't show a 10% improvement in
whatever area is under consideration.  I believe we should also publish
papers that show negative results.  This would save researchers an enormous
amount of time.  Moreover, ideas that were wrong 20 years ago may very
well be right today.

The combination of the current "10% or better" practice with no requirement
for reproducibility means there's very little incentive to release tools and
code used for the experiments.  In fact, there is a disincentive, as we wouldn't
want some precocious student to demonstrate that the experiment was flawed.  This
points to another problem: researchers view challenges as personal threats
rather than a chance to advance the state of the art, and students are
encouraged to combatively challenge published research rather than work with
the original publishers to improve it.

We need an overhaul of the system.
