[cfe-dev] [LLVMdev] [RFC] Module Flags Metadata
hfinkel at anl.gov
Thu Jan 26 15:02:04 PST 2012
On Thu, 2012-01-26 at 14:10 -0800, Dan Gohman wrote:
> On Jan 26, 2012, at 12:54 PM, Devang Patel wrote:
> > On Jan 26, 2012, at 11:15 AM, Dan Gohman wrote:
> >> or what optimizers must do to preserve it.
> > The number one reason behind metadata is to have a mechanism for tracking values while remaining completely transparent to the optimizer. If you want a guarantee from the optimizer that it will preserve certain semantics about the way metadata is used (e.g., to describe the range of a value), then metadata is not an appropriate mechanism.
> If the optimizer makes no guarantees whatsoever, then metadata is
> not appropriate for anything.
> For example, the metadata used by TBAA today is not safe. Imagine an
> optimization pass which takes two allocas that are used in
> non-overlapping regions and rewrites all uses of one to use the other,
> to reduce the stack size. By LLVM IR rules alone, this would seem to
> be a valid semantics-preserving transformation. But if the loads
> and stores for the two allocas have different TBAA type tags, the
> tags will say NoAlias for memory references that do in fact alias.
> The only reason why TBAA doesn't have a problem with this today is
> that LLVM doesn't happen to implement optimizations which break it
> yet. But there are no guarantees.
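Concretely, the hazard Dan describes might look something like this (a hypothetical sketch in current IR syntax, with invented names and type descriptors):

```llvm
define i32 @f() {
entry:
  %x = alloca i32                         ; used only in an early region
  %y = alloca float                       ; used only in a later region
  store i32 1, i32* %x, !tbaa !1
  %a = load i32, i32* %x, !tbaa !1
  ; ... %x is dead from here on ...
  store float 2.0, float* %y, !tbaa !2
  ; A stack-slot-sharing pass could legally rewrite all uses of %y to
  ; reuse %x's slot.  The IR remains well-formed, but accesses tagged
  ; !1 and !2 now touch the same memory, so TBAA would wrongly report
  ; NoAlias for references that do in fact alias.
  ret i32 %a
}

!0 = !{!"tbaa root"}
!1 = !{!"int", !0}
!2 = !{!"float", !0}
```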
On that thought, is there any way that my autovectorization pass could
invalidate the TBAA metadata (in a harmful way) when it fuses two
memory-adjacent loads or stores? Currently, it performs this fusion by
first cloning the first instruction (which I think will pick up its
metadata), then changing the instruction's type and operands as
necessary. This fusion will only take place if the two instructions have
the same LLVM type, but currently there is no check of the associated
metadata.
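For reference, the fusion in question would look roughly like this (a hypothetical sketch, not the pass's actual output; pointer names and tags are invented). The cloned instruction inherits the first load's metadata, which may be wrong for the half that carried a different tag:

```llvm
; before fusion: two adjacent loads with different TBAA tags
%p0 = getelementptr i32, i32* %base, i64 0
%p1 = getelementptr i32, i32* %base, i64 1
%v0 = load i32, i32* %p0, !tbaa !1
%v1 = load i32, i32* %p1, !tbaa !2

; after fusion: the <2 x i32> load keeps only the first load's !tbaa
; tag (!1), silently dropping !2 for the second element's memory
%vp  = bitcast i32* %p0 to <2 x i32>*
%vec = load <2 x i32>, <2 x i32>* %vp, !tbaa !1
```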
Leadership Computing Facility
Argonne National Laboratory