[llvm-dev] Should MemorySSA be updated when load/store address changes ?
Momchil Velikov via llvm-dev
llvm-dev at lists.llvm.org
Wed Jan 5 07:56:45 PST 2022
This is a test case from `Transforms/GVN/PRE/pre-after-rle.ll`.
Before GVN runs, the MemorySSA looks like this:
define void @fn1(i32** noalias %start, i32* %width, i32 %h) {
entry:
; 1 = MemoryDef(liveOnEntry)
%call = tail call noalias i8* @malloc(i64 1024)
%call.cast = bitcast i8* %call to i32*
; 2 = MemoryDef(1)
store i32* %call.cast, i32** %start, align 8
br label %preheader
preheader: ; preds = %body, %entry
; 5 = MemoryPhi({entry,2},{body,3})
%cmp = icmp slt i32 1, %h
br i1 %cmp, label %body, label %exit
body: ; preds = %body, %preheader
; 4 = MemoryPhi({preheader,5},{body,3})
%j = phi i32 [ 0, %preheader ], [ %j.next, %body ]
; MemoryUse(2) MayAlias
%s = load i32*, i32** %start, align 8
%idx = getelementptr inbounds i32, i32* %s, i64 0
; 3 = MemoryDef(4)
store i32 0, i32* %idx, align 4
%j.next = add nuw nsw i32 %j, 1
; MemoryUse(3) MayAlias
%w = load i32, i32* %width, align 8
%cmp3 = icmp slt i32 %j.next, %w
br i1 %cmp3, label %body, label %preheader
exit: ; preds = %preheader
ret void
}
Note that `%w = load ...` is a `MemoryUse` of `store i32 0, i32* %idx`.
After GVN has removed the fully redundant load `%s = ...`, the
MemorySSA looks like:
define void @fn1(i32** noalias %start, i32* %width, i32 %h) {
entry:
; 1 = MemoryDef(liveOnEntry)
%call = tail call noalias i8* @malloc(i64 1024)
%call.cast = bitcast i8* %call to i32*
; 2 = MemoryDef(1)
store i32* %call.cast, i32** %start, align 8
br label %preheader
preheader: ; preds = %body, %entry
; 5 = MemoryPhi({entry,2},{body,3})
%cmp = icmp slt i32 1, %h
br i1 %cmp, label %body, label %exit
body: ; preds = %body, %preheader
; 4 = MemoryPhi({preheader,5},{body,3})
%j = phi i32 [ 0, %preheader ], [ %j.next, %body ]
; 3 = MemoryDef(4)
store i32 0, i32* %call.cast, align 4
%j.next = add nuw nsw i32 %j, 1
; MemoryUse(3) MayAlias
%w = load i32, i32* %width, align 8
%cmp3 = icmp slt i32 %j.next, %w
br i1 %cmp3, label %body, label %preheader
exit: ; preds = %preheader
ret void
}
The `%w = load ...` is still a use of the `store i32 0, i32* ...`.
However, the two locations can now be determined not to alias, and the
MemorySSA walker in fact skips `3 = MemoryDef(4)` and returns
`liveOnEntry` when queried for the clobber of `%w = load ...`.
So far so good, but `%w = load ...` is then not among the uses of
`liveOnEntry`.
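For reference, this is roughly how the discrepancy shows up when queried
programmatically (a minimal C++ sketch; `MSSA` is assumed to be the
MemorySSA analysis for `@fn1`, `LoadW` the `%w = load ...` instruction,
and the function name is mine):

#include "llvm/Analysis/MemorySSA.h"
#include <cassert>

using namespace llvm;

// Compare the recorded defining access of the load with what the walker
// computes for it.
static void checkLoadW(MemorySSA &MSSA, Instruction *LoadW) {
  auto *Use = cast<MemoryUse>(MSSA.getMemoryAccess(LoadW));

  // Still reports `3 = MemoryDef(4)` (the store) as the defining access.
  MemoryAccess *Def = Use->getDefiningAccess();

  // The walker, however, sees that the locations do not alias, walks past
  // the store, and returns liveOnEntry.
  MemoryAccess *Clobber = MSSA.getWalker()->getClobberingMemoryAccess(LoadW);

  assert(MSSA.isLiveOnEntryDef(Clobber) && !MSSA.isLiveOnEntryDef(Def) &&
         "the use-def edge and the walker disagree");
}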
This seems to contradict this part from the MemorySSA docs:
"Unlike other partitioned forms, LLVM’s MemorySSA does make one
useful guarantee - all loads are optimized to point at the thing
that actually clobbers them. This gives some nice properties. For
example, for a given store, you can find all loads actually
clobbered by that store by walking the immediate uses of the
store."
Is this the intended behaviour? Should MemorySSAUpdater have an update
API for when the address operands of loads/stores change?
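For example, one purely hypothetical shape such an update could take
(nothing below is an existing API: the member name is invented, it is
written as if it were a new MemorySSAUpdater member, and it assumes the
updater may call MemoryUseOrDef::setDefiningAccess, which I believe is not
part of the public interface today):

#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/MemorySSA.h"
#include "llvm/Analysis/MemorySSAUpdater.h"

using namespace llvm;

// Hypothetical entry point a pass (e.g. GVN) would call after rewriting
// the pointer operand of a load or store instruction I.
void MemorySSAUpdater::ptrOperandChanged(Instruction *I) {  // hypothetical
  MemoryUseOrDef *Acc = MSSA->getMemoryAccess(I);
  if (auto *MU = dyn_cast<MemoryUse>(Acc)) {
    // A load's own clobber may have changed: recompute it and rewire the
    // use-def edge, restoring the documented invariant.
    MU->setDefiningAccess(MSSA->getWalker()->getClobberingMemoryAccess(I));
    return;
  }
  // For a store (a MemoryDef), as in the example above, the loads
  // currently using it may no longer be clobbered by it and would need
  // re-optimizing in the same way.  Collect them first, since rewiring
  // removes them from Acc's user list.
  SmallVector<MemoryUse *, 8> Uses;
  for (User *U : Acc->users())
    if (auto *MU = dyn_cast<MemoryUse>(U))
      Uses.push_back(MU);
  for (MemoryUse *MU : Uses)
    MU->setDefiningAccess(
        MSSA->getWalker()->getClobberingMemoryAccess(MU->getMemoryInst()));
}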
~chill
--
Compiler scrub, Arm