[LLVMdev] function call optimization

kewuzhang kewu.zhang at amd.com
Thu Apr 2 07:34:47 PDT 2015


Dear All, 

I have a function that looks something like this:
“
define float @ir.test.f32(float %arg1, float %arg2, i1 zeroext %flag) #6 {
  br i1 %flag, label %1, label %3

; <label>:1                                       ; preds = %0
  %2 = tail call float @__my_test_1_f32(float %arg1, float %arg2) #8
  ret float %2

; <label>:3                                       ; preds = %0
  %4 = tail call float @__my_test_2_f32(float %arg1, float %arg2, i1 zeroext %flag) #8
  ret float %4
}
“

and my call sites will always look like
 "float @ir.test.f32(float %arg1, float %arg2, i1 true)"
or "float @ir.test.f32(float %arg1, float %arg2, i1 false)", that is, the flag is always known at compile time.
So ideally I want something like:
"define float @ir.test.f32.true(float %arg1, float %arg2) #6 {
  %1 = tail call float @__my_test_1_f32(float %arg1, float %arg2) #8
  ret float %1
}
"
and
"define float @ir.test.f32.false(float %arg1, float %arg2) #6 {
  %1 = tail call float @__my_test_2_f32(float %arg1, float %arg2) #8
  ret float %1
}
"

so that the dynamic branch is avoided.
I feel that the optimizer should be able to do this for me. Is that possible, and how?
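To illustrate what I am hoping for: if @ir.test.f32 were marked alwaysinline, I would expect that after the inliner substitutes a constant flag at a call site, SimplifyCFG folds the resulting "br i1 true" and only the direct call survives. A sketch of the result I expect (@caller is a made-up name, not from my actual code):

```llvm
; Call site before optimization:
;   %r = call float @ir.test.f32(float %a, float %b, i1 true)
; Expected result after inlining and branch folding:
define float @caller(float %a, float %b) {
  %r = tail call float @__my_test_1_f32(float %a, float %b)
  ret float %r
}
```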

best
kevin

