[PATCH] D15722: [WIP][Polly] SSA Codegen

Johannes Doerfert via llvm-commits llvm-commits at lists.llvm.org
Tue Jan 26 01:03:52 PST 2016

jdoerfert added a comment.

In http://reviews.llvm.org/D15722#334871, @grosser wrote:

> 1. (you already know about this) I am concerned that this approach might be more complex and/or may have more corner cases we need to worry about. Aditya mentioned that in their implementation in graphite, they fail to generate code in various situations. My understanding is that you are not aware of any such limitations in your code and you already checked several of the examples Aditya provided. It would probably be good for Aditya and Sebastian to have a look at this patch, as they have experience with SSA code generation and might be able to provide you with some test cases.

Test cases would be good, yes. And indeed, I am not aware of any general limitations or bugs.

> 2. It is interesting to see that this patch actually removes a lot of code. So maybe I am wrong about point 1)

In my opinion it is not (so much) about the amount of code but about its complexity. Most of the new code is rather straightforward (e.g., everything in ScopHelper.cpp, the merging after conditionals and after loops, and the code that Polly produces).
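To make the "merging after conditionals" part concrete, here is a minimal toy sketch (not Polly's actual code; the function name and data shapes are illustrative only) of the SSA rule it implements: a value that reaches a merge block from several predecessors must be joined by a PHI node with one operand per incoming edge, unless all incoming values agree.

```python
# Illustrative only: shows the SSA merge rule that Polly's codegen
# applies after a conditional, not Polly's real implementation.

def merge_after_conditional(incoming):
    """incoming: list of (predecessor_block, value) pairs reaching the
    merge block.  If every edge carries the same value, no PHI is
    needed; otherwise a PHI records one value per predecessor."""
    values = {v for _, v in incoming}
    if len(values) == 1:
        return values.pop()        # single reaching value, no PHI
    return ("phi", incoming)       # PHI with one operand per edge

# Example: %x is 1 on the then-edge and 2 on the else-edge, so the
# merge block needs a PHI; a value that is the same on both edges
# flows through unchanged.
merge_after_conditional([("then", "x1"), ("else", "x2")])
```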

> 3. An alternative approach might be to just call PromoteMemToRegisters on the scalar allocs we introduced. This would allow us to keep the same mental model, but to still generate good code.

At some point we want to run the vectorizer right after Polly. Why require a post-transformation cleanup if generating good code directly is not harder than what we have at the moment?
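For readers less familiar with the alternative being discussed: PromoteMemToReg (the mem2reg utility) would take the demoted scalar and rebuild SSA form afterwards. A hypothetical IR fragment, roughly what the two approaches produce (typed-pointer syntax of the LLVM of that era; the function and value names are made up):

```llvm
; With scalar allocas (what the current approach generates):
define i32 @f(i1 %c) {
entry:
  %s = alloca i32              ; scalar demoted to memory
  store i32 0, i32* %s
  br i1 %c, label %then, label %merge
then:
  store i32 1, i32* %s
  br label %merge
merge:
  %v = load i32, i32* %s       ; reload after the conditional
  ret i32 %v
}

; After mem2reg -- or directly with SSA codegen -- the alloca,
; stores, and load collapse into a single PHI:
; merge:
;   %v = phi i32 [ 0, %entry ], [ 1, %then ]
;   ret i32 %v
```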

> Some questions I have before I get into reviewing this code:


> a) Does your patch need to make any effort to eliminate dead PHI nodes? Will we have a lot of PHI nodes that might need to be dead-code eliminated after your patch? More precisely, would we now need to run a phase of DCE after Polly instead of Mem2Reg.

It does produce dead PHI nodes, though, judging from my experiments, not many. Additionally, these dead nodes should not harm vectorization or anything else, and they will be removed eventually. I did not implement an AST SSA generation that avoids dead PHIs (e.g., http://pp.info.uni-karlsruhe.de/uploads/publikationen/braun13cc.pdf) since I wanted to keep it simple. In our special case one could easily remember all placed PHIs and use the number of users plus the operands to remove all of them after code generation in an efficient way (without another pass).
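The cleanup described above can be sketched as a small worklist algorithm over the remembered PHIs: delete every PHI with no users, and when deleting one makes an operand PHI user-free, delete that one too. This is an illustrative toy model (the `Phi` class and function names are made up, not Polly/LLVM API), assuming def-use links are maintained:

```python
# Toy sketch of post-codegen dead-PHI removal, assuming we remembered
# every PHI we placed and that use lists are kept up to date.
# Illustrative only -- not the actual Polly/LLVM implementation.

class Phi:
    def __init__(self, name):
        self.name = name
        self.operands = []   # values this PHI reads
        self.users = []      # instructions that read this PHI

def remove_dead_phis(placed_phis):
    """Iteratively delete PHIs with no remaining users; deleting one
    may render its operand PHIs dead as well.  No separate DCE pass."""
    worklist = [p for p in placed_phis if not p.users]
    dead = set()
    while worklist:
        phi = worklist.pop()
        if phi in dead:
            continue
        dead.add(phi)
        for op in phi.operands:
            if isinstance(op, Phi):
                op.users.remove(phi)     # drop this use of the operand
                if not op.users:
                    worklist.append(op)  # operand PHI became dead too
    return [p for p in placed_phis if p not in dead]

# Example: b uses a, but nothing uses b, so both are removed.
a, b = Phi("a"), Phi("b")
b.operands.append(a)
a.users.append(b)
remove_dead_phis([a, b])
```

Each PHI is visited at most once, so the cleanup is linear in the number of placed PHIs plus their operands.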

> b) Are there any remaining issues or inherent limitations you are aware of in comparison to the old approach?

Not that I am aware of.

> c) I know you are still running performance numbers, which will be interesting to look at. Otherwise, is this patch ready to be reviewed?

Performance numbers are back. Both runs were on parkas2 and are almost identical, except for two small compile-time improvements.
