> On Nov 12, 2021, at 15:03, Arthur Eubanks via llvm-dev <llvm-dev@lists.llvm.org> wrote:
>
> Currently in the LLVM IR optimization pipelines we pass around an
> OptimizationLevel, which consists of a speedup level and a size level
> (e.g. -O1 is {1, 0}, -Oz is {2, 2}). We use the size level to turn
> some passes on or off and to determine inliner thresholds.
>
> When attempting to add support for -Os/-Oz in
> https://reviews.llvm.org/D113738, I got some pushback saying that we
> should be relying on the function attributes minsize and optsize. The
> logical extension of that is to completely remove the size level from
> OptimizationLevel and rely on frontends to set minsize/optsize for
> -Os/-Oz. Passes that are disabled at -Os/-Oz can check those
> attributes instead.
>
> There are some tests (e.g. inline-optsize.ll) that check that if we
> have both optsize and -Oz, the lower inlining threshold (-Oz in this
> case) wins, but perhaps we can revisit that and calculate inlining
> thresholds purely from the function attributes.
>
> Any thoughts?

I do not believe in encoding optimization levels in the IR. The
optimization level is an option for the machinery of the compiler, not
part of the semantics of the program.

-Matt
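
For concreteness, a minimal sketch of what the attribute-driven approach
could look like inside a pass. It assumes only LLVM's existing
Function::hasMinSize()/hasOptSize() accessors; the helper name and the
threshold constants are illustrative placeholders, not LLVM's actual
tuned inliner values:

    #include "llvm/IR/Function.h"

    // Sketch: derive size behavior from per-function attributes rather
    // than a pipeline-wide size level. Under this scheme a frontend
    // building at -Oz emits e.g. "define void @f() minsize optsize",
    // and passes query the function instead of the OptimizationLevel.
    // Threshold numbers below are placeholders.
    static int inlineThresholdFor(const llvm::Function &F) {
      if (F.hasMinSize())   // frontend compiled this function at -Oz
        return 25;
      if (F.hasOptSize())   // frontend compiled this function at -Os
        return 50;
      return 225;           // speed-oriented default
    }

Note that hasMinSize() must be checked first: hasOptSize() also returns
true for minsize functions, so the more aggressive size threshold would
otherwise never be selected.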