<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/91788">91788</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
Improve cost modeling for RISC-V Machine Outliner
</td>
</tr>
<tr>
<th>Labels</th>
<td>
new issue
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
ilovepi
</td>
</tr>
</table>
<pre>
This is a work item identified in the gap analysis done in https://github.com/llvm/llvm-project/issues/89822.
The ARM32 and AArch64 machine outliners make many adjustments to their cost modeling depending on the outlining task. For instance, the AArch64 outliner appears to increase the weight of tail-callable outlined regions and to adjust the weights for various other call patterns.
RISC-V should probably just copy these as a starting point and then explore the impact of different values on some sufficiently large codebases. Clang and llvm-test-suite are the obvious starting points, but Fuchsia, Chrome, and the Linux kernel are also reasonable projects to include in a round of evaluation.
</pre>