[llvm] [RISCV] Use vsetvli instead of vlenb in Prologue/Epilogue (PR #113756)

Craig Topper via llvm-commits llvm-commits at lists.llvm.org
Mon Oct 28 12:29:42 PDT 2024


topperc wrote:

> @topperc
> 
> > > Although one thing to consider is that ooo implementations will need to predict vtype/vl, and this may fill up the predictors quicker.
> > 
> > 
> > Do you know of ooo implementations implementing a predictor?
> 
> Yes I think so. Stream Computing open-sourced an ooo implementation with CSR speculation on top of BOOM at RISC-V Summit China: https://github.com/riscv-stc/riscv-boom/tree/matrix
> 
> ![image](https://private-user-images.githubusercontent.com/69110542/380846846-2067fd13-7e3c-470b-9da3-7fcce7f45f96.png)
> 
> Their default configuration seems to have 8 entries for vconfig speculation: https://github.com/riscv-stc/riscv-boom/blob/8ccc5906f27d680ee9ef1b89f9a221da7b10f5df/src/main/scala/common/config-mixins.scala#L567C15-L567C30
> 
> I was not able to build it with verilator and contacted the author, who said they only support VCS, which I don't have access to. If someone here has a license, I can share the Dockerfile that got furthest through the verilator build.
> 
> Edit: actually, this might just be doing speculation, but that also requires keeping track of multiple vtypes.
> 
> I would hope there are a lot of proprietary cores with vtype speculation currently in development as well.

I thought you meant predicting the VL/VTYPE without waiting for the scalar instructions to compute the AVL. This looks like it just allows speculative execution across branches that might mispredict.
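
For context, here is a rough sketch of the kind of prologue sequence this change is about. The register choices, LMUL selection, and scaling factor below are illustrative assumptions, not the exact code the PR emits:

```asm
# Illustrative only: allocating stack space for (hypothetically) 2*vlenb bytes
# of RVV objects in a function prologue.

# Current style: read the VLENB CSR and scale it.
csrr    t0, vlenb                  # t0 = VLEN/8 bytes
slli    t0, t0, 1                  # t0 = 2 * vlenb
sub     sp, sp, t0                 # grow the stack by the RVV object size

# vsetvli style: with rs1 = x0 and a non-x0 rd, vl is set to VLMAX,
# so SEW=8/LMUL=2 yields 2*vlenb directly and the shift disappears.
vsetvli t0, zero, e8, m2, ta, ma   # t0 = VLMAX = 2 * VLEN/8
sub     sp, sp, t0
```

The relevance to the discussion above is that each such `vsetvli` writes vl/vtype, so emitting one in every prologue/epilogue adds state an out-of-order core may have to track or speculate on.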

https://github.com/llvm/llvm-project/pull/113756

