[llvm] [DirectX] adding support in obj2yaml and yaml2obj to root constants (PR #127840)
Chris B via llvm-commits
llvm-commits at lists.llvm.org
Fri Feb 21 14:22:54 PST 2025
================
@@ -13,11 +13,47 @@ using namespace llvm;
using namespace llvm::mcdxbc;
void RootSignatureDesc::write(raw_ostream &OS) const {
+ // The root signature header in a DXContainer has six uint32_t values.
+ const uint32_t HeaderSize = 24;
+ const uint32_t ParameterByteSize = Parameters.size_in_bytes();
+ const uint32_t NumParameters = Parameters.size();
+ const uint32_t Zero = 0;
- support::endian::write(OS, Version, llvm::endianness::little);
- support::endian::write(OS, NumParameters, llvm::endianness::little);
- support::endian::write(OS, RootParametersOffset, llvm::endianness::little);
- support::endian::write(OS, NumStaticSamplers, llvm::endianness::little);
- support::endian::write(OS, StaticSamplersOffset, llvm::endianness::little);
- support::endian::write(OS, Flags, llvm::endianness::little);
+ // Write the header information.
+ support::endian::write(OS, Header.Version, llvm::endianness::little);
+ support::endian::write(OS, NumParameters, llvm::endianness::little);
+ support::endian::write(OS, HeaderSize, llvm::endianness::little);
+
+ // Static samplers are not yet implemented.
+ support::endian::write(OS, Zero, llvm::endianness::little);
+ support::endian::write(OS, ParameterByteSize + HeaderSize,
+ llvm::endianness::little);
+
+ support::endian::write(OS, Header.Flags, llvm::endianness::little);
+
+ uint32_t ParamsOffset =
+ HeaderSize + (3 * sizeof(uint32_t) * Parameters.size());
+ for (const dxbc::RootParameter &P : Parameters) {
+ support::endian::write(OS, P.ParameterType, llvm::endianness::little);
+ support::endian::write(OS, P.ShaderVisibility, llvm::endianness::little);
+ support::endian::write(OS, ParamsOffset, llvm::endianness::little);
+
+ // Advance past this parameter's value section: sizeof(dxbc::RootParameter)
+ // minus the ParameterType and ShaderVisibility fields already written above.
+ ParamsOffset += sizeof(dxbc::RootParameter) - 2 * sizeof(uint32_t);
----------------
llvm-beanz wrote:
This clearly needs some better documentation in the code. The way this is being computed is quite confusing and I'm not sure this is how I would do it.
I would probably instead compute the first offset (as you do), then just add `n * sizeof(uint32_t)` on each iteration of the loop, where `n` is the number of dwords encoded in the value section.
The `sizeof` in this expression makes it odd and confusing.
https://github.com/llvm/llvm-project/pull/127840
More information about the llvm-commits mailing list