[Mlir-commits] [mlir] [MLIR][DLTI] Make DLTI queries visit all ancestors/respect nested scopes (PR #115043)
Oleksandr Alex Zinenko
llvmlistbot at llvm.org
Tue Nov 19 15:16:44 PST 2024
ftynse wrote:
I have several points here; take them as advisory, since you are the ones driving this subsystem at this point anyway.
1. The example from the [second post](https://github.com/llvm/llvm-project/pull/115043#issuecomment-2458008925) in this conversation affords a different solution: shadowing a DLTI attribute with a non-DLTI attribute is invalid and should be checked by the verifier. I'm not insisting on this, but I wouldn't expect such shadowing in legitimate use cases and would rather write it off as incorrect IR construction, at which point it becomes preferable to warn the user.
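A toy Python model may make the verifier idea concrete. This is not the actual MLIR API; the scope representation, the `DLTI_KEY` marker, and the function name are all invented for illustration. Each scope is modeled as an attribute dictionary, and the check rejects the case where an inner scope shadows an enclosing DLTI spec with a non-DLTI value:

```python
DLTI_KEY = "dlti.dl_spec"  # hypothetical marker for the DLTI spec attribute


def check_no_invalid_shadowing(scopes):
    """scopes: list of attribute dicts, outermost scope first.

    Returns an error string for the first invalid shadowing, else None.
    A dict value stands in for "a proper DLTI attribute"; anything else
    stands in for a non-DLTI attribute reusing the same key.
    """
    seen_dlti = False
    for depth, attrs in enumerate(scopes):
        value = attrs.get(DLTI_KEY)
        if value is None:
            continue
        is_dlti = isinstance(value, dict)  # stand-in for a real type check
        if seen_dlti and not is_dlti:
            return (f"scope {depth}: non-DLTI attribute shadows "
                    f"an enclosing DLTI spec")
        if is_dlti:
            seen_dlti = True
    return None
```

With this check in place, a query walking up from the innermost scope can assume that any `DLTI_KEY` it encounters is a well-formed spec.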
2. The initial design considered the case of nested layouts by providing a hook to merge layout specs: https://github.com/llvm/llvm-project/blob/3c8818cf2deaa050817ecec1c99cf939295feced/mlir/include/mlir/Interfaces/DataLayoutInterfaces.td#L123. We could defer to individual entries to indicate whether they are "overridable" in certain contexts. I was thinking of, e.g., alignment, where nested objects shouldn't be allowed to simply relax alignment requirements. This is in line with reporting an error in the case above. I recall there being _some_ verifier, but I'm not sure whether it was completely implemented or whether it required https://discourse.llvm.org/t/connecting-op-verifiers-to-analyses/3553 to proceed.
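A minimal sketch of the non-relaxable-entry idea, again as a Python model rather than the real hook (the actual mechanism lives on the spec interface linked above; the function name and dict-based spec representation here are made up). Merging a nested spec into its enclosing one may tighten an alignment but a relaxation is rejected:

```python
def combine_specs(parent, child):
    """Merge `child` (inner scope) into `parent` (outer scope).

    Specs are modeled as dicts mapping type keys to integer alignments
    in bytes. A nested spec may add entries or increase an alignment,
    but relaxing an inherited alignment raises an error, mirroring the
    "non-overridable in this direction" behavior discussed above.
    """
    combined = dict(parent)
    for key, align in child.items():
        if key in combined and align < combined[key]:
            raise ValueError(f"{key}: nested spec relaxes alignment "
                             f"{combined[key]} -> {align}")
        combined[key] = align
    return combined
```

For example, `combine_specs({"i32": 4}, {"i32": 8})` tightens the entry, whereas the reverse direction raises.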
3. The intention of that design was to allow specs to be propagated _down_ once and merged when constructing a `DataLayoutAnalysis` object that would then serve repeated queries. Mutating the IR invalidates the analysis, as is customary. I suppose this could coexist, if properly documented, with a direct interface usable for one-off queries that walk up.
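The two query styles can be contrasted in a toy model (class and function names are invented; the real `DataLayoutAnalysis` is a C++ analysis). One precomputes merged specs top-down once and answers repeated queries from its cache; the other recomputes the merged spec on every call by walking ancestors:

```python
class Scope:
    """A node in a scope tree, optionally carrying a layout spec."""
    def __init__(self, spec=None, parent=None):
        self.spec = spec or {}   # key -> value; inner entries win
        self.parent = parent


class DataLayoutAnalysisModel:
    """Propagates specs down once; must be rebuilt if the IR mutates.

    `scopes` must list parents before their children so that each
    scope's inherited spec is already cached when it is visited.
    """
    def __init__(self, scopes):
        self.cache = {}
        for scope in scopes:
            inherited = self.cache.get(scope.parent, {})
            self.cache[scope] = {**inherited, **scope.spec}

    def query(self, scope):
        return self.cache[scope]


def walk_up_query(scope):
    """One-off query: walk ancestors and merge, innermost entries win."""
    chain = []
    while scope is not None:
        chain.append(scope.spec)
        scope = scope.parent
    merged = {}
    for spec in reversed(chain):  # apply outermost first
        merged.update(spec)
    return merged
```

Both return the same answer for unchanged IR; the trade-off is the up-front pass plus invalidation bookkeeping versus an ancestor walk on every query.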
4. My personal opinion is that the caching mechanism in the data layout object was implemented prematurely, but we reached consensus on its theoretical necessity.
My recommendation would be to collect some performance data, if possible, to support or refute the need for the cache. Otherwise, we can fall back on common knowledge from similar problems in MLIR or other compilers to inform the decision about caching. I don't have a strong intuition there beyond the observation that this infrastructure is under-utilized, so I'd rather go for an implementation that is not embarrassingly bad without prematurely optimizing it.
https://github.com/llvm/llvm-project/pull/115043