<div dir="ltr"><div>Hi David,</div><div><br></div><div>I like your idea of providing a framework to describe the cache hierarchy/prefetchers etc.</div><div>I can definitely think of a couple of ways to reuse that bit of information in llvm-mca.</div><div><br></div><div>I have a few questions about the general design.</div><div>It looks like in your model, you don't describe whether a cache is inclusive or exclusive. I am not sure if that is useful for the algorithms that you have in mind. However, in some processors, the LLC (last level cache) is exclusive, and doesn't include lines from the lower levels. You could potentially have a mix of inclusive/exclusive shared/private caches; you may want to model those aspects.</div><div><br></div><div>When you mention "write combining", you do that in the context of streaming buffers used by nontemporal operations.</div><div>On x86, a write combining buffer allows temporal adjacent stores to be combined together. Those stores would bypass the cache, and committed together as a single store transaction. Write combining may fail for a number of reasons; for example: there may be alignment/size requirements; stores are not allowed to overlap; etc. Describing all those constraints could be problematic (and maybe outside of the scope of what you want to do). I guess, it is unclear at this stage (at least to me) how much information is required in practice.</div><div><br></div><div>Ideally, it would be nice to have the concept of "memory type", and map memory types to resources/memory consistency models. Not sure if there is already a way to do that mapping, nor if it would improve your existing framework. In theory, you could add the ability to describe constraints/resources for memory types, and then have users define how memory operations map to different types. That information would then drive the algorithm/cost model that computes resource allocation/schedule. However, I may be thinking too much about possible use cases ;-).<br></div><div><br></div><div>That being said, I like having extra information to feed to the compiler (and llvm-mca :-)). Let see what other people think about it.</div><div><br></div><div>Thanks!</div><div>Andrea<br></div></div><br><div class="gmail_quote"><div dir="ltr">On Mon, Nov 5, 2018 at 10:24 PM Renato Golin via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Mon, 5 Nov 2018 at 19:04, David Greene <<a href="mailto:dag@cray.com" target="_blank">dag@cray.com</a>> wrote:<br>
On Mon, Nov 5, 2018 at 10:24 PM Renato Golin via llvm-dev <llvm-dev@lists.llvm.org> wrote:
> On Mon, 5 Nov 2018 at 19:04, David Greene <dag@cray.com> wrote:
> > I guess it would be up to the people interested in a particular target
> > to figure out a reasonable, maintainable way to manage models for
> > possibly many subtargets. This proposal is about providing
> > infrastructure to allow models to be created with not too much effort.
> > It doesn't say anything about what models for a particular
> > target/subtarget should look like. :)
>
> Exactly! :)
>
> --
> cheers,
> --renato