[PATCH] D157497: feat: Migrate isArch16Bit

Alex Langford via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Tue Aug 22 11:38:05 PDT 2023


bulbazord requested changes to this revision.
bulbazord added a comment.

In D157497#4592330 <https://reviews.llvm.org/D157497#4592330>, @Pivnoy wrote:

> At the moment, the TargetParser architecture is not easily extensible. This complicates the addition of new architectures, operating systems, and so on.
> In the main discussion, it was proposed to redesign the architecture of this module to make it easier to add new architectures and the like.
> The main idea of the rework was to turn Triple into a plain data class and to build a set of component interfaces on top of it; implementations of those interfaces would express the various bindings between the components. At the moment, Triple is overflowing with methods that do not fit well with OOP ideas.
> Since the TargetParser module is quite large and has many dependents throughout llvm-project, the first step was to move these methods out of Triple, since they do not correspond to OOP ideas.
> This would help gradually rid Triple of unnecessary dependencies and gradually change the architecture inside Triple, without breaking other LLVM developers' code.
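If I'm reading that right, the shape you're describing is roughly the following (a minimal sketch with hypothetical names like TripleData and ArchQueries; nothing here exists in TargetParser today). Correct me if I'm misreading it.

  // Hypothetical illustration only -- not actual TargetParser code.
  #include <string>

  // Triple reduced to a plain data class holding the parsed components.
  struct TripleData {
    std::string ArchName;
    std::string VendorName;
    std::string OSName;
    std::string EnvironmentName;
  };

  // Queries such as isArch16Bit() would move out of Triple into
  // separate component interfaces implemented per architecture family.
  class ArchQueries {
  public:
    virtual ~ArchQueries() = default;
    virtual bool isArch16Bit(const TripleData &T) const = 0;
    virtual bool isArch32Bit(const TripleData &T) const = 0;
    virtual bool isArch64Bit(const TripleData &T) const = 0;
  };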

I'm still not sure how this would make things simpler. I'll be as specific as possible about what does not click for me.

There is an inherent complexity in parsing triples. Architectures can have variants and subvariants; compilers and other tools do different things for the same architecture and OS when the vendor differs; the environment can subtly change things like the ABI; the list goes on. I don't think you're going to be able to wholesale reduce complexity here. The proposal you've laid out here is certainly very OOP-like (in some sense of the phrase "OOP"), but you present your ideas under the assumption that this style of OOP is the ideal to strive for. Why is that? Why is this better than what exists today? Is it easier to debug? Is it more performant? Is it easier to maintain? I personally do not think it will be better than what already exists in any of those ways.
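To make that concrete, even the current llvm::Triple has to answer a lot of ABI-relevant questions from a single string. A small sketch against the existing API (the exact answers come from the parser's tables, not from anything in this patch):

  #include "llvm/Support/raw_ostream.h"
  #include "llvm/TargetParser/Triple.h"

  int main() {
    // Same architecture family, different vendor/OS/environment; the
    // answers to queries like these can differ between the two.
    llvm::Triple A("armv7-apple-ios");
    llvm::Triple B("armv7-unknown-linux-gnueabihf");

    llvm::outs() << A.getArchName() << " 64-bit arch? "
                 << (A.isArch64Bit() ? "yes" : "no") << "\n";
    llvm::outs() << B.getArchName() << " hard-float environment? "
                 << (B.getEnvironment() == llvm::Triple::GNUEABIHF ? "yes" : "no")
                 << "\n";
    return 0;
  }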

In the original Discourse thread, you also argued that the potential performance overhead and binary size increase should be unnoticeable and that, with modern machines, we do not need to fight for every microsecond. Without any numbers for performance or size, that is not an argument I can accept. Knowingly adding a performance/size regression to your build tools without an appropriate tradeoff does not make sense.

If you want to do this, please provide a concrete argument for what benefit this brings.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D157497/new/

https://reviews.llvm.org/D157497


