[PATCH] D124866: [CUDA][HIP] support __noinline__ as keyword

Artem Belevich via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Mon May 9 13:17:14 PDT 2022


tra accepted this revision.
tra added a comment.
This revision is now accepted and ready to land.

In D124866#3501641 <https://reviews.llvm.org/D124866#3501641>, @yaxunl wrote:

> If we are to add `__forceinline__` as a keyword, I feel it would be better as a separate patch, to keep things cleaner.

Fine with me.



================
Comment at: clang/lib/Parse/ParseDecl.cpp:902
+  while (Tok.is(tok::kw___noinline__)) {
+    IdentifierInfo *AttrName = Tok.getIdentifierInfo();
+    SourceLocation AttrNameLoc = ConsumeToken();
----------------
yaxunl wrote:
> tra wrote:
> > aaron.ballman wrote:
> > > yaxunl wrote:
> > > > aaron.ballman wrote:
> > > > > I think we should be issuing a pedantic "this is a clang extension" warning here, WDYT?
> > > > will do
> > > I'm questioning whether my advice here was good or not -- now that I see the CUDA spec already calls these function qualifiers... it's debatable whether this is a Clang extension or just the way in which Clang implements the CUDA function qualifiers. @tra -- do you have opinions?
> > > 
> > > I'm sort of leaning towards dropping the extension warning, but the only reason I can think of for keeping it is if Clang is the only CUDA compiler that doesn't require you to include a header before using the function qualifiers. If that's the case, there is a portability concern.
> > I'm not sure if such a warning would be useful. 
> > 
> > > the only reason I can think of for keeping it is if Clang is the only CUDA compiler that doesn't require you to include a header before using the function qualifiers. If that's the case, there is a portability concern.
> > 
> > I don't think it's an issue.
> > 
> > We already have similar divergence between nvcc and clang. E.g., built-in variables like `threadIdx`: Clang implements them in a header, while NVCC provides them in the compiler itself. 
> > With both compilers, the variables are available by the time we get to compile user code. Virtually all CUDA compilations are done with tons of CUDA headers pre-included by the compiler. Those that do not do that are already on their own and have to provide many other 'standard' CUDA things like target attributes, so I don't think we need to worry about that.
> > 
> I can remove the diagnostic since it seems unnecessary.
> 
> I tend to treat it as an extension since nvcc, the de facto standard implementation, does not implement it as a keyword. Compared to that, treating it as a keyword looks like an extension.
I'd argue that NVCC does implement it (as in "documents and makes it available"). Providing the documented functionality using a different implementation does not reach the point of being an extension, IMO. While there are observable differences between implementations, depending on them would be a portability error for the user.

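For readers following the thread, here is a minimal sketch of what the patch enables (illustrative only; the function and kernel names below are made up, not taken from the patch). With `__noinline__` accepted as a keyword in CUDA/HIP mode, the qualifier parses on a device function even when the wrapper headers, which otherwise provide `__noinline__` as a macro expanding to `__attribute__((noinline))`, are not pre-included.

  // Illustrative sketch, not part of the patch; names are hypothetical.
  // With __noinline__ recognized as a keyword, this compiles in CUDA/HIP
  // mode without relying on a header macro mapping __noinline__ to
  // __attribute__((noinline)).
  __device__ __noinline__ int add(int a, int b) { return a + b; }

  __global__ void kernel(int *out) {
    *out = add(1, 2);
  }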


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D124866/new/

https://reviews.llvm.org/D124866


