[PATCH] D68818: [hip][cuda] Fix the extended lambda name mangling issue.

Richard Smith - zygoloid via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Wed Oct 16 12:31:16 PDT 2019


rsmith added a comment.

Broadly, I think it's reasonable to number additional lambda expressions in CUDA compilations. However:

- This is (in theory) an ABI break on the host side, as it changes the lambda numbering in inline functions, function templates, and the like (see the sketch after this list). That could be mitigated by using a different numbering sequence for the lambdas that are only numbered for this reason.
- Making the numbering depend on whether the call operator is a device function is unstable. If I understand the CUDA rules correctly, then in practice, because `constexpr` functions are implicitly `host device`, all lambdas will get numbered in CUDA from C++14 onwards but not in C++11, and we generally want those modes to be ABI-compatible. I'd suggest you simplify and stabilize this by simply numbering all lambdas in CUDA mode.
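
To make the first point concrete, here is a hedged sketch (a hypothetical header, not taken from the patch) of how numbering only the device lambdas could shift host-side manglings; it assumes a CUDA/HIP compilation in which the `__device__` lambda annotation is available:

  // shared.h, included by both plain C++ host TUs and CUDA TUs. The lambdas
  // appear in an inline function, so the manglings of their closure types'
  // members must agree across every TU that includes this header.
  inline int run() {
    auto dev  = [] __device__ () { return 1; };  // device lambda: newly
                                                 // numbered under this change
    auto host = [] { return 2; };  // host lambda: if it shares a numbering
                                   // sequence with `dev`, its discriminator
                                   // (and hence its host-side mangling) can
                                   // differ between a CUDA compilation and a
                                   // plain C++ compilation of this header
    return host();
  }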



================
Comment at: clang/lib/Sema/SemaLambda.cpp:477
+  // mangled name. But the mangler needs to be informed that those re-numbered
+  // lambdas still have `internal` linkage.
+  if (getLangOpts().CUDA && Method->hasAttr<CUDADeviceAttr>()) {
----------------
What happens if there are other enclosing constructs that would give the lambda internal linkage (e.g., an anonymous namespace, a static function that might collide with one in another translation unit, or a declaration involving a type with internal linkage)? Presumably you can still suffer from mangling collisions in those cases, at least if you link together multiple translation units containing device code. Do we need (something like) unique identifiers for device-code TUs to use in the manglings of ostensibly internal-linkage entities?
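
For instance (a hedged sketch with hypothetical file names, not taken from the patch), two translation units compiled with relocatable device code could each contain the same internal-linkage helper whose device lambda ends up instantiating a kernel:

  // Hypothetical contents shared verbatim by a.cu and b.cu (e.g. via a common
  // header), compiled with -fgpu-rdc so both objects are linked into a single
  // device image.
  template <typename F>
  __global__ void run_on_device(F f) { f(); }  // instantiated with the
                                               // lambda's closure type

  static void helper() {            // internal linkage on the host side
    auto f = [] __device__ () {};   // device lambda in an "internal" context
    run_on_device<<<1, 1>>>(f);     // if the closure type's mangling ignores
                                    // the internal-linkage context, both TUs
                                    // emit identically named instantiations
                                    // for distinct closure types
  }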


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D68818/new/

https://reviews.llvm.org/D68818
