[PATCH] D78059: [llvm][STLExtras] Add various type_trait utilities currently present in MLIR

David Blaikie via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Apr 14 15:12:44 PDT 2020


dblaikie added inline comments.


================
Comment at: llvm/include/llvm/ADT/STLExtras.h:109-145
+/// This class provides various trait information about a callable object.
+///   * To access the number of arguments: Traits::num_args
+///   * To access the type of an argument: Traits::arg_t<i>
+///   * To access the type of the result:  Traits::result_t
+template <typename T, bool isClass = std::is_class<T>::value>
+struct function_traits : public function_traits<decltype(&T::operator())> {};
+
----------------
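For context, a minimal self-contained sketch of how a trait with this interface is typically filled in and consumed; the specializations and the example in main() below are illustrative only, not necessarily the exact code in the patch:

```
#include <cstddef>
#include <tuple>
#include <type_traits>

// Sketch: peel a functor/lambda down to the traits of its call operator.
template <typename T, bool isClass = std::is_class<T>::value>
struct function_traits : function_traits<decltype(&T::operator())> {};

// Sketch: specialization for const call operators (e.g. non-mutable lambdas).
template <typename ClassType, typename ReturnType, typename... Args>
struct function_traits<ReturnType (ClassType::*)(Args...) const, false> {
  enum { num_args = sizeof...(Args) };
  using result_t = ReturnType;
  template <std::size_t i>
  using arg_t = typename std::tuple_element<i, std::tuple<Args...>>::type;
};

// Sketch: specialization for plain function pointers.
template <typename ReturnType, typename... Args>
struct function_traits<ReturnType (*)(Args...), false> {
  enum { num_args = sizeof...(Args) };
  using result_t = ReturnType;
  template <std::size_t i>
  using arg_t = typename std::tuple_element<i, std::tuple<Args...>>::type;
};

int main() {
  auto callback = [](int, double) { return true; };
  using Traits = function_traits<decltype(callback)>;
  static_assert(Traits::num_args == 2, "two parameters");
  static_assert(std::is_same<Traits::result_t, bool>::value, "returns bool");
  static_assert(std::is_same<Traits::arg_t<0>, int>::value, "first arg is int");
  return 0;
}
```

The three static_asserts exercise exactly the queries the doc comment advertises: num_args, arg_t<i>, and result_t.
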
rriddle wrote:
> dblaikie wrote:
> > rriddle wrote:
> > > dblaikie wrote:
> > > > rriddle wrote:
> > > > > dblaikie wrote:
> > > > > > Use of this device makes me a bit uncomfortable - such a thing would be incompatible with overloading, default arguments, etc. 
> > > > > > 
> > > > > > A general idea that's gone around inside Google, though I can't find a good reference for it externally, is to depend only on the "call-only interface" (i.e. never take the address of a function you don't own - only call it with whatever types it was intended to have and treat it as though it returns the type it's documented to return). This ensures maximum flexibility for the function vendor to refactor it - add default arguments or overload the function as part of migrating/improving/refactoring the API, etc.
> > > > > > 
> > > > > > Is there any chance the existing uses of the mlir-equivalent of this trait could reasonably avoid it & instead rely on the call-only contract when interacting with functions/functors?
> > > > > Thanks for the comment, David. In all of the usages in MLIR, we aren't necessarily concerned about overloading/default arguments/etc. The general usage is that there is an API that takes a callback (generally expected to be a lambda), and then switches depending on the type of the function argument. Removing the ability to do that would greatly uglify a large amount of code. Is there a way to do that without doing the above? I'm open to any suggestions you might have.
> > > > When you say "switches depending on the type of the function argument" - would it be possible to use overloads instead? (e.g. func(function_ref<void(T1)>); func(function_ref<void(T2)>); etc... )
> > > I tried that originally, but when using overloads even the most trivial examples fail due to ambiguity:
> > > ```
> > > void bar(llvm::function_ref<void(int)> fn);
> > > void bar(llvm::function_ref<void(bool)> fn);
> > > 
> > > void foo() {
> > >   bar([](int i) {});
> > > }
> > > 
> > > <source>:8:3: error: call to 'bar' is ambiguous
> > >   bar([](int i) {});
> > >   ^~~
> > > <source>:4:6: note: candidate function
> > > void bar(llvm::function_ref<void(int)> fn);
> > >      ^
> > > <source>:5:6: note: candidate function
> > > void bar(llvm::function_ref<void(bool)> fn);
> > >      ^
> > > ```
> > > 
> > > https://godbolt.org/z/JDTFtn
> > Fair point - though are you dealing with situations where there is such an ambiguity, and you still want different behavior based on those two ambiguous alternatives? That seems like quite subtle code that might be good to avoid.
> int and bool are definitely contrived examples, but they were only meant to demonstrate the current state of overload detection. The most common situation we deal with is w.r.t. Operations vs. *Op classes. MLIR Ops are like smart pointers that provide an interface into a generic Operation*. Depending on the situation and availability, it is much more convenient to interface with an *Op than an Operation. For example, MLIR has visitor methods that allow for walking nested operations. These are widely used, and extremely convenient:
> 
> ```
> ModuleOp module = ...;
> 
> // Walk *all* operations within the module opaquely.
> module.walk([](Operation *op) { ... });
> 
> // Walk only functions, or gpu modules, within the module.
> module.walk([](FuncOp func) { ... });
> module.walk([](GPUModuleOp gpuModule) { ... });
> 
> // Walk all operations that provide the memory effect interface.
> module.walk([](MemoryEffectOpInterface memoryInterface) { ... });
> ```
> 
> 
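A minimal sketch of how a walk API like the one quoted above could use such a trait to dispatch on the callback's parameter type; Operation, FuncOp, castIfMatch, invokeCallback, and walk below are simplified, hypothetical stand-ins, not the actual MLIR implementation:

```
#include <cstddef>
#include <cstring>
#include <tuple>
#include <type_traits>
#include <vector>

// Trimmed copy of the function_traits sketch: just enough to inspect a lambda.
template <typename T>
struct function_traits : function_traits<decltype(&T::operator())> {};
template <typename ClassType, typename ReturnType, typename... Args>
struct function_traits<ReturnType (ClassType::*)(Args...) const> {
  template <std::size_t i>
  using arg_t = typename std::tuple_element<i, std::tuple<Args...>>::type;
};

// Simplified stand-ins for MLIR's Operation and FuncOp; illustration only.
struct Operation { const char *name; };
struct FuncOp {
  Operation *op;
  // Hypothetical helper: wrap the operation if it is a "func", else stay null.
  static FuncOp castIfMatch(Operation *o) {
    return FuncOp{std::strcmp(o->name, "func") == 0 ? o : nullptr};
  }
  explicit operator bool() const { return op != nullptr; }
};

template <typename T> struct TypeTag {};

// Opaque case: a callback taking Operation* sees every operation.
template <typename FnT>
void invokeCallback(FnT &&fn, Operation *op, TypeTag<Operation *>) {
  fn(op);
}
// Filtered case: only invoke the callback for operations of the requested kind.
template <typename FnT, typename OpT>
void invokeCallback(FnT &&fn, Operation *op, TypeTag<OpT>) {
  if (OpT casted = OpT::castIfMatch(op))
    fn(casted);
}

// Walk every operation, dispatching on the callback's first parameter type.
template <typename FnT>
void walk(std::vector<Operation *> &ops, FnT &&fn) {
  using ArgT =
      typename function_traits<typename std::decay<FnT>::type>::template arg_t<0>;
  for (Operation *op : ops)
    invokeCallback(fn, op, TypeTag<ArgT>{});
}

int main() {
  Operation a{"func"}, b{"gpu.module"};
  std::vector<Operation *> ops = {&a, &b};
  int total = 0, funcs = 0;
  walk(ops, [&](Operation *) { ++total; }); // Sees both operations.
  walk(ops, [&](FuncOp) { ++funcs; });      // Sees only the "func" operation.
  return (total == 2 && funcs == 1) ? 0 : 1;
}
```
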
Fair enough - I'm a little apprehensive about adding such a general tool to LLVM's core libraries & how it might get used, but I guess we'll leave that up to code review.

(one possible other way to have structured your callback API might've been to use "module.walk<GPUModuleOp>([](auto gpuModule) { ... });" etc... awkward in its own way, to be sure)
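
To make that parenthetical concrete, a rough sketch of what the explicit-template-parameter shape could look like; ModuleOp, FuncOp, castIfMatch, and the walk overloads are again simplified, hypothetical stand-ins rather than real MLIR API:

```
#include <cstring>
#include <vector>

// Simplified stand-ins for MLIR types; illustration only.
struct Operation { const char *name; };
struct FuncOp {
  Operation *op;
  // Hypothetical helper: wrap the operation if it is a "func", else stay null.
  static FuncOp castIfMatch(Operation *o) {
    return FuncOp{std::strcmp(o->name, "func") == 0 ? o : nullptr};
  }
  explicit operator bool() const { return op != nullptr; }
};

struct ModuleOp {
  std::vector<Operation *> body;

  // Opaque walk: every nested operation is passed to the callback.
  template <typename FnT>
  void walk(FnT &&fn) {
    for (Operation *op : body)
      fn(op);
  }

  // Filtered walk: the caller names the op kind explicitly, so there is no
  // need to inspect the callback's parameter type at all.
  template <typename OpT, typename FnT>
  void walk(FnT &&fn) {
    for (Operation *op : body)
      if (OpT casted = OpT::castIfMatch(op))
        fn(casted);
  }
};

int main() {
  Operation a{"func"}, b{"gpu.module"};
  ModuleOp module{{&a, &b}};
  int total = 0, funcs = 0;
  module.walk([&](Operation *) { ++total; });  // Walk all operations.
  module.walk<FuncOp>([&](auto) { ++funcs; }); // Walk only "func" ops.
  return (total == 2 && funcs == 1) ? 0 : 1;
}
```

Here the op kind moves from the lambda's parameter list to an explicit template argument at every call site, which is the awkwardness the comment above concedes.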


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D78059/new/

https://reviews.llvm.org/D78059




