[PATCH] D66148: [SemanticTypedef] Provide a semantic typedef class and operators

David Greene via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Aug 14 12:09:48 PDT 2019


greened marked an inline comment as done.
greened added inline comments.


================
Comment at: llvm/include/llvm/ADT/SemanticTypedef.h:27
+/// cumbersome and error-prone.  The semantic typedef utilities here make this at
+/// least a little bit easier.
+///
----------------
mehdi_amini wrote:
> The problem exposed above is compelling, but I have a hard time connecting the explanation that follows to the problem statement. Can you provide the final solution for expressing the original API `void foo(bool Feature1, bool Feature2);` and how it compares to an `enum class`?
For booleans `enum class` may be better in some cases, so perhaps I should restate the problem.  Assuming a `StrongBoolean` is available:

    cl::opt<bool> EnableFeature1("enable-feature1", cl::init(false), cl::desc(...));
    cl::opt<bool> EnableFeature2("enable-feature2", cl::init(false), cl::desc(...));

    struct Feature1Flag : public StrongBoolean<Feature1Flag> {
      using StrongBoolean::StrongBoolean;
    };
    struct Feature2Flag : public StrongBoolean<Feature2Flag> {
      using StrongBoolean::StrongBoolean;
    };

    void foo(Feature1Flag Feature1, Feature2Flag Feature2);

    Feature1Flag F1(EnableFeature1);
    Feature2Flag F2(EnableFeature2);

    foo(F1, F2);

Using `enum class`:

    enum class Feature1Flag : bool {
      Disabled = false,
      Enabled = true
    };
    enum class Feature2Flag : bool {
      Disabled = false,
      Enabled = true
    };

    cl::opt<Feature1Flag> EnableFeature1(
      cl::values(
        clEnumValN(Feature1Flag::Enabled, "enable-feature1", "..."),
        clEnumValN(Feature1Flag::Disabled, "disable-feature1", "...")));

    cl::opt<Feature2Flag> EnableFeature2(
      cl::values(
        clEnumValN(Feature2Flag::Enabled, "enable-feature2", "..."),
        clEnumValN(Feature2Flag::Disabled, "disable-feature2", "...")));

    void foo(Feature1Flag Feature1, Feature2Flag Feature2);

    Feature1Flag F1(EnableFeature1);
    Feature2Flag F2(EnableFeature2);

    foo(F1, F2);

At least I think that's right.  It's a bit more burdensome to create `cl::opt` flags with `enum class`.

Of course many Booleans won't use a `cl::opt`.  Still, `SemanticTypedef` may be easier than `enum class`:

    template<typename Tag>
    struct MyFeatureFlags : public StrongBoolean<Tag> {
      using StrongBoolean<Tag>::StrongBoolean;
    };

    struct Feature1Flag : public MyFeatureFlags<Feature1Flag> {
      using MyFeatureFlags::MyFeatureFlags;
    };
    struct Feature2Flag : public MyFeatureFlags<Feature2Flag> {
      using MyFeatureFlags::MyFeatureFlags;
    };

    void foo(Feature1Flag Feature1, Feature2Flag Feature2) {
      if (Feature1 || Feature2) {
        ...
      }
    }

Here, I'm assuming the ability to have "related" typedefs interoperate, so that logical disjunction between types with the same wrapper template but different tags can be allowed.  I have prototyped this.  It involves some complicated SFINAE magic and may not be worth the effort.  But it is possible.
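To make that concrete, here is a minimal mock-up of how such an interoperation could be constrained; note that this `StrongBoolean` and the trait name are stand-ins I'm assuming for illustration, not the actual definitions in the patch:

```cpp
#include <cassert>
#include <type_traits>

// Hypothetical minimal StrongBoolean; the real class in the patch is
// more elaborate.
template <typename Tag>
struct StrongBoolean {
  explicit StrongBoolean(bool V) : Value(V) {}
  bool isTrue() const { return Value; }

private:
  bool Value;
};

// Detect CRTP instances of StrongBoolean, regardless of tag.
template <typename T>
struct IsStrongBoolean : std::is_base_of<StrongBoolean<T>, T> {};

// Permit || only between StrongBoolean wrappers, possibly with
// different tags; anything else (e.g. a raw bool operand) fails to
// compile because there is no conversion to bool.
template <typename L, typename R>
typename std::enable_if<IsStrongBoolean<L>::value &&
                            IsStrongBoolean<R>::value,
                        bool>::type
operator||(L LHS, R RHS) {
  return LHS.isTrue() || RHS.isTrue();
}

struct Feature1Flag : StrongBoolean<Feature1Flag> {
  using StrongBoolean::StrongBoolean;
};
struct Feature2Flag : StrongBoolean<Feature2Flag> {
  using StrongBoolean::StrongBoolean;
};
```

With this in place `if (Feature1 || Feature2)` compiles, while mixing in a plain `bool` does not, since the trait rejects it and no implicit conversion exists.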

We could instead do:

    // Feature1 and Feature2 declared as originally

    void foo(Feature1Flag Feature1, Feature2Flag Feature2) {
      if (Feature1.isTrue() || Feature2.isTrue()) {
        ...
      }
    }

Then we wouldn't need the messy SFINAE stuff, but it's slightly less type-safe.  The tradeoff is probably fine, though.

With `enum class`:

    // Feature1 and Feature2 declared as before

    void foo(Feature1Flag Feature1, Feature2Flag Feature2) {
      if (bool(Feature1) || bool(Feature2)) {
        ...
      }
    }

Not that much of a win for `SemanticTypedef`, but it is a bit cleaner.

One use for `SemanticTypedef` that would be helpful and not as conveniently achieved with `enum class`:

    template<typename Tag, typename Int = unsigned>
    struct Bits : StrongInteger<Tag, Int> {
      using StrongInteger<Tag, Int>::StrongInteger;
      Int toBytes() const { return Int(*this) / 8; }
    };

    template<typename Tag, typename Int = unsigned>
    struct Bytes : StrongInteger<Tag, Int> {
      using StrongInteger<Tag, Int>::StrongInteger;
      Int toBits() const { return Int(*this) * 8; }
    };

Then the interfaces that take/return bits/bytes would instead take/return these types and we'd reduce the number of bits-or-bytes errors in the code.
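As a self-contained mock-up of how that might look in practice; the `StrongInteger` here is a minimal stand-in I'm assuming for illustration, and `storeSizeInBytes`/`StoreSizeInBits` are hypothetical names, not anything from the patch:

```cpp
#include <cassert>

// Hypothetical minimal StrongInteger wrapper; the real SemanticTypedef
// machinery in the patch is more elaborate.
template <typename Tag, typename Int = unsigned>
struct StrongInteger {
  explicit StrongInteger(Int V) : Value(V) {}
  explicit operator Int() const { return Value; }

private:
  Int Value;
};

template <typename Tag, typename Int = unsigned>
struct Bits : StrongInteger<Tag, Int> {
  using StrongInteger<Tag, Int>::StrongInteger;
  Int toBytes() const { return Int(*this) / 8; }
};

template <typename Tag, typename Int = unsigned>
struct Bytes : StrongInteger<Tag, Int> {
  using StrongInteger<Tag, Int>::StrongInteger;
  Int toBits() const { return Int(*this) * 8; }
};

struct StoreSizeTag {};
using StoreSizeInBits = Bits<StoreSizeTag>;

// This interface can only be called with a bit count; passing a raw
// integer or a byte count is a compile-time error.
unsigned storeSizeInBytes(StoreSizeInBits Size) { return Size.toBytes(); }
```

A call like `storeSizeInBytes(64)` won't compile because the constructor is explicit; the caller has to say `storeSizeInBytes(StoreSizeInBits(64))`, which is exactly where a bits-vs-bytes mistake becomes visible.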



Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D66148/new/

https://reviews.llvm.org/D66148




