[PATCH] D111288: [fir] Add data flow optimization pass

Andrzej Warzynski via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Oct 15 01:46:43 PDT 2021


awarzynski added inline comments.


================
Comment at: flang/test/Fir/memref-data-flow.fir:103
+// CHECK:    return
+// CHECK:  }
+
----------------
clementval wrote:
> mehdi_amini wrote:
> > clementval wrote:
> > > mehdi_amini wrote:
> > > > These tests look like they are entirely auto-generated; what should we expect here?
> > > > 
> > > > Also, the source of the test seems to be taken directly from a real-world example, without any attempt to document or minimize it: this can be a maintenance problem moving forward.
> > > I heard that when you guys discussed the upstreaming of fir-dev, it was advised to use the MLIR script to generate the checks because the existing checks were too minimal. 
> > I'm not aware of this, we have documented guidelines though: https://mlir.llvm.org/getting_started/TestingGuide/#filecheck-best-practices
> Thanks for the guidelines link. I'll have to check what we will do in Flang because I now have contradictory information from two sides. 
> I heard that when you guys discussed the upstreaming of fir-dev, it was advised to use the MLIR script to generate the checks because the existing checks were too minimal.

I think that quite a few separate discussions have happened in the last year or so. I was just a bystander most of the time, so I can't really comment. Also, I'm not aware of anything public that we could refer to here (e.g. an RFC or something similar). Perhaps this particular aspect has not been discussed? :)

@mehdi_amini, 
> These tests look like they are entirely auto-generated; what should we expect here?
IIUC, you *are* in favor of auto-generated tests (e.g. with [[ https://github.com/llvm/llvm-project/blob/main/mlir/utils/generate-test-checks.py | generate-test-checks.py ]]), but would prefer them to be refactored a bit, right? Otherwise it's not really obvious "what" is being tested. Basically,
>     Tests should be minimal, and only check what is absolutely necessary.

@clementval The check lines could be reduced to contain only the key operations being tested, to make sure that these are generated correctly.

> this can be a maintenance problem moving forward

I agree. Any change in MLIR that causes these tests to fail could be non-trivial to triage. But I also think that it would be quite tricky to regenerate these tests from scratch. Perhaps we could make sure that:
* every test is documented (e.g. by giving it a descriptive function name that highlights "what" is being tested)
* check lines not strictly related to the functionality being tested are reduced or removed
This way we would make any future triaging much easier for ourselves without having to rewrite these cases; see the sketch after this list for what I mean.
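
To illustrate (very roughly) what I have in mind, a reduced test could look something like the sketch below. This is just a sketch on my part, not taken from this patch: the pass flag on the RUN line, the function name and the exact FIR/MLIR spelling are assumptions and would need to be checked against what the pass actually registers.

```
// RUN: fir-opt --fir-memref-dataflow-opt %s | FileCheck %s

// Store-to-load forwarding: the load from %mem should be replaced by the
// value that was just stored, so no fir.load is left after the pass runs.
// CHECK-LABEL: func @forward_store_to_load
func @forward_store_to_load() -> i32 {
  %c42 = arith.constant 42 : i32
  %mem = fir.alloca i32
  fir.store %c42 to %mem : !fir.ref<i32>
  %v = fir.load %mem : !fir.ref<i32>
  // CHECK-NOT: fir.load
  // CHECK: return %{{.*}} : i32
  return %v : i32
}
```

The point being: a descriptive function name plus a handful of CHECK lines makes it obvious what is being verified, and a failure points straight at the forwarding logic rather than at some unrelated change in how MLIR prints the surrounding IR.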

This is just my 2p. WDYT?


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D111288/new/

https://reviews.llvm.org/D111288
