[PATCH] D83924: [mlir] Add update_mlir_test_checks.py

Mehdi AMINI via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Jul 16 12:10:23 PDT 2020


mehdi_amini added inline comments.


================
Comment at: llvm/utils/update_mlir_test_checks.py:29
+The script is designed to make adding checks to a test case fast; it is *not*
+designed to be authoritative about what constitutes a good test!
+"""
----------------
stephenneuendorffer wrote:
> mehdi_amini wrote:
> > Something that I believe is important is to not just have blindly generated CHECKs but have a human editing the tests (documenting, minimizing the number of checks, etc.). 
> > 
> > Can we have the script adding a paragraph mentioning in the actual generated test? (with a link to the testing guide, etc.)
> Conceivably...but I'm guessing that we'd end up with the same paragraph in lots of files.  I tend to use this to auto-update tests in a mechanical way which is easy to verify, but hard to do by hand.  I'm specifically trying to avoid opening every file when updating lots of tests, so I think adding a bunch of text in the file that is expected to be removed would be a bad idea.  What if we just added a boilerplate message about usage and norms that was generated when the script is run?
> Conceivably...but I'm guessing that we'd end up with the same paragraph in lots of files.

I mean: since the user is supposed to significantly edit the output of the tool by hand anyway, removing the paragraph doesn't seem like much extra work.

The challenge is to avoid the output of the script being just good enough to get checked in as-is, while still leading to actually bad tests.
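
For illustration only, a rough sketch of what the runtime usage-and-norms note suggested above could look like (the function name and wording below are hypothetical and not part of this patch):

    import sys

    # Hypothetical reminder emitted whenever the script is run, rather than
    # embedding boilerplate text into every generated test file.
    USAGE_NOTE = """\
    NOTE: the CHECK lines produced by update_mlir_test_checks.py are only a
    starting point. Please review them by hand: document what the test is
    verifying, minimize the number of checks, and follow the MLIR testing
    guide before committing.
    """

    def print_usage_note():
        # Write to stderr so the note does not mix with generated output.
        sys.stderr.write(USAGE_NOTE)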


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D83924/new/

https://reviews.llvm.org/D83924




