[llvm] update_test_checks: keep meta variables stable by default (PR #76748)

via llvm-commits llvm-commits at lists.llvm.org
Tue Jan 2 12:02:03 PST 2024


Nicolai Hähnle <nicolai.haehnle at amd.com>,
Nicolai Hähnle <nicolai.haehnle at amd.com>,
Nicolai Hähnle <nicolai.haehnle at amd.com>
Message-ID:
In-Reply-To: <llvm.org/llvm/llvm-project/pull/76748 at github.com>


llvmbot wrote:


<!--LLVM PR SUMMARY COMMENT-->

@llvm/pr-subscribers-testing-tools

Author: Nicolai Hähnle (nhaehnle)

<details>
<summary>Changes</summary>

There is a small sequence of commits here; the first one simply pre-commits a test. But really, you don't lose much by looking at the overall diff.

Prior to this change, running UTC on larger tests, especially tests with unnamed IR values, often resulted in a spuriously large diff because e.g. TMPnn variables in the CHECK lines were renumbered. This change attempts to reduce the diff by keeping those variable names the same.

In some cases this "drift" of variable names can end up being more confusing; for those, the old behavior can be re-enabled with the `--reset-variable-names` command-line argument.

The improvement may not be immediately apparent in the diff of this change itself. The point is that, with this change, the diff of stable_ir_values.ll against stable_ir_values.ll.expected is smaller than it would be with the old renumbering behavior (compare stable_ir_values.ll.expected.reset).

Ideally, we'd also keep meta variables for "global" objects stable, e.g. for attributes (#nn) and metadata (!nn). However, that would require a much more substantial refactoring of how we generate check lines, so I left it for future work.
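
For readers skimming the patch: the core of the new `remap_metavar_names` logic is to pair up old and new check lines that match uniquely, then keep only a crossing-free subset of those pairs, which is a longest-increasing-subsequence problem over (orig_idx, new_idx) pairs. Below is a minimal, self-contained Python sketch of just that matching step; the helper name and the simplified input format are illustrative, not the actual code in common.py.

```python
import bisect
from typing import List, Optional, Tuple

def crossing_free_matches(pairs: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Keep a maximal subset of (orig_idx, new_idx) pairs whose matches do not
    cross, i.e. a longest chain increasing in both indices (patience/LIS)."""
    pairs = sorted(pairs)  # sort by orig_idx; new_idx is what we search over
    table: List[Tuple[int, int]] = []  # (smallest new_idx ending a chain of length k+1, pair index)
    backlinks: List[Optional[int]] = []
    for idx, (_, new_idx) in enumerate(pairs):
        k = bisect.bisect_left([t[0] for t in table], new_idx)
        if k < len(table):
            table[k] = (new_idx, idx)
        else:
            table.append((new_idx, idx))
        backlinks.append(table[k - 1][1] if k > 0 else None)
    # Walk the backlinks from the end of the longest chain.
    result = []
    cur = table[-1][1] if table else None
    while cur is not None:
        result.append(pairs[cur])
        cur = backlinks[cur]
    return result[::-1]

# Example: crossing_free_matches([(0, 2), (1, 0), (2, 3), (3, 1), (4, 4)])
# returns [(1, 0), (3, 1), (4, 4)] -- one maximal crossing-free subset;
# the crossing pairs (0, 2) and (2, 3) are dropped.
```

The pairs that survive this step serve as anchors between the old and new check lines; the actual patch then commits variable names along those anchors and recurses into the gaps between them.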

---

Patch is 29.33 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/76748.diff


6 Files Affected:

- (added) llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll (+22) 
- (added) llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll.expected (+23) 
- (added) llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll.expected.reset (+23) 
- (added) llvm/test/tools/UpdateTestChecks/update_test_checks/stable_ir_values.test (+5) 
- (modified) llvm/utils/UpdateTestChecks/common.py (+406-26) 
- (modified) llvm/utils/update_test_checks.py (+18-3) 


``````````diff
diff --git a/llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll b/llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll
new file mode 100644
index 00000000000000..8457bf7dc40a2e
--- /dev/null
+++ b/llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll
@@ -0,0 +1,22 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 3
+; RUN: opt < %s -S | FileCheck %s
+
+; The assumption underlying this test is that there are pre-existing check lines
+; but something has changed, and we would like to avoid needless changes of
+; meta variable names so that diffs end up being easier to read, e.g. avoid
+; changing X_I33 into X_I34 or renumbering the various TMP variables.
+
+define i32 @func({i32, i32} %x, i32 %y) {
+; CHECK-LABEL: define i32 @func(
+; CHECK-SAME: { i32, i32 } [[X:%.*]], i32 [[Y:%.*]]) {
+; CHECK-NEXT:    [[X_I33:%.*]] = extractvalue { i32, i32 } [[X]], 0
+; CHECK-NEXT:    [[TMP1:%.*]] = add i32 [[X_I33]], [[Y]]
+; CHECK-NEXT:    [[TMP2:%.*]] = mul i32 [[TMP1]], 3
+; CHECK-NEXT:    ret i32 [[TMP2]]
+;
+  %x.i34 = extractvalue {i32, i32} %x, 0
+  %1 = add i32 %y, 1
+  %2 = add i32 %x.i34, %1
+  %3 = mul i32 %2, 3
+  ret i32 %3
+}
diff --git a/llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll.expected b/llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll.expected
new file mode 100644
index 00000000000000..3549a4d76aa762
--- /dev/null
+++ b/llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll.expected
@@ -0,0 +1,23 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 3
+; RUN: opt < %s -S | FileCheck %s
+
+; The assumption underlying this test is that there are pre-existing check lines
+; but something has changed, and we would like to avoid needless changes of
+; meta variable names so that diffs end up being easier to read, e.g. avoid
+; changing X_I33 into X_I34 or renumbering the various TMP variables.
+
+define i32 @func({i32, i32} %x, i32 %y) {
+; CHECK-LABEL: define i32 @func(
+; CHECK-SAME: { i32, i32 } [[X:%.*]], i32 [[Y:%.*]]) {
+; CHECK-NEXT:    [[X_I33:%.*]] = extractvalue { i32, i32 } [[X]], 0
+; CHECK-NEXT:    [[TMP3:%.*]] = add i32 [[Y]], 1
+; CHECK-NEXT:    [[TMP1:%.*]] = add i32 [[X_I33]], [[TMP3]]
+; CHECK-NEXT:    [[TMP2:%.*]] = mul i32 [[TMP1]], 3
+; CHECK-NEXT:    ret i32 [[TMP2]]
+;
+  %x.i34 = extractvalue {i32, i32} %x, 0
+  %1 = add i32 %y, 1
+  %2 = add i32 %x.i34, %1
+  %3 = mul i32 %2, 3
+  ret i32 %3
+}
diff --git a/llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll.expected.reset b/llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll.expected.reset
new file mode 100644
index 00000000000000..5142e3ed32ba45
--- /dev/null
+++ b/llvm/test/tools/UpdateTestChecks/update_test_checks/Inputs/stable_ir_values.ll.expected.reset
@@ -0,0 +1,23 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 3
+; RUN: opt < %s -S | FileCheck %s
+
+; The assumption underlying this test is that there are pre-existing check lines
+; but something has changed, and we would like to avoid needless changes of
+; meta variable names so that diffs end up being easier to read, e.g. avoid
+; changing X_I33 into X_I34 or renumbering the various TMP variables.
+
+define i32 @func({i32, i32} %x, i32 %y) {
+; CHECK-LABEL: define i32 @func(
+; CHECK-SAME: { i32, i32 } [[X:%.*]], i32 [[Y:%.*]]) {
+; CHECK-NEXT:    [[X_I34:%.*]] = extractvalue { i32, i32 } [[X]], 0
+; CHECK-NEXT:    [[TMP1:%.*]] = add i32 [[Y]], 1
+; CHECK-NEXT:    [[TMP2:%.*]] = add i32 [[X_I34]], [[TMP1]]
+; CHECK-NEXT:    [[TMP3:%.*]] = mul i32 [[TMP2]], 3
+; CHECK-NEXT:    ret i32 [[TMP3]]
+;
+  %x.i34 = extractvalue {i32, i32} %x, 0
+  %1 = add i32 %y, 1
+  %2 = add i32 %x.i34, %1
+  %3 = mul i32 %2, 3
+  ret i32 %3
+}
diff --git a/llvm/test/tools/UpdateTestChecks/update_test_checks/stable_ir_values.test b/llvm/test/tools/UpdateTestChecks/update_test_checks/stable_ir_values.test
new file mode 100644
index 00000000000000..4dfaf5d25c8a69
--- /dev/null
+++ b/llvm/test/tools/UpdateTestChecks/update_test_checks/stable_ir_values.test
@@ -0,0 +1,5 @@
+# RUN: cp -f %S/Inputs/stable_ir_values.ll %t.ll && %update_test_checks %t.ll
+# RUN: diff -u %t.ll %S/Inputs/stable_ir_values.ll.expected
+# Now test that we can reset all the names
+# RUN: %update_test_checks %t.ll --reset-variable-names
+# RUN: diff -u %t.ll %S/Inputs/stable_ir_values.ll.expected.reset
diff --git a/llvm/utils/UpdateTestChecks/common.py b/llvm/utils/UpdateTestChecks/common.py
index 0fe0dfc506b059..9c669e86f530a9 100644
--- a/llvm/utils/UpdateTestChecks/common.py
+++ b/llvm/utils/UpdateTestChecks/common.py
@@ -1,6 +1,8 @@
 from __future__ import print_function
 
 import argparse
+import bisect
+import collections
 import copy
 import glob
 import itertools
@@ -10,7 +12,7 @@
 import sys
 import shlex
 
-from typing import List
+from typing import List, Mapping, Set
 
 ##### Common utilities for update_*test_checks.py
 
@@ -410,6 +412,42 @@ def should_add_line_to_output(
     return True
 
 
+def collect_original_check_lines(original_check_lines, ti: TestInfo, prefix_set: set):
+    """
+    Collect pre-existing check lines in the dictionary original_check_lines.
+    original_check_lines[func_name][prefix] is filled with a list of
+    right-hand-sides of check lines.
+    """
+    in_function_start = None
+    for input_line_info in ti.ro_iterlines():
+        input_line = input_line_info.line
+        if in_function_start is not None:
+            if input_line == "":
+                continue
+            if input_line.lstrip().startswith(";"):
+                m = CHECK_RE.match(input_line)
+                if (
+                    m is not None
+                    and m.group(1) in prefix_set
+                    and m.group(2) not in ["LABEL", "SAME"]
+                ):
+                    if m.group(1) not in in_function_start:
+                        in_function_start[m.group(1)] = []
+                    in_function_start[m.group(1)].append(
+                        input_line[m.span()[1] :].strip()
+                    )
+                continue
+            in_function_start = None
+
+        m = IR_FUNCTION_RE.match(input_line)
+        if m is not None:
+            func_name = m.group(1)
+            if ti.args.function is not None and func_name != ti.args.function:
+                # When filtering on a specific function, skip all others.
+                continue
+
+            in_function_start = original_check_lines[func_name] = {}
+
 # Perform lit-like substitutions
 def getSubstitutions(sourcepath):
     sourcedir = os.path.dirname(sourcepath)
@@ -481,7 +519,7 @@ def invoke_tool(exe, cmd_args, ir, preprocess_cmd=None, verbose=False):
 CHECK_PREFIX_RE = re.compile(r"--?check-prefix(?:es)?[= ](\S+)")
 PREFIX_RE = re.compile("^[a-zA-Z0-9_-]+$")
 CHECK_RE = re.compile(
-    r"^\s*(?://|[;#])\s*([^:]+?)(?:-NEXT|-NOT|-DAG|-LABEL|-SAME|-EMPTY)?:"
+    r"^\s*(?://|[;#])\s*([^:]+?)(?:-(NEXT|NOT|DAG|LABEL|SAME|EMPTY))?:"
 )
 
 UTC_ARGS_KEY = "UTC_ARGS:"
@@ -922,7 +960,7 @@ def __init__(
         self.variable_mapping = {}
 
     # Return true if this kind of IR value is "local", basically if it matches '%{{.*}}'.
-    def is_local_def_ir_value_match(self, match):
+    def is_local_def_ir_value(self):
         return self.ir_prefix == "%"
 
     # Return true if this kind of IR value is "global", basically if it matches '#{{.*}}'.
@@ -936,9 +974,9 @@ def get_ir_prefix_from_ir_value_match(self, match):
         return re.search(self.ir_prefix, match[0])[0], self.check_prefix
 
     # Return the IR regexp we use for this kind or IR value, e.g., [\w.-]+? for locals
-    def get_ir_regex_from_ir_value_re_match(self, match):
+    def get_ir_regex(self):
         # for backwards compatibility we check locals with '.*'
-        if self.is_local_def_ir_value_match(match):
+        if self.is_local_def_ir_value():
             return ".*"
         return self.ir_regexp
 
@@ -977,9 +1015,9 @@ def get_value_definition(self, var, match):
             regex = ""  # always capture a number in the default format
             capture_start = "[[#"
         else:
-            regex = self.get_ir_regex_from_ir_value_re_match(match)
+            regex = self.get_ir_regex()
             capture_start = "[["
-        if self.is_local_def_ir_value_match(match):
+        if self.is_local_def_ir_value():
             return capture_start + varname + ":" + prefix + regex + "]]"
         return prefix + capture_start + varname + ":" + regex + "]]"
 
@@ -988,7 +1026,7 @@ def get_value_use(self, var, match, var_prefix=None):
         if var_prefix is None:
             var_prefix = self.check_prefix
         capture_start = "[[#" if self.is_number else "[["
-        if self.is_local_def_ir_value_match(match):
+        if self.is_local_def_ir_value():
             return capture_start + self.get_value_name(var, var_prefix) + "]]"
         prefix = self.get_ir_prefix_from_ir_value_match(match)[0]
         return prefix + capture_start + self.get_value_name(var, var_prefix) + "]]"
@@ -1176,20 +1214,236 @@ def may_clash_with_default_check_prefix_name(check_prefix, var):
     )
 
 
+VARIABLE_TAG = "[[@@]]"
+METAVAR_RE = re.compile(r"\[\[([A-Z0-9_]+)(?::[^]]+)?\]\]")
+NUMERIC_SUFFIX_RE = re.compile(r"[0-9]*$")
+
+
+class CheckValueInfo:
+    def __init__(
+        self,
+        nameless_value: NamelessValue,
+        var: str,
+        prefix: str,
+    ):
+        self.nameless_value = nameless_value
+        self.var = var
+        self.prefix = prefix
+
+
+class CheckLineInfo:
+    def __init__(self, line, values):
+        self.line: str = line
+        self.values: List[CheckValueInfo] = values
+
+    def __repr__(self):
+        return f"CheckLineInfo(line={self.line}, self.values={self.values})"
+
+
+def remap_metavar_names(
+    orig_line_infos: List[CheckLineInfo],
+    new_line_infos: List[CheckLineInfo],
+    committed_names: Set[str],
+) -> Mapping[str, str]:
+    """
+    Map all FileCheck variable names that appear in new_line_infos to new
+    FileCheck variable names in an attempt to reduce the diff from orig_line_infos
+    to new_line_infos.
+    """
+    # Initialize uncommitted identity mappings
+    new_mapping = {}
+    for line in new_line_infos:
+        for value in line.values:
+            new_mapping[value.var] = value.var
+
+    # Recursively commit to the identity mapping or find a better one
+    def recurse(
+        orig_line_infos: List[CheckLineInfo], new_line_infos: List[CheckLineInfo]
+    ):
+        if not new_line_infos or not orig_line_infos:
+            return
+
+        lines = set()
+
+        # Search for lines that are identical on both sides, including meta
+        # variable names, and commit to those names immediately
+        for line in orig_line_infos:
+            key = (line.line.strip(), tuple(value.var for value in line.values))
+            lines.add(key)
+
+        for line in new_line_infos:
+            key = (
+                line.line.strip(),
+                tuple(new_mapping[value.var] for value in line.values),
+            )
+            if key in lines:
+                for value in line.values:
+                    committed_names.add(new_mapping[value.var])
+
+        # Search for lines that are unique on both sides if we only consider
+        # variable names that have been committed.
+        lines = collections.defaultdict(lambda: [None, None])
+        for i, line in enumerate(orig_line_infos):
+            key = (
+                line.line.strip(),
+                tuple(
+                    value.var for value in line.values if value.var in committed_names
+                ),
+            )
+            entry = lines[key]
+            if entry[0] is None:
+                entry[0] = i
+            else:
+                entry[0] = False
+
+        for i, line in enumerate(new_line_infos):
+            key = (
+                line.line.strip(),
+                tuple(
+                    new_mapping[value.var]
+                    for value in line.values
+                    if new_mapping[value.var] in committed_names
+                ),
+            )
+            entry = lines[key]
+            if entry[1] is None:
+                entry[1] = i
+            else:
+                entry[1] = False
+
+        unique_matches = []
+        for entry in lines.values():
+            if (
+                entry[0] is not None
+                and entry[0] is not False
+                and entry[1] is not None
+                and entry[1] is not False
+            ):
+                unique_matches.append((entry[0], entry[1]))
+
+        if not unique_matches:
+            # There are no unique matches. This is the recursion base case.
+            return
+
+        # Compute a maximal crossing-free matching via dynamic programming
+        unique_matches.sort(key=lambda entry: entry[0])
+
+        backlinks = []
+        table = []
+        for _, new_idx in unique_matches:
+            ti = bisect.bisect_left(table, new_idx, key=lambda entry: entry[0])
+            if ti < len(table):
+                table[ti] = (new_idx, len(backlinks))
+            else:
+                table.append((new_idx, len(backlinks)))
+            if ti > 0:
+                backlinks.append(table[ti - 1][1])
+            else:
+                backlinks.append(None)
+
+        # Commit to names in the matching, re-checking compatibility as we go along
+        match_idx = table[-1][1]
+        matches = [(len(orig_line_infos), len(new_line_infos))]
+        while match_idx is not None:
+            orig_idx, new_idx = unique_matches[match_idx]
+            local_commits = {}
+
+            for orig_value, new_value in zip(
+                orig_line_infos[orig_idx].values, new_line_infos[new_idx].values
+            ):
+                if new_mapping[new_value.var] in committed_names:
+                    # The new value has already been committed by a previous match we considered
+                    # during the outer loop. If it was mapped to the same name as the original value,
+                    # we can consider committing other values from this line. Otherwise, we should
+                    # ignore this line.
+                    if new_mapping[new_value.var] == orig_value.var:
+                        continue
+                    else:
+                        break
+
+                if new_value.var in local_commits:
+                    # Same, but for a possible commit happening on the same line
+                    if local_commits[new_value.var] == orig_value.var:
+                        continue
+                    else:
+                        break
+
+                if orig_value.var in committed_names:
+                    # We can't map this value because the name we would map it to has already been
+                    # committed for something else. Give up on this line.
+                    break
+
+                local_commits[new_value.var] = orig_value.var
+            else:
+                # No reason not to add any commitments for this line
+                for new_var, orig_var in local_commits.items():
+                    new_mapping[new_var] = orig_var
+                    committed_names.add(orig_var)
+
+                    if (
+                        orig_var != new_var
+                        and orig_var in new_mapping
+                        and new_mapping[orig_var] == orig_var
+                    ):
+                        new_mapping[orig_var] = "conflict_" + orig_var
+
+                matches.append((orig_idx, new_idx))
+
+            match_idx = backlinks[match_idx]
+
+        # Recurse
+        prev_orig_idx = -1
+        prev_new_idx = -1
+        for orig_idx, new_idx in reversed(matches):
+            recurse(
+                orig_line_infos[prev_orig_idx + 1 : orig_idx],
+                new_line_infos[prev_new_idx + 1 : new_idx],
+            )
+            prev_orig_idx = orig_idx
+            prev_new_idx = new_idx
+
+    recurse(orig_line_infos, new_line_infos)
+
+    # Commit to remaining names and resolve conflicts
+    for new_name, mapped_name in new_mapping.items():
+        if mapped_name in committed_names:
+            continue
+        if not mapped_name.startswith("conflict_"):
+            assert mapped_name == new_name
+            committed_names.add(mapped_name)
+
+    for new_name, mapped_name in new_mapping.items():
+        if mapped_name in committed_names:
+            continue
+        assert mapped_name.startswith("conflict_")
+
+        m = NUMERIC_SUFFIX_RE.search(new_name)
+        base_name = new_name[: m.span()[0]]
+        suffix = int(new_name[m.span()[0] :]) if m.span()[0] != m.span()[1] else 1
+        while True:
+            candidate = f"{base_name}{suffix}"
+            if candidate not in committed_names:
+                new_mapping[new_name] = candidate
+                break
+            suffix += 1
+
+    return new_mapping
+
 def generalize_check_lines_common(
     lines,
     is_analyze,
     vars_seen,
     global_vars_seen,
     nameless_values,
-    nameless_value_regex,
+    nameless_value_regex: re.Pattern,
     is_asm,
     preserve_names,
+    original_check_lines: List[str] | None = None,
 ):
     # This gets called for each match that occurs in
     # a line. We transform variables we haven't seen
     # into defs, and variables we have seen into uses.
-    def transform_line_vars(match):
+    def transform_line_vars(match, transform_locals=True):
         var = get_name_from_ir_value_match(match)
         nameless_value = get_nameless_value_from_match(match, nameless_values)
         if may_clash_with_default_check_prefix_name(nameless_value.check_prefix, var):
@@ -1198,7 +1452,9 @@ def transform_line_vars(match):
                 " with scripted FileCheck name." % (var,)
             )
         key = (var, nameless_value.check_key)
-        is_local_def = nameless_value.is_local_def_ir_value_match(match)
+        is_local_def = nameless_value.is_local_def_ir_value()
+        if is_local_def and not transform_locals:
+            return None
         if is_local_def and key in vars_seen:
             rv = nameless_value.get_value_use(var, match)
         elif not is_local_def and key in global_vars_seen:
@@ -1217,13 +1473,15 @@ def transform_line_vars(match):
         # including the commas and spaces.
         return match.group(1) + rv + match.group(match.lastindex)
 
-    lines_with_def = []
+    def transform_non_local_line_vars(match):
+        return transform_line_vars(match, False)
+
     multiple_braces_re = re.compile(r"({{+)|(}}+)")
     def escape_braces(match_obj):
         return '{{' + re.escape(match_obj.group(0)) + '}}'
 
-    for i, line in enumerate(lines):
-        if not is_asm and not is_analyze:
+    if not is_asm and not is_analyze:
+        for i, line in enumerate(lines):
             # An IR variable named '%.' matches the FileCheck regex string.
             line = line.replace("%.", "%dot")
             for regex in _global_hex_value_regex:
@@ -1241,25 +1499,136 @@ def escape_braces(match_obj):
             # Ignore any comments, since the check lines will too.
             scrubbed_line = SCRUB_IR_COMMENT_RE.sub(r"", line)
             lines[i] = scrubbed_line
-        if not preserve_names:
-            # It can happen that two matches are back-to-back and for some reason sub
-            # will not replace both of them. For now we work around this by
-            # substituting until there is no more match.
-            changed = True
-            while changed:
-                (lines[i], changed) = nameless_value_regex.subn(
-                    transform_line_vars, lines[i], count=1
-                )
-        if is_analyze:
+
+    if not preserve_names:
+        if is_asm:
+            for i, _ in enumerate(lines):
+                # It can happen that two matches are back-to-back and for some reason sub
+                # will not replace both of them. For now we work around this by
+                # substituting until there is no more match.
+                changed = True
+                while changed:
+                    (li...
[truncated]

``````````

</details>


https://github.com/llvm/llvm-project/pull/76748

