<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/115578">115578</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
[lit] Lit command pipelines are broken when the builtin implementation of `env` is used without a subcommand
</td>
</tr>
<tr>
<th>Labels</th>
<td>
new issue
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
tahonermann
</td>
</tr>
</table>
<pre>
LLVM's `lit` utility includes a builtin implementation of the `env` command in `llvm/utils/lit/lit/TestRunner.py`; see [here](https://github.com/llvm/llvm-project/blob/26a9f3f5906c62cff7f2245b98affa432b504a87/llvm/utils/lit/lit/TestRunner.py#L734-L754). When the `env` command is used without a subcommand argument, the behavior is to print the (possibly augmented) environment to stdout. Lit's builtin implementation does this, but further processing of the command pipeline is then skipped.
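The problematic control flow can be paraphrased with a hypothetical Python sketch (illustrative only; the names and structure below are not the actual TestRunner.py code): when the builtin `env` has no subcommand, it emits the environment and then stops, so later pipeline stages never run.
```python
# Hypothetical sketch of the problematic control flow (illustrative only;
# not the actual TestRunner.py code). The early `return` is the problem:
# once the builtin `env` with no subcommand dumps the environment, the
# remaining pipeline stages (e.g. FileCheck) are never executed.
import os
import subprocess

def run_pipeline(stages):
    """Run shell-like pipeline stages, piping each stage's stdout to the next."""
    data = b""
    for stage in stages:
        cmd, *args = stage
        if cmd == "env" and not args:
            data = "".join(f"{k}={v}\n" for k, v in os.environ.items()).encode()
            return data  # <-- skips the rest of the pipeline
        # Later stages receive the previous stage's output on stdin.
        data = subprocess.run(stage, input=data, capture_output=True).stdout
    return data

# The second stage never runs, so nothing ever validates the env dump.
print(run_pipeline([["env"], ["grep", "^PATH="]]).decode(), end="")
```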
The existing `llvm/utils/lit/tests/shtest-env-positive.py` test (in conjunction with `llvm/utils/lit/tests/Inputs/shtest-env-positive/env-no-subcommand.txt`) appears to exercise this situation, but the test actually passes spuriously. This can be demonstrated in several ways: first, by observing that "executed command" log output is not generated for commands that follow the `env` command in a pipeline, and second, by augmenting the test in a way that should cause it to fail, but doesn't.
To demonstrate the issue, here first is an invocation of the `llvm/utils/lit/tests/shtest-env-positive.py` test (on Linux) showing that "executed command" log output appears for each command in the pipeline when `env` is given a subcommand. Note that the command pipeline to run in this case is of the form `env ... python ... | FileCheck ...`, and the log output reflects both commands from the pipeline.
```
$ llvm-lit -a llvm/utils/lit/tests/shtest-env-positive.py
...
# RUN: at line 3
env -u FILECHECK_OPTS "/usr/bin/python3.9" /iusers/thonerma/llvm-project/llvm/utils/lit/lit.py -j1 --order=lexical -a -v Inputs/shtest-env-positive | FileCheck -match-full-lines /localdisk2/thonerma/build-llvm-project-main-Release-On/utils/lit/tests/shtest-env-positive.py
# executed command: env -u FILECHECK_OPTS /usr/bin/python3.9 /iusers/thonerma/llvm-project/llvm/utils/lit/lit.py -j1 --order=lexical -a -v Inputs/shtest-env-positive
# executed command: FileCheck -match-full-lines /localdisk2/thonerma/build-llvm-project-main-Release-On/utils/lit/tests/shtest-env-positive.py
...
Total Discovered Tests: 1
Passed: 1 (100.00%)
```
The following demonstrates an independent run of the `llvm/utils/lit/tests/Inputs/shtest-env-positive/env-no-subcommand.txt` test. Note that the output does not include "executed command" logs for the `FileCheck` portions of the pipelines in that test. The test exercises five distinct scenarios, but I've only included output from the first. Note that the command pipeline to run in this case is of the form `env | FileCheck ...`, but the log output only reflects the `env` command from the pipeline; the `FileCheck` command is skipped.
```
$ llvm-lit -a llvm/utils/lit/tests/Inputs/shtest-env-positive/env-no-subcommand.txt
...
# RUN: at line 4
env | FileCheck -check-prefix=NO-ARGS /localdisk2/thonerma/llvm-project/llvm/utils/lit/tests/Inputs/shtest-env-positive/env-no-subcommand.txt
# executed command: env
# .---command stdout------------
...
# `-----------------------------
# RUN: at line 11
...
```
The following demonstrates that a modification to the test that should cause it to fail does not do so. Apply the following diff (or make similar changes) and observe that the test still passes.
```
$ git diff
diff --git a/llvm/utils/lit/tests/Inputs/shtest-env-positive/env-no-subcommand.txt b/llvm/utils/lit/tests/Inputs/shtest-env-positive/env-no-subcommand.txt
index 761a8061a0b0..f17eeb01c2c2 100644
--- a/llvm/utils/lit/tests/Inputs/shtest-env-positive/env-no-subcommand.txt
+++ b/llvm/utils/lit/tests/Inputs/shtest-env-positive/env-no-subcommand.txt
@@ -3,9 +3,9 @@
## Check default environment.
# RUN: env | FileCheck -check-prefix=NO-ARGS %s
#
-# NO-ARGS: BAR=2
-# NO-ARGS: FOO=1
-# NO-ARGS: QUX=3
+# NO-ARGS: BAR=This can be anything!
+# NO-ARGS: FOO=So can this!
+# NO-ARGS: QUX=And this!
## Set environment variables.
# RUN: env FOO=2 BAR=1 | FileCheck -check-prefix=SET-VAL %s
$ llvm-lit llvm/utils/lit/tests/Inputs/shtest-env-positive/env-no-subcommand.txt
-- Testing: 1 tests, 1 workers --
PASS: shtest-env :: env-no-subcommand.txt (1 of 1)
Testing Time: 0.01s
Total Discovered Tests: 1
Passed: 1 (100.00%)
```
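For comparison, a real shell pipeline does not short-circuit when `env` has no subcommand: the environment dump flows into the next command, which is what the checks above expect `FileCheck` to receive. A quick, hypothetical way to observe this outside of lit (assumes a POSIX shell with the external `env` and `grep` available; not part of the lit test suite):
```python
# Hypothetical demonstration: in a real shell, the stage after `env`
# still runs and receives the environment dump on stdin.
import subprocess

result = subprocess.run(
    "env | grep '^PATH='",  # external env feeds grep, as FileCheck should be fed in lit
    shell=True,
    capture_output=True,
    text=True,
)
print(result.stdout, end="")  # only the PATH line: proof the second stage ran
```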
</pre>