[llvm] [MachineScheduler] Convert some of the debug prints into using LDBG. NFC (PR #161997)
LLVM Continuous Integration via llvm-commits
llvm-commits at lists.llvm.org
Sat Oct 4 23:32:10 PDT 2025
llvm-ci wrote:
LLVM Buildbot has detected a new failure on builder `mlir-nvidia` running on `mlir-nvidia` while building `llvm` at step 7 "test-build-check-mlir-build-only-check-mlir".
Full details are available at: https://lab.llvm.org/buildbot/#/builders/138/builds/19987
<details>
<summary>Here is the relevant piece of the build log for reference</summary>
```
Step 7 (test-build-check-mlir-build-only-check-mlir) failure: test (failure)
******************** TEST 'MLIR :: Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir' FAILED ********************
Exit Code: 2
Command Output (stdout):
--
# RUN: at line 1
/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt -gpu-lower-to-nvvm-pipeline="cubin-chip=sm_70 cubin-format=fatbin" | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-runner --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_cuda_runtime.so --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_runner_utils.so --entry-point-result=void | /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/FileCheck /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-opt '-gpu-lower-to-nvvm-pipeline=cubin-chip=sm_70 cubin-format=fatbin'
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-runner --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_cuda_runtime.so --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_runner_utils.so --entry-point-result=void
# .---command stderr------------
# | 'cuDevicePrimaryCtxRetain(&ctx, getDefaultCuDevice())' failed with 'CUDA_ERROR_INVALID_VALUE'
# | PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace and instructions to reproduce the bug.
# | Stack dump:
# | 0. Program arguments: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/mlir-runner --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_cuda_runtime.so --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib/libmlir_runner_utils.so --entry-point-result=void
# | Stack dump without symbol names (ensure you have llvm-symbolizer in your PATH or set the environment var `LLVM_SYMBOLIZER_PATH` to point to it):
# | 0 libLLVMSupport.so.22.0git 0x00007c36e06381e7 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) + 39
# | 1 libLLVMSupport.so.22.0git 0x00007c36e06358e5 llvm::sys::RunSignalHandlers() + 293
# | 2 libLLVMSupport.so.22.0git 0x00007c36e0638f95
# | 3 libc.so.6 0x00007c36dfed3520
# | 4 libcuda.so.1 0x00007c36d96837eb
# | 5 libmlir_cuda_runtime.so 0x00007c36db064e10
# | 6 libmlir_cuda_runtime.so 0x00007c36db064d12 mgpuModuleLoad + 18
# | 7 (error) 0x00007c36e03ef01c
# | 8 (error) 0x00007c36e03ef36d
# | 9 libLLVMOrcJIT.so.22.0git 0x00007c36eb2061a7
# | 10 libMLIRExecutionEngine.so.22.0git 0x00007c36eb38ea6e
# | 11 libMLIRExecutionEngine.so.22.0git 0x00007c36eb38e8e5 mlir::ExecutionEngine::initialize() + 53
# | 12 libMLIRJitRunner.so.22.0git 0x00007c36eb41de8c
# | 13 libMLIRJitRunner.so.22.0git 0x00007c36eb41a8fd
# | 14 libMLIRJitRunner.so.22.0git 0x00007c36eb418c10 mlir::JitRunnerMain(int, char**, mlir::DialectRegistry const&, mlir::JitRunnerConfig) + 4928
# | 15 mlir-runner 0x00005b21269de3f8 main + 296
# | 16 libc.so.6 0x00007c36dfebad90
# | 17 libc.so.6 0x00007c36dfebae40 __libc_start_main + 128
# | 18 mlir-runner 0x00005b21269ddeb5 _start + 37
# `-----------------------------
# error: command failed with exit status: -11
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/FileCheck /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir
# .---command stderr------------
# | FileCheck error: '<stdin>' is empty.
# | FileCheck command line: /vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/bin/FileCheck /vol/worker/mlir-nvidia/mlir-nvidia/llvm.src/mlir/test/Integration/GPU/CUDA/TensorCore/wmma-matmul-f32.mlir
# `-----------------------------
# error: command failed with exit status: 2
--
********************
```
</details>
https://github.com/llvm/llvm-project/pull/161997