[llvm] [ScheduleDAG] Fix and simplify the algorithm used for finding callseq_start (PR #149692)

Matt Arsenault via llvm-commits llvm-commits at lists.llvm.org
Fri Jul 25 06:46:42 PDT 2025


================
@@ -0,0 +1,31 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc < %s -mtriple=m68k-linux | FileCheck %s
+
+define double @minimized() nounwind {
+; CHECK-LABEL: minimized:
+; CHECK:       ; %bb.0: ; %start
+; CHECK-NEXT:    suba.l #20, %sp
+; CHECK-NEXT:    movem.l %a2, (16,%sp) ; 8-byte Folded Spill
+; CHECK-NEXT:    move.l #0, (4,%sp)
+; CHECK-NEXT:    move.l #0, (%sp)
+; CHECK-NEXT:    jsr $0
+; CHECK-NEXT:    suba.l %a2, %a2
+; CHECK-NEXT:    move.l (%a2), (%sp)
+; CHECK-NEXT:    move.l #0, (12,%sp)
+; CHECK-NEXT:    move.l #0, (8,%sp)
+; CHECK-NEXT:    move.l $4, (4,%sp)
+; CHECK-NEXT:    jsr __muldf3
+; CHECK-NEXT:    move.l %d0, (%a2)
+; CHECK-NEXT:    move.l %d1, $4
+; CHECK-NEXT:    move.l %a2, %d0
+; CHECK-NEXT:    move.l %a2, %d1
+; CHECK-NEXT:    movem.l (16,%sp), %a2 ; 8-byte Folded Reload
+; CHECK-NEXT:    adda.l #20, %sp
+; CHECK-NEXT:    rts
+start:
+  %_58 = call double null(double 0.000000e+00)
+  %_60 = load double, ptr null, align 8
+  %_57 = fmul double 0.000000e+00, %_60
+  store double %_57, ptr null, align 8
----------------
arsenm wrote:

Prefer avoiding the UB load/store to null.
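
For reference, a minimal sketch of how the test body could avoid the UB accesses, assuming the pointer is threaded in as an argument and the indirect call target is a declared external function (the names %p and @callee, and the trailing ret, are illustrative and not necessarily the revision that landed in the PR); the CHECK lines would then need regenerating with update_llc_test_checks.py:

define double @minimized(ptr %p) nounwind {
start:
  ; call a declared external callee instead of a null function pointer (assumed name)
  %_58 = call double @callee(double 0.000000e+00)
  ; load/store through the incoming pointer rather than ptr null
  %_60 = load double, ptr %p, align 8
  %_57 = fmul double 0.000000e+00, %_60
  store double %_57, ptr %p, align 8
  ret double %_57
}

declare double @callee(double)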

https://github.com/llvm/llvm-project/pull/149692

