[llvm-dev] Best way to run LLVM to debug optimized binaries

Greg Stark via llvm-dev llvm-dev at lists.llvm.org
Sat Sep 12 04:44:25 PDT 2015


It seems like I get "<optimized out>" back from gdb far more often when
running on clang -O2 binaries than on gcc -O2 ones. So much more often
that I suspect the variables aren't actually optimized out, but that
I'm missing some key flag to include some key piece of debugging info.
I find it hard to believe that "t" here has been optimized out entirely,
given that it's a function parameter and a pointer to a complex
structure. The function is static, so it may well have been inlined,
but if the assert is failing it's hard to see how the structure could
have been optimized out entirely.

#2  0x00000000010ac7c7 in cdissect (v=<optimized out>, t=<optimized
out>, begin=<optimized out>, end=<optimized out>) at regexec.c:653
653 assert(er != REG_NOMATCH || (t->flags & BACKR));
(gdb) p er
$1 = <optimized out>
(gdb) p t
$2 = <optimized out>

Is there some flag I should be passing to clang to get binaries that
perform reasonably fast but are still debuggable? And how can I tell
why this particular variable is showing as optimized out?

-- 
greg

