[LLVMdev] Old DOUT

Chris Lattner clattner at apple.com
Fri Dec 11 18:05:35 PST 2009


On Dec 11, 2009, at 9:44 AM, David Greene wrote:

> On Friday 11 December 2009 11:35, Chris Lattner wrote:
>
>>> I'm not sure what you mean here.  It's not ok to convert code under
>>> DEBUG() or #ifndef NDEBUG to use dbgs()?
>> 
>> Right.
>> 
>>> Then what's the point of providing it?
>> 
>> I don't know what dbgs does, so I don't know!
> 
> dbgs() will be a circular-buffering raw_ostream, meaning it saves
> the last N bytes (N == unlimited by default) and displays the
> output at program termination if requested.  By default output
> gets generated immediately, just like errs().
> 
> I will add a flag, -debug-buffer-size=N, to set the buffer size and turn
> on delayed output.  This is super useful when trying to debug
> very large code bases.  I have had debug output consume GBs of disk space.
> This avoids that problem, but it only works if all current debug
> output goes to the new stream.
> 
> As I said, by default there is no change in behavior.  dbgs() works
> very similarly to formatted_raw_ostream in that it uses errs()
> underneath to do the actual output, and it only does the circular buffering
> and delayed output if requested.

It seems like there are better ways to handle this (i.e. split the input into smaller chunks), but I'm not opposed to the general idea.
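
For illustration, here is a minimal standalone C++ sketch of the circular-buffering idea described above: keep only the last N bytes of debug output and dump them at exit.  The class and method names are invented for this example; this is not the actual raw_ostream or circular_raw_ostream interface.

  #include <cstdio>
  #include <string>
  #include <vector>

  class CircularDebugStream {
    std::vector<char> Buffer;   // fixed-size ring buffer holding the last N bytes
    size_t Head = 0;            // next write position
    bool Wrapped = false;       // true once old data has been overwritten

  public:
    explicit CircularDebugStream(size_t Size) : Buffer(Size) {}

    // Append bytes, silently discarding the oldest data once the buffer is full.
    void write(const char *Ptr, size_t Len) {
      for (size_t I = 0; I != Len; ++I) {
        Buffer[Head] = Ptr[I];
        Head = (Head + 1) % Buffer.size();
        if (Head == 0)
          Wrapped = true;
      }
    }

    CircularDebugStream &operator<<(const std::string &S) {
      write(S.data(), S.size());
      return *this;
    }

    // Emit the retained bytes, oldest first, e.g. at program termination.
    void flushToStderr() const {
      if (Wrapped)
        std::fwrite(Buffer.data() + Head, 1, Buffer.size() - Head, stderr);
      std::fwrite(Buffer.data(), 1, Head, stderr);
    }
  };

  int main() {
    CircularDebugStream Dbg(64);            // keep only the last 64 bytes
    for (int I = 0; I < 1000; ++I)
      Dbg << "debug line " << std::to_string(I) << "\n";
    Dbg.flushToStderr();                    // prints only the tail of the output
  }

With a flag like the proposed -debug-buffer-size=N, this kind of stream caps retained debug output at N bytes no matter how much the compiler emits; by default it would just forward everything immediately, as errs() does today.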

>> The problem is that things like this: 
>> 
>> DOUT << foo();
>> 
>> evaluate foo even when assertions are disabled.  This is bad, and bringing
>> it back with a new name is not good.
> 
> That's not what I'm proposing.
> 
> DOUT << foo();
> 
> is broken.  It should have been written as:
> 
> DEBUG(DOUT << foo());
> 
> Today we use:
> 
> DEBUG(errs() << foo());
> 
> With dbgs() it will be:
> 
> DEBUG(dbgs() << foo());
> 
> The only functional difference is the ability to use circular buffering.

Ok, I'm fine with that.
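
For reference, a simplified sketch of why the DEBUG(...) wrapping matters: the macro's argument is compiled out (or guarded at run time) rather than always evaluated, so an expensive foo() call costs nothing in release builds.  This mirrors the shape of the DEBUG macro but is not its exact definition.

  #include <iostream>

  static bool DebugFlag = false;   // stand-in for the -debug command-line option

  #ifdef NDEBUG
  #define DEBUG(X) do { } while (false)          // release build: X is never evaluated
  #else
  #define DEBUG(X) do { if (DebugFlag) { X; } } while (false)
  #endif

  static int foo() {
    std::cout << "foo() evaluated\n";            // stands in for an expensive computation
    return 42;
  }

  int main() {
    // Nothing printed: foo() is not evaluated while debugging is off.
    DEBUG(std::cerr << "value = " << foo() << "\n");

    DebugFlag = true;
    // Now both lines print: the macro body runs, including the call to foo().
    DEBUG(std::cerr << "value = " << foo() << "\n");
  }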

-Chris




