[cfe-dev] Determining macros used in a function or SourceRange (using clang plugin)

Eric Bayer via cfe-dev cfe-dev at lists.llvm.org
Thu Sep 29 20:44:40 PDT 2016


Just in case anyone ever needs to get the relevant macro text from
either an ExpansionInfo or a MacroInfo, so that the macro can be
emitted in its entirety, I've included the code for the approach I
worked out below.  I didn't find an easier way (though maybe I missed
something), and it was fairly involved.  For the rest of the detection
I switched to using the Preprocessor's PPCallbacks to catch all of the
macro expansions in groups, using the expansion range to know which
ones belong together.  Everything appears to be working.
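
For reference, the PPCallbacks side can be sketched roughly as follows.
This is only a minimal illustration of the approach; the class name
MacroUseCollector and the choice to stash spelling locations in a vector
are placeholders, not the actual plugin code.

#include "clang/Lex/PPCallbacks.h"
#include "clang/Lex/Preprocessor.h"
#include "clang/Basic/SourceManager.h"
#include <vector>

// Records where each macro expansion happens so a later pass can check
// whether it falls inside a given function's SourceRange.
class MacroUseCollector : public clang::PPCallbacks {
public:
  explicit MacroUseCollector(clang::SourceManager &SM) : SM(SM) {}

  // Called for every macro expansion the preprocessor performs.
  void MacroExpands(const clang::Token &MacroNameTok,
                    const clang::MacroDefinition &MD,
                    clang::SourceRange Range,
                    const clang::MacroArgs *Args) override {
    Uses.push_back(SM.getSpellingLoc(MacroNameTok.getLocation()));
  }

  std::vector<clang::SourceLocation> Uses;

private:
  clang::SourceManager &SM;
};

An instance gets registered with Preprocessor::addPPCallbacks() before the
compile runs, and the recorded locations can then be compared against each
function's SourceRange.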

Thanks to everyone for all their help!
   -Eric


typedef std::pair<clang::SourceRange, clang::SourceRange> MacroDefinitionPair;

MacroDefinitionPair getMacroDefinitionPair(const SourceLocation DefBodyLoc,
                                           const SourceManager &SM,
                                           const LangOptions &LangOpts) {
  MacroDefinitionPair Ranges;

  std::pair<FileID, unsigned> cur_info = SM.getDecomposedLoc(DefBodyLoc);
  bool invalid = false;
  StringRef buf = SM.getBufferData(cur_info.first, &invalid);

  if (invalid) return Ranges;

  // Get the point in the buffer
  const char *buf_start = buf.data();
  const char *orig_point = buf_start + cur_info.second;
  const char *point = orig_point;

  // Find the point where this #define begins
  while (1) {

    // Search backwards until a newline
    while (point > buf_start) {
      if (*(point - 1) == '\n') break;
      point--;
    }

    // Make a lexer, point it at our buffer and offset, and ignore comments.
    Lexer lexer(SM.getLocForStartOfFile(cur_info.first), LangOpts,
                buf_start, point, buf.end());
    lexer.SetCommentRetentionState(false);

    // Lex the first two tokens
    Token tok_h, tok_define;
    lexer.LexFromRawLexer(tok_h);
    lexer.LexFromRawLexer(tok_define);

    // If we match the beginning of a #define, then we are done
    if (tok_h.is(tok::hash) && tok_define.is(tok::raw_identifier) &&
        tok_define.getRawIdentifier() == "define") {

      // Get the name token (this skips over the leading space)
      Token tok_name;
      lexer.LexFromRawLexer(tok_name);

      // The range starts at the name token and runs through one character
      // before the body.
      Ranges.first.setBegin(tok_name.getLocation());
      Ranges.first.setEnd(DefBodyLoc.getLocWithOffset(-1));

      // Done processing, move on.
      break;
    }

    // Back up one more character and keep looking
    point--;

    // If we can't find the beginning, return null ranges to represent
    // an invalid starting point.
    if (point < buf_start) return Ranges;
  }

  // Make a lexer and point it at our buffer and offset
  Lexer lexer(SM.getLocForStartOfFile(cur_info.first), LangOpts,
              buf.begin(), orig_point, buf.end());
  lexer.SetCommentRetentionState(false);

  // Intermediate variables
  Token tok;
  SourceLocation EndBodyLoc;

  // Advance one token, because when the lexer starts it assumes that we
  // are at the start of a line.
  lexer.LexFromRawLexer(tok);

  // Read tokens until we find the next start-of-line token.
  while (1) {
    lexer.LexFromRawLexer(tok);

    // If we hit end-of-file or a start-of-line token, stop advancing
    if (tok.is(tok::eof) ||
        (tok.getFlags() & Token::TokenFlags::StartOfLine)) {
      break;
    }

    // Cache the last token's final location
    EndBodyLoc = tok.getEndLoc().getLocWithOffset(-1);
  }

  // Mark the end of the macro body
  Ranges.second.setBegin(DefBodyLoc);
  Ranges.second.setEnd(EndBodyLoc);

  // Return the ranges
  return Ranges;
}
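
As a rough usage example (only a sketch: it assumes DefBodyLoc is meant to
be the location of the first token of the macro's replacement list, the
helper name dumpMacroText is made up for illustration, and the same clang
headers/using declarations as the snippet above are in scope):

void dumpMacroText(const MacroInfo *MI, const SourceManager &SM,
                   const LangOptions &LangOpts) {
  if (MI->getNumTokens() == 0)
    return;  // empty replacement list; nothing to recover

  SourceLocation DefBodyLoc = MI->tokens_begin()->getLocation();
  MacroDefinitionPair Ranges = getMacroDefinitionPair(DefBodyLoc, SM, LangOpts);
  if (Ranges.second.isInvalid())
    return;

  // The end location points at the last character of the body, so widen it
  // by one to form a character range for getSourceText().
  StringRef BodyText = Lexer::getSourceText(
      CharSourceRange::getCharRange(Ranges.second.getBegin(),
                                    Ranges.second.getEnd().getLocWithOffset(1)),
      SM, LangOpts);
  llvm::errs() << "macro body: " << BodyText << "\n";
}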

