[PATCH] D17066: libLTO: add a ThinLTOCodeGenerator on the model of LTOCodeGenerator.

Mehdi AMINI via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 8 16:08:26 PST 2016


joker.eph added a comment.

Thanks for the great review. Hopefully I didn't forget anything with this update.


================
Comment at: include/llvm/LTO/ThinLTOCodeGenerator.h:47
@@ +46,3 @@
+/// Cache behavior controls.
+struct CachingOptions {
+  std::string Path;
----------------
tejohnson wrote:
> joker.eph wrote:
> > tejohnson wrote:
> > > It just wasn't clear how they interact, although your explanation was what I guessed. Maybe for the thinlto_codegen_set_cache_entry_expiration interface, add a note that the module may be removed earlier due to the pruning interval.
> > Now I'm unsure it was clear enough, because you wrote "the module may be removed earlier due to the pruning interval". A cache entry *can't* be removed earlier. The pruning interval only determines how often the entries are *checked*.
> Ah, so a clarifying comment would help then since I misunderstood. It sounds like the modules will be checked for expiration (and pruned from the cache) at the pruning interval. So with the default pruning interval of -1, the expiration is unused? I took these to be separate, complementary mechanisms for keeping the cache size down. Another option beyond just documenting the interactions well is to combine these parameters into a single interface that takes both the pruning interval and the expiration.
Tried to make the doc very explicit, let me know what you think.

================
Comment at: lib/LTO/LTOModule.cpp:80
@@ +79,3 @@
+  // Right now the detection is only based on the summary presence. We may want
+  // to add a dedicated flag at some point.
+  return hasFunctionSummary(IRFile->getMemoryBufferRef(),
----------------
tejohnson wrote:
> Also for future consider caching value unless it is expected to be called only once per module.
It is indeed supposed to be called only once. We can update the implementation in the future if needed.

================
Comment at: lib/LTO/ThinLTOCodeGenerator.cpp:66
@@ +65,3 @@
+  } else {
+    ModuleOrErr = parseBitcodeFile(Buffer, Context);
+  }
----------------
tejohnson wrote:
> When lazy metadata linking is enabled, metadata parsing should be postponed until we actually go to do the import and linkInModule during the bulk importing. Since the source module is destroyed at the end of linkInModule, this should reduce the max amount of live metadata to one module at a time, rather than all modules we are importing from. I'm surprised it didn't reduce the max memory usage in a debug build.
> 
> If we ever go back to post-pass metadata linking in the function importer (i.e. if we decide to do iterative importing rather than a single bulk import from each module), this will become more critical. However, if we move to summary-only importing decisions it will obviate the need to do this.
> 
> Until we go to summary only import decisions, I would think it should reduce the max memory usage as per my first paragraph. Did you see a cost going with lazy metadata loading?
So just tested:

With lazy loading of metadata:    getLazyBitcodeModule takes 237ms and materializeMetadata takes 74ms (total 314ms)
Without lazy loading of metadata: getLazyBitcodeModule takes 316ms

So no perf diff.

I probably don't see any difference in memory because most metadata will leak into the context and only be freed with the Module. So there is no real impact on peak memory (it is just delayed a little).

================
Comment at: lib/LTO/ThinLTOCodeGenerator.cpp:378
@@ +377,2 @@
+    llvm::PrintStatistics();
+}
----------------
slarin wrote:
> joker.eph wrote:
> > tejohnson wrote:
> > > Unfortunately this means that ThinLTOCodeGenerator::run() is currently untested. Consider adding a mode to llvm-lto that does all thinlto steps (thin-action=all?) to test all the steps via this interface.
> > OK.
> +1 on Teresa's point about llvm-lto.
Done, it was valuable :)


http://reviews.llvm.org/D17066
