[PATCH] D113857: [llvm-reduce] Add parallel chunk processing.

Florian Hahn via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Sun Nov 14 11:17:19 PST 2021


fhahn created this revision.
fhahn added reviewers: aeubanks, dblaikie, lebedev.ri, Meinersbur.
fhahn requested review of this revision.
Herald added a project: LLVM.

This patch adds parallel processing of chunks. When reducing very large
inputs, e.g. functions with 500k basic blocks, processing chunks in
parallel can significantly speed up the reduction.

To allow modifying clones of the original module in parallel, each clone
needs its own LLVMContext object. To achieve this, each job parses the
input module with its own LLVMContext. If a job successfully reduces the
input, it serializes the resulting module as bitcode into a result
array.
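The per-job isolation described above can be sketched as follows. This is an illustrative Python model, not the patch's C++: a list of strings stands in for a module's basic blocks, a private copy stands in for parsing into a job-private LLVMContext, and `interesting` is a hypothetical interestingness oracle.

```python
from concurrent.futures import ThreadPoolExecutor

def interesting(blocks):
    # Hypothetical interestingness test: the "bug" needs block "b2".
    return "b2" in blocks

def run_job(input_blocks, chunk_begin, chunk_end):
    # Each job works on a private clone of the input, standing in for
    # re-parsing the module into the job's own LLVMContext.
    module = list(input_blocks)
    # Attempt the reduction: drop the blocks in this chunk.
    reduced = module[:chunk_begin] + module[chunk_end:]
    if not interesting(reduced):
        return None                     # reduction failed for this chunk
    return tuple(reduced)               # stand-in for serialized bitcode

blocks = ["b0", "b1", "b2", "b3"]
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(run_job, blocks, i, i + 1)
               for i in range(len(blocks))]
    results = [f.result() for f in futures]
```

Because every job only reads the shared input and mutates its own clone, the jobs can run concurrently without synchronizing on a shared module.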

To ensure parallel reduction produces the same results as serial
reduction, only the first successfully reduced result is used, and
results of other successful jobs are dropped. Processing resumes after
the chunk that was successfully reduced.
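The ordering rule above can be sketched as: walk the per-chunk results in chunk order, keep the first success, drop the rest, and resume after that chunk. A minimal Python model (function and variable names are illustrative, not from the patch):

```python
def pick_first_success(results, start_index):
    """Given per-chunk job results (None = failed reduction), return the
    chosen result and the chunk index to resume from. Only the first
    successful result is used, so parallel runs match serial runs."""
    for i in range(start_index, len(results)):
        if results[i] is not None:
            # Results of later successful jobs are dropped; processing
            # resumes after the chunk that was successfully reduced.
            return results[i], i + 1
    return None, len(results)

# Example: jobs 0 and 2 failed, jobs 1 and 3 succeeded; only job 1's
# result is kept, and processing resumes at chunk 2.
results = [None, "reduced-after-chunk-1", None, "reduced-after-chunk-3"]
module, resume = pick_first_success(results, 0)
```

Discarding later successes wastes some work, but it is what makes the parallel reduction deterministic and equivalent to the serial one.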

The number of threads to use can be configured using the -max-chunk-threads
option. It defaults to 1, which means serial processing.


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D113857

Files:
  llvm/test/tools/llvm-reduce/operands-skip.ll
  llvm/tools/llvm-reduce/deltas/Delta.cpp

-------------- next part --------------
A non-text attachment was scrubbed...
Name: D113857.387118.patch
Type: text/x-patch
Size: 7544 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20211114/2bbc3463/attachment.bin>

