[llvm-reduce] Add parallel chunk processing.
This patch adds parallel processing of chunks. When reducing very large inputs, e.g. functions with 500k basic blocks, processing chunks in parallel can significantly speed up the reduction.

To allow modifying clones of the original module in parallel, each clone needs its own LLVMContext object. To achieve this, each job parses the input module with its own LLVMContext. If a job successfully reduces the input, it serializes the resulting module as bitcode into a result array.

To ensure parallel reduction produces the same results as serial reduction, only the first successfully reduced result is used, and the results of the other successful jobs are dropped. Processing resumes after the chunk that was successfully reduced.

The number of threads to use can be configured using the -j option. It defaults to 1, which means serial processing.

Reviewed By: Meinersbur

Differential Revision: https://reviews.llvm.org/D113857
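The first-success-wins selection described above can be sketched as follows. This is a simplified, hypothetical illustration, not the patch's actual code: real jobs would clone and reduce an LLVM module in a per-job LLVMContext and serialize bitcode, whereas here each job is stubbed by a flag saying whether it "succeeded", and its result is a placeholder string.

```cpp
#include <optional>
#include <string>
#include <thread>
#include <vector>

// Run one reduction attempt per entry of Succeeds in parallel. Each job
// writes into its own slot of the result array, so no locking is needed.
// To keep parallel reduction deterministic (matching serial order), only
// the lowest-index successful result is kept; later successes are dropped.
std::optional<std::string>
reduceInParallel(const std::vector<bool> &Succeeds) {
  unsigned NumJobs = Succeeds.size();
  std::vector<std::optional<std::string>> Results(NumJobs);
  std::vector<std::thread> Threads;
  for (unsigned I = 0; I < NumJobs; ++I)
    Threads.emplace_back([&, I] {
      // Stand-in for: parse module in a fresh LLVMContext, apply the
      // reduction for chunk I, and on success serialize it as bitcode.
      if (Succeeds[I])
        Results[I] = "reduced-by-job-" + std::to_string(I);
    });
  for (auto &T : Threads)
    T.join();
  // Scan results in chunk order and take the first success.
  for (auto &R : Results)
    if (R)
      return R;
  return std::nullopt; // no job reduced the input
}
```

After the winning result is chosen, processing would continue with the chunk following the one that produced it, as the commit message describes.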