[llvm-reduce] Add flag to start at finer granularity
Sometimes, when llvm-reduce is interrupted in the middle of a delta pass on a large file, restarting it on the partially reduced file can take quite some time before the tool starts doing new work. Much of that time is spent testing large chunks that are very unlikely to pass the interestingness test. In cases like this, the tool completes faster if it starts at a finer granularity. This patch therefore introduces a command-line flag that subdivides the chunks into smaller subsets a fixed, user-specified number of times before the core reduction loop begins.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D112651
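llvm-reduce's real chunk handling lives in its C++ delta-debugging implementation; as a rough illustration only, here is a sketch of the pre-splitting idea, assuming chunks are inclusive (begin, end) index ranges and that each subdivision step halves every chunk that is still splittable. The function name `subdivide` is hypothetical, not from the patch.

```python
def subdivide(chunks, times):
    """Split each (begin, end) chunk into two halves, repeated `times` times.

    Chunks of size 1 cannot be split further and are kept as-is.
    """
    for _ in range(times):
        refined = []
        for begin, end in chunks:
            size = end - begin + 1
            if size <= 1:
                refined.append((begin, end))
            else:
                mid = begin + size // 2
                refined.append((begin, mid - 1))
                refined.append((mid, end))
        chunks = refined
    return chunks

# Starting from one chunk covering 8 elements, two subdivision steps
# yield four chunks of 2 elements each:
print(subdivide([(0, 7)], 2))  # → [(0, 1), (2, 3), (4, 5), (6, 7)]
```

With `times=0` this is a no-op, matching the tool's default behavior of starting from the coarsest granularity.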