- Dec 14, 2012
Nadav Rotem authored
Enable the Loop Vectorizer by default for O2 and O3. Disable if-conversion by default. I plan to revert this patch later today. llvm-svn: 170157
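For embedders driving the legacy pass managers through PassManagerBuilder, "on by default at O2 and O3" boils down to a boolean on the builder. A minimal sketch, using member and namespace spellings from later releases (the legacy PassManagerBuilder was removed in LLVM 17); buildO2Pipeline is an illustrative name:

```cpp
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

// Build an O2-style module pipeline with the loop vectorizer enabled,
// mirroring what this change makes the default.
void buildO2Pipeline(llvm::legacy::PassManager &MPM) {
  llvm::PassManagerBuilder PMB;
  PMB.OptLevel = 2;
  PMB.LoopVectorize = true; // the toggle this commit flips on by default
  PMB.populateModulePassManager(MPM);
}
```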
- Dec 12, 2012
Nadav Rotem authored
LoopVectorizer: Use the "optsize" attribute to decide if we are allowed to increase the function size. llvm-svn: 170004
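The size decision is just an attribute query on the function being vectorized. A sketch in today's C++ spelling (the 2012 attribute API differed); mayIncreaseCodeSize is a hypothetical helper, not the pass's actual method:

```cpp
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

// A transform that would grow the function should bail out when the
// frontend has marked it optsize.
static bool mayIncreaseCodeSize(const llvm::Function &F) {
  return !F.hasFnAttribute(llvm::Attribute::OptimizeForSize);
}
```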
Nadav Rotem authored
LoopVectorizer: When -Os is used, vectorize only loops that don't require a tail loop. There is no testcase because I don't know of a way to initialize the loop vectorizer pass without adding an additional hidden flag. llvm-svn: 169950
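A tail loop is the scalar remainder that runs when the trip count is not a multiple of the vector width; avoiding it is what keeps code size flat under -Os. A plain C++ illustration with hypothetical numbers:

```cpp
// With a vector factor of 4, a trip count of 128 divides evenly and needs no
// scalar tail loop; a count of 130 would leave two scalar iterations behind,
// which is exactly the extra code skipped when optimizing for size.
void saxpy(float *x, const float *y, float a) {
  for (int i = 0; i < 128; ++i) // 128 % 4 == 0: vectorizable with no tail
    x[i] += a * y[i];
}
```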
- Dec 10, 2012
Nadav Rotem authored
llvm-svn: 169774
- Dec 03, 2012
Chandler Carruth authored
Sooooo many of these had incorrect or strange main module includes. I have manually inspected all of these, and fixed the main module include to be the nearest plausible thing I could find. If you own or care about any of these source files, I encourage you to take some time and check that these edits were sensible. I can't have broken anything (I strictly added headers, and reordered them, never removed), but they may not be the headers you'd really like to identify as containing the API being implemented. Many forward declarations and missing includes were added to header files to allow them to parse cleanly when included first. The main module rule does in fact have its merits. =] llvm-svn: 169131
- Nov 29, 2012
Nadav Rotem authored
llvm-svn: 168928
- Nov 15, 2012
Dmitri Gribenko authored
llvm-svn: 168049
- Oct 30, 2012
Nadav Rotem authored
llvm-svn: 167036
- Oct 29, 2012
Nadav Rotem authored
llvm-svn: 166948
Nadav Rotem authored
Change the PassManagerBuilder (used by -O3) loop vectorizer flag from -vectorize to -vectorize-loops because we don't want to share the same flag as the bb-vectorizer. llvm-svn: 166937
- Oct 26, 2012
Rafael Espindola authored
list of externals. This makes sense since a shared library with no symbols can still be useful if it has static constructors. llvm-svn: 166795
- Oct 25, 2012
Nadav Rotem authored
llvm-svn: 166643
Nadav Rotem authored
llvm-svn: 166642
- Oct 18, 2012
Chandler Carruth authored
over the implicitly-formed-and-nesting CGSCC pass manager and function pass managers, especially when using them on the opt commandline or using extension points in the module builder. The '-barrier' opt flag (or the pass itself) will create a no-op module pass in the pipeline, resetting the pass manager stack and allowing a new pipeline of function passes or CGSCC passes to be created that is independent from any previous pipelines.

For example, this can be used to test running two CGSCC passes in independent CGSCC pass managers as opposed to in the same CGSCC pass manager. It also allows us to introduce a further hack into the PassManagerBuilder to separate the O0 pipeline extension passes from the always-inliner's CGSCC pass manager, which they likely do not want to participate in... At the very least none of the Sanitizer passes want this behavior. This fixes a bug with ASan at O0 currently, and I'll commit the ASan test which covers this pass.

I'm happy to add a test case that this pass exists and works, but not sure how much time folks would like me to spend adding test cases for the details of its behavior of partition pass managers... The whole thing is just vile, and mostly intended to unblock ASan, so I'm hoping to rip this all out in a brave new pass manager world. llvm-svn: 166172
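A sketch of using the barrier programmatically with the legacy pass manager, assuming createBarrierNoopPass() is exposed from llvm/Transforms/IPO.h as in later trees; the inliner instances are placeholders for arbitrary CGSCC passes:

```cpp
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/IPO.h"

// Two CGSCC pipelines kept in independent, implicitly created CGSCC pass
// managers by placing the no-op module pass between them.
void buildPipelines(llvm::legacy::PassManager &MPM) {
  MPM.add(llvm::createFunctionInliningPass()); // first CGSCC pipeline
  MPM.add(llvm::createBarrierNoopPass());      // resets the pass manager stack
  MPM.add(llvm::createFunctionInliningPass()); // second, independent pipeline
}
```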
- Oct 17, 2012
Nadav Rotem authored
llvm-svn: 166112
- Oct 02, 2012
Chandler Carruth authored
Again, let me know if anything breaks due to this! llvm-svn: 164986
- Sep 28, 2012
- Sep 27, 2012
Nick Lewycky authored
have testcases for the current problems. llvm-svn: 164731
- Sep 24, 2012
Chandler Carruth authored
Queue the fallout. ;] llvm-svn: 164480
- Sep 18, 2012
Benjamin Kramer authored
llvm-svn: 164124
Chandler Carruth authored
FCAs. This is essential in order to promote allocas that are used in struct returns by frontends like Clang. The FCA load would block the rest of the pass from firing, resulting in significant regressions with the bullet benchmark in the nightly test suite. Thanks to Duncan for repeated discussions about how best to do this, and to both him and Benjamin for review. This appears to have blocked many places where the pass tries to fire, and so I expect somewhat different results with this fix added.

As with the last big patch, I'm including a change to enable the SROA by default *temporarily*. Ben is going to remove this as soon as the LNT bots pick up the patch. I'm just trying to get a round of LNT numbers from the stable machines in the lab.

NOTE: Four clang tests are expected to fail in the brief window where this is enabled. Sorry for the noise! llvm-svn: 164119
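For context, a small C++ example of the struct-return pattern described here; returning the aggregate by value is what can surface in the IR as a first-class aggregate (FCA) load of the alloca:

```cpp
struct Point { int x, y; };

// Clang lowers the local through a stack slot (an alloca); the return of the
// whole object can appear as a single aggregate load, which previously kept
// SROA from promoting the alloca at all.
Point makePoint(int a, int b) {
  Point P;
  P.x = a;
  P.y = b;
  return P;
}
```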
- Sep 15, 2012
Benjamin Kramer authored
What we have so far:
- Some clang test failures (these were known already)
- Perf results are mixed, some big regressions: http://llvm.org/perf/db_default/v4/nts/3844 http://llvm.org/perf/db_default/v4/nts/3845 bullet suffers a lot. matmul is interesting: slower scalar code, faster with -vectorize.
- Some dragonegg selfhost bots crash in SROA during selfhost now: http://lab.llvm.org:8011/builders/dragonegg-x86_64-linux-gcc-4.6-self-host-checks/builds/1632 http://lab.llvm.org:8011/builders/dragonegg-x86_64-linux-gcc-4.5-self-host/builds/1891
llvm-svn: 163968
Chandler Carruth authored
new one, and add support for running the new pass in that mode and in that slot of the pass manager. With this the new pass can completely replace the old one within the pipeline.

The strategy for enabling or disabling the SSAUpdater logic is to make the requirement of the domtree analysis optional. By default, it is required and we get the standard mem2reg approach. This is usually the desired strategy when run in stand-alone situations. Within the CGSCC pass manager, we disable requiring of the domtree analysis and consequently trigger fallback to the SSAUpdater promotion. In theory this would allow the pass to re-use a domtree if one happened to be available even when run in a mode that doesn't require it. In practice, it lets us have a single pass rather than two, which was simpler for me to wrap my head around.

There is a hidden flag to force the use of the SSAUpdater code path for the purpose of testing. The primary testing strategy is just to run the existing tests through that path. One notable difference is that it has custom code to handle lifetime markers, and one of the tests has been enhanced to exercise that code.

This has survived a bootstrap and the test suite without serious correctness issues; however, my run of the test suite produced *very* alarming performance numbers. I don't entirely understand or trust them though, so more investigation is ongoing. To aid my understanding of the performance impact of the new SROA now that it runs throughout the optimization pipeline, I'm enabling it by default in this commit, and will disable it again once the LNT bots have picked up one iteration with it. I want to get those bots (which are much more stable) to evaluate the impact of the change before I jump to any conclusions.

NOTE: Several Clang tests will fail because they run -O3 and check the result's order of output. They'll go back to passing once I disable it again. llvm-svn: 163965
- Sep 14, 2012
Chandler Carruth authored
being busy testing this... llvm-svn: 163890
Chandler Carruth authored
This is essentially a ground-up re-think of the SROA pass in LLVM. It was initially inspired by a few problems with the existing pass:
- It is subject to the bane of my existence in optimizations: arbitrary thresholds.
- It is overly conservative about which constructs can be split and promoted.
- The vector value replacement aspect is separated from the splitting logic, missing many opportunities where splitting and vector value formation can work together.
- The splitting is entirely based around the underlying type of the alloca, despite this type often having little to do with the reality of how that memory is used. This is especially prevalent with unions and base classes where we tail-pack derived members.
- When splitting fails (often due to the thresholds), the vector value replacement (again because it is separate) can kick in for preposterous cases where we simply should have split the value. This results in forming i1024 and i2048 integer "bit vectors" that tremendously slow down subsequent IR optimizations (due to large APInts) and impede the backend's lowering.

The new design takes an approach that fundamentally is not susceptible to many of these problems. It is the result of a discussion between myself and Duncan Sands over IRC about how to preemptively avoid these types of problems and how to do SROA in a more principled way. Since then, it has evolved and grown, but this remains an important aspect: it fixes real world problems with the SROA process today.

First, the transform of SROA actually has little to do with replacement. It has more to do with splitting. The goal is to take an aggregate alloca and form a composition of scalar allocas which can replace it and will be most suitable to the eventual replacement by scalar SSA values. The actual replacement is performed by mem2reg (and in the future SSAUpdater).

The splitting is divided into four phases. The first phase is an analysis of the uses of the alloca. This phase recursively walks uses, building up a dense data structure representing the ranges of the alloca's memory actually used and checking for uses which inhibit any aspects of the transform, such as the escape of a pointer.

Second, once we have a mapping of the ranges of the alloca used by individual operations, we compute a partitioning of the used ranges. Some uses are inherently splittable (such as memcpy and memset), while scalar uses are not splittable. The goal is to build a partitioning that has the minimum number of splits while placing each unsplittable use in its own partition. Overlapping unsplittable uses belong to the same partition. This is the target split of the aggregate alloca, and it maximizes the number of scalar accesses which become accesses to their own alloca and candidates for promotion.

Third, we re-walk the uses of the alloca and assign each specific memory access to all the partitions touched so that we have dense use-lists for each partition.

Finally, we build a new, smaller alloca for each partition and rewrite each use of that partition to use the new alloca. During this phase the pass will also work very hard to transform uses of an alloca into a form suitable for promotion, including forming vector operations, speculating loads through PHI nodes and selects, etc.

After splitting is complete, each newly refined alloca that is a candidate for promotion to a scalar SSA value is run through mem2reg.

There are lots of reasonably detailed comments in the source code about the design and algorithms, and I'm going to be trying to improve them in subsequent commits to ensure this is well documented, as the new pass is in many ways more complex than the old one.

Some of this is still a WIP, but the current state is reasonably stable. It has passed bootstrap, the nightly test suite, and Duncan has run it successfully through the ACATS and DragonEgg test suites. That said, it remains behind a default-off flag until the last few pieces are in place, and full testing can be done. Specific areas I'm looking at next:
- Improved comments and some code cleanup from reviews.
- SSAUpdater and enabling this pass inside the CGSCC pass manager.
- Some data structure tuning and compile-time measurements.
- More aggressive FCA splitting and vector formation.

Many thanks to Duncan Sands for the thorough final review, as well as Benjamin Kramer for lots of review during the process of writing this pass, and Daniel Berlin for reviewing the data structures and algorithms and general theory of the pass. Also, several other people on IRC, over lunch tables, etc. for lots of feedback and advice. llvm-svn: 163883
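As a concrete, hypothetical illustration of those phases, consider the alloca behind C below: the use analysis records the byte ranges each access touches, partitioning gives re and im their own allocas (the memset is splittable across the partition boundary), and mem2reg then promotes both scalars to SSA values:

```cpp
#include <cstring>

struct Complex { double re, im; };

double norm(double a, double b) {
  Complex C;
  std::memset(&C, 0, sizeof(C)); // splittable use covering both fields
  C.re = a;                      // unsplittable scalar store: its own partition
  C.im = b;                      // unsplittable scalar store: its own partition
  return C.re * C.re + C.im * C.im;
}
```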
- Apr 13, 2012
Hal Finkel authored
As has been suggested by Duncan and others, Early-CSE and GVN should do similar redundancy elimination, but Early-CSE is much less expensive. Most of my autovectorization benchmarks show a performance regression, but all of these are < 0.1%, and so I think that it is still worth using the less expensive pass. llvm-svn: 154673
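In pipeline terms this swaps a second full GVN run for EarlyCSE ahead of the vectorization passes. A minimal legacy-pass-manager sketch; createEarlyCSEPass() is the long-standing factory in llvm/Transforms/Scalar.h, and the function name here is illustrative:

```cpp
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/Scalar.h"

// EarlyCSE removes straightforward redundancies at a fraction of GVN's cost.
void addPreVectorizeCleanup(llvm::legacy::FunctionPassManager &FPM) {
  FPM.add(llvm::createEarlyCSEPass());
}
```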
- Apr 03, 2012
Bill Wendling authored
llvm-svn: 153902
- Mar 24, 2012
Kostya Serebryany authored
llvm-svn: 153353
- Feb 01, 2012
Hal Finkel authored
This is the initial checkin of the basic-block autovectorization pass along with some supporting vectorization infrastructure. Special thanks to everyone who helped review this code over the last several months (especially Tobias Grosser). llvm-svn: 149468
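For reference, the basic-block vectorizer was exposed to legacy-pass-manager clients through a factory function. The sketch below mixes spellings from releases of that era and shortly after (the pass was removed again around LLVM 7), so treat it purely as historical illustration:

```cpp
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/Vectorize.h"

// Pair adjacent, independent instructions within a basic block into vectors.
void addBBVectorize(llvm::legacy::FunctionPassManager &FPM) {
  FPM.add(llvm::createBBVectorizePass());
}
```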
- Jan 17, 2012
Dan Gohman authored
EP_ModuleOptimizerEarly, to allow passes to be added before the main ModulePass optimizers. llvm-svn: 148329
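A sketch of hooking the new extension point through the legacy PassManagerBuilder (spellings per later releases); the pass added in the callback is an arbitrary placeholder:

```cpp
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"
#include "llvm/Transforms/Scalar.h"

// Runs before the main module-level optimizers.
static void addEarlyModulePasses(const llvm::PassManagerBuilder &,
                                 llvm::legacy::PassManagerBase &PM) {
  PM.add(llvm::createLowerAtomicPass()); // placeholder for a real pass
}

void configureBuilder(llvm::PassManagerBuilder &PMB) {
  PMB.addExtension(llvm::PassManagerBuilder::EP_ModuleOptimizerEarly,
                   addEarlyModulePasses);
}
```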
- Dec 07, 2011
Duncan Sands authored
llvm-svn: 146037
- Nov 30, 2011
Kostya Serebryany authored
llvm-svn: 145530
- Aug 16, 2011
David Chisnall authored
Add a mechanism for optimisation plugins to register passes that all front ends can use without needing to be aware of the plugin (or the plugin be aware of the front end). Before 3.0, I'd like to add a mechanism for automatically loading a set of plugins from a config file. API suggestions welcome... llvm-svn: 137717
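A plugin can do this with a static RegisterStandardPasses object, so that merely loading the shared library injects its pass into any PassManagerBuilder-built pipeline. A sketch with legacy-pass-manager spellings from later releases; the DCE pass stands in for the plugin's own pass:

```cpp
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"
#include "llvm/Transforms/Scalar.h"

static void addPluginPass(const llvm::PassManagerBuilder &,
                          llvm::legacy::PassManagerBase &PM) {
  PM.add(llvm::createDeadCodeEliminationPass()); // stand-in for the plugin's pass
}

// A static object in the plugin: every PassManagerBuilder-built pipeline
// picks up the callback once the plugin is loaded.
static llvm::RegisterStandardPasses
    RegisterPlugin(llvm::PassManagerBuilder::EP_EarlyAsPossible, addPluginPass);
```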
- Aug 10, 2011
Rafael Espindola authored
functionality since in the C api a pass is created and added to a pass manager in a single call. llvm-svn: 137159
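The single-call pattern referred to here, using the C bindings for the legacy pass manager (removed along with it in LLVM 17); runScalarOpts is an illustrative name:

```cpp
#include "llvm-c/Core.h"
#include "llvm-c/Transforms/Scalar.h"

// Each LLVMAdd*Pass call both creates the pass and adds it to the manager.
void runScalarOpts(LLVMModuleRef M) {
  LLVMPassManagerRef PM = LLVMCreatePassManager();
  LLVMAddCFGSimplificationPass(PM);
  LLVMAddGVNPass(PM);
  LLVMRunPassManager(PM, M);
  LLVMDisposePassManager(PM);
}
```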
- Aug 02, 2011
Rafael Espindola authored
llvm-svn: 136727