- Dec 13, 2008
-
Bill Wendling authored
    llvm[2]: Linking Release executable opt (without symbols) ...
    Undefined symbols:
      "llvm::APFloat::IEEEsingle", referenced from:
          __ZN4llvm7APFloat10IEEEsingleE$non_lazy_ptr in libLLVMCore.a(Constants.o)
          __ZN4llvm7APFloat10IEEEsingleE$non_lazy_ptr in libLLVMCore.a(AsmWriter.o)
          __ZN4llvm7APFloat10IEEEsingleE$non_lazy_ptr in libLLVMCore.a(ConstantFold.o)
      "llvm::APFloat::IEEEdouble", referenced from:
          __ZN4llvm7APFloat10IEEEdoubleE$non_lazy_ptr in libLLVMCore.a(Constants.o)
          __ZN4llvm7APFloat10IEEEdoubleE$non_lazy_ptr in libLLVMCore.a(AsmWriter.o)
          __ZN4llvm7APFloat10IEEEdoubleE$non_lazy_ptr in libLLVMCore.a(ConstantFold.o)
    ld: symbol(s) not found
This is in release mode. To replicate, compile llvm and llvm-gcc in optimized mode. Then build llvm, in optimized mode, with the newly created compiler. llvm-svn: 60977
-
Chris Lattner authored
a prettification of the IR. llvm-svn: 60973
-
Misha Brukman authored
llvm-svn: 60971
-
- Dec 09, 2008
-
Chris Lattner authored
of a pointer. This allows us to catch more equivalencies. For example, the type_lists_compatible_p function used to require two iterations of the gvn pass (!) to delete its 18 redundant loads, because the first pass would CSE all the addressing computation cruft, which would unblock the second memdep/gvn passes from recognizing them. This change allows memdep/gvn to catch all 18 when run just once on the function (as is typical :) instead of just 3. On all of 403.gcc, this bumps up the # of redundancies found from:
        63 gvn - Number of instructions PRE'd
    153991 gvn - Number of instructions deleted
     50069 gvn - Number of loads deleted
to:
        63 gvn - Number of instructions PRE'd
    154137 gvn - Number of instructions deleted
     50185 gvn - Number of loads deleted
+120 loads deleted isn't bad. llvm-svn: 60799
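For context, here is a simplified, single-block illustration (mine, not the commit's; the struct and function names are made up) of the kind of fully redundant load that memdep/gvn deletes once the addressing computation is shared:

    // Hypothetical C++ input: both statements reload n->next->kind through
    // the same addressing computation, with no intervening store, so the
    // second load is fully redundant and GVN reuses the first value.
    struct Node { int kind; Node *next; };

    int sum_kinds(Node *n) {
      int a = n->next->kind;   // first load
      int b = n->next->kind;   // redundant load, eliminated by GVN
      return a + b;
    }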
-
Chris Lattner authored
pointer stuff from it, simplifying the code a bit. llvm-svn: 60783
-
Chris Lattner authored
MemDep::getNonLocalPointerDependency method. There are some open issues with this (missed optimizations) and plenty of future work, but this does allow GVN to eliminate *slightly* more loads (49246 vs 49033). Switching over now allows simplification of the other code path in memdep. llvm-svn: 60780
-
Chris Lattner authored
llvm-svn: 60779
-
Chris Lattner authored
on test/CodeGen/Generic/2007-06-06-CriticalEdgeLandingPad. llvm-svn: 60739
-
- Dec 08, 2008
-
Chris Lattner authored
jump threading has been shown to only expose problems, not have bugs itself. I'm sure it's completely bug free! ;-) llvm-svn: 60725
-
Devang Patel authored
Thanks Duncan! llvm-svn: 60702
-
Devang Patel authored
llvm-svn: 60701
-
- Dec 07, 2008
-
Chris Lattner authored
nodes. FoldSingleEntryPHINodes deletes the PHI, so there is no need to delete it afterward. llvm-svn: 60653
-
Chris Lattner authored
everything interesting anyway. llvm-svn: 60640
-
- Dec 06, 2008
-
Chris Lattner authored
doesn't do its own local caching, and is slightly more aggressive about free/store dse (see testcase). This eliminates the last external client of MemDep::getDependenceFrom(). llvm-svn: 60619
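As a rough sketch of the free/store case mentioned above (my example, not from the commit), a store whose memory is freed before it can ever be read is dead and DSE deletes it:

    #include <cstdlib>

    // Hypothetical C++ input: the store to p->value is never read before
    // the free, so dead store elimination removes it.
    struct Box { int value; };

    void drop(Box *p) {
      p->value = 42;   // dead store
      std::free(p);    // frees the memory the store just wrote
    }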
-
- Dec 05, 2008
-
Dale Johannesen authored
loops when they can be subsumed into addressing modes. Change X86 addressing mode check to realize that some PIC references need an extra register. (I believe this is correct for Linux, if not, I'm sure someone will tell me.) llvm-svn: 60608
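As a rough illustration of the addressing-mode point (my example, not the commit's): the scaled index below folds directly into an x86 base+index*scale address, so strength-reducing it into a separate induction variable buys nothing and only burns a register.

    // Hypothetical C++ input: the a[i] access becomes a single load using
    // base + i*4 addressing, so the "multiply" is free inside the address.
    int sum(const int *a, int n) {
      int s = 0;
      for (int i = 0; i < n; ++i)
        s += a[i];   // address a + i*4 folded into the load
      return s;
    }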
-
Chris Lattner authored
1. Merge the 'None' result into 'Normal', making loads and stores return their dependencies on allocations as Normal.
2. Split the 'Normal' result into 'Clobber' and 'Def' to distinguish between the cases when memdep knows the value is produced from when we just know it may be changed.
3. Move some of the logic for determining whether readonly calls are CSEs into memdep instead of it being in GVN. This still leaves verification that the arguments are the same to GVN, to let it know about value equivalences in different contexts.
4. Change memdep's call/call dependency analysis to use getModRefInfo(CallSite,CallSite) instead of doing something very weak. This only really matters for things like DSA, but someday maybe we'll have some other decent context-sensitive analyses :)
5. This reimplements the guts of memdep to handle the new results.
6. This simplifies GVN significantly:
   a) readonly call CSE is slightly simpler
   b) I eliminated the "getDependencyFrom" chaining for load elimination, and load CSE doesn't have to worry about volatile (they are always clobbers) anymore.
   c) GVN no longer does any 'lastLoad' caching, leaving it to memdep.
7. The logic in DSE is simplified a bit and sped up. A potentially unsafe case was eliminated.
llvm-svn: 60607
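To make the Clobber/Def split in point 2 concrete, here is a small illustration of my own (not part of the commit): a load's dependency is a Def when the prior store provably writes exactly the location being loaded, and a Clobber when the store may merely overlap it.

    // Hypothetical C++ input showing the two dependence kinds.
    int example(int *p, char *q) {
      *p = 7;        // (1) writes exactly *p
      int a = *p;    // depends on (1) as a Def: the loaded value is known (7)

      q[0] = 0;      // (2) may alias *p but does not provably define it
      int b = *p;    // depends on (2) as a Clobber: the value may have changed
      return a + b;
    }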
-
Anton Korobeynikov authored
See PR3160 for details llvm-svn: 60604
-
Chris Lattner authored
llvm-svn: 60594
-
Chris Lattner authored
llvm-svn: 60588
-
- Dec 04, 2008
-
Devang Patel authored
This fixes many bugs. I will add more test cases in a separate check-in. Some day, the code that manipulates CFG and updates dom. info could use refactoring help. llvm-svn: 60554
-
Chris Lattner authored
llvm-svn: 60534
-
Chris Lattner authored
llvm-svn: 60514
-
- Dec 03, 2008
-
Dale Johannesen authored
llvm-svn: 60508
-
Dale Johannesen authored
llvm-svn: 60506
-
Chris Lattner authored
llvm-svn: 60501
-
Dale Johannesen authored
llvm-svn: 60494
-
Chris Lattner authored
1) have it fold "br undef", which does occur with surprising frequency as jump threading iterates.
2) teach j-t to delete dead blocks. This removes the successor edges, reducing the in-edges of other blocks, allowing recursive simplification.
3) Fold things like:
       br COND, BBX, BBY
   BBX:
       br COND, BBZ, BBW
   which also happens because jump threading iterates.
llvm-svn: 60470
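A source-level sketch of case 3 (my example, not the commit's): once the outer branch on cond has been taken, the inner branch on the same cond is fully determined, so jump threading forwards the edge and the re-test folds away.

    // Hypothetical C++ input: inside the 'then' block cond is known true,
    // so the nested branch always goes to g(); jump threading rewires the
    // edge there and the h() path becomes dead.
    void f(bool cond);
    void g();
    void h();

    void example(bool cond) {
      if (cond) {      // br cond, BBX, BBY
        f(cond);
        if (cond)      // BBX: br cond, BBZ, BBW -- always true here
          g();         // BBZ
        else
          h();         // BBW: dead once the branch is threaded
      }
    }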
-
Chris Lattner authored
llvm-svn: 60469
-
Chris Lattner authored
llvm-svn: 60468
-
Chris Lattner authored
unconditionally delete the block. All likely clients will do the checking anyway. llvm-svn: 60464
-
Chris Lattner authored
DeleteBlockIfDead method. llvm-svn: 60463
-
- Dec 02, 2008
-
Dale Johannesen authored
llvm-svn: 60442
-
Dale Johannesen authored
llvm-svn: 60431
-
Chris Lattner authored
straight-forward implementation. This does not require any extra alias analysis queries beyond what we already do for non-local loads.

Some programs really really like load PRE. For example, SPASS triggers this ~1000 times, ~300 times in 255.vortex, and ~1500 times on 403.gcc.

The biggest limitation to the implementation is that it does not split critical edges. This is a huge killer on many programs and should be addressed after the initial patch is enabled by default.

The implementation of this should incidentally speed up rejection of non-local loads because it avoids creating the repl densemap in cases when it won't be used for fully redundant loads.

This is currently disabled by default. Before I turn this on, I need to fix a couple of miscompilations in the testsuite, look at compile time performance numbers, and look at perf impact. This is pretty close to ready though.

llvm-svn: 60408
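For context, a minimal sketch of what load PRE targets (my example; the names are made up): the load is available along one path into the merge point but not the other, so PRE inserts a load on the path that lacks it and the load at the merge becomes fully redundant.

    // Hypothetical C++ input. On the 'then' path *p has already been
    // loaded; on the 'else' path it has not. Load PRE inserts a load of
    // *p at the end of the 'else' path, so the final load can be replaced
    // by a phi of the two available values.
    int load_pre(int *p, bool c) {
      int t;
      if (c)
        t = *p + 1;    // *p is available here
      else
        t = 2;         // *p is not available here; PRE inserts a load
      return t + *p;   // partially redundant load, removed after PRE
    }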
-
Bill Wendling authored
llvm-svn: 60403
-
Bill Wendling authored
llvm-svn: 60402
-
Bill Wendling authored
llvm-svn: 60401
-
Bill Wendling authored
constant. If X is a constant, then this is folded elsewhere.
- Added a note to Target/README.txt to indicate that we'd like to implement this when we're able.
llvm-svn: 60399
-
Bill Wendling authored
llvm-svn: 60398
-
Bill Wendling authored
- No need to do a swap on a canonicalized pattern. No functionality change. llvm-svn: 60397
-