- Apr 28, 2005
-
Chris Lattner authored
llvm-svn: 21607
-
Chris Lattner authored
llvm-svn: 21606
-
Chris Lattner authored
llvm-svn: 21605
-
Reid Spencer authored
constant folding implemented in lib/Transforms/Utils/Local.cpp. llvm-svn: 21604
-
Reid Spencer authored
Help Wanted! There's a lot of them to write. llvm-svn: 21603
-
- Apr 27, 2005
-
Reid Spencer authored
llvm-svn: 21602
-
Chris Lattner authored
llvm-svn: 21600
-
Andrew Lenharth authored
Implement Value* tracking for loads and stores in the selection DAG. This enables one to use alias analysis in the backends. (TRUNC)Stores and (EXT|ZEXT|SEXT)Loads have an extra SDOperand which is a SrcValueSDNode which contains the Value*. Note that if the operation is introduced by the backend, it will still have the operand, but the Value* will be null.
llvm-svn: 21599
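(Not part of the commit message.) A minimal C sketch of why the backend wants the original Value* on loads and stores: memory operations can only be scheduled past each other when alias analysis can prove the pointers refer to different objects.

    /* Hypothetical C example, not LLVM source: with the Value* preserved on
     * the load/store DAG nodes, backend alias analysis can see that the store
     * to x[] and the load from y[] touch different objects and may reorder
     * them while scheduling; without it, program order must be kept. */
    int x[4], y[4];

    int touch(void) {
        x[0] = 42;      /* store to x */
        int t = y[0];   /* load from y: provably no alias with x */
        return t;
    }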
-
Chris Lattner authored
llvm-svn: 21598
-
Reid Spencer authored
* Name the instructions by appending to name of original
* Factor common part out of a switch statement.
llvm-svn: 21597
-
Duraid Madina authored
llvm-svn: 21590
-
Reid Spencer authored
* Correct stale documentation in a few places
* Re-order the file to better associate things and reduce line count
* Make the pass thread safe by caching the Function* objects needed by the optimizers in the pass object instead of globally.
* Provide the SimplifyLibCalls pass object to the optimizer classes so they can access cached Function* objects and TargetData info
* Make sure the pass resets its cache if the Module passed to runOnModule changes
* Rename CallOptimizer LibCallOptimization. All the classes are named *Optimization while the objects are *Optimizer.
* Don't cache Function* in the optimizer objects because they could be used by multiple PassManagers running in multiple threads
* Add an optimization for strcpy which is similar to strcat
* Add a "TODO" list at the end of the file for ideas on additional libcall optimizations that could be added (get ideas from other compilers).

Sorry for the huge diff. It's mostly reorganization of code. That won't happen again as I believe the design and infrastructure for this pass is now done or close to it.
llvm-svn: 21589
-
Chris Lattner authored
call to them into an 'unreachable' instruction. This triggers a bunch of times, particularly on gcc:
  gzip: 36
  gcc:  601
  eon:  12
  bzip: 38
llvm-svn: 21587
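As a hedged C-level sketch of the transform described above (not taken from the commit): once the callee is known never to return, control cannot fall through the call, so the point after it is unreachable and any code there is dead.

    #include <stdlib.h>

    /* abort() never returns, so nothing after the call in this branch can
     * execute; an optimizer that knows this can mark the fall-through point
     * unreachable and drop the dead code. */
    int checked_div(int a, int b) {
        if (b == 0) {
            abort();    /* noreturn: control never falls through */
            /* anything placed here would be unreachable */
        }
        return a / b;
    }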
-
Reid Spencer authored
llvm-svn: 21583
-
Reid Spencer authored
helps track down what gets triggered in the pass so it's easier to identify good test cases. llvm-svn: 21582
-
Chris Lattner authored
'unwinding' llvm-svn: 21581
-
Reid Spencer authored
llvm-svn: 21580
-
Reid Spencer authored
llvm-svn: 21579
-
Reid Spencer authored
llvm-svn: 21578
-
Reid Spencer authored
llvm-svn: 21575
-
Reid Spencer authored
llvm-svn: 21574
-
- Apr 26, 2005
-
Chris Lattner authored
llvm-svn: 21571
-
Reid Spencer authored
* MemCpyOptimization can only be optimized if the 3rd and 4th arguments are constants and we weren't checking for that.
* The result of llvm.memcpy (and llvm.memmove) is void* not sbyte*, put in a cast.
llvm-svn: 21570
-
Reid Spencer authored
* Have the SimplifyLibCalls pass acquire the TargetData and pass it down to the optimization classes so they can use it to make better choices for the signatures of functions, etc.
* Rearrange the code a little so the utility functions are closer to their usage and keep the core of the pass near the top of the files.
* Adjust the StrLen pass to get/use the correct prototype depending on the TargetData::getIntPtrType() result. The result of strlen is size_t which could be either uint or ulong depending on the platform.
* Clean up some coding nits (cast vs. dyn_cast, remove redundant items from a switch, etc.)
* Implement the MemMoveOptimization as a twin of MemCpyOptimization (they only differ in name).
llvm-svn: 21569
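A quick C-level reminder (not from the commit) of why the strlen prototype depends on TargetData::getIntPtrType(): strlen returns size_t, whose width follows the target's pointer size.

    #include <stdio.h>
    #include <string.h>

    /* size_t is 32 bits on a typical 32-bit target and 64 bits on a 64-bit
     * one, so the right integer type for strlen's return value can only be
     * chosen once the target's pointer size is known. */
    int main(void) {
        printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
        printf("strlen(\"hello\") = %zu\n", strlen("hello"));
        return 0;
    }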
-
Chris Lattner authored
Vladimir Prus! llvm-svn: 21566
-
Chris Lattner authored
llvm-svn: 21565
-
Duraid Madina authored
llvm-svn: 21564
-
Duraid Madina authored
llvm-svn: 21563
-
Reid Spencer authored
named getConstantStringLength. This is the common part of StrCpy and StrLen optimizations and probably several others, yet to be written. It performs all the validity checks for looking at constant arrays that are supposed to be null-terminated strings and then computes the actual length of the string.
* Implement the MemCpyOptimization class. This just turns memcpy of 1, 2, 4 and 8 byte data blocks that are properly aligned on those boundaries into a load and a store. Much more could be done here but alignment restrictions and lack of knowledge of the target instruction set prevent us from doing significantly more. That will have to be delegated to the code generators as they lower llvm.memcpy calls.
llvm-svn: 21562
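A rough C-level picture of the memcpy lowering described above (an illustration, not the pass's code): a small, properly aligned copy of 1, 2, 4 or 8 bytes is equivalent to one load and one store of the matching width.

    #include <stdint.h>
    #include <string.h>

    /* Before: a 4-byte memcpy between suitably aligned locations. */
    void copy4_memcpy(uint32_t *dst, const uint32_t *src) {
        memcpy(dst, src, 4);
    }

    /* After: the same effect expressed as a single load and store, which is
     * what the optimization produces for small aligned copies. */
    void copy4_loadstore(uint32_t *dst, const uint32_t *src) {
        *dst = *src;
    }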
-
Duraid Madina authored
subtracts. This is a very rough and nasty implementation of Lefevre's "pattern finding" algorithm. With a few small changes though, it should end up beating most other methods in common use, regardless of the size of the constant (currently, it's often one or two shifts worse)
TODO:
  rewrite it so it's not hideously ugly (this is a translation from perl, which doesn't help ;)
  bypass most of it for multiplies by 2^n+1 (eventually)
  teach it that some combinations of shift+add are cheaper than others (e.g. shladd on ia64, scaled adds on alpha)
  get it to try multiple booth encodings in search of the cheapest routine
  make it work for negative constants
This is hacked up as a DAG->DAG transform, so once I clean it up I hope it'll be pulled out of here and put somewhere else. The only thing backends should really have to worry about for now is where to draw the line between using this code vs. going ahead and doing an integer multiply anyway.
llvm-svn: 21560
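A small C illustration (not from the commit) of the shift-and-add sequences such an algorithm searches for; the identities hold for any unsigned integer.

    #include <assert.h>

    /* x * 10 == (x << 3) + (x << 1): two shifts and an add.  */
    /* x * 7  == (x << 3) - x: one shift and one subtract.    */
    unsigned mul10(unsigned x) { return (x << 3) + (x << 1); }
    unsigned mul7(unsigned x)  { return (x << 3) - x; }

    int main(void) {
        assert(mul10(123u) == 1230u);
        assert(mul7(9u) == 63u);
        return 0;
    }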
-
Reid Spencer authored
* Factor out commonalities between StrLenOptimization and StrCatOptimization
* Make sure that signatures return sbyte* not void*
llvm-svn: 21559
-
Reid Spencer authored
* Change signatures of OptimizeCall and ValidateCalledFunction so they are non-const, allowing the optimization object to be modified. This is in support of caching things used across multiple calls.
* Provide two functions for constructing and caching function types
* Modify the StrCatOptimization to cache Function objects for strlen and llvm.memcpy so it doesn't regenerate them on each call site. Make sure these are invalidated each time we start the pass.
* Handle both a GEP Instruction and a GEP ConstantExpr
* Add additional checks to make sure we really are dealing with an array of sbyte and that all the element initializers are ConstantInt or ConstantExpr that reduce to ConstantInt.
* Make sure the GlobalVariable is constant!
* Don't use ConstantArray::getString as it can fail and it doesn't give us the right thing. We must check for null bytes in the middle of the array.
* Use llvm.memcpy instead of memcpy so we can factor alignment into it.
* Don't use void* types in signatures, replace with sbyte* instead.
llvm-svn: 21555
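For orientation, a hedged C-level sketch of what StrCatOptimization amounts to (not the pass itself): when the appended operand is a constant, null-terminated string of known length, strcat can be rewritten as a strlen of the destination plus a fixed-size copy, which is what the checks above guard.

    #include <string.h>

    char buf[32] = "abc";

    /* Before: strcat with a constant string operand. */
    void append_strcat(void) {
        strcat(buf, "def");
    }

    /* After: the equivalent strlen + fixed-size copy; 4 bytes covers "def"
     * plus its null terminator, which is only knowable because the operand
     * is a constant array with no embedded null bytes. */
    void append_memcpy(void) {
        memcpy(buf + strlen(buf), "def", 4);
    }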
-
Chris Lattner authored
llvm-svn: 21552
-
- Apr 25, 2005
-
Reid Spencer authored
* Don't use std::string for the function names, const char* will suffice
* Allow each CallOptimizer to validate the function signature before doing anything
* Repeatedly loop over the functions until an iteration produces no more optimizations. This allows one optimization to insert a call that is optimized by another optimization.
* Implement the ConstantArray portion of the StrCatOptimization
* Provide a template for the MemCpyOptimization
* Make ExitInMainOptimization split the block, not delete everything after the return instruction.
(This covers revision 1.3 and 1.4, as the 1.3 comments were botched)
llvm-svn: 21548
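A C-level sketch of the ExitInMainOptimization mentioned above (illustrative only; main_before/main_after are stand-in names): a call to exit() inside main behaves like returning that status, so it can be turned into a return, with any trailing code split into its own dead block rather than deleted outright.

    #include <stdlib.h>

    /* Before: main ends by calling exit(). */
    int main_before(void) {
        exit(3);
    }

    /* After: the exit() call becomes an ordinary return from main, which in
     * turn exposes further simplifications on later iterations of the pass. */
    int main_after(void) {
        return 3;
    }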
-
Chris Lattner authored
int foo1(int x, int y) { int t1 = x >= 0; int t2 = y >= 0; return t1 & t2; }
int foo2(int x, int y) { int t1 = x == -1; int t2 = y == -1; return t1 & t2; }

produces:

_foo1:
        or r2, r4, r3
        srwi r2, r2, 31
        xori r3, r2, 1
        blr
_foo2:
        and r2, r4, r3
        addic r2, r2, 1
        li r2, 0
        addze r3, r2
        blr

instead of:

_foo1:
        srwi r2, r4, 31
        xori r2, r2, 1
        srwi r3, r3, 31
        xori r3, r3, 1
        and r3, r2, r3
        blr
_foo2:
        addic r2, r4, 1
        li r2, 0
        addze r2, r2
        addic r3, r3, 1
        li r3, 0
        addze r3, r3
        and r3, r2, r3
        blr

llvm-svn: 21547
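The folding above rests on a sign-bit identity (added here as a quick check, not part of the commit): two 32-bit integers are both non-negative exactly when the sign bit of their bitwise OR is clear, so (x >= 0) & (y >= 0) equals ((x | y) >= 0); similarly, x and y are both -1 exactly when (x & y) is all ones.

    #include <assert.h>
    #include <stdint.h>

    int both_nonneg(int32_t x, int32_t y)        { return (x >= 0) & (y >= 0); }
    int both_nonneg_folded(int32_t x, int32_t y) { return (x | y) >= 0; }

    int main(void) {
        assert(both_nonneg(5, -7) == both_nonneg_folded(5, -7));
        assert(both_nonneg(-1, 3) == both_nonneg_folded(-1, 3));
        assert(both_nonneg(8, 9)  == both_nonneg_folded(8, 9));
        return 0;
    }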
-
Reid Spencer authored
* Use a llvm-svn: 21546
-
Chris Lattner authored
_foo:
        or r2, r4, r3
        srwi r3, r2, 31
        blr

instead of:

_foo:
        srwi r2, r4, 31
        srwi r3, r3, 31
        or r3, r2, r3
        blr

llvm-svn: 21544
-
Chris Lattner authored
Naveen Neelakantam, thanks! llvm-svn: 21543
-
Chris Lattner authored
llvm-svn: 21541
-
Chris Lattner authored
llvm-svn: 21537
-