- Apr 27, 2005

- Chris Lattner authored: … 'unwinding' (llvm-svn: 21581)
- Reid Spencer authored (llvm-svn: 21580)
- Reid Spencer authored (llvm-svn: 21579)
- Reid Spencer authored (llvm-svn: 21578)
- Chris Lattner authored (llvm-svn: 21577)
- Chris Lattner authored (llvm-svn: 21576)
- Reid Spencer authored (llvm-svn: 21575)
- Reid Spencer authored (llvm-svn: 21574)

- Apr 26, 2005

- Reid Spencer authored: … form from the "call" form. (llvm-svn: 21573)
- Reid Spencer authored: … the uninitiated reader is not confused. (llvm-svn: 21572)
- Chris Lattner authored (llvm-svn: 21571)
- Reid Spencer authored:
  * MemCpyOptimization can only be applied if the 3rd and 4th arguments
    (length and alignment) are constants, and we weren't checking for
    that.
  * The result of llvm.memcpy (and llvm.memmove) is void*, not sbyte*;
    put in a cast.
  (llvm-svn: 21570)
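
  A minimal C illustration of the two constraints above (a sketch with a
  hypothetical function name, not the pass's code): only calls whose
  length argument is a compile-time constant are eligible, and because
  memcpy returns void*, any use of its result needs a cast back to the
  pointer type the caller expects.

      #include <string.h>

      void copy_demo(char *dst, const char *src, unsigned n) {
          memcpy(dst, src, 4);   /* constant length: transformable */
          memcpy(dst, src, n);   /* variable length: must stay a call */

          /* memcpy returns void*; a use of the result as char* needs a
             cast, which the rewritten code has to preserve. */
          char *p = (char *)memcpy(dst, src, 4);
          (void)p;
      }
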
- Reid Spencer authored:
  * Have the SimplifyLibCalls pass acquire the TargetData and pass it
    down to the optimization classes so they can use it to make better
    choices for the signatures of functions, etc.
  * Rearrange the code a little so the utility functions are closer to
    their usage, and keep the core of the pass near the top of the file.
  * Adjust the StrLen pass to get/use the correct prototype depending on
    the TargetData::getIntPtrType() result. The result of strlen is
    size_t, which could be either uint or ulong depending on the
    platform.
  * Clean up some coding nits (cast vs. dyn_cast, remove redundant items
    from a switch, etc.)
  * Implement the MemMoveOptimization as a twin of MemCpyOptimization
    (they differ only in name).
  (llvm-svn: 21569)
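
  To see why the strlen prototype depends on the target (a hedged aside,
  not part of the commit): size_t tracks the pointer-sized integer that
  TargetData::getIntPtrType() reports, so strlen returns a 32-bit value
  ("uint") on a 32-bit target and a 64-bit value ("ulong") on a 64-bit
  one. A quick C check:

      #include <stdio.h>
      #include <string.h>

      int main(void) {
          /* size_t is typically 4 bytes on a 32-bit target and
             8 bytes on a 64-bit one. */
          printf("sizeof(size_t) = %zu\n", sizeof(size_t));
          printf("strlen(\"hello\") = %zu\n", strlen("hello"));
          return 0;
      }
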
- Reid Spencer authored (llvm-svn: 21568)
- Reid Spencer authored (llvm-svn: 21567)
- Chris Lattner authored: … Vladimir Prus! (llvm-svn: 21566)
- Chris Lattner authored (llvm-svn: 21565)
- Duraid Madina authored (llvm-svn: 21564)
- Duraid Madina authored (llvm-svn: 21563)
- Reid Spencer authored:
  … named getConstantStringLength. This is the common part of the StrCpy
  and StrLen optimizations, and probably several others yet to be
  written. It performs all the validity checks for looking at constant
  arrays that are supposed to be null-terminated strings, and then
  computes the actual length of the string.
  * Implement the MemCpyOptimization class. This just turns memcpy of
    1-, 2-, 4- and 8-byte data blocks that are properly aligned on those
    boundaries into a load and a store. Much more could be done here,
    but alignment restrictions and lack of knowledge of the target
    instruction set prevent us from doing significantly more. That will
    have to be delegated to the code generators as they lower
    llvm.memcpy calls.
  (llvm-svn: 21562)
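
  A C-level sketch of the memcpy transform described above (hypothetical
  function names): a suitably aligned copy of a constant 1, 2, 4 or 8
  bytes is equivalent to a single load and store of that width.

      #include <stdint.h>
      #include <string.h>

      /* Before: a call the optimizer would otherwise have to keep. */
      void copy4_call(uint32_t *dst, const uint32_t *src) {
          memcpy(dst, src, 4);
      }

      /* After: one 32-bit load and one 32-bit store. */
      void copy4_loadstore(uint32_t *dst, const uint32_t *src) {
          *dst = *src;
      }
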
- Reid Spencer authored (llvm-svn: 21561)
- Duraid Madina authored:
  … subtracts. This is a very rough and nasty implementation of
  Lefevre's "pattern finding" algorithm. With a few small changes,
  though, it should end up beating most other methods in common use,
  regardless of the size of the constant (currently, it's often one or
  two shifts worse).
  TODO:
  * rewrite it so it's not hideously ugly (this is a translation from
    perl, which doesn't help ;)
  * bypass most of it for multiplies by 2^n+1 (eventually)
  * teach it that some combinations of shift+add are cheaper than others
    (e.g. shladd on ia64, scaled adds on alpha)
  * get it to try multiple booth encodings in search of the cheapest
    routine
  * make it work for negative constants
  This is hacked up as a DAG->DAG transform, so once I clean it up I
  hope it'll be pulled out of here and put somewhere else. The only
  thing backends should really have to worry about for now is where to
  draw the line between using this code vs. going ahead and doing an
  integer multiply anyway.
  (llvm-svn: 21560)
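
  For context, a hand-worked sketch of the general idea (not Lefevre's
  algorithm itself): a multiply by a constant becomes a short chain of
  shifts, adds and subtracts.

      #include <assert.h>

      /* x * 10 = x*8 + x*2 = (x << 3) + (x << 1): two shifts, one add. */
      unsigned mul10(unsigned x) { return (x << 3) + (x << 1); }

      /* x * 15 = x*16 - x = (x << 4) - x: the subtract beats the naive
         8 + 4 + 2 + 1 decomposition. */
      unsigned mul15(unsigned x) { return (x << 4) - x; }

      int main(void) {
          assert(mul10(7) == 70);
          assert(mul15(7) == 105);
          return 0;
      }
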
- Reid Spencer authored:
  * Factor out commonalities between StrLenOptimization and
    StrCatOptimization.
  * Make sure that signatures return sbyte*, not void*.
  (llvm-svn: 21559)
- Reid Spencer authored:
  * Rename the ExitInMain and StrCat tests so they don't carry the date
    the regression was entered, since they are feature tests, not
    regressions.
  (llvm-svn: 21558)
- Reid Spencer authored:
  * Change the signatures of OptimizeCall and ValidateCalledFunction so
    they are non-const, allowing the optimization object to be modified.
    This is in support of caching things used across multiple calls.
  * Provide two functions for constructing and caching function types.
  * Modify the StrCatOptimization to cache Function objects for strlen
    and llvm.memcpy so it doesn't regenerate them on each call site.
    Make sure these are invalidated each time we start the pass.
  * Handle both a GEP Instruction and a GEP ConstantExpr.
  * Add additional checks to make sure we really are dealing with an
    array of sbyte and that all the element initializers are ConstantInt
    or ConstantExprs that reduce to ConstantInt.
  * Make sure the GlobalVariable is constant!
  * Don't use ConstantArray::getString, as it can fail and doesn't give
    us the right thing; we must check for null bytes in the middle of
    the array.
  * Use llvm.memcpy instead of memcpy so we can factor alignment into
    it.
  * Don't use void* types in signatures; replace with sbyte* instead.
  (llvm-svn: 21555)
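
  A C-level sketch of what the strcat rewrite buys (hypothetical
  example, not the pass's code): when the appended string is a constant
  whose length is known, only the destination needs a run-time scan, and
  the copy becomes a fixed-size memcpy that includes the terminating
  null byte.

      #include <string.h>

      /* Before: strcat scans both strings at run time. */
      void append_before(char *dst) { strcat(dst, "abc"); }

      /* After: 4 = strlen("abc") + 1 for the trailing '\0', a constant
         the pass can compute from the string's initializer. */
      void append_after(char *dst) {
          memcpy(dst + strlen(dst), "abc", 4);
      }
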
- Jeff Cohen authored (llvm-svn: 21554)
- Reid Spencer authored (llvm-svn: 21553)
- Chris Lattner authored (llvm-svn: 21552)

- Apr 25, 2005

- Reid Spencer authored:
  * Don't use std::string for the function names; const char* will
    suffice.
  * Allow each CallOptimizer to validate the function signature before
    doing anything.
  * Repeatedly loop over the functions until an iteration produces no
    more optimizations. This allows one optimization to insert a call
    that is optimized by another optimization.
  * Implement the ConstantArray portion of the StrCatOptimization.
  * Provide a template for the MemCpyOptimization.
  * Make ExitInMainOptimization split the block, not delete everything
    after the return instruction.
  (This covers revisions 1.3 and 1.4, as the 1.3 comments were botched.)
  (llvm-svn: 21548)
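
  For the ExitInMain rewrite above, a hand-written before/after in C
  (hypothetical functions standing in for main): calling exit() from
  main is observably equivalent to returning the same value, and
  splitting the block keeps the now-unreachable tail of the function
  intact instead of deleting it.

      #include <stdlib.h>

      /* Before: main goes through the runtime's exit() machinery. */
      int main_before(void) {
          exit(3);
      }

      /* After: a plain return yields the same process exit status. */
      int main_after(void) {
          return 3;
      }
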
- Chris Lattner authored:
  …

      int foo1(int x, int y) {
        int t1 = x >= 0;
        int t2 = y >= 0;
        return t1 & t2;
      }
      int foo2(int x, int y) {
        int t1 = x == -1;
        int t2 = y == -1;
        return t1 & t2;
      }

  produces:

      _foo1:
              or r2, r4, r3
              srwi r2, r2, 31
              xori r3, r2, 1
              blr
      _foo2:
              and r2, r4, r3
              addic r2, r2, 1
              li r2, 0
              addze r3, r2
              blr

  instead of:

      _foo1:
              srwi r2, r4, 31
              xori r2, r2, 1
              srwi r3, r3, 31
              xori r3, r3, 1
              and r3, r2, r3
              blr
      _foo2:
              addic r2, r4, 1
              li r2, 0
              addze r2, r2
              addic r3, r3, 1
              li r3, 0
              addze r3, r3
              and r3, r2, r3
              blr

  (llvm-svn: 21547)
- Reid Spencer authored: * Use a … (llvm-svn: 21546)
- Reid Spencer authored: … the restriction that it is an XFAIL because it now passes. (llvm-svn: 21545)
- Chris Lattner authored:
  …

      _foo:
              or r2, r4, r3
              srwi r3, r2, 31
              blr

  instead of:

      _foo:
              srwi r2, r4, 31
              srwi r3, r3, 31
              or r3, r2, r3
              blr

  (llvm-svn: 21544)
- Chris Lattner authored: … Naveen Neelakantam, thanks! (llvm-svn: 21543)
- Tanya Lattner authored (llvm-svn: 21542)
- Chris Lattner authored (llvm-svn: 21541)
- Chris Lattner authored (llvm-svn: 21540)
- Chris Lattner authored (llvm-svn: 21539)
- Chris Lattner authored (llvm-svn: 21537)
- Chris Lattner authored (llvm-svn: 21536)