- Jan 12, 2009
Duncan Sands authored
suggested by Chris. llvm-svn: 62099
- Jan 11, 2009
Chris Lattner authored
not thrilled about 64-bit % in general, so rewrite to use * instead. llvm-svn: 62047
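For context, one plausible shape of such a % -> * rewrite (a hedged sketch with invented names, not the actual instcombine code): once the element index has been computed with a division, the remaining intra-element offset can be recovered with a multiply and a subtract instead of a separate 64-bit % operation.

#include <cstdint>

// Split a byte offset into an element index plus the offset remaining
// inside that element, using one division and one multiply rather than
// an additional 64-bit remainder. Requires TySize != 0.
static int64_t splitOffset(int64_t &Offset, int64_t TySize) {
  int64_t FirstIdx = Offset / TySize;
  Offset -= FirstIdx * TySize; // same result as Offset %= TySize
  return FirstIdx;
}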
Chris Lattner authored
We should treat vectors as atomic types, not like arrays. llvm-svn: 62046
Chris Lattner authored
canonicalization transform based on Duncan's comments: 1) improve the comment about %. 2) within our index loop, make sure the offset stays within the *type size* instead of within the *ABI size*. This allows us to reason explicitly about landing in tail padding, and means that issues like non-zero offsets into [0 x foo] types don't occur anymore. llvm-svn: 62045
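To make the type-size/ABI-size distinction concrete, a small illustration in plain C++ (field names invented for the example; sizeof reports the ABI/allocation size):

#include <cstdio>
#include <cstdint>

// For a struct like { i32, i8 }, the meaningful data occupies 5 bytes
// (the "type size" in the commit's sense), while alignment typically
// rounds the ABI size up to 8; bytes 5-7 are tail padding.
struct Example {
  uint32_t a; // bytes 0-3
  uint8_t b;  // byte 4
};

int main() {
  std::printf("abi size = %zu\n", sizeof(Example)); // typically 8
  // Keeping a computed offset below 5 guarantees it points at real
  // fields; offsets 5-7 can only land in tail padding.
  return 0;
}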
- Jan 09, 2009
Chris Lattner authored
llvm-svn: 61997
Chris Lattner authored
llvm-svn: 61995
Misha Brukman authored
llvm-svn: 61991
Chris Lattner authored
rdar://6480391. I noticed this in the code compiled for a routine using std::map, which produced this code:

%25 = tail call i32 @memcmp(i8* %24, i8* %23, i32 6) nounwind readonly
%.lobit.i = lshr i32 %25, 31 ; <i32> [#uses=1]
%tmp.i = trunc i32 %.lobit.i to i8 ; <i8> [#uses=1]
%toBool = icmp eq i8 %tmp.i, 0 ; <i1> [#uses=1]
br i1 %toBool, label %bb3, label %bb4

which compiled to:

call L_memcmp$stub
shrl $31, %eax
testb %al, %al
jne LBB1_11

With this change, we compile it to:

call L_memcmp$stub
testl %eax, %eax
js LBB1_11

This triggers all the time in common code, with patterns like this:

%169 = and i32 %ply, 1 ; <i32> [#uses=1]
%170 = trunc i32 %169 to i8 ; <i8> [#uses=1]
%toBool = icmp ne i8 %170, 0 ; <i1> [#uses=1]

%7 = lshr i32 %6, 24 ; <i32> [#uses=1]
%9 = trunc i32 %7 to i8 ; <i8> [#uses=1]
%10 = icmp ne i8 %9, 0 ; <i1> [#uses=1]

etc. llvm-svn: 61985
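The identity behind the fold, checked in plain C++ (a sketch, not the instcombine code itself): testing the sign bit extracted by a logical shift is the same as one signed comparison, which x86 handles with test/js instead of shrl/testb.

#include <cassert>
#include <cstdint>

// lshr + trunc + icmp ne pattern: extract the sign bit and test it.
static bool signBitViaShift(int32_t x) {
  return ((uint32_t)x >> 31) != 0;
}

// What the chain can be canonicalized to: a single signed compare.
static bool signBitViaCompare(int32_t x) {
  return x < 0;
}

int main() {
  for (int64_t v = INT32_MIN; v <= INT32_MAX; v += 65537)
    assert(signBitViaShift((int32_t)v) == signBitViaCompare((int32_t)v));
  return 0;
}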
Chris Lattner authored
llvm-svn: 61984
Chris Lattner authored
jump threading can have bugs, who knew? ;-) llvm-svn: 61983
Chris Lattner authored
llvm-svn: 61980
Chris Lattner authored
(which is constant time and cheap) before checking hasAllZeroIndices. llvm-svn: 61976
- Jan 08, 2009
Chris Lattner authored
loads from allocas that cover the entire aggregate. This handles some memcpy/byval cases that are produced by llvm-gcc. This triggers a few times in kc++ (with std::pair<std::_Rb_tree_const_iterator <kc::impl_abstract_phylum*>,bool>) and once in 176.gcc (with %struct..0anon). llvm-svn: 61915
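In C++ terms, the load half looks like reassembling the scalarized fields into one integer (hypothetical shapes; the real code builds zext/shl/or IR):

#include <cstdint>

// If SROA split { uint32_t lo; uint32_t hi; } into two scalars, an i64
// load covering the whole alloca becomes zext/shl/or of the pieces
// (little-endian layout assumed).
static uint64_t loadWholeAggregate(uint32_t lo, uint32_t hi) {
  return (uint64_t)lo | ((uint64_t)hi << 32);
}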
- Jan 07, 2009
Chris Lattner authored
integer to a (transitive) bitcast the alloca and if that integer has the full size of the alloca, then it clobbers the whole thing. Handle this by extracting pieces out of the stored integer and filing them away in the SROA'd elements.

This triggers fairly frequently because the CFE uses integers to pass small structs by value and the inliner exposes these. For example, in kimwitu++, I see a bunch of these with i64 stores to "%struct.std::pair<std::_Rb_tree_const_iterator<kc::impl_abstract_phylum*>,bool>". In 176.gcc I see a few i32 stores to "%struct..0anon".

In the testcase, this is the difference between compiling test1 to:

_test1:
        subl $12, %esp
        movl 20(%esp), %eax
        movl %eax, 4(%esp)
        movl 16(%esp), %eax
        movl %eax, (%esp)
        movl (%esp), %eax
        addl 4(%esp), %eax
        addl $12, %esp
        ret

vs:

_test1:
        movl 8(%esp), %eax
        addl 4(%esp), %eax
        ret

The second half of this will be to handle loads of the same form. llvm-svn: 61853
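The store half in C++ terms (hypothetical shapes; the real code emits lshr/trunc IR): an integer store covering the whole alloca is split into per-element pieces that land directly in the SROA'd scalars.

#include <cstdint>

// An i64 store that clobbers the entire { uint32_t lo; uint32_t hi; }
// alloca is decomposed with trunc and lshr into its two elements
// (little-endian layout assumed).
static void storeWholeAggregate(uint64_t v, uint32_t &lo, uint32_t &hi) {
  lo = (uint32_t)v;         // trunc i64 -> i32
  hi = (uint32_t)(v >> 32); // lshr 32, then trunc
}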
Chris Lattner authored
llvm-svn: 61852
Chris Lattner authored
change. llvm-svn: 61851
Chris Lattner authored
requerying it all over the place. llvm-svn: 61850
Chris Lattner authored
code, no functionality change. llvm-svn: 61849
- Jan 06, 2009
Chris Lattner authored
as template arguments instead of as instance variables, exposing more optimization opportunities to the compiler earlier. llvm-svn: 61776
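The general pattern here, sketched with invented names: hoisting a flag that is fixed per object from an instance variable into a template parameter compiles each configuration separately, so the flag's branches fold away as constants.

#include <cstddef>

// With SkipZeros as a template argument, each specialization sees a
// compile-time constant and the inner test disappears; as a runtime
// bool member it would be re-checked on every element.
template <bool SkipZeros>
struct Counter {
  size_t count(const int *p, size_t n) const {
    size_t c = 0;
    for (size_t i = 0; i != n; ++i)
      if (!SkipZeros || p[i] != 0)
        ++c;
    return c;
  }
};

// Usage: Counter<true>().count(...) and Counter<false>().count(...)
// are two independently optimized instantiations.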
- Jan 05, 2009
Evan Cheng authored
llvm-svn: 61752
Nick Lewycky authored
Finalization occurs after all the FunctionPasses in the group have run, which is clearly not what we want. This also means that we have to make sure that we apply the right param attributes when creating a new function. Also, add a missed optimization: strdup and strndup. NoCapture and NoAlias return! llvm-svn: 61658
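What "NoAlias return" on strdup licenses, in a small C++ sketch (standard POSIX strdup semantics; the function below is invented): the result is a fresh allocation, so stores through it cannot modify the source string, and loads of the source can be reused across them.

#include <cstdlib>
#include <cstring>

static int example(const char *s) {
  char before = s[0];
  char *copy = strdup(s); // noalias result, nocapture argument
  if (!copy)
    return -1;
  copy[0] = 'X';          // fresh buffer: cannot alias *s
  char after = s[0];      // provably equal to 'before'
  free(copy);
  return before == after; // an optimizer may fold this to 1
}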
- Jan 04, 2009
Nick Lewycky authored
llvm-svn: 61632
Bill Wendling authored
llvm-svn: 61623
- Jan 01, 2009
Bill Wendling authored
llvm-svn: 61538
Bill Wendling authored
xor (or (icmp, icmp), true) -> and (icmp, icmp)

This is possible because of De Morgan's law. llvm-svn: 61537
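The reasoning, spelled out: xor with true is a logical not, De Morgan turns not(or(a, b)) into and(not a, not b), and a negated icmp is just the same icmp with the inverse predicate, so the result is again two icmps and an and. A quick check in C++:

#include <cassert>

int main() {
  // !((a < b) || (a == b)) == ((a >= b) && (a != b)):
  // each inverted compare is the same compare with the opposite
  // predicate, so no extra instructions are needed.
  for (int a = -2; a <= 2; ++a)
    for (int b = -2; b <= 2; ++b)
      assert(!((a < b) || (a == b)) == ((a >= b) && (a != b)));
  return 0;
}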
- Dec 24, 2008
Dale Johannesen authored
llvm-svn: 61403
Dale Johannesen authored
other SPEC breakage. I'll be reverting all recent changes shortly; this checkin is mostly so this change doesn't get lost. llvm-svn: 61402
- Dec 23, 2008
Dale Johannesen authored
my last patch to this file. The issue there was that all uses of an IV inside a loop are actually references to Base[IV*2], and there was one use outside that was the same but LSR didn't see the base or the scaling because it didn't recurse into uses outside the loop; thus, it used base+IV*scale mode inside the loop instead of pulling base out of the loop. This was extra bad because register pressure later forced both base and IV into memory. Doing that recursion, at least enough to figure out addressing modes, is a good idea in general; the change in AddUsersIfInteresting does this. However, there were side effects....

It is also possible for recursing outside the loop to introduce another IV where there was only 1 before (if the refs inside are not scaled and the ref outside is). I don't think this is a common case, but it's in the testsuite. It is right to be very aggressive about getting rid of such introduced IVs (CheckForIVReuse and the handling of nonzero RewriteFactor in StrengthReduceStridedIVUsers). In the testcase in question the new IV produced this way has both a nonconstant stride and a nonzero base, neither of which was handled before.

And when inserting new code that feeds into a PHI, it's right to put such code at the original location rather than in the PHI's immediate predecessor(s) when the original location is outside the loop (a case that couldn't happen before) (RewriteInstructionToUseNewBase); better to avoid making multiple copies of it in this case.

Also, the mechanism for keeping SCEVs corresponding to GEPs no longer works, as the GEP might change after its SCEV is remembered, invalidating the SCEV, and we might get a bad SCEV value when looking up the GEP again for a later loop. This also couldn't happen before, as we weren't recursing into GEPs outside the loop.

I owe some testcases for this, want to get it in for nightly runs. llvm-svn: 61362
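The in-loop/out-of-loop shape described above, as a small C++ illustration (invented function, not the actual testcase):

// Every use of the IV, inside and outside the loop, is really a
// reference to Base[IV*2].
static long sum(const long *Base, long n) {
  long s = 0;
  long i = 0;
  for (; i < n; ++i)
    s += Base[i * 2]; // in-loop uses: base + IV*scale addressing
  // The single use outside the loop shares the same base and scale;
  // once the recursion sees it, LSR can pull Base out of the loop and
  // stride a pointer by 2*sizeof(long) instead of keeping both Base
  // and IV live.
  return s + Base[i * 2];
}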
Owen Anderson authored
llvm-svn: 61358
- Dec 22, 2008
Bill Wendling authored
llvm-svn: 61354
Bill Wendling authored
llvm-svn: 61353
Bill Wendling authored
llvm-svn: 61352
Bill Wendling authored
llvm-svn: 61350
Bill Wendling authored
llvm-svn: 61349
Bill Wendling authored
truly deleted. These will be expanded with further checks of all of the data structures. llvm-svn: 61347
- Dec 21, 2008
Nick Lewycky authored
llvm-svn: 61297
- Dec 20, 2008
Nick Lewycky authored
our optz'n will apply to it, then build the replacement vector only if needed. llvm-svn: 61279
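The check-then-build pattern mentioned here, sketched with invented shapes: scan once to see whether the optimization applies at all, and only allocate the replacement when it does.

#include <vector>

// Pass 1 is a pure check; pass 2 builds the replacement vector only
// when the check succeeded, so the common no-op path allocates nothing.
static bool buildReplacement(const std::vector<int> &In,
                             std::vector<int> &Out) {
  bool Applies = false;
  for (int V : In)
    Applies |= (V < 0);
  if (!Applies)
    return false; // optimization does not apply; build nothing
  Out.reserve(In.size());
  for (int V : In)
    Out.push_back(V < 0 ? -V : V);
  return true;
}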
- Dec 19, 2008
Evan Cheng authored
- CodeGenPrepare does not split loop back edges, but previously it only knew about back edges of single-block loops. It now does a DFS walk to find loop back edges.
- Use SplitBlockPredecessors to factor out common predecessors of the critical edge destination. This is disabled for now due to some regressions. llvm-svn: 61248
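A back edge in a DFS walk is an edge whose target is still on the DFS stack. A self-contained sketch over plain adjacency lists (not LLVM's CFG types):

#include <cstddef>
#include <set>
#include <utility>
#include <vector>

using Graph = std::vector<std::vector<std::size_t>>;

// Classic three-color DFS: an edge U -> V is a back edge exactly when
// V is gray (still open on the DFS stack) when the edge is examined.
static void dfs(const Graph &G, std::size_t U, std::vector<int> &Color,
                std::set<std::pair<std::size_t, std::size_t>> &Back) {
  Color[U] = 1; // gray: on the DFS stack
  for (std::size_t V : G[U]) {
    if (Color[V] == 1)
      Back.insert({U, V}); // target still open: loop back edge
    else if (Color[V] == 0)
      dfs(G, V, Color, Back);
  }
  Color[U] = 2; // black: finished
}

static std::set<std::pair<std::size_t, std::size_t>>
findBackEdges(const Graph &G) {
  std::vector<int> Color(G.size(), 0);
  std::set<std::pair<std::size_t, std::size_t>> Back;
  if (!G.empty())
    dfs(G, 0, Color, Back); // node 0 plays the role of the entry block
  return Back;
}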
- Dec 18, 2008
Bill Wendling authored
llvm-svn: 61222
Bill Wendling authored
llvm-svn: 61219