Dec 11, 2007

Bill Wendling authored
Fix subtle bug when initially creating this map. llvm-svn: 44873

Bill Wendling authored
because those with side effects will be caught by other checks in here. Also, simplify the check for a BB in a sub-loop. llvm-svn: 44871

Evan Cheng authored
llvm-svn: 44838

Evan Cheng authored
llvm-svn: 44837

Gordon Henriksen authored
per-function collector model. Collector is now the factory for CollectorMetadata, so the latter may be subclassed. llvm-svn: 44827
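
A minimal C++ sketch of the factory relationship this describes; the class bodies and method names here are invented for illustration, and the actual LLVM interfaces differ:

#include <memory>

// Invented names for illustration only.
class CollectorMetadata {
public:
  virtual ~CollectorMetadata() = default;
  // Per-function GC bookkeeping lives here; subclasses may add state.
};

class Collector {
public:
  virtual ~Collector() = default;
  // The collector is the factory for its metadata, so a Collector
  // subclass can hand back a CollectorMetadata subclass.
  virtual std::unique_ptr<CollectorMetadata> makeMetadata() const {
    return std::make_unique<CollectorMetadata>();
  }
};

class MyCollectorMetadata : public CollectorMetadata {
  // Extra per-function state for this particular collector.
};

class MyCollector : public Collector {
public:
  std::unique_ptr<CollectorMetadata> makeMetadata() const override {
    return std::make_unique<MyCollectorMetadata>();
  }
};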

Dec 10, 2007

Owen Anderson authored
how the CodeGen machinery works. llvm-svn: 44786

Christopher Lamb authored
Improve branch folding by recognizing that explicit successor relationships impact the value of fall-through choices. llvm-svn: 44785
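
One possible reading of that heuristic, sketched in standalone C++ with invented types (this is not the actual branch-folding code): a successor that is already pinned behind another block cannot be a profitable fall-through target.

// Invented, simplified block-layout model.
struct Block {
  const Block *explicitLayoutPred = nullptr; // block this one must follow
};

// Falling through from `From` to `Succ` only pays off if `Succ` can in
// fact be placed immediately after `From`.
inline bool canFallThrough(const Block &From, const Block &Succ) {
  return Succ.explicitLayoutPred == nullptr ||
         Succ.explicitLayoutPred == &From;
}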

Dec 09, 2007

Chris Lattner authored
knows the vector is not pow2. llvm-svn: 44740

Chris Lattner authored
llvm-svn: 44728

Bill Wendling authored
llvm-svn: 44727

Chris Lattner authored
llvm-svn: 44726

Chris Lattner authored
%f8 = type <8 x float>

define void @test_f8(%f8* %P, %f8* %Q, %f8* %S) {
  %p = load %f8* %P     ; <%f8> [#uses=1]
  %q = load %f8* %Q     ; <%f8> [#uses=1]
  %R = add %f8 %p, %q   ; <%f8> [#uses=1]
  store %f8 %R, %f8* %S
  ret void
}

into:

_test_f8:
  movaps 16(%rdi), %xmm0
  addps 16(%rsi), %xmm0
  movaps (%rdi), %xmm1
  addps (%rsi), %xmm1
  movaps %xmm0, 16(%rdx)
  movaps %xmm1, (%rdx)
  ret

llvm-svn: 44725

Chris Lattner authored
llvm-svn: 44724

Dec 08, 2007

Chris Lattner authored
llvm-svn: 44723

Chris Lattner authored
llvm-svn: 44722

Chris Lattner authored
llvm-svn: 44719

Chris Lattner authored
llvm-svn: 44718

Chris Lattner authored
llvm-svn: 44717

Chris Lattner authored
llvm-svn: 44716

Chris Lattner authored
llvm-svn: 44715

Chris Lattner authored
Leave it visibility hidden, but not in an anon namespace. llvm-svn: 44714
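
For context, a small C++ illustration of the distinction the message draws (the type names are invented): an anonymous namespace gives internal linkage, creating a distinct copy per translation unit, while hidden visibility keeps ordinary linkage inside the library but stops the symbol from being exported from the shared object.

// The option moved away from: internal linkage; every translation
// unit that includes this gets its own distinct type.
namespace {
struct FooImplA {};
} // end anonymous namespace

// The option kept: ordinary external linkage within the library, but
// the symbol is not exported from the resulting shared object
// (GCC/Clang attribute syntax).
struct __attribute__((visibility("hidden"))) FooImplB {};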

Bill Wendling authored
isTriviallyReMaterializable -> hasNoSideEffects
isReallyTriviallyReMaterializable -> isTriviallyReMaterializable
llvm-svn: 44702

Dec 07, 2007

Bill Wendling authored
_foo:
  li r2, 0
LBB1_1: ; bb
  li r5, 0
  stw r5, 0(r3)
  addi r2, r2, 1
  addi r3, r3, 4
  cmplw cr0, r2, r4
  bne cr0, LBB1_1 ; bb
LBB1_2: ; return
  blr

to:

_foo:
  li r2, 0
  li r5, 0
LBB1_1: ; bb
  stw r5, 0(r3)
  addi r2, r2, 1
  addi r3, r3, 4
  cmplw cr0, r2, r4
  bne cr0, LBB1_1 ; bb
LBB1_2: ; return
  blr

ZOMG!! :-) Moar to come... llvm-svn: 44687

Evan Cheng authored
llvm-svn: 44671

Dec 06, 2007

Dale Johannesen authored
Simpler and safer. llvm-svn: 44663

Evan Cheng authored
llvm-svn: 44660

Chris Lattner authored
only disable it if we don't know it will be obviously profitable. Still a FIXME, but less so. :) llvm-svn: 44658

Chris Lattner authored
the X86 backend are needed before this should be enabled by default. llvm-svn: 44657

Chris Lattner authored
_foo:
  movl $12, %eax
  andl 4(%esp), %eax
  movl _array(%eax), %eax
  ret

instead of:

_foo:
  movl 4(%esp), %eax
  shrl $2, %eax
  andl $3, %eax
  movl _array(,%eax,4), %eax
  ret

As it turns out, this triggers all the time, in a wide variety of situations; for example, I see diffs like this in various programs:

- movl 8(%eax), %eax
- shll $2, %eax
- andl $1020, %eax
- movl (%esi,%eax), %eax
+ movzbl 8(%eax), %eax
+ movl (%esi,%eax,4), %eax

- shll $2, %edx
- andl $1020, %edx
- movl (%edi,%edx), %edx
+ andl $255, %edx
+ movl (%edi,%edx,4), %edx

Unfortunately, I also see stuff like this, which can be fixed in the X86 backend:

- andl $85, %ebx
- addl _bit_count(,%ebx,4), %ebp
+ shll $2, %ebx
+ andl $340, %ebx
+ addl _bit_count(%ebx), %ebp

llvm-svn: 44656

Chris Lattner authored
llvm-svn: 44654

Dale Johannesen authored
llvm-svn: 44649

Evan Cheng authored
Fix for PR1831: if all defs of an interval are re-materializable, then it's a preferred spill candidate. llvm-svn: 44644
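
A hedged sketch of that heuristic in standalone C++ with invented types (not LLVM's actual live-interval data structures): an interval whose every def can simply be recomputed needs no store/reload pair, so it is cheap to spill.

#include <vector>

// Invented, simplified stand-ins for live-interval bookkeeping.
struct Def { bool rematerializable; };
struct LiveInterval { std::vector<Def> defs; };

// All defs re-materializable => the value can be recomputed at each
// use instead of stored and reloaded, so prefer spilling this interval.
inline bool isPreferredSpillCandidate(const LiveInterval &LI) {
  if (LI.defs.empty())
    return false;
  for (const Def &D : LI.defs)
    if (!D.rematerializable)
      return false;
  return true;
}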

Dec 05, 2007

Evan Cheng authored
llvm-svn: 44612

Evan Cheng authored
llvm-svn: 44611

Evan Cheng authored
llvm-svn: 44610

Evan Cheng authored
llvm-svn: 44609

Chris Lattner authored
llvm-svn: 44608

Chris Lattner authored
llvm-svn: 44607

Evan Cheng authored
This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a low spill weight so it would not be picked in case spilling is needed (avoid pushing other intervals in the same BB to be spilled).
llvm-svn: 44601
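
A rough illustration of that weighting rule in standalone C++ (the names and the weight value are invented, not LLVM's implementation):

// Simplified stand-in for a split live interval.
struct SplitInterval {
  float spillWeight;
  bool allAccessesFoldable; // every use/def folds into a memory operand
};

// If every access folds into its using instruction, spilling this
// interval adds no extra instructions, so give it a low weight rather
// than forcing other intervals in the same block to be spilled.
inline void biasSpillWeight(SplitInterval &SI) {
  if (SI.allAccessesFoldable)
    SI.spillWeight = 0.0f;
}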