- Dec 29, 2007
  - Chris Lattner authored
    llvm-svn: 45415
  - Chris Lattner authored
    comparisons with a constant. This allows us to compile isnan to:

        _foo:
                fcmpu cr7, f1, f1
                mfcr r2
                rlwinm r3, r2, 0, 31, 31
                blr

    instead of:

        LCPI1_0:                ; float
                .space 4
        _foo:
                lis r2, ha16(LCPI1_0)
                lfs f0, lo16(LCPI1_0)(r2)
                fcmpu cr7, f1, f0
                mfcr r2
                rlwinm r3, r2, 0, 31, 31
                blr

    llvm-svn: 45405
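    Why no constant load is needed (a sketch, not from the commit itself): an
    unordered comparison of a value with itself is true exactly when the value
    is NaN, so isnan reduces to a single self-compare. The function name below
    is illustrative:

        // Relies only on the IEEE 754 rule that NaN compares unequal to
        // everything, including itself; lowers to one unordered compare.
        extern "C" int my_isnan(float x) {
            return x != x;   // true only when x is NaN
        }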
  - Chris Lattner authored
    llvm-svn: 45402
  - Chris Lattner authored
    llvm-svn: 45400
  - Chris Lattner authored

        x = load p
        store x -> p

    llvm-svn: 45398
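    The pattern above is a store that writes back the value just loaded from
    the same pointer; such a store is a no-op and can be deleted. A hedged C++
    sketch of the source-level shape that folds away (the function name is
    illustrative):

        // Storing the freshly loaded value back through the same pointer
        // changes nothing observable (absent volatile or atomics), so the
        // store can be dropped.
        void copy_in_place(int *p) {
            int x = *p;   // x = load p
            *p = x;       // store x -> p (redundant, folded to nothing)
        }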
- Dec 22, 2007
  - Chris Lattner authored
    legalizer support goes in. llvm-svn: 45323
  - Chris Lattner authored
    llvm-svn: 45322
  - Chris Lattner authored
    or after legalize. llvm-svn: 45321
  - Chris Lattner authored
    targets. llvm-svn: 45320
- Dec 20, 2007
  - Evan Cheng authored
    llvm-svn: 45259
  - Evan Cheng authored
    llvm-svn: 45252
- Dec 19, 2007
  - Duncan Sands authored
    llvm-svn: 45198
  - Duncan Sands authored
    to know about calls that cannot throw ('nounwind'): if such a call does
    throw for some reason then the personality will terminate the program. The
    distinction between an ordinary call and a nounwind call is that an
    ordinary call gets an entry in the exception table but a nounwind call does
    not. This patch sets up the exception table appropriately. One oddity is
    that I've chosen to bracket nounwind calls with labels (like invokes) - the
    other choice would have been to bracket ordinary calls with labels. While
    bracketing ordinary calls is more natural (because bracketing by labels
    would then correspond exactly to getting an entry in the exception table),
    I didn't do it because introducing labels impedes some optimizations and
    I'm guessing that ordinary calls occur more often than nounwind calls.
    This fixes the gcc filter2 eh test, at least at -O0 (the inliner needs
    some tweaking at higher optimization levels). llvm-svn: 45197
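    A hedged C++ sketch of the distinction described above (function names are
    illustrative, not from the patch): a callee the front end knows cannot
    throw can be marked 'nounwind', so calls to it need no range in the
    caller's exception table, while ordinary calls remain covered by the table.

        void may_throw() { throw 1; }
        void never_throws() noexcept {}   // 2007-era C++ would spell this 'throw()'

        void caller() {
            never_throws();  // nounwind call: no exception-table entry;
                             // if it throws anyway, the program terminates
            may_throw();     // ordinary call: covered by an exception-table entry
        }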
  - Evan Cheng authored
    llvm-svn: 45186
- Dec 18, 2007
  - Evan Cheng authored
    llvm-svn: 45167
  - Evan Cheng authored
    llvm-svn: 45164
  - Evan Cheng authored
    FIX for PR1799: When a load is unfolded from an instruction, check if it is
    a new node. If not, do not create a new SUnit. llvm-svn: 45157
  - Evan Cheng authored
    llvm-svn: 45151
- Dec 17, 2007
  - Duncan Sands authored
    how to lower them (with no attempt made to be efficient, since they should
    only occur for unoptimized code). llvm-svn: 45108
- Dec 14, 2007
  - Evan Cheng authored
    llvm-svn: 45028
- Dec 12, 2007
  - Dan Gohman authored
    SelectionDAG::getConstant, in the same way as vector floating-point
    constants. This allows the legalize expansion code for @llvm.ctpop and
    friends to be usable with vector types. llvm-svn: 44954
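    For context, a hedged sketch of what the legalize expansion of @llvm.ctpop
    amounts to when applied per element of a <4 x i32> value, written in plain
    C++ rather than DAG nodes (the function name is illustrative):

        #include <cstdint>

        // Classic parallel bit-count applied element-wise; roughly the
        // expansion the legalizer emits when there is no hardware
        // population-count instruction.
        void vec_ctpop(const uint32_t in[4], uint32_t out[4]) {
            for (int i = 0; i < 4; ++i) {
                uint32_t x = in[i];
                x = x - ((x >> 1) & 0x55555555u);
                x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);
                x = (x + (x >> 4)) & 0x0F0F0F0Fu;
                out[i] = (x * 0x01010101u) >> 24;   // sum the byte counts
            }
        }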
- Dec 11, 2007
  - Evan Cheng authored
    llvm-svn: 44837
- Dec 09, 2007
  - Chris Lattner authored
    knows the vector is not pow2 llvm-svn: 44740
  - Chris Lattner authored
    llvm-svn: 44728
  - Chris Lattner authored
    llvm-svn: 44726
  - Chris Lattner authored

        %f8 = type <8 x float>

        define void @test_f8(%f8* %P, %f8* %Q, %f8* %S) {
                %p = load %f8* %P               ; <%f8> [#uses=1]
                %q = load %f8* %Q               ; <%f8> [#uses=1]
                %R = add %f8 %p, %q             ; <%f8> [#uses=1]
                store %f8 %R, %f8* %S
                ret void
        }

    into:

        _test_f8:
                movaps 16(%rdi), %xmm0
                addps 16(%rsi), %xmm0
                movaps (%rdi), %xmm1
                addps (%rsi), %xmm1
                movaps %xmm0, 16(%rdx)
                movaps %xmm1, (%rdx)
                ret

    llvm-svn: 44725
  - Chris Lattner authored
    llvm-svn: 44724
- Dec 08, 2007
  - Chris Lattner authored
    llvm-svn: 44723
  - Chris Lattner authored
    llvm-svn: 44722
  - Chris Lattner authored
    llvm-svn: 44719
  - Chris Lattner authored
    llvm-svn: 44718
  - Chris Lattner authored
    llvm-svn: 44717
  - Chris Lattner authored
    llvm-svn: 44716
  - Chris Lattner authored
    llvm-svn: 44715
  - Chris Lattner authored
    Leave it visibility hidden, but not in an anon namespace. llvm-svn: 44714
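    A hedged C++ sketch of the difference (class names are illustrative): an
    anonymous namespace gives the type internal linkage, one copy per
    translation unit, while hidden visibility keeps a single named symbol that
    is simply not exported from the shared library.

        // Internal linkage: every translation unit gets its own copy.
        namespace {
        struct HelperA { int run(); };
        }

        // External linkage within the binary, but hidden from its dynamic
        // symbol table (GCC/Clang attribute syntax).
        struct __attribute__((visibility("hidden"))) HelperB { int run(); };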
- Dec 06, 2007
  - Dale Johannesen authored
    Simpler and safer. llvm-svn: 44663
  - Chris Lattner authored
    only disable it if we don't know it will be obviously profitable. Still
    fixme, but less so. :) llvm-svn: 44658
  - Chris Lattner authored
    the X86 backend are needed before this should be enabled by default.
    llvm-svn: 44657
  - Chris Lattner authored

        _foo:
                movl $12, %eax
                andl 4(%esp), %eax
                movl _array(%eax), %eax
                ret

    instead of:

        _foo:
                movl 4(%esp), %eax
                shrl $2, %eax
                andl $3, %eax
                movl _array(,%eax,4), %eax
                ret

    As it turns out, this triggers all the time, in a wide variety of
    situations, for example, I see diffs like this in various programs:

        -       movl 8(%eax), %eax
        -       shll $2, %eax
        -       andl $1020, %eax
        -       movl (%esi,%eax), %eax
        +       movzbl 8(%eax), %eax
        +       movl (%esi,%eax,4), %eax

        -       shll $2, %edx
        -       andl $1020, %edx
        -       movl (%edi,%edx), %edx
        +       andl $255, %edx
        +       movl (%edi,%edx,4), %edx

    Unfortunately, I also see stuff like this, which can be fixed in the X86
    backend:

        -       andl $85, %ebx
        -       addl _bit_count(,%ebx,4), %ebp
        +       shll $2, %ebx
        +       andl $340, %ebx
        +       addl _bit_count(%ebx), %ebp

    llvm-svn: 44656
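    The shape of the transformation, as a hedged C++ sketch (the array and
    function names are illustrative): instead of shifting the index right and
    then letting the addressing mode scale it back up by 4, the mask is
    adjusted so the value can be used directly as a byte offset.

        extern int array[4];

        // Source form: index = (x >> 2) & 3, scaled by sizeof(int) in the
        // addressing mode.
        // Folded form: byte offset = x & 12, added straight to &array.
        int foo(unsigned x) {
            return array[(x >> 2) & 3];
        }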
  - Chris Lattner authored
    llvm-svn: 44654