- Sep 19, 2006
  - Chris Lattner authored
    allow it to go into an infinite loop, filling up the disk!
    llvm-svn: 30494
  - Chris Lattner authored
    llvm-svn: 30478
  - Chris Lattner authored
    llvm-svn: 30477
  - Evan Cheng authored
    llvm-svn: 30474
- Sep 18, 2006
  - Evan Cheng authored
    llvm-svn: 30470
  - Andrew Lenharth authored
    llvm-svn: 30462
  - Andrew Lenharth authored
    llvm-svn: 30461
  - Jim Laskey authored
    llvm-svn: 30460
- Sep 16, 2006
  - Chris Lattner authored
    llvm-svn: 30407
  - Chris Lattner authored
    llvm-svn: 30403
  - Chris Lattner authored
    llvm-svn: 30402
- Sep 15, 2006
  - Chris Lattner authored
    is faster and is needed for future improvements.
    llvm-svn: 30383
- Sep 14, 2006
  - Chris Lattner authored
    This implements CodeGen/X86/and-or-fold.ll
    llvm-svn: 30379
  - Chris Lattner authored
    matching things like ((x >> c1) & c2) | ((x << c3) & c4) to (rot x, c5) & c6
    llvm-svn: 30376
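    For context, a minimal C++ sketch of the kind of source that produces this
    pattern (the function name and the specific constants are ours, not from
    the commit):

    ```cpp
    #include <cstdint>

    // ((x >> 8) & 0x00FF00FF) | ((x << 24) & 0xFF00FF00) has the shape
    // ((x >> c1) & c2) | ((x << c3) & c4) with c1 + c3 == 32, so a target
    // with a rotate instruction can select it as a 32-bit rotate-left by 24
    // followed by a single mask (here rotl(x, 24) & 0xFFFF00FF).
    uint32_t masked_rotate(uint32_t x) {
      return ((x >> 8) & 0x00FF00FFu) | ((x << 24) & 0xFF00FF00u);
    }
    ```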
  - Evan Cheng authored
    llvm-svn: 30327
  - Evan Cheng authored
    llvm-svn: 30326
  - Evan Cheng authored
    llvm-svn: 30316
- Sep 13, 2006
  - Chris Lattner authored
    in a specific BB, don't undo this!). This allows us to compile
    CodeGen/X86/loop-hoist.ll into:

        _foo:
            xorl %eax, %eax
        *** movl L_Arr$non_lazy_ptr, %ecx
            movl 4(%esp), %edx
        LBB1_1: #cond_true
            movl %eax, (%ecx,%eax,4)
            incl %eax
            cmpl %edx, %eax
            jne LBB1_1 #cond_true
        LBB1_2: #return
            ret

    instead of:

        _foo:
            xorl %eax, %eax
            movl 4(%esp), %ecx
        LBB1_1: #cond_true
        *** movl L_Arr$non_lazy_ptr, %edx
            movl %eax, (%edx,%eax,4)
            incl %eax
            cmpl %ecx, %eax
            jne LBB1_1 #cond_true
        LBB1_2: #return
            ret

    This was noticed in 464.h264ref. This doesn't usually affect PPC, but
    strikes X86 all the time.
    llvm-svn: 30290
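    A rough C++ equivalent of the test, reconstructed from the assembly above
    (the names come from that assembly; the exact .ll contents may differ):

    ```cpp
    // The address of the external global Arr is loop-invariant, so the load
    // through L_Arr$non_lazy_ptr belongs outside the loop, not in its body.
    extern int Arr[];

    void foo(int N) {
      for (int i = 0; i < N; ++i)
        Arr[i] = i;
    }
    ```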
  - Chris Lattner authored

        addl %ecx, %ecx
        adcl %eax, %eax

    instead of:

        movl %ecx, %edx
        addl %edx, %edx
        shrl $31, %ecx
        addl %eax, %eax
        orl %ecx, %eax

    and to:

        addc r5, r5, r5
        adde r4, r4, r4

    instead of:

        slwi r2,r9,1
        srwi r0,r11,31
        slwi r3,r11,1
        or r2,r0,r2

    on PPC.
    llvm-svn: 30284
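    The sequences above correspond to doubling a 64-bit value on a 32-bit
    target; a minimal illustration (the function name is ours):

    ```cpp
    #include <cstdint>

    // On a 32-bit target a uint64_t occupies a register pair. x << 1 can be
    // lowered as add-with-carry (addl/adcl on X86, addc/adde on PPC) instead
    // of two shifts plus an extract-and-OR of the bit crossing the halves.
    uint64_t double_u64(uint64_t x) {
      return x << 1;
    }
    ```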
- Sep 12, 2006
  - Evan Cheng authored
    representing expressions that can only be resolved at link time, etc.
    llvm-svn: 30278
- Sep 11, 2006
  - Nate Begeman authored
    llvm-svn: 30240
- Sep 10, 2006
  - Chris Lattner authored
    due to switch cases going to the same place, it makes #pred != #phi entries,
    breaking live interval analysis. This fixes 458.sjeng on x86 with llc.
    llvm-svn: 30236
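    As an illustration of the triggering shape (our example, not the actual
    458.sjeng code): several case values selecting the same destination give
    that block more than one incoming edge from the switch.

    ```cpp
    // Case values 1, 2 and 3 all branch to the same block, so the lowered
    // switch has several edges to a single successor; the bug was that such
    // duplicated edges could leave a PHI with a different number of entries
    // than the block has predecessors.
    int pick(int v, int a, int b) {
      switch (v) {
      case 1:
      case 2:
      case 3:
        return a;  // shared destination
      default:
        return b;
      }
    }
    ```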
- Sep 09, 2006
  - Chris Lattner authored
    llvm-svn: 30225
  - Chris Lattner authored
    llvm-svn: 30217
  - Nate Begeman authored
    the file now, however the relocated address is currently wrong. Fixing that
    will require some deep pondering.
    llvm-svn: 30207
- Sep 08, 2006
  - Chris Lattner authored
    safe for later allocation. This fixes McCat/18-imp with llc-beta.
    llvm-svn: 30204
  - Chris Lattner authored
    llvm-svn: 30198
  - Chris Lattner authored
    of unallocatable registers, just because an alias is allocatable. We were
    picking registers like SIL just because ESI was being used.
    llvm-svn: 30197
  - Jim Laskey authored
    llvm-svn: 30162
- Sep 07, 2006
  - Evan Cheng authored
    llvm-svn: 30151
  - Chris Lattner authored
    too many phi operands when lowering a switch to branches in some cases.
    llvm-svn: 30142
- Sep 06, 2006
  - Jim Laskey authored
    llvm-svn: 30126
- Sep 05, 2006
  - Evan Cheng authored
    llvm-svn: 30122
  - Chris Lattner authored
    llvm-svn: 30118
  - Chris Lattner authored
    llvm-svn: 30117
  - Chris Lattner authored
    llvm-svn: 30114
  - Chris Lattner authored
    def operand or a use operand.
    llvm-svn: 30109
  - Chris Lattner authored
    actually *removes* one of the operands, instead of just assigning both
    operands the same register. This makes reasoning about instructions
    unnecessarily complex, because you need to know if you are before or after
    register allocation to match up operand #'s with the target description
    file. Changing this also gets rid of a bunch of hacky code in various
    places. This patch also includes changes to fold loads into cmp/test
    instructions in the X86 backend, along with a significant simplification
    to the X86 spill folding code.
    llvm-svn: 30108
- Sep 04, 2006
  - Chris Lattner authored
    llvm-svn: 30099
  - Chris Lattner authored
    llvm-svn: 30098