- Sep 14, 2006

Evan Cheng authored
llvm-svn: 30326

Chris Lattner authored
llvm-svn: 30321

Chris Lattner authored
the process in addition to disabling core file emission. This speeds up bugpoint on default-configured macs by several orders of magnitude.
llvm-svn: 30317

Evan Cheng authored
llvm-svn: 30316

Devang Patel authored
to Dominators.h
llvm-svn: 30309

Chris Lattner authored
llvm-svn: 30308

- Sep 13, 2006

Chris Lattner authored
This folds unconditional branches that are often produced by code specialization.
llvm-svn: 30307

Nick Lewycky authored
llvm-svn: 30305

Nick Lewycky authored
llvm-svn: 30304

Chris Lattner authored
llvm-svn: 30303

Evan Cheng authored
llvm-svn: 30300

Nick Lewycky authored
llvm-svn: 30298

Chris Lattner authored
llvm-svn: 30294

Chris Lattner authored
llvm-svn: 30293

Chris Lattner authored
llvm-svn: 30292

Rafael Espindola authored
llvm-svn: 30291

Chris Lattner authored
in a specific BB, don't undo this!). This allows us to compile CodeGen/X86/loop-hoist.ll into:

    _foo:
            xorl %eax, %eax
    ***     movl L_Arr$non_lazy_ptr, %ecx
            movl 4(%esp), %edx
    LBB1_1: #cond_true
            movl %eax, (%ecx,%eax,4)
            incl %eax
            cmpl %edx, %eax
            jne LBB1_1 #cond_true
    LBB1_2: #return
            ret

instead of:

    _foo:
            xorl %eax, %eax
            movl 4(%esp), %ecx
    LBB1_1: #cond_true
    ***     movl L_Arr$non_lazy_ptr, %edx
            movl %eax, (%edx,%eax,4)
            incl %eax
            cmpl %ecx, %eax
            jne LBB1_1 #cond_true
    LBB1_2: #return
            ret

This was noticed in 464.h264ref. This doesn't usually affect PPC, but strikes X86 all the time.
llvm-svn: 30290

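A hedged C sketch of the kind of loop loop-hoist.ll exercises (the function and array names here are assumptions, not the actual test source): the array's base address is loop-invariant, so the movl of L_Arr$non_lazy_ptr that materializes it belongs in the preheader, as in the first listing above.

    /* Hypothetical C equivalent: the store's base address never changes
       inside the loop, so the load of L_Arr$non_lazy_ptr can be hoisted
       out of it and %ecx set up once. */
    extern int Arr[];

    void foo(int N) {
        int i;
        for (i = 0; i != N; ++i)
            Arr[i] = i;   /* the movl %eax, (%ecx,%eax,4) store */
    }
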
Chris Lattner authored
We now compile CodeGen/X86/lea-2.ll into:

    _test:
            movl 4(%esp), %eax
            movl 8(%esp), %ecx
            leal -5(%ecx,%eax,4), %eax
            ret

instead of:

    _test:
            movl 4(%esp), %eax
            leal (,%eax,4), %eax
            addl 8(%esp), %eax
            addl $4294967291, %eax
            ret

llvm-svn: 30288

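A hedged C reconstruction of what lea-2.ll likely computes (the exact test source is an assumption): LEA encodes base + index*scale + displacement in a single instruction, and $4294967291 in the old sequence is simply -5 as an unsigned 32-bit immediate.

    /* Hypothetical source: base b, index a scaled by 4, displacement -5,
       all folded into the single leal -5(%ecx,%eax,4). */
    int test(int a, int b) {
        return a * 4 + b - 5;
    }
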
Chris Lattner authored
llvm-svn: 30286

Chris Lattner authored
llvm-svn: 30285

Chris Lattner authored
    addl %ecx, %ecx
    adcl %eax, %eax

instead of:

    movl %ecx, %edx
    addl %edx, %edx
    shrl $31, %ecx
    addl %eax, %eax
    orl %ecx, %eax

and to:

    addc r5, r5, r5
    adde r4, r4, r4

instead of:

    slwi r2,r9,1
    srwi r0,r11,31
    slwi r3,r11,1
    or r2,r0,r2

on PPC.
llvm-svn: 30284

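A hedged sketch of the operation apparently being lowered here (assumed, since the message is truncated): doubling a 64-bit value on a 32-bit target. The add-with-carry form adds the low halves and propagates the carry into the high halves, replacing the shift/shift/or expansion.

    /* Hypothetical illustration: on a 32-bit target, x + x on a 64-bit
       value becomes an add of the low half (addl / addc) plus an
       add-with-carry of the high half (adcl / adde). */
    unsigned long long double_u64(unsigned long long x) {
        return x + x;   /* equivalent to x << 1 */
    }
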
Chris Lattner authored
This implements CodeGen/X86/jump_sign.ll.
llvm-svn: 30283

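A hedged guess at the pattern jump_sign.ll tests (the exact source is an assumption): branching on the sign of an arithmetic result, where the addl already sets the sign flag and the branch can be a bare js with no separate compare.

    /* Hypothetical source: the x + 1 < 0 test can reuse the flags set by
       the addl, emitting js instead of a compare-and-jump pair. */
    int f(int x) {
        if (x + 1 < 0)
            return 0;
        return 1;
    }
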
Devang Patel authored
llvm-svn: 30281

- Sep 12, 2006

Evan Cheng authored
llvm-svn: 30279

Evan Cheng authored
representing expressions that can only be resolved at link time, etc.
llvm-svn: 30278

Evan Cheng authored
llvm-svn: 30277

Chris Lattner authored
Handle this. This fixes PR908 and Transforms/LICM/2006-09-12-DeadUserOfSunkInstr.ll
llvm-svn: 30275

Chris Lattner authored
llvm-svn: 30271

Chris Lattner authored
llvm-svn: 30269

Chris Lattner authored
llvm-svn: 30268

- Sep 11, 2006

Chris Lattner authored
llvm-svn: 30266

Rafael Espindola authored
llvm-svn: 30262

Rafael Espindola authored
llvm-svn: 30261

Rafael Espindola authored
llvm-svn: 30252

Nick Lewycky authored
llvm-svn: 30251

Rafael Espindola authored
llvm-svn: 30246

Evan Cheng authored
llvm-svn: 30245

Evan Cheng authored
llvm-svn: 30244

Evan Cheng authored
operand of a conditional branch to allow load folding into CMP / TEST instructions.
llvm-svn: 30241

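A hedged illustration of the kind of branch this enables (the source is an assumption): when the load feeding a compare can be folded, the branch condition becomes a single memory-operand cmpl instead of a separate load followed by a register compare.

    /* Hypothetical example: with folding, the condition can compile to
       cmpl $0, (%mem) / je rather than a movl load followed by
       testl %ecx, %ecx / je. */
    int first_nonzero(int *p, int n) {
        int i;
        for (i = 0; i < n; ++i)
            if (p[i] != 0)
                return i;
        return -1;
    }
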
Nate Begeman authored
llvm-svn: 30240