Sep 14, 2006

- Evan Cheng (llvm-svn: 30327)
- Evan Cheng (llvm-svn: 30326)
- Evan Cheng (llvm-svn: 30325)
- Evan Cheng (llvm-svn: 30324)
- Chris Lattner (llvm-svn: 30323)
- Chris Lattner (llvm-svn: 30322)
- Chris Lattner (llvm-svn: 30321)
- Reid Spencer: …header file on Darwin. (llvm-svn: 30319)
- Chris Lattner (llvm-svn: 30318)
- Chris Lattner: …the process in addition to disabling core file emission. This
  speeds up bugpoint on default-configured Macs by several orders of magnitude.
  (llvm-svn: 30317)
- Evan Cheng (llvm-svn: 30316)
- Devang Patel (llvm-svn: 30315)
- Evan Cheng (llvm-svn: 30314)
- Nick Lewycky (llvm-svn: 30313)
- Nick Lewycky: …a pointer to a temporary. (llvm-svn: 30312)
- Nick Lewycky: …pick up on memory errors. (llvm-svn: 30311)
- Devang Patel: …type. Do not ignore these operands while finding external
  references. (llvm-svn: 30310)
- Devang Patel: …to Dominators.h (llvm-svn: 30309)
- Chris Lattner (llvm-svn: 30308)

Sep 13, 2006

- Chris Lattner: This folds unconditional branches that are often produced by
  code specialization. (llvm-svn: 30307)
- Nick Lewycky (llvm-svn: 30305)
- Nick Lewycky (llvm-svn: 30304)
- Chris Lattner (llvm-svn: 30303)
- Chris Lattner (llvm-svn: 30302)
- Evan Cheng (llvm-svn: 30300)
- Nick Lewycky (llvm-svn: 30298)
- Chris Lattner (llvm-svn: 30296)
- Chris Lattner (llvm-svn: 30294)
- Chris Lattner (llvm-svn: 30293)
- Chris Lattner (llvm-svn: 30292)
- Rafael Espindola (llvm-svn: 30291)
- Chris Lattner: …in a specific BB, don't undo this!). This allows us to compile
  CodeGen/X86/loop-hoist.ll into:

        _foo:
                xorl %eax, %eax
        ***     movl L_Arr$non_lazy_ptr, %ecx
                movl 4(%esp), %edx
        LBB1_1:        #cond_true
                movl %eax, (%ecx,%eax,4)
                incl %eax
                cmpl %edx, %eax
                jne LBB1_1     #cond_true
        LBB1_2:        #return
                ret

  instead of:

        _foo:
                xorl %eax, %eax
                movl 4(%esp), %ecx
        LBB1_1:        #cond_true
        ***     movl L_Arr$non_lazy_ptr, %edx
                movl %eax, (%edx,%eax,4)
                incl %eax
                cmpl %ecx, %eax
                jne LBB1_1     #cond_true
        LBB1_2:        #return
                ret

  This was noticed in 464.h264ref. This doesn't usually affect PPC, but strikes
  X86 all the time. (llvm-svn: 30290)
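
  For context, a hypothetical C loop with the shape that produces the code above
  (inferred from the assembly; the names foo and Arr mirror its symbols, and this
  is a sketch rather than the actual test source):

        extern int Arr[];

        /* The address of Arr is loop-invariant, so the load of
         * L_Arr$non_lazy_ptr can be hoisted out of the loop, as the
         * ***-marked lines above show. */
        void foo(int N) {
            for (int i = 0; i != N; ++i)
                Arr[i] = i;    /* movl %eax, (%ecx,%eax,4) */
        }
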
- Chris Lattner (llvm-svn: 30289)
- Chris Lattner: We now compile CodeGen/X86/lea-2.ll into:

        _test:
                movl 4(%esp), %eax
                movl 8(%esp), %ecx
                leal -5(%ecx,%eax,4), %eax
                ret

  instead of:

        _test:
                movl 4(%esp), %eax
                leal (,%eax,4), %eax
                addl 8(%esp), %eax
                addl $4294967291, %eax
                ret

  (llvm-svn: 30288)
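
  Both sequences compute a*4 + b - 5 (the constant 4294967291 is -5 viewed as an
  unsigned 32-bit value); the improvement folds the scale, index, base and
  displacement into one LEA. A hypothetical C source for this computation,
  inferred from the assembly rather than taken from the test file:

        /* leal -5(%ecx,%eax,4), with a in %eax and b in %ecx,
         * evaluates a*4 + b - 5 in a single instruction. */
        int test(int a, int b) {
            return a * 4 + b - 5;
        }
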
- Chris Lattner (llvm-svn: 30287)
- Chris Lattner (llvm-svn: 30286)
- Chris Lattner (llvm-svn: 30285)
- Chris Lattner: …

        addl %ecx, %ecx
        adcl %eax, %eax

  instead of:

        movl %ecx, %edx
        addl %edx, %edx
        shrl $31, %ecx
        addl %eax, %eax
        orl %ecx, %eax

  and to:

        addc r5, r5, r5
        adde r4, r4, r4

  instead of:

        slwi r2,r9,1
        srwi r0,r11,31
        slwi r3,r11,1
        or r2,r0,r2

  on PPC. (llvm-svn: 30284)
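
  Both "instead of" sequences are the two-register expansion of a 64-bit
  shift-left-by-one; since x << 1 == x + x, the pair can instead be emitted as
  an add plus add-with-carry. A sketch of the kind of source that exercises
  this (an illustrative guess, not the actual test case):

        #include <stdint.h>

        /* On a 32-bit target the value lives in two registers;
         * x << 1 equals x + x, so addl/adcl (X86) or addc/adde (PPC)
         * replace the shift/shift/or sequence shown above. */
        uint64_t double_it(uint64_t x) {
            return x << 1;
        }
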
- Chris Lattner: This implements CodeGen/X86/jump_sign.ll. (llvm-svn: 30283)
- Chris Lattner (llvm-svn: 30282)