- Apr 18, 2007
Evan Cheng authored
llvm-svn: 36233
-
Chris Lattner authored
wrong operand. llvm-svn: 36223
-
- Apr 17, 2007
Chris Lattner authored
This compiles:

    int baz(long long a) { return (short)(((int)(a >>24)) >> 9); }

into:

    _baz:
            srwi r2, r3, 1
            extsh r3, r2
            blr

on PPC, instead of:

    _baz:
            slwi r2, r3, 8
            srwi r2, r2, 9
            extsh r3, r2
            blr

GCC produces:

    _baz:
            srwi r10,r4,24
            insrwi r10,r3,24,0
            srawi r9,r3,24
            srawi r3,r10,9
            extsh r3,r3
            blr

This implements CodeGen/PowerPC/shl_elim.ll

llvm-svn: 36221
-
Evan Cheng authored
long live interval that has low usage density.
1. Change the order of coalescing to join physical registers with virtual registers first, before virtual register intervals become too long.
2. Check size and usage density to determine whether it's worthwhile to join.
3. If joining is aborted, assign the virtual register live interval's allocation preference field to the physical register.
4. The register allocator should try to allocate the preferred register first (if available) to create identity moves that can be eliminated.
llvm-svn: 36218
-
Evan Cheng authored
llvm-svn: 36216
-
Evan Cheng authored
llvm-svn: 36214
-
Chris Lattner authored
a chance to hack on it. This compiles:

    int baz(long long a) { return (short)(((int)(a >>24)) >> 9); }

into:

    _baz:
            slwi r2, r3, 8
            srwi r2, r2, 9
            extsh r3, r2
            blr

instead of:

    _baz:
            srwi r2, r4, 24
            rlwimi r2, r3, 8, 0, 23
            srwi r2, r2, 9
            extsh r3, r2
            blr

This implements CodeGen/PowerPC/sign_ext_inreg1.ll

llvm-svn: 36212
-
Reid Spencer authored
llvm-svn: 36180
-
- Apr 16, 2007
Anton Korobeynikov authored
target for tabs checking. llvm-svn: 36146
-
- Apr 14, 2007
Chris Lattner authored
some reason. :( Will investigate. llvm-svn: 36011
-
Anton Korobeynikov authored
shouldn't. Also fix a "latent" bug on 64-bit platforms. llvm-svn: 35990
-
Chris Lattner authored
llvm-svn: 35985
-
- Apr 13, 2007
Anton Korobeynikov authored
llvm-svn: 35963
-
- Apr 12, 2007
Reid Spencer authored
linkage so we only end up with one of them in a program. These are, after all, overloaded and template-like in nature. llvm-svn: 35956
-
Reid Spencer authored
barf when CBE is run with a program that contains these intrinsics. llvm-svn: 35946
-
Reid Spencer authored
the size of the value, not just zext. Also, give better names to two BBs. llvm-svn: 35945
-
Chris Lattner authored
class supports. In the case of vectors, this means we often get the wrong type (e.g. we get v4f32 instead of v8i16). Make sure to convert the vector result to the right type. This fixes CodeGen/X86/2007-04-11-InlineAsmVectorResult.ll llvm-svn: 35944
-
Chris Lattner authored
llvm-svn: 35943
-
Chris Lattner authored
llvm-svn: 35941
-
Reid Spencer authored
Implement the "part_set" intrinsic. llvm-svn: 35938
-
- Apr 11, 2007
Chris Lattner authored
llvm-svn: 35910
-
Chris Lattner authored
llvm-svn: 35888
-
Chris Lattner authored
llvm-svn: 35887
-
Chris Lattner authored
allows other simplifications. For example, this compiles:

    int isnegative(unsigned int X) { return !(X < 2147483648U); }

Into this code:

    x86:
            movl 4(%esp), %eax
            shrl $31, %eax
            ret
    arm:
            mov r0, r0, lsr #31
            bx lr
    thumb:
            lsr r0, r0, #31
            bx lr

instead of:

    x86:
            cmpl $0, 4(%esp)
            sets %al
            movzbl %al, %eax
            ret
    arm:
            mov r3, #0
            cmp r0, #0
            movlt r3, #1
            mov r0, r3
            bx lr
    thumb:
            mov r2, #1
            mov r1, #0
            cmp r0, #0
            blt LBB1_2 @entry
    LBB1_1: @entry
            cpy r2, r1
    LBB1_2: @entry
            cpy r0, r2
            bx lr

Testcase here: test/CodeGen/Generic/ispositive.ll

llvm-svn: 35883
-
Chris Lattner authored
improves codegen on many architectures. Tests committed as CodeGen/*/iabs.ll

X86 Old:
    _test:
            movl 4(%esp), %ecx
            movl %ecx, %eax
            negl %eax
            testl %ecx, %ecx
            cmovns %ecx, %eax
            ret

X86 New:
    _test:
            movl 4(%esp), %eax
            movl %eax, %ecx
            sarl $31, %ecx
            addl %ecx, %eax
            xorl %ecx, %eax
            ret

PPC Old:
    _test:
            cmpwi cr0, r3, -1
            neg r2, r3
            bgt cr0, LBB1_2
    LBB1_1:
            mr r3, r2
    LBB1_2:
            blr

PPC New:
    _test:
            srawi r2, r3, 31
            add r3, r3, r2
            xor r3, r3, r2
            blr

ARM Old:
    _test:
            rsb r3, r0, #0
            cmp r0, #0
            movge r3, r0
            mov r0, r3
            bx lr

ARM New:
    _test:
            add r3, r0, r0, asr #31
            eor r0, r3, r0, asr #31
            bx lr

Thumb Old:
    _test:
            neg r2, r0
            cmp r0, #0
            bge LBB1_2
    LBB1_1: @
            cpy r0, r2
    LBB1_2: @
            bx lr

Thumb New:
    _test:
            asr r2, r0, #31
            add r0, r0, r2
            eor r0, r2
            bx lr

Sparc Old:
    test:
            save -96, %o6, %o6
            sethi 0, %l0
            sub %l0, %i0, %l0
            subcc %i0, -1, %l1
            bg .BB1_2
            nop
    .BB1_1:
            or %g0, %l0, %i0
    .BB1_2:
            restore %g0, %g0, %g0
            retl
            nop

Sparc New:
    test:
            save -96, %o6, %o6
            sra %i0, 31, %l0
            add %i0, %l0, %l1
            xor %l1, %l0, %i0
            restore %g0, %g0, %g0
            retl
            nop

It also helps alpha/ia64 :)

llvm-svn: 35881
-
Reid Spencer authored
Put the parameter attributes in their own ParamAttr namespace. Adjust the rest of LLVM as a result. llvm-svn: 35877
-
- Apr 10, 2007
Chris Lattner authored
commit a LOT of files. llvm-svn: 35858
-
Jeff Cohen authored
llvm-svn: 35850
-
Chris Lattner authored
llvm-svn: 35847
-
Chris Lattner authored
llvm-svn: 35840
-
- Apr 09, 2007
Anton Korobeynikov authored
llvm-svn: 35834
-
Jeff Cohen authored
llvm-svn: 35817
-
Anton Korobeynikov authored
1. Fix some bugs in the jump table lowering threshold.
2. Implement a much better metric for optimal pivot selection.
3. Tune thresholds for the different lowering methods.
4. Implement the shift-and trick for lowering small (< machine word length) cases with few destinations.
A good testcase will follow. llvm-svn: 35816
-
Reid Spencer authored
Adapt handling of parameter attributes to use the new ParamAttrsList class. llvm-svn: 35814
-
Chris Lattner authored
llvm-svn: 35802
-
Chris Lattner authored
llvm-svn: 35800
-
Chris Lattner authored
instructions which replace themselves when FI's are rewritten (common on ppc). This fixes CodeGen/PowerPC/2006-10-17-ppc64-alloca.ll llvm-svn: 35789
-
Chris Lattner authored
some instructions can have multiple frame indices in them. If this happens, rewrite all of them. llvm-svn: 35785
-
Chris Lattner authored
llvm-svn: 35783
-
Chris Lattner authored
llvm-svn: 35779
-