- Feb 27, 2006
Chris Lattner authored
Make this code more powerful by using ComputeMaskedBits instead of looking for an AND operand. This lets us fold this:

int %test23(int %a) {
        %tmp.1 = and int %a, 1
        %tmp.2 = seteq int %tmp.1, 0
        %tmp.3 = cast bool %tmp.2 to int   ;; xor tmp1, 1
        ret int %tmp.3
}

into:

xor (and a, 1), 1

llvm-svn: 26396
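The fold above rests on a small bit-level identity: for any integer `a`, casting the bool `(a & 1) == 0` to an int gives the same value as `(a & 1) ^ 1`, since `a & 1` is always 0 or 1. A minimal sketch (in Python, not tied to the LLVM implementation) checking the identity:

```python
# The commit's fold: int((a & 1) == 0)  ==>  (a & 1) ^ 1.
# Both sides are 1 when a is even and 0 when a is odd.

def original(a):
    return int((a & 1) == 0)   # seteq + cast-to-int form

def folded(a):
    return (a & 1) ^ 1         # the xor form the fold produces

# Spot-check over a range of signed values.
assert all(original(a) == folded(a) for a in range(-256, 256))
print("fold holds")
```

The key observation ComputeMaskedBits supplies is that all bits of `a & 1` above bit 0 are known zero, so inverting the low bit with `xor ..., 1` reproduces the comparison result exactly.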
-
Chris Lattner authored
llvm-svn: 26395
-
Chris Lattner authored
and (A-B) == A -> B == 0 llvm-svn: 26394
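The `(A-B) == A --> B == 0` simplification holds in fixed-width wraparound arithmetic, not just over the mathematical integers: subtracting B changes A exactly when B is nonzero modulo the register width. A minimal sketch (Python, modeling 32-bit wraparound; the width is an illustrative assumption):

```python
# Check the identity (A - B) == A  <=>  B == 0 under 32-bit
# two's-complement (wraparound) subtraction.
import random

M = 1 << 32  # subtraction wraps modulo 2**32

def identity_holds(a, b):
    return (((a - b) % M) == a) == (b == 0)

# Random spot-checks plus the b == 0 edge case.
assert all(identity_holds(random.randrange(M), random.randrange(M))
           for _ in range(10_000))
assert all(identity_holds(a, 0) for a in (0, 1, M - 1))
print("identity holds")
```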
-
Chris Lattner authored
llvm-svn: 26393
-
Chris Lattner authored
PowerPC/div-2.ll llvm-svn: 26392
-
Chris Lattner authored
llvm-svn: 26391
-
Chris Lattner authored
llvm-svn: 26390
-
Chris Lattner authored
on PowerPC/small-arguments.ll llvm-svn: 26389
-
Chris Lattner authored
simplify the RHS. This allows for the elimination of many thousands of ands from multisource, and compiles CodeGen/PowerPC/and-elim.ll:test2 into this:

_test2:
        srwi r2, r3, 1
        xori r3, r2, 40961
        blr

instead of this:

_test2:
        rlwinm r2, r3, 31, 17, 31
        xori r2, r2, 40961
        rlwinm r3, r2, 0, 16, 31
        blr

llvm-svn: 26388
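The reason the two `rlwinm` masks can be dropped: if the input is a zero-extended 16-bit value (an assumption here, based on the test name), then after a logical shift right by one, bits 15..31 are already zero, and `40961` (0xA001) fits in 16 bits, so both the pre-xor and post-xor masks are no-ops. A sketch (Python) comparing the masked and unmasked sequences over every 16-bit input:

```python
# Compare the old masked sequence against the new unmasked one,
# assuming the input x is a zero-extended 16-bit value.

def with_masks(x):
    t = (x >> 1) & 0x7FFF   # rlwinm r2, r3, 31, 17, 31
    t ^= 40961              # xori   r2, r2, 40961
    return t & 0xFFFF       # rlwinm r3, r2, 0, 16, 31

def without_masks(x):
    return (x >> 1) ^ 40961  # srwi + xori

# Exhaustive check over all 16-bit inputs.
assert all(with_masks(x) == without_masks(x) for x in range(1 << 16))
print("masks are redundant for 16-bit inputs")
```

This is exactly the kind of fact demanded-bits/known-bits analysis proves, letting instcombine delete the `and`s rather than relying on the backend to pattern-match them away.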
-
Chris Lattner authored
llvm-svn: 26387
-
Chris Lattner authored
assertzext produces zero bits. llvm-svn: 26386
-
- Feb 26, 2006
Chris Lattner authored
InstCombine/or.ll:test23. llvm-svn: 26385
-
Chris Lattner authored
llvm-svn: 26384
-
Jim Laskey authored
target assembler code gen. llvm-svn: 26383
-
Evan Cheng authored
than base). llvm-svn: 26382
-
Evan Cheng authored
llvm-svn: 26381
-
Evan Cheng authored
and 2005-05-12-Int64ToFP. llvm-svn: 26380
-
- Feb 25, 2006
Jim Laskey authored
llvm-svn: 26379
-
Evan Cheng authored
llvm-svn: 26378
-
Evan Cheng authored
llvm-svn: 26377
-
Evan Cheng authored
* Cleaned up and tweaked LEA cost analysis code. Removed some hacks.
* Handle ADD $X, c to MOV32ri $X+c. These patterns cannot be autogen'd and they need to be matched before LEA.

llvm-svn: 26376
-
Evan Cheng authored
llvm-svn: 26375
-
Evan Cheng authored
* Add patterns to handle GlobalAddress, ConstantPool, etc.
  MOV32ri to materialize these nodes in registers.
  ADD32ri to handle %reg + GA, etc.
  MOV32mi to handle store GA, etc. to memory.

llvm-svn: 26374
-
Evan Cheng authored
llvm-svn: 26373
-
Evan Cheng authored
llvm-svn: 26372
-
Evan Cheng authored
llvm-svn: 26371
-
Chris Lattner authored
llvm-svn: 26370
-
Chris Lattner authored
exposed with a fastcc problem (breaking pcompress2 on x86 with -enable-x86-fastcc). When reloading a reused reg, make sure to invalidate the reloaded reg, and check to see if there are any other pending uses of the same register. llvm-svn: 26369
-
Chris Lattner authored
Add a minor compile time win, no codegen change. llvm-svn: 26368
-
Chris Lattner authored
This gets rid of two gotos, which is always nice, and also adds some comments. No functionality change, this is just a refactor. llvm-svn: 26367
-
Evan Cheng authored
ADD X, 4 ==> MOV32ri $X+4, ... llvm-svn: 26366
-
- Feb 24, 2006
Chris Lattner authored
inline asms! :) llvm-svn: 26365
-
Chris Lattner authored
llvm-svn: 26364
-
Chris Lattner authored
llvm-svn: 26363
-
Chris Lattner authored
llvm-svn: 26362
-
Chris Lattner authored
Add support for addressing modes. llvm-svn: 26361
-
Chris Lattner authored
llvm-svn: 26358
-
Chris Lattner authored
llvm-svn: 26357
-
Chris Lattner authored
in the code that does "select C, (X+Y), (X-Y) --> (X+(select C, Y, (-Y)))". We now compile this loop:

LBB1_1: ; no_exit
        add r6, r2, r3
        subf r3, r2, r3
        cmpwi cr0, r2, 0
        addi r7, r5, 4
        lwz r2, 0(r5)
        addi r4, r4, 1
        blt cr0, LBB1_4 ; no_exit
LBB1_3: ; no_exit
        mr r3, r6
LBB1_4: ; no_exit
        cmpwi cr0, r4, 16
        mr r5, r7
        bne cr0, LBB1_1 ; no_exit

into this instead:

LBB1_1: ; no_exit
        srawi r6, r2, 31
        add r2, r2, r6
        xor r6, r2, r6
        addi r7, r5, 4
        lwz r2, 0(r5)
        addi r4, r4, 1
        add r3, r3, r6
        cmpwi cr0, r4, 16
        mr r5, r7
        bne cr0, LBB1_1 ; no_exit

llvm-svn: 26356
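Two identities are at work in the commit above: the select factoring itself, and the branchless absolute value (`srawi`/`add`/`xor`) that the backend then emits for `select (Y < 0), -Y, Y`. A sketch (Python; the 32-bit shift amount mirrors the PPC code and is the only assumption):

```python
# 1) The factoring: select C, (X+Y), (X-Y)  -->  X + (select C, Y, -Y).
def selected(c, x, y):
    return (x + y) if c else (x - y)

def factored(c, x, y):
    return x + (y if c else -y)

# 2) The branchless |y| the new loop uses:
#    mask = y >> 31 (arithmetic shift: 0 or -1), |y| = (y + mask) ^ mask.
def branchless_abs(y):
    mask = y >> 31          # srawi r6, r2, 31
    return (y + mask) ^ mask  # add r2, r2, r6 ; xor r6, r2, r6

for x in range(-8, 9):
    for y in range(-8, 9):
        for c in (False, True):
            assert selected(c, x, y) == factored(c, x, y)
        # In this loop C is (y >= 0), so the select computes x + |y|.
        assert factored(y >= 0, x, y) == x + branchless_abs(y)
print("factoring and branchless abs agree")
```

With the select reduced to a plain `|y|`, the compare-and-branch diamond in the old loop collapses to three straight-line instructions.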
-
Jim Laskey authored
(to allow llvm-gcc4 to build.) llvm-svn: 26355
-