- Mar 01, 2006
-
Chris Lattner authored
llvm-svn: 26450
-
Chris Lattner authored
llvm-svn: 26448
-
Chris Lattner authored
Compile this:
  void foo(float a, int *b) { *b = a; }
to this:
  _foo:
          fctiwz f0, f1
          stfiwx f0, 0, r4
          blr
instead of this:
  _foo:
          fctiwz f0, f1
          stfd f0, -8(r1)
          lwz r2, -4(r1)
          stw r2, 0(r4)
          blr
This implements CodeGen/PowerPC/stfiwx.ll, and also incidentally does the right thing for GCC bugzilla 26505.
llvm-svn: 26447
-
Chris Lattner authored
llvm-svn: 26445
-
Chris Lattner authored
llvm-svn: 26443
-
Chris Lattner authored
llvm-svn: 26442
-
Chris Lattner authored
llvm-svn: 26441
-
Chris Lattner authored
implementing Regression/CodeGen/X86/mul-shift-reassoc.ll
llvm-svn: 26440
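For reference, a small C sketch of the kind of pattern a mul/shift reassociation targets (a hypothetical illustration; the actual contents of mul-shift-reassoc.ll are not reproduced here):
  /* Hypothetical example: a multiply whose operand is a left shift by a
     constant, (x << C1) * C2, can be reassociated to x * (C2 << C1),
     leaving a single multiply-by-constant. */
  unsigned mul_shift(unsigned x) {
      return (x << 3) * 5;   /* equivalent to x * 40 */
  }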
-
Evan Cheng authored
llvm-svn: 26438
-
Evan Cheng authored
llvm-svn: 26436
-
Evan Cheng authored
llvm-svn: 26435
-
Evan Cheng authored
- All abstract vector nodes must have # of elements and element type as their first two operands.
llvm-svn: 26432
-
Evan Cheng authored
llvm-svn: 26430
-
- Feb 28, 2006
-
Evan Cheng authored
llvm-svn: 26429
-
Jim Laskey authored
Add array of debug descriptor support.
llvm-svn: 26428
-
Chris Lattner authored
Transforms/InstCombine/2006-02-28-Crash.ll
llvm-svn: 26427
-
Chris Lattner authored
but I don't know what other PPC impls do. If someone could update the proc table, I would appreciate it :)
llvm-svn: 26421
-
Chris Lattner authored
Compile:
  unsigned foo4(unsigned short *P) { return *P & 255; }
  unsigned foo5(short *P) { return *P & 255; }
to:
  _foo4:
          lbz r3, 1(r3)
          blr
  _foo5:
          lbz r3, 1(r3)
          blr
not:
  _foo4:
          lhz r2, 0(r3)
          rlwinm r3, r2, 0, 24, 31
          blr
  _foo5:
          lhz r2, 0(r3)
          rlwinm r3, r2, 0, 24, 31
          blr
llvm-svn: 26419
-
Chris Lattner authored
llvm-svn: 26418
-
Chris Lattner authored
Compile:
  unsigned foo3(unsigned *P) { return *P & 255; }
as:
  _foo3:
          lbz r3, 3(r3)
          blr
instead of:
  _foo3:
          lwz r2, 0(r3)
          rlwinm r3, r2, 0, 24, 31
          blr
and:
  unsigned short foo2(float a) { return a; }
as:
  _foo2:
          fctiwz f0, f1
          stfd f0, -8(r1)
          lhz r3, -2(r1)
          blr
instead of:
  _foo2:
          fctiwz f0, f1
          stfd f0, -8(r1)
          lwz r2, -4(r1)
          rlwinm r3, r2, 0, 16, 31
          blr
llvm-svn: 26417
-
Chris Lattner authored
llvm-svn: 26416
-
Chris Lattner authored
llvm-svn: 26415
-
Chris Lattner authored
llvm-svn: 26413
-
Chris Lattner authored
llvm-svn: 26411
-
Chris Lattner authored
llvm-svn: 26410
-
- Feb 27, 2006
-
Jim Laskey authored
llvm-svn: 26409
-
Nate Begeman authored
llvm-svn: 26405
-
Jim Laskey authored
llvm-svn: 26404
-
Chris Lattner authored
llvm-svn: 26403
-
Jim Laskey authored
llvm-svn: 26402
-
Jim Laskey authored
llvm-svn: 26401
-
Jim Laskey authored
llvm-svn: 26400
-
Jim Laskey authored
llvm-svn: 26399
-
Chris Lattner authored
Make this code more powerful by using ComputeMaskedBits instead of looking for an AND operand. This lets us fold this:
  int %test23(int %a) {
    %tmp.1 = and int %a, 1
    %tmp.2 = seteq int %tmp.1, 0
    %tmp.3 = cast bool %tmp.2 to int  ;; xor tmp1, 1
    ret int %tmp.3
  }
into:
  xor (and a, 1), 1
llvm-svn: 26396
-
Chris Lattner authored
and (A-B) == A -> B == 0
llvm-svn: 26394
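A quick C-level illustration of that fold (hypothetical example, not taken from the commit): under wrapping unsigned arithmetic, subtracting B and comparing against the original value is the same as testing whether B is zero.
  /* (a - b) == a holds exactly when b == 0 for unsigned (wrapping)
     arithmetic, so the subtraction can be dropped entirely. */
  int sub_cmp(unsigned a, unsigned b) {
      return (a - b) == a;   /* folds to: b == 0 */
  }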
-
Chris Lattner authored
PowerPC/div-2.ll
llvm-svn: 26392
-
Chris Lattner authored
llvm-svn: 26390
-
Chris Lattner authored
on PowerPC/small-arguments.ll
llvm-svn: 26389
-
Chris Lattner authored
simplify the RHS. This allows for the elimination of many thousands of ands from multisource, and compiles CodeGen/PowerPC/and-elim.ll:test2 into this:
  _test2:
          srwi r2, r3, 1
          xori r3, r2, 40961
          blr
instead of this:
  _test2:
          rlwinm r2, r3, 31, 17, 31
          xori r2, r2, 40961
          rlwinm r3, r2, 0, 16, 31
          blr
llvm-svn: 26388
-
Chris Lattner authored
assertzext produces zero bits.
llvm-svn: 26386
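Roughly what that enables at the source level (a hypothetical C example, assuming the argument is known to be zero-extended): once the upper bits are known zero, an explicit mask of those bits is redundant and can be removed.
  /* 'x' is zero-extended from 16 bits, so its upper bits are already known
     to be zero and the mask below is redundant. */
  unsigned keep_low16(unsigned short x) {
      return x & 0xFFFF;     /* folds to just: x */
  }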
-