- Mar 07, 2006
Evan Cheng authored
Conditionalize Dwarf debugging output (Darwin only for now). llvm-svn: 26582
Evan Cheng authored
llvm-svn: 26581
- Mar 06, 2006
Chris Lattner authored
llvm-svn: 26562
- Mar 05, 2006
Chris Lattner authored
llvm-svn: 26549
Chris Lattner authored
we want to canonicalize the other way. llvm-svn: 26547
Chris Lattner authored
implement copysign as a native op if they have it. llvm-svn: 26541
Chris Lattner authored
llvm-svn: 26539
Chris Lattner authored
llvm-svn: 26536
- Mar 04, 2006
Chris Lattner authored
llvm-svn: 26523
Evan Cheng authored
llvm-svn: 26520
Evan Cheng authored
rep/stos and rep/mov if the count is not a constant. We could do rep/stosl; and $count, 3; rep/stosb. For now, I will lower them to memset / memcpy calls. We will revisit this after a bit of experimentation. Also need to take care of the trailing bytes even if the count is a constant: since the maximum number of trailing bytes is 3, we will simply issue loads / stores. llvm-svn: 26517
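A minimal sketch of the count split described above (a hypothetical helper, not the actual X86 lowering code): for a constant byte count, rep/stosl covers count / 4 dwords, and the count & 3 trailing bytes (at most 3) are handled with individual loads / stores.

    #include <cstdint>
    #include <cstdio>

    // Illustration only: split a constant memset length into dword iterations
    // for rep/stosl plus trailing byte stores. The trailing part is count & 3,
    // so it is always between 0 and 3 bytes.
    struct RepStosPlan {
      uint64_t DWordCount;    // iterations executed by rep/stosl
      unsigned TrailingBytes; // handled with plain byte stores (max 3)
    };

    static RepStosPlan planConstantMemset(uint64_t ByteCount) {
      return {ByteCount >> 2, static_cast<unsigned>(ByteCount & 3)};
    }

    int main() {
      RepStosPlan P = planConstantMemset(23); // 5 dwords + 3 trailing bytes
      std::printf("dwords=%llu trailing=%u\n",
                  (unsigned long long)P.DWordCount, P.TrailingBytes);
      return 0;
    }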
Chris Lattner authored
llvm-svn: 26513
Evan Cheng authored
llvm-svn: 26512
- Mar 03, 2006
Evan Cheng authored
llvm-svn: 26503
Chris Lattner authored
llvm-svn: 26490
Chris Lattner authored
llvm-svn: 26479
- Mar 02, 2006
Chris Lattner authored
llvm-svn: 26472
- Mar 01, 2006
Chris Lattner authored
llvm-svn: 26450
Chris Lattner authored
llvm-svn: 26448
Chris Lattner authored
void foo(float a, int *b) { *b = a; }
to this:
_foo:
        fctiwz f0, f1
        stfiwx f0, 0, r4
        blr
instead of this:
_foo:
        fctiwz f0, f1
        stfd f0, -8(r1)
        lwz r2, -4(r1)
        stw r2, 0(r4)
        blr
This implements CodeGen/PowerPC/stfiwx.ll, and also incidentally does the right thing for GCC bugzilla 26505. llvm-svn: 26447
Chris Lattner authored
llvm-svn: 26445
Chris Lattner authored
llvm-svn: 26442
Evan Cheng authored
llvm-svn: 26438
Evan Cheng authored
llvm-svn: 26435
Evan Cheng authored
llvm-svn: 26430
- Feb 28, 2006
Evan Cheng authored
llvm-svn: 26429
Chris Lattner authored
but I don't know what other PPC impls do. If someone could update the proc table, I would appreciate it :) llvm-svn: 26421
Chris Lattner authored
llvm-svn: 26418
- Feb 27, 2006
Nate Begeman authored
llvm-svn: 26405
Chris Lattner authored
llvm-svn: 26403
Jim Laskey authored
llvm-svn: 26399
Chris Lattner authored
PowerPC/div-2.ll llvm-svn: 26392
Chris Lattner authored
on PowerPC/small-arguments.ll llvm-svn: 26389
Chris Lattner authored
simplify the RHS. This allows for the elimination of many thousands of ands from multisource, and compiles CodeGen/PowerPC/and-elim.ll:test2 into this:
_test2:
        srwi r2, r3, 1
        xori r3, r2, 40961
        blr
instead of this:
_test2:
        rlwinm r2, r3, 31, 17, 31
        xori r2, r2, 40961
        rlwinm r3, r2, 0, 16, 31
        blr
llvm-svn: 26388
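A rough sketch of the demanded-bits idea behind this entry (plain integers rather than SelectionDAG nodes, so the names below are illustrative): if the users of (X & C) only ever read the bits in a demanded mask D, the constant can be pruned to C & D, and when that pruned constant covers every demanded bit the and is a no-op and X can be used directly.

    #include <cstdint>
    #include <cassert>

    // Illustration only, not the actual SimplifyDemandedBits code.
    struct AndSimplification {
      bool AndIsRedundant;  // true -> users may read X directly
      uint64_t PrunedConst; // otherwise, the narrowed AND mask
    };

    static AndSimplification simplifyDemandedAnd(uint64_t C, uint64_t Demanded) {
      uint64_t Pruned = C & Demanded;
      return {Pruned == Demanded, Pruned};
    }

    int main() {
      // Only the low 8 bits are demanded; the 0xFFFF mask already passes all
      // of them, so the and can be dropped.
      AndSimplification S = simplifyDemandedAnd(0xFFFF, 0xFF);
      assert(S.AndIsRedundant && S.PrunedConst == 0xFF);
      (void)S;
      return 0;
    }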
Chris Lattner authored
assertzext produces zero bits. llvm-svn: 26386
- Feb 26, 2006
Evan Cheng authored
than base). llvm-svn: 26382
Evan Cheng authored
and 2005-05-12-Int64ToFP. llvm-svn: 26380
- Feb 25, 2006
Evan Cheng authored
* Cleaned up and tweaked LEA cost analysis code. Removed some hacks.
* Handle ADD $X, c to MOV32ri $X+c. These patterns cannot be autogen'd and they need to be matched before LEA.
llvm-svn: 26376
Evan Cheng authored
llvm-svn: 26375
Evan Cheng authored
* Add patterns to handle GlobalAddress, ConstantPool, etc.
  MOV32ri to materialize these nodes in registers.
  ADD32ri to handle %reg + GA, etc.
  MOV32mi to handle store GA, etc. to memory.
llvm-svn: 26374
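A hedged sketch of the address folding described in the ADD $X, c -> MOV32ri $X+c and GlobalAddress materialization entries above (simplified structs, not the real X86 instruction selector or .td patterns): a GlobalAddress plus a constant addend can be emitted as a single immediate move instead of a move followed by an add.

    #include <cstdint>
    #include <cstdio>
    #include <string>

    // Illustration only: a symbolic address is a global symbol plus a constant
    // displacement. Folding an ADD of a constant into the displacement lets a
    // single MOV32ri materialize the whole thing.
    struct SymbolicAddr {
      std::string Global; // the GlobalAddress (or ConstantPool entry) name
      int32_t Offset;     // constant addend folded into the immediate
    };

    // ADD (GlobalAddress X), c  ==>  MOV32ri X+c
    static SymbolicAddr foldAddOfConstant(const SymbolicAddr &X, int32_t C) {
      return {X.Global, X.Offset + C};
    }

    int main() {
      SymbolicAddr A = foldAddOfConstant({"some_global", 0}, 16);
      // Would be emitted as something like "movl $some_global+16, %eax".
      std::printf("MOV32ri %s+%d\n", A.Global.c_str(), A.Offset);
      return 0;
    }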