- May 08, 2006
-
-
Evan Cheng authored
movw. That is, we promote the destination operand to r16. So %CH = TRUNC_R16_R8 %BP is emitted as movw %bp, %cx. This is incorrect: if %cl is live, it would be clobbered. Ideally we want to do the opposite, that is, emit it as movb ??, %ch. But this is not possible since %bp does not have an r8 sub-register. We now define a new register class, R16_, which is a subclass of R16 containing only those 16-bit registers that have r8 sub-registers (i.e. AX - DX). We isel the truncate to two instructions: a MOV16to16_ to copy the value into the R16_ class, followed by a TRUNC_R16_R8. Due to bug 770, the register coalescer is not going to coalesce between R16 and R16_. That will be fixed later so we can eliminate the MOV16to16_. Right now, it can only be eliminated if we are lucky and the source and destination registers are the same. llvm-svn: 28164
-
Nate Begeman authored
llvm-svn: 28162
-
Nate Begeman authored
llvm-svn: 28161
-
- May 07, 2006
-
-
Evan Cheng authored
llvm-svn: 28158
-
Jeff Cohen authored
Unlike Unix, Windows won't let a file be implicitly replaced via renaming without explicit permission. llvm-svn: 28157
-
Nate Begeman authored
still a couple missed optimizations, but we now generate all the possible rlwimis for multiple inserts into the same bitfield. More regression tests to come. llvm-svn: 28156
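The rlwimi (rotate-left-word-immediate-then-mask-insert) pattern these bitfield inserts compile to can be modeled numerically. The following Python sketch is illustrative only (the helper name and the example masks are invented, not LLVM or PowerPC code):

```python
def rlwimi(ra, rs, sh, mask):
    # Model of PowerPC rlwimi: rotate rs left by sh (mod 32), then
    # replace the bits of ra selected by mask with the rotated bits.
    rot = ((rs << sh) | (rs >> (32 - sh))) & 0xFFFFFFFF if sh else rs & 0xFFFFFFFF
    return (rot & mask) | (ra & ~mask & 0xFFFFFFFF)

# Two inserts into adjacent bitfields of the same word, as the commit
# describes generating for multiple inserts into one bitfield:
word = 0
word = rlwimi(word, 0x5, 4, 0x000000F0)  # insert 0x5 into bits 4..7
word = rlwimi(word, 0x3, 8, 0x00000F00)  # insert 0x3 into bits 8..11
```

Each insert leaves the bits outside its mask untouched, which is why a chain of rlwimis can build up one word without intermediate masking.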
-
Chris Lattner authored
to handle all kinds of stuff, including silly things like: sextinreg(setcc,i16) -> setcc. llvm-svn: 28155
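The sextinreg(setcc, i16) -> setcc simplification holds because setcc produces only 0 or 1, and sign-extend-in-register from any width leaves both values unchanged. A minimal Python model of this (the helper is hypothetical, not LLVM's code):

```python
def sext_inreg(v, bits):
    # Sign-extend the low `bits` bits of v (model of sign_extend_inreg).
    sign = 1 << (bits - 1)
    v &= (1 << bits) - 1
    return v - (1 << bits) if v & sign else v

# setcc only ever produces 0 or 1, so the sextinreg is a no-op:
for cc in (0, 1):
    assert sext_inreg(cc, 16) == cc
```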
-
Chris Lattner authored
llvm-svn: 28154
-
Jeff Cohen authored
Apply bug fix supplied by Greg Pettyjohn for a bug he found: '<invalid>' is not a legal path on Windows. llvm-svn: 28153
-
Chris Lattner authored
llvm-svn: 28152
-
Chris Lattner authored
llvm-svn: 28151
-
Chris Lattner authored
llvm-svn: 28150
-
Chris Lattner authored
llvm-svn: 28149
-
- May 06, 2006
-
-
Jeff Cohen authored
llvm-svn: 28148
-
Chris Lattner authored
sign_extend_inreg operations. Though ComputeNumSignBits is still rudimentary, this is enough to compile this:

short test(short X, short x) { int Y = X+x; return (Y >> 1); }
short test2(short X, short x) { int Y = (short)(X+x); return Y >> 1; }

into:

_test:
        add r2, r3, r4
        srawi r3, r2, 1
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r3, r2, 1
        blr

instead of:

_test:
        add r2, r3, r4
        srawi r2, r2, 1
        extsh r3, r2
        blr
_test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r2, r2, 1
        extsh r3, r2
        blr

llvm-svn: 28146
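The reasoning ComputeNumSignBits relies on here can be checked numerically: the sum of two sign-extended i16 values fits in 17 bits, so after an arithmetic shift right by one the result already fits in i16, making the trailing extsh redundant. An illustrative Python check (a model, not LLVM's implementation):

```python
import random

def sext16(v):
    # Sign-extend a 16-bit value to a Python int (model of extsh).
    v &= 0xFFFF
    return v - 0x10000 if v & 0x8000 else v

for _ in range(10000):
    X = random.randint(-32768, 32767)
    x = random.randint(-32768, 32767)
    y = (X + x) >> 1          # the srawi result
    assert sext16(y) == y     # an extsh here would change nothing
```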
-
Chris Lattner authored
This will certainly be enhanced in the future. llvm-svn: 28145
-
Chris Lattner authored
a cast immediately before a PHI node. This fixes Regression/CodeGen/Generic/2006-05-06-GEP-Cast-Sink-Crash.ll llvm-svn: 28143
-
Chris Lattner authored
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation apply only when both casts really will cause code to be generated. If one or both will not, this xform doesn't remove a cast. This fixes Transforms/InstCombine/2006-05-06-Infloop.ll llvm-svn: 28141
-
Chris Lattner authored
llvm-svn: 28139
-
Chris Lattner authored
llvm-svn: 28138
-
Chris Lattner authored
27,28c27
<       movzwl %di, %edi
<       movl %edi, %ebx
---
>       movw %di, %bx
llvm-svn: 28137
-
Chris Lattner authored
llvm-svn: 28136
-
Chris Lattner authored
llvm-svn: 28135
-
- May 05, 2006
-
-
Chris Lattner authored
using them. llvm-svn: 28134
-
Chris Lattner authored
llvm-svn: 28133
-
Chris Lattner authored
llvm-svn: 28132
-
Chris Lattner authored
llvm-svn: 28131
-
Chris Lattner authored
llvm-svn: 28130
-
Chris Lattner authored
generated:

        movl 8(%esp), %eax
        movl %eax, %edx
        addl $4316, %edx
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, (%edx)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %edx
        movl %edx, 4460(%eax)
        ret
        ...

Now we generate:

        movl 8(%esp), %eax
        cmpb $1, %cl
        ja LBB1_2       #cond_false
LBB1_1: #cond_true
        movl L_QuantizationTables720$non_lazy_ptr, %ecx
        movl %ecx, 4316(%eax)
        movl L_QNOtoQuantTableShift720$non_lazy_ptr, %ecx
        movl %ecx, 4460(%eax)
        ret
        ...

which uses one fewer register. llvm-svn: 28129
-
Chris Lattner authored
llvm-svn: 28128
-
Evan Cheng authored
llvm-svn: 28127
-
Chris Lattner authored
llvm-svn: 28126
-
Chris Lattner authored
llvm-svn: 28124
-
Chris Lattner authored
// fold (and (sext x), (sext y)) -> (sext (and x, y))
// fold (or (sext x), (sext y)) -> (sext (or x, y))
// fold (xor (sext x), (sext y)) -> (sext (xor x, y))
// fold (and (aext x), (aext y)) -> (aext (and x, y))
// fold (or (aext x), (aext y)) -> (aext (or x, y))
// fold (xor (aext x), (aext y)) -> (aext (xor x, y))
llvm-svn: 28123
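These folds are sound because bitwise operations commute with sign extension: extending before or after the and/or/xor yields the same bits, since sign extension just replicates the top bit. A quick exhaustive check of the sext case at i8 width (an illustrative model only):

```python
def sext8(v):
    # Sign-extend an i8 value to a wider integer.
    v &= 0xFF
    return v - 0x100 if v & 0x80 else v

# fold (and/or/xor (sext x), (sext y)) -> (sext (and/or/xor x, y))
for x in range(256):
    for y in range(256):
        assert sext8(x) & sext8(y) == sext8(x & y)
        assert sext8(x) | sext8(y) == sext8(x | y)
        assert sext8(x) ^ sext8(y) == sext8(x ^ y)
```

Pushing the extension outward means one extend instruction instead of two, which is the point of the combine.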
-
Chris Lattner authored
        mov EAX, DWORD PTR [ESP + 4]
        mov ECX, DWORD PTR [EAX]
        mov EDX, ECX
        add EDX, EDX
        or EDX, ECX
        and EDX, -2147483648
        and ECX, 2147483647
        or EDX, ECX
        mov DWORD PTR [EAX], EDX
        ret

instead of:

        sub ESP, 4
        mov DWORD PTR [ESP], ESI
        mov EAX, DWORD PTR [ESP + 8]
        mov ECX, DWORD PTR [EAX]
        mov EDX, ECX
        add EDX, EDX
        mov ESI, ECX
        and ESI, -2147483648
        and EDX, -2147483648
        or EDX, ESI
        and ECX, 2147483647
        or EDX, ECX
        mov DWORD PTR [EAX], EDX
        mov ESI, DWORD PTR [ESP]
        add ESP, 4
        ret

llvm-svn: 28122
-
Chris Lattner authored
llvm-svn: 28121
-
Chris Lattner authored
// fold (and (trunc x), (trunc y)) -> (trunc (and x, y))
// fold (or (trunc x), (trunc y)) -> (trunc (or x, y))
// fold (xor (trunc x), (trunc y)) -> (trunc (xor x, y))
llvm-svn: 28120
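As with the extension folds, truncation commutes with bitwise operations, so doing the and/or/xor at the wider width and truncating once is equivalent to truncating both operands first. A small illustrative check in Python (a model, not LLVM code):

```python
import random

def trunc8(v):
    # Truncate a wider value to i8 (keep the low 8 bits).
    return v & 0xFF

for _ in range(10000):
    x = random.getrandbits(32)
    y = random.getrandbits(32)
    assert trunc8(x) & trunc8(y) == trunc8(x & y)
    assert trunc8(x) | trunc8(y) == trunc8(x | y)
    assert trunc8(x) ^ trunc8(y) == trunc8(x ^ y)
```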
-
Evan Cheng authored
that gets emitted as a movl (for r32 to i16 or i8) or a movw (for r16 to i8). And if the destination is allocated a sub-register of the source operand, the instruction will not be emitted at all. llvm-svn: 28119
-
Chris Lattner authored
llvm-svn: 28118
-
Jeff Cohen authored
llvm-svn: 28117
-