- May 08, 2006
-
-
Evan Cheng authored
movw. That is, we promote the destination operand to r16, so %CH = TRUNC_R16_R8 %BP is emitted as movw %bp, %cx. This is incorrect: if %cl is live, it would be clobbered. Ideally we want to do the opposite, that is, emit it as movb ??, %ch, but this is not possible since %bp does not have an r8 sub-register. We are now defining a new register class R16_, a subclass of R16 containing only those 16-bit registers that have r8 sub-registers (i.e. AX - DX). We isel the truncate to two instructions: a MOV16to16_ to copy the value into the R16_ class, followed by a TRUNC_R16_R8. Due to bug 770, the register coalescer is not going to coalesce between R16 and R16_. That will be fixed later so we can eliminate the MOV16to16_. Right now it can only be eliminated if we are lucky and the source and destination registers are the same. llvm-svn: 28164
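A minimal C-level illustration of the kind of i16-to-i8 truncate this change targets (the function name is hypothetical; which physical registers actually get involved is up to the allocator):

    /* Illustrative only: an i16 -> i8 truncate.  If the source ends up in a
       16-bit register with no 8-bit sub-register (e.g. %bp, %si, %di), the
       old lowering emitted a 16-bit movw into the destination's full
       register, clobbering its other byte.  The new lowering first copies
       the value into one of AX-DX (the R16_ class) and then truncates. */
    unsigned char trunc16to8(unsigned short x) {
        return (unsigned char)x;
    }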
-
Chris Lattner authored
definition of the User class is available, this fixes the build with some compiler versions. llvm-svn: 28163
-
Nate Begeman authored
llvm-svn: 28162
-
Nate Begeman authored
llvm-svn: 28161
-
Nate Begeman authored
llvm-svn: 28160
-
- May 07, 2006
-
-
Chris Lattner authored
Change test to be a positive test instead of a negative test llvm-svn: 28159
-
Evan Cheng authored
llvm-svn: 28158
-
Jeff Cohen authored
Unlike Unix, Windows won't let a file be implicitly replaced via renaming without explicit permission. llvm-svn: 28157
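Not the actual patch, but a small C sketch of the portability issue being worked around; the Win32 path has to ask for replace-on-rename explicitly (via MoveFileEx with MOVEFILE_REPLACE_EXISTING), whereas POSIX rename() replaces an existing destination implicitly:

    #include <stdio.h>
    #ifdef _WIN32
    #include <windows.h>
    #endif

    /* Sketch only: replace 'dst' with 'src'.  POSIX rename() silently
       replaces an existing destination; Windows requires permission. */
    int replace_file(const char *src, const char *dst) {
    #ifdef _WIN32
        return MoveFileExA(src, dst, MOVEFILE_REPLACE_EXISTING) ? 0 : -1;
    #else
        return rename(src, dst);
    #endif
    }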
-
Nate Begeman authored
still a couple missed optimizations, but we now generate all the possible rlwimis for multiple inserts into the same bitfield. More regression tests to come. llvm-svn: 28156
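As a rough C illustration (field layout and names are made up), this is the shape of code that can now become a chain of rlwimi instructions, one per insert into the shared word:

    /* Illustrative only: two inserts into disjoint bit ranges of the same
       word.  Each masked insert maps naturally onto a PPC rlwimi
       (rotate-left-word-immediate then mask-insert). */
    unsigned pack_fields(unsigned word, unsigned a, unsigned b) {
        word = (word & ~0x000000FFu) | (a & 0xFFu);          /* bits 0-7  */
        word = (word & ~0x0000FF00u) | ((b & 0xFFu) << 8);   /* bits 8-15 */
        return word;
    }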
-
Chris Lattner authored
to handle all kinds of stuff, including silly things like: sextinreg(setcc,i16) -> setcc. llvm-svn: 28155
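A hedged C example of the "silly" case mentioned above (the function is hypothetical): a comparison only ever produces 0 or 1, so sign-extending its low 16 bits in-register adds nothing, and the sext_inreg can be folded away, leaving just the setcc.

    /* Illustrative only: (a < b) is already 0 or 1, so squeezing the result
       through a signed 16-bit value changes nothing. */
    int cmp_as_short(short a, short b) {
        short t = (a < b);   /* setcc result truncated to i16            */
        return t;            /* sext_inreg(setcc, i16) folds to setcc    */
    }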
-
Chris Lattner authored
llvm-svn: 28154
-
Jeff Cohen authored
Apply bug fix supplied by Greg Pettyjohn for a bug he found: '<invalid>' is not a legal path on Windows. llvm-svn: 28153
-
Chris Lattner authored
llvm-svn: 28152
-
Chris Lattner authored
llvm-svn: 28151
-
Chris Lattner authored
llvm-svn: 28150
-
Chris Lattner authored
llvm-svn: 28149
-
- May 06, 2006
-
-
Jeff Cohen authored
llvm-svn: 28148
-
Chris Lattner authored
llvm-svn: 28147
-
Chris Lattner authored
sign_extend_inreg operations. Though ComputeNumSignBits is still rudimentary, this is enough to compile this:

    short test(short X, short x) {
      int Y = X+x;
      return (Y >> 1);
    }
    short test2(short X, short x) {
      int Y = (short)(X+x);
      return Y >> 1;
    }

into:

    _test:
            add r2, r3, r4
            srawi r3, r2, 1
            blr
    _test2:
            add r2, r3, r4
            extsh r2, r2
            srawi r3, r2, 1
            blr

instead of:

    _test:
            add r2, r3, r4
            srawi r2, r2, 1
            extsh r3, r2
            blr
    _test2:
            add r2, r3, r4
            extsh r2, r2
            srawi r2, r2, 1
            extsh r3, r2
            blr

llvm-svn: 28146
-
Chris Lattner authored
This will certainly be enhanced in the future. llvm-svn: 28145
-
Chris Lattner authored
llvm-svn: 28144
-
Chris Lattner authored
a cast immediately before a PHI node. This fixes Regression/CodeGen/Generic/2006-05-06-GEP-Cast-Sink-Crash.ll llvm-svn: 28143
-
Chris Lattner authored
llvm-svn: 28142
-
Chris Lattner authored
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation only apply when both casts really will cause code to be generated. If one or both doesn't, then this xform doesn't remove a cast. This fixes Transforms/InstCombine/2006-05-06-Infloop.ll llvm-svn: 28141
-
Chris Lattner authored
llvm-svn: 28140
-
Chris Lattner authored
llvm-svn: 28139
-
Chris Lattner authored
llvm-svn: 28138
-
Chris Lattner authored
27,28c27
<       movzwl %di, %edi
<       movl %edi, %ebx
---
>       movw %di, %bx
llvm-svn: 28137
-
Chris Lattner authored
llvm-svn: 28136
-
Chris Lattner authored
llvm-svn: 28135
-
- May 05, 2006
-
-
Chris Lattner authored
using them. llvm-svn: 28134
-
Chris Lattner authored
llvm-svn: 28133
-
Chris Lattner authored
llvm-svn: 28132
-
Chris Lattner authored
llvm-svn: 28131
-
Chris Lattner authored
llvm-svn: 28130
-
Chris Lattner authored
generated:

            movl 8(%esp), %eax
            movl %eax, %edx
            addl $4316, %edx
            cmpb $1, %cl
            ja LBB1_2   #cond_false
    LBB1_1: #cond_true
            movl L_QuantizationTables720$non_lazy_ptr, %ecx
            movl %ecx, (%edx)
            movl L_QNOtoQuantTableShift720$non_lazy_ptr, %edx
            movl %edx, 4460(%eax)
            ret
            ...

Now we generate:

            movl 8(%esp), %eax
            cmpb $1, %cl
            ja LBB1_2   #cond_false
    LBB1_1: #cond_true
            movl L_QuantizationTables720$non_lazy_ptr, %ecx
            movl %ecx, 4316(%eax)
            movl L_QNOtoQuantTableShift720$non_lazy_ptr, %ecx
            movl %ecx, 4460(%eax)
            ret
            ...

which uses one fewer register.
llvm-svn: 28129
-
Chris Lattner authored
llvm-svn: 28128
-
Evan Cheng authored
llvm-svn: 28127
-
Chris Lattner authored
llvm-svn: 28126
-
Chris Lattner authored
llvm-svn: 28125
-