- Nov 15, 2006

  - Evan Cheng (llvm-svn: 31761)
  - Evan Cheng (llvm-svn: 31760)
  - Evan Cheng (llvm-svn: 31759)
  - Evan Cheng (llvm-svn: 31758)
  - Chris Lattner: …addrmodes. (llvm-svn: 31757)
  - Devang Patel (llvm-svn: 31756)
  - Chris Lattner: …CBE and interpreter. (llvm-svn: 31755)
  - Chris Lattner (llvm-svn: 31754)
  - Chris Lattner (llvm-svn: 31753)
  - Chris Lattner (llvm-svn: 31752)
  - Chris Lattner (llvm-svn: 31751)
  - Reid Spencer (llvm-svn: 31750)
  - Chris Lattner: …pair for cleanliness. Add instructions for PPC32 preinc-stores with commented-out patterns. More improvement is needed to enable the patterns, but we're getting close. (llvm-svn: 31749)
  - Chris Lattner (llvm-svn: 31748)
  - Devang Patel (llvm-svn: 31747)
  - Devang Patel (llvm-svn: 31746)
  - Devang Patel (llvm-svn: 31745)
  - Devang Patel: Now BasicBlockPassManager_New is a FunctionPass, and FunctionPassManager_New is a ModulePass. (llvm-svn: 31744)
- Nov 14, 2006

  - Chris Lattner: …why. (llvm-svn: 31743)
  - Chris Lattner (llvm-svn: 31742)
  - Devang Patel: Update LastUser to recursively walk the required transitive set. (llvm-svn: 31741)
  - Chris Lattner (llvm-svn: 31740)
  - Chris Lattner (llvm-svn: 31739)
  - Chris Lattner: …stores. (llvm-svn: 31738)
  - Evan Cheng (llvm-svn: 31737)
  - Chris Lattner (llvm-svn: 31736)
  - Chris Lattner: …stores. (llvm-svn: 31735)
  - Chris Lattner: …clobber. This allows LR8 to be saved/restored correctly as a 64-bit quantity, instead of being handled as a 32-bit quantity. This unbreaks ppc64 codegen when the code is actually located above the 4G boundary. (llvm-svn: 31734)
  - Chris Lattner (llvm-svn: 31733)
  - Chris Lattner: …that there were two input operands before the variable operand portion. This *happened* to be true for all call instructions, which took a chain and a destination, but was not true for the PPC BCTRL instruction, whose destination is implicit. Making this code more general allows elimination of the custom selection logic for BCTRL. (llvm-svn: 31732)
  - Chris Lattner (llvm-svn: 31730)
  - Chris Lattner: …(X >> Z) op (Y >> Z) -> (X op Y) >> Z for all shifts and all ops in {and, or, xor}. (llvm-svn: 31729)
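The transform in r31729 rests on shifts distributing over bitwise ops. A quick sanity check of the identity (a sketch, not LLVM code; the function name is illustrative):

```python
import operator

def check_shift_distributes():
    """Verify (X >> Z) op (Y >> Z) == (X op Y) >> Z for op in {and, or, xor},
    and the analogous left-shift form modulo 32-bit truncation."""
    ops = [operator.and_, operator.or_, operator.xor]
    samples = [0x12345678, 0xFFFF0000, 0x0F0F0F0F, 0x80000001]
    mask = 0xFFFFFFFF
    for op in ops:
        for x in samples:
            for y in samples:
                for z in range(32):
                    # right-shift form
                    assert op(x >> z, y >> z) == op(x, y) >> z
                    # left-shift form (zeros shift in on both sides)
                    assert op((x << z) & mask, (y << z) & mask) == (op(x, y) << z) & mask
    return True
```

Doing the bitwise op first lets later combines see one shift instead of two, which is exactly what the next two entries exploit.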
  - Chris Lattner (llvm-svn: 31728)
  - Chris Lattner: …

        typedef struct {
          unsigned prefix : 4;
          unsigned code : 4;
          unsigned unsigned_p : 4;
        } tree_common;

        int foo(tree_common *a, tree_common *b) {
          return a->code == b->code;
        }

    into:

        _foo:
                movl 4(%esp), %eax
                movl 8(%esp), %ecx
                movl (%eax), %eax
                xorl (%ecx), %eax
                # TRUNCATE
                movb %al, %al
                shrb $4, %al
                testb %al, %al
                sete %al
                movzbl %al, %eax
                ret

    instead of:

        _foo:
                movl 8(%esp), %eax
                movb (%eax), %al
                shrb $4, %al
                movl 4(%esp), %ecx
                movb (%ecx), %cl
                shrb $4, %cl
                cmpb %al, %cl
                sete %al
                movzbl %al, %eax
                ret

    saving one cycle by eliminating a shift. (llvm-svn: 31727)
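The x86 codegen above compares the 4-bit `code` field (bits 4-7) of each word; xor'ing the whole words first means only one shift is needed instead of two. A sketch of the underlying identity (function names are illustrative, not from LLVM):

```python
def bitfield_eq_naive(a, b):
    """Second listing: extract the 4-bit field from each word, then compare."""
    return ((a >> 4) & 0xF) == ((b >> 4) & 0xF)

def bitfield_eq_xor(a, b):
    """First listing: xor, truncate to a byte, shift once, test for zero."""
    return (((a ^ b) & 0xFF) >> 4) == 0

# The two forms agree on arbitrary 32-bit words:
import itertools, random
random.seed(0)
words = [random.getrandbits(32) for _ in range(64)] + [0, 0xFFFFFFFF]
assert all(bitfield_eq_naive(a, b) == bitfield_eq_xor(a, b)
           for a, b in itertools.product(words, words))
```

The xor form works because bits 4-7 of `a ^ b` are zero exactly when the two fields are equal.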
  - Chris Lattner (llvm-svn: 31726)
  - Chris Lattner: …'(shr (ctlz (sub Y, Z)), 5)'. The use of xor better exposes the operation to bit-twiddling logic in the dag combiner. For example, this:

        typedef struct {
          unsigned prefix : 4;
          unsigned code : 4;
          unsigned unsigned_p : 4;
        } tree_common;

        int foo(tree_common *a, tree_common *b) {
          return a->code == b->code;
        }

    now compiles to:

        _foo:
                lwz r2, 0(r4)
                lwz r3, 0(r3)
                xor r2, r3, r2
                rlwinm r2, r2, 28, 28, 31
                cntlzw r2, r2
                srwi r3, r2, 5
                blr

    instead of:

        _foo:
                lbz r2, 3(r4)
                lbz r3, 3(r3)
                srwi r2, r2, 4
                srwi r3, r3, 4
                subf r2, r2, r3
                cntlzw r2, r2
                srwi r3, r2, 5
                blr

    saving a cycle. (llvm-svn: 31725)
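The ctlz-based sequence above computes a 32-bit equality test directly: `cntlzw` of 0 yields 32, and 32 >> 5 = 1, while any nonzero input counts below 32 and shifts to 0. A sketch of the identity (assuming 32-bit words; not LLVM code):

```python
def ctlz32(w):
    """Count leading zeros of a 32-bit word; ctlz(0) is 32, as cntlzw defines it."""
    w &= 0xFFFFFFFF
    return 32 - w.bit_length()

def eq_via_ctlz(x, y):
    """Branch-free (x == y) as ctlz(x ^ y) >> 5, matching the asm above."""
    return ctlz32(x ^ y) >> 5

for x, y in [(0, 0), (1, 2), (0xDEADBEEF, 0xDEADBEEF), (0x80000000, 0)]:
    assert eq_via_ctlz(x, y) == int(x == y)
```

Switching the difference from `sub` to `xor` preserves this (both are zero iff the operands are equal) while letting the combiner merge the xor with the field extraction, as the two listings show.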
  - Andrew Lenharth (llvm-svn: 31724)
  - Reid Spencer: …Reader code much easier to read and maintain. Backwards compatibility with the version 5 format has been retained; older formats will produce an error. (llvm-svn: 31723)
  - Devang Patel (llvm-svn: 31722)
  - Devang Patel (llvm-svn: 31721)