Nov 16, 2006

- Chris Lattner authored: … before printing it. (llvm-svn: 31791)
- Evan Cheng authored. (llvm-svn: 31790)
- Chris Lattner authored. (llvm-svn: 31785)
- Chris Lattner authored. (llvm-svn: 31778)
- Chris Lattner authored. (llvm-svn: 31776)
- Chris Lattner authored. (llvm-svn: 31775)
- Chris Lattner authored. (llvm-svn: 31774)
- Chris Lattner authored. (llvm-svn: 31771)
- Chris Lattner authored: Tell the codegen emitter that specific operands are not to be encoded, fixing JIT regressions w.r.t. pre-inc loads and stores (e.g. lwzu, which we generate even when general preinc loads are not enabled). (llvm-svn: 31770)
Nov 15, 2006

- Chris Lattner authored. (llvm-svn: 31768)
- Evan Cheng authored. (llvm-svn: 31765)
- Evan Cheng authored. (llvm-svn: 31764)
- Evan Cheng authored. (llvm-svn: 31763)
- Chris Lattner authored: … addrmodes. (llvm-svn: 31757)
- Chris Lattner authored: … CBE and interpreter. (llvm-svn: 31755)
- Chris Lattner authored. (llvm-svn: 31754)
- Chris Lattner authored. (llvm-svn: 31752)
- Chris Lattner authored: … pair for cleanliness. Add instructions for PPC32 preinc-stores with commented-out patterns. More improvement is needed to enable the patterns, but we're getting close. (llvm-svn: 31749)
Nov 14, 2006

- Evan Cheng authored. (llvm-svn: 31737)
- Chris Lattner authored. (llvm-svn: 31736)
- Chris Lattner authored: … stores. (llvm-svn: 31735)
- Chris Lattner authored: … clobber. This allows LR8 to be saved/restored correctly as a 64-bit quantity, instead of being handled as a 32-bit quantity. This unbreaks ppc64 codegen when the code is actually located above the 4G boundary. (llvm-svn: 31734)
- Chris Lattner authored. (llvm-svn: 31733)
- Chris Lattner authored. (llvm-svn: 31730)
- Chris Lattner authored: … '(shr (ctlz (sub Y, Z)), 5)'. The use of xor better exposes the operation to bit-twiddling logic in the DAG combiner. For example, this:

  ```c
  typedef struct {
      unsigned prefix : 4;
      unsigned code : 4;
      unsigned unsigned_p : 4;
  } tree_common;

  int foo(tree_common *a, tree_common *b) {
      return a->code == b->code;
  }
  ```

  now compiles to:

  ```asm
  _foo:
          lwz r2, 0(r4)
          lwz r3, 0(r3)
          xor r2, r3, r2
          rlwinm r2, r2, 28, 28, 31
          cntlzw r2, r2
          srwi r3, r2, 5
          blr
  ```

  instead of:

  ```asm
  _foo:
          lbz r2, 3(r4)
          lbz r3, 3(r3)
          srwi r2, r2, 4
          srwi r3, r3, 4
          subf r2, r2, r3
          cntlzw r2, r2
          srwi r3, r2, 5
          blr
  ```

  saving a cycle. (llvm-svn: 31725)
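The xor/ctlz equality idiom in this commit can be sketched outside the compiler (a minimal illustration with our own helper names, not LLVM's implementation): for 32-bit values, `ctlz(x ^ y) >> 5` is 1 exactly when `x == y`, because 32 (the count-leading-zeros result for a zero input, as cntlzw defines it) is the only reachable result with bit 5 set.

```python
def ctlz32(v: int) -> int:
    """Count leading zeros of a 32-bit value; ctlz32(0) == 32,
    matching PowerPC's cntlzw."""
    v &= 0xFFFFFFFF
    return 32 - v.bit_length()

def eq_via_ctlz(x: int, y: int) -> int:
    # x == y  iff  x ^ y == 0  iff  ctlz32(x ^ y) == 32; since 32 is
    # the only possible result >= 32, shifting right by 5 extracts
    # the equality bit directly, with no branch or subtract.
    return ctlz32(x ^ y) >> 5

print(eq_via_ctlz(7, 7))   # → 1
print(eq_via_ctlz(7, 9))   # → 0
```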
- Chris Lattner authored. (llvm-svn: 31719)
- Chris Lattner authored. (llvm-svn: 31717)
- Evan Cheng authored. (llvm-svn: 31712)
Nov 13, 2006

- Chris Lattner authored: … Ptrdist/anagram among others. (llvm-svn: 31708)
- Nick Lewycky authored. (llvm-svn: 31696)
Nov 11, 2006

- Jim Laskey authored. (llvm-svn: 31690)
- Chris Lattner authored. (llvm-svn: 31684)
- Jim Laskey authored: … potentially some system calls/exception handling from working. TOS must always link to the previous frame. This is a short-term workaround until the alloca scheme is reworked. (llvm-svn: 31677)
- Evan Cheng authored. (llvm-svn: 31676)
- Evan Cheng authored. (llvm-svn: 31674)
- Chris Lattner authored: … produces this clever code:

  ```asm
  _millisecs:
          lis r2, ha16(_Time.1182)
          lwzu r3, lo16(_Time.1182)(r2)
          lwz r2, 4(r2)
          addic r4, r2, 1
          addze r3, r3
          blr
  ```

  instead of this:

  ```asm
  _millisecs:
          lis r2, ha16(_Time.1182)
          la r3, lo16(_Time.1182)(r2)
          lwz r2, lo16(_Time.1182)(r2)
          lwz r3, 4(r3)
          addic r4, r3, 1
          addze r3, r2
          blr
  ```

  for:

  ```llvm
  long %millisecs() {
          %tmp = load long* %Time.1182    ; <long> [#uses=1]
          %tmp1 = add long %tmp, 1        ; <long> [#uses=1]
          ret long %tmp1
  }
  ```

  (llvm-svn: 31673)
- Chris Lattner authored: … globals. (llvm-svn: 31672)
- Chris Lattner authored. (llvm-svn: 31656)
- Chris Lattner authored. (llvm-svn: 31654)
Nov 10, 2006

- Evan Cheng authored. (llvm-svn: 31650)