- Feb 09, 2006
Chris Lattner authored
llvm-svn: 26088
Chris Lattner authored
1. Teach it new tricks: in particular, how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine (AND X, C) to fold when we know all C bits of X.
This implements Regression/Transforms/InstCombine/bittest.ll, and allows future things to be simplified.
llvm-svn: 26087
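A minimal sketch of what the known-1/known-0 bitset buys, with invented names rather than LLVM's actual API:

    #include <cstdint>

    // Masks over a 32-bit value: a bit set in Zero is provably 0, a bit
    // set in One is provably 1. The two sets are disjoint by construction.
    struct KnownBits {
      uint32_t Zero = 0;
      uint32_t One = 0;
    };

    // Propagation through sext from 16 to 32 bits: once the sign bit of
    // the narrow value is known, every extended bit is known too.
    KnownBits sextKnown16To32(KnownBits K) {
      if (K.Zero & 0x8000) K.Zero |= 0xFFFF0000u;
      if (K.One  & 0x8000) K.One  |= 0xFFFF0000u;
      return K;
    }

    // The (AND X, C) fold from point 3: if every bit that C keeps is
    // already known in X, the AND is a constant and X is not needed.
    bool foldAndToConstant(KnownBits K, uint32_t C, uint32_t &Result) {
      if (((K.Zero | K.One) & C) != C)
        return false;      // some bit C keeps is still unknown
      Result = K.One & C;  // known-1 bits that survive the mask
      return true;
    }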
- Feb 08, 2006
Chris Lattner authored
optimization where we reduce the number of bits in AND masks when possible. llvm-svn: 26056
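The mask-shrinking step reduces to one AND once demanded bits are known; a tiny sketch with hypothetical names:

    #include <cstdint>

    // If only the 'Demanded' bits of (X & Mask) are ever used, mask bits
    // outside that set are dead: shrinkAndMask(0xFFFF, 0xFF) == 0xFF,
    // so the and can use the smaller constant 0xFF.
    uint32_t shrinkAndMask(uint32_t Mask, uint32_t Demanded) {
      return Mask & Demanded;
    }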
Chris Lattner authored
instruction onto the worklist (in case they are now dead). Add a really trivial local DSE implementation to help out bitfield code. We now fold this:

    struct S {
      unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
      S();
    };
    S::S() : a(0), b(0), c(1), d(0), e(6) {}

to this:

    void %_ZN1SC1Ev(%struct.S* %this) {
    entry:
      %tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
      store ubyte 38, ubyte* %tmp.1
      ret void
    }

much earlier (in gccas instead of only in gccld after DSE runs).
llvm-svn: 26050
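A sketch of what a "really trivial" local DSE can look like, over an invented toy instruction list rather than LLVM's IR: within one block, a store is dead if a later store hits the same pointer before anything could read memory.

    #include <string>
    #include <vector>

    enum class Op { Store, Load, Other };
    struct Inst {
      Op Kind;
      std::string Ptr; // address operand for loads/stores
    };

    void trivialLocalDSE(std::vector<Inst> &Block) {
      size_t i = 0;
      while (i < Block.size()) {
        bool Dead = false;
        if (Block[i].Kind == Op::Store) {
          for (size_t j = i + 1; j < Block.size(); ++j) {
            if (Block[j].Kind == Op::Store && Block[j].Ptr == Block[i].Ptr) {
              Dead = true; // overwritten before anything could read it
              break;
            }
            if (Block[j].Kind != Op::Store)
              break; // a load or other op might read the stored value
          }
        }
        if (Dead)
          Block.erase(Block.begin() + i); // drop the dead store
        else
          ++i;
      }
    }

In the bitfield constructor above, the per-field stores all land on the same byte, so each store but the last is deleted, leaving the single combined 'store ubyte 38'.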
Chris Lattner authored
test/Regression/Transforms/SCCP/select.ll llvm-svn: 26049
Chris Lattner authored
llvm-svn: 26045
- Feb 07, 2006
Chris Lattner authored
llvm-svn: 26040
Chris Lattner authored
is just as efficient as MVIZ and is also more general. Fix a few minor bugs introduced in recent patches. llvm-svn: 26036
Chris Lattner authored
mask. This allows the code to be simpler and more efficient. Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive. llvm-svn: 26035
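MVIZ here is the MaskedValueIsZero query; once a known-zero bitset is available, the single-mask form is one comparison (a sketch only, not LLVM's signature):

    #include <cstdint>

    // True if every bit of Mask is known to be zero in the value whose
    // known-zero bits are KnownZero.
    bool maskedValueIsZero(uint32_t KnownZero, uint32_t Mask) {
      return (KnownZero & Mask) == Mask;
    }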
Chris Lattner authored
llvm-svn: 26034
Chris Lattner authored
'demanded bits', inspired by Nate's work in the dag combiner. This isn't complete, but needs two unrelated instcombiner changes to continue. llvm-svn: 26033
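A rough sketch of the demanded-bits direction, with invented helper names: rather than computing what an operand's bits are, the caller states which result bits matter, and that set is translated through each operation to its operands.

    #include <cstdint>

    // Bits of X that matter in (X & Mask), given which bits of the AND's
    // result are demanded: only bits the mask keeps can influence anything.
    uint32_t demandedThroughAnd(uint32_t DemandedOut, uint32_t Mask) {
      return DemandedOut & Mask;
    }

    // Bits of X that matter in (X << Amt): output bit i comes from input
    // bit i - Amt, so the demanded set shifts right.
    uint32_t demandedThroughShl(uint32_t DemandedOut, unsigned Amt) {
      return DemandedOut >> Amt;
    }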
- Feb 05, 2006
Chris Lattner authored
Turn A / (C1 << N), where C1 is "1<<C2", into A >> (N+C2) [udiv only]. Tested with: rem.ll:test5, div.ll:test10. llvm-svn: 26003
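A quick check of the identity behind the fold: with C1 = 1<<C2, the divisor (C1 << N) equals 1 << (N+C2), and unsigned division by a power of two is exactly a logical right shift. Signed division rounds toward zero on negative operands, which is why this is restricted to udiv.

    #include <cassert>
    #include <cstdint>

    int main() {
      const unsigned C2 = 3;           // C1 = 1 << C2 = 8
      const uint32_t C1 = 1u << C2;
      for (unsigned N = 0; N + C2 < 32; ++N)
        for (uint32_t A = 0; A < 100000; A += 7)
          assert(A / (C1 << N) == A >> (N + C2));
      return 0;
    }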
- Feb 04, 2006
Chris Lattner authored
#LLVM LOC, and auto-cse's cast instructions. llvm-svn: 25974
Chris Lattner authored
1. When rewriting code in outer loops, sometimes we would insert code into inner loops that is invariant in that loop.
2. Notice that 4*(2+x) is 8+4*x and use that to simplify expressions.
This is a performance-neutral change.
llvm-svn: 25964
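The identity in point 2 matters because the constant term, once separated from the induction variable, is loop-invariant and can fold into an addressing-mode displacement; the algebra itself is easy to spot-check:

    #include <cassert>

    int main() {
      // 4*(2+x) == 8+4*x: the +8 stands alone, while 4*x stays
      // with the induction variable.
      for (int x = -1000; x <= 1000; ++x)
        assert(4 * (2 + x) == 8 + 4 * x);
      return 0;
    }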
- Jan 26, 2006
Jeff Cohen authored
llvm-svn: 25661
- Jan 24, 2006
Chris Lattner authored
llvm-svn: 25587
- Jan 23, 2006
Chris Lattner authored
llvm-svn: 25514
- Jan 19, 2006
Chris Lattner authored
need the float->double part. llvm-svn: 25452
- Jan 17, 2006
Robert Bocchino authored
llvm-svn: 25406
- Jan 16, 2006
Chris Lattner authored
llvm-svn: 25363
Chris Lattner authored
llvm-svn: 25349
- Jan 14, 2006
Chris Lattner authored
llvm-svn: 25315
- Jan 13, 2006
Robert Bocchino authored
llvm-svn: 25299
Chris Lattner authored
llvm-svn: 25294
Chris Lattner authored
llvm-svn: 25292
- Jan 11, 2006
Chris Lattner authored
Patch written by Daniel Berlin! llvm-svn: 25202
Chris Lattner authored
Patch written by Daniel Berlin! llvm-svn: 25201
- Jan 10, 2006
Robert Bocchino authored
llvm-svn: 25180
- Jan 07, 2006
Chris Lattner authored
llvm-svn: 25137
- Jan 06, 2006
Chris Lattner authored
llvm-svn: 25130
Chris Lattner authored
the shifts. This allows us to fold this (which is the 'integer add a constant' sequence from cozmic's scheme compiler):

    int %x(uint %anf-temporary776) {
      %anf-temporary777 = shr uint %anf-temporary776, ubyte 1
      %anf-temporary800 = cast uint %anf-temporary777 to int
      %anf-temporary804 = shl int %anf-temporary800, ubyte 1
      %anf-temporary805 = add int %anf-temporary804, -2
      %anf-temporary806 = or int %anf-temporary805, 1
      ret int %anf-temporary806
    }

into this:

    int %x(uint %anf-temporary776) {
      %anf-temporary776 = cast uint %anf-temporary776 to int
      %anf-temporary776.mask1 = add int %anf-temporary776, -2
      %anf-temporary805 = or int %anf-temporary776.mask1, 1
      ret int %anf-temporary805
    }

Note that instcombine already knew how to eliminate the AND that the two shifts fold into. This is tested by InstCombine/shift.ll:test26.
-Chris
llvm-svn: 25128
Chris Lattner authored
llvm-svn: 25126
Chris Lattner authored
functionality changes. llvm-svn: 25125
- Dec 26, 2005
Duraid Madina authored
llvm-svn: 25021
- Dec 14, 2005
Chris Lattner authored
behavior in 126.gcc on big-endian systems. llvm-svn: 24708
- Dec 12, 2005
Chris Lattner authored
186.crafty by about 16% (from 15.109s to 13.045s) on my system. This turns allocas with unions/casts into scalars. For example, crafty has something like this:

    union doub {
      unsigned short i[4];
      long long d;
    };
    int f(long long a) {
      return ((union doub){.d=a}).i[1];
    }

Instead of generating loads and stores to an alloca, we now promote the whole thing to a scalar long value. This implements Transforms/ScalarRepl/AggregatePromote.ll.
llvm-svn: 24667
- Dec 05, 2005
Chris Lattner authored
know that small negative values fit into the immediate field of addressing modes. llvm-svn: 24608
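The underlying point: displacement fields in addressing modes are signed, so a small negative offset folds just as well as a small positive one. A sketch with an assumed 16-bit signed field (the width is illustrative, e.g. a PowerPC D-form-style offset, not a claim about this commit's target):

    #include <cstdint>

    // True if Offset fits directly in a signed 16-bit displacement field.
    bool fitsInSignedImm16(int64_t Offset) {
      return Offset >= -32768 && Offset <= 32767;
    }
    // fitsInSignedImm16(-8) is true: the -8 can live in the load's
    // displacement instead of requiring a separate address computation.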
- Nov 30, 2005
Chris Lattner authored
Transforms/DeadStoreElimination/2005-11-30-vaarg.ll llvm-svn: 24545
- Nov 25, 2005
Andrew Lenharth authored
llvm-svn: 24491
- Nov 22, 2005
Andrew Lenharth authored
llvm-svn: 24488