- Jan 16, 2006
  - Chris Lattner authored
    llvm-svn: 25363
  - Chris Lattner authored
    llvm-svn: 25349
- Jan 14, 2006
  - Chris Lattner authored
    llvm-svn: 25315
- Jan 13, 2006
  - Robert Bocchino authored
    llvm-svn: 25299
  - Chris Lattner authored
    llvm-svn: 25294
  - Chris Lattner authored
    llvm-svn: 25292
- Jan 11, 2006
  - Chris Lattner authored
    Patch written by Daniel Berlin!
    llvm-svn: 25202
  - Chris Lattner authored
    Patch written by Daniel Berlin!
    llvm-svn: 25201
- Jan 10, 2006
  - Robert Bocchino authored
    llvm-svn: 25180
- Jan 07, 2006
  - Chris Lattner authored
    llvm-svn: 25137
- Jan 06, 2006
  - Chris Lattner authored
    llvm-svn: 25130
  - Chris Lattner authored
    the shifts. This allows us to fold this (which is the 'integer add a
    constant' sequence from cozmic's scheme compiler):

      int %x(uint %anf-temporary776) {
        %anf-temporary777 = shr uint %anf-temporary776, ubyte 1
        %anf-temporary800 = cast uint %anf-temporary777 to int
        %anf-temporary804 = shl int %anf-temporary800, ubyte 1
        %anf-temporary805 = add int %anf-temporary804, -2
        %anf-temporary806 = or int %anf-temporary805, 1
        ret int %anf-temporary806
      }

    into this:

      int %x(uint %anf-temporary776) {
        %anf-temporary776 = cast uint %anf-temporary776 to int
        %anf-temporary776.mask1 = add int %anf-temporary776, -2
        %anf-temporary805 = or int %anf-temporary776.mask1, 1
        ret int %anf-temporary805
      }

    Note that instcombine already knew how to eliminate the AND that the two
    shifts fold into. This is tested by InstCombine/shift.ll:test26.
    -Chris
    llvm-svn: 25128
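    For reference, a minimal C sketch of what this fold amounts to at the
    source level (hypothetical function names, assuming 32-bit int/unsigned;
    an illustration of the rewrite, not the pass itself):

      #include <stdio.h>

      /* Before the fold: shift right, cast, shift left, add -2, or 1
         (the 'integer add a constant' pattern from the commit message). */
      static int before(unsigned x) {
          unsigned t = x >> 1;   /* shr uint, 1 */
          int s = (int)t;        /* cast to int */
          int u = s << 1;        /* shl int, 1  */
          int v = u + (-2);      /* add -2      */
          return v | 1;          /* or 1        */
      }

      /* After the fold: the shift pair (an implicit "& ~1") is gone,
         because the final "| 1" makes the cleared low bit irrelevant. */
      static int after(unsigned x) {
          return ((int)x + (-2)) | 1;
      }

      int main(void) {
          for (unsigned x = 0; x < 16; ++x)
              printf("%u: %d %d\n", x, before(x), after(x));
          return 0;
      }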
  - Chris Lattner authored
    llvm-svn: 25126
  - Chris Lattner authored
    functionality changes.
    llvm-svn: 25125
- Dec 26, 2005
  - Duraid Madina authored
    llvm-svn: 25021
- Dec 14, 2005
  - Chris Lattner authored
    behavior in 126.gcc on big-endian systems.
    llvm-svn: 24708
- Dec 12, 2005
  - Chris Lattner authored
    186.crafty by about 16% (from 15.109s to 13.045s) on my system. This turns
    allocas with unions/casts into scalars. For example crafty has something
    like this:

      union doub {
        unsigned short i[4];
        long long d;
      };
      int f(long long a) { return ((union doub){.d=a}).i[1]; }

    Instead of generating loads and stores to an alloca, we now promote the
    whole thing to a scalar long value. This implements:
    Transforms/ScalarRepl/AggregatePromote.ll
    llvm-svn: 24667
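    As an illustration of what the promoted form computes, here is a C-level
    sketch (hypothetical function names; it assumes a little-endian target
    with a 64-bit long long and 16-bit unsigned short, which the commit does
    not spell out):

      #include <stdio.h>

      union doub { unsigned short i[4]; long long d; };

      /* The union/alloca version from the commit message. */
      int f_alloca(long long a) {
          return ((union doub){ .d = a }).i[1];
      }

      /* Roughly what the promoted-to-scalar form computes on a little-endian
         target: element i[1] is just bits 16..31 of the value, so no memory
         traffic is needed. */
      int f_scalar(long long a) {
          return (unsigned short)((unsigned long long)a >> 16);
      }

      int main(void) {
          long long v = 0x0123456789abcdefLL;
          printf("%d %d\n", f_alloca(v), f_scalar(v));
          return 0;
      }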
- Dec 05, 2005
  - Chris Lattner authored
    know that small negative values fit into the immediate field of addressing modes.
    llvm-svn: 24608
- Nov 30, 2005
  - Chris Lattner authored
    Transforms/DeadStoreElimination/2005-11-30-vaarg.ll
    llvm-svn: 24545
- Nov 25, 2005
  - Andrew Lenharth authored
    llvm-svn: 24491
- Nov 22, 2005
  - Andrew Lenharth authored
    llvm-svn: 24488
  - Andrew Lenharth authored
    llvm-svn: 24487
- Nov 18, 2005
  - Chris Lattner authored
    half the problem.
    llvm-svn: 24414
- Nov 17, 2005
  - Chris Lattner authored
    compiling mysql reported by Ted Kremenek.
    llvm-svn: 24402
- Nov 10, 2005
  - Andrew Lenharth authored
    llvm-svn: 24288
  - Andrew Lenharth authored
    llvm-svn: 24270
  - Andrew Lenharth authored
    Reg2Mem; for fun you can run opt -reg2mem -mem2reg.
    llvm-svn: 24267
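    A minimal round-trip invocation of the two passes, as the message
    suggests, might look like the following (the bytecode file names are
    hypothetical):

      opt -reg2mem -mem2reg input.bc -o output.bc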
- Nov 05, 2005
  - Nate Begeman authored
    Add support for specifying alignment and size of setjmp jmpbufs. No targets
    currently do anything with this information, nor is it preserved in the
    bytecode representation. That's coming up next.
    llvm-svn: 24196
  - Chris Lattner authored
    that has been sitting in my inbox since May 18. :)
    llvm-svn: 24194
  - Chris Lattner authored
    a few times in crafty:

      OLD:  %tmp.36 = div int %tmp.35, 8      ; <int> [#uses=1]
      NEW:  %tmp.36 = div uint %tmp.35, 8     ; <uint> [#uses=0]
      OLD:  %tmp.19 = div int %tmp.18, 8      ; <int> [#uses=1]
      NEW:  %tmp.19 = div uint %tmp.18, 8     ; <uint> [#uses=0]
      OLD:  %tmp.117 = div int %tmp.116, 8    ; <int> [#uses=1]
      NEW:  %tmp.117 = div uint %tmp.116, 8   ; <uint> [#uses=0]
      OLD:  %tmp.92 = div int %tmp.91, 8      ; <int> [#uses=1]
      NEW:  %tmp.92 = div uint %tmp.91, 8     ; <uint> [#uses=0]

    Which all turn into shrs.
    llvm-svn: 24190
  - Chris Lattner authored
    8 times in vortex, allowing the srems to be turned into shrs:

      OLD:  %tmp.104 = rem int %tmp.5.i37, 16     ; <int> [#uses=1]
      NEW:  %tmp.104 = rem uint %tmp.5.i37, 16    ; <uint> [#uses=0]
      OLD:  %tmp.98 = rem int %tmp.5.i24, 16      ; <int> [#uses=1]
      NEW:  %tmp.98 = rem uint %tmp.5.i24, 16     ; <uint> [#uses=0]
      OLD:  %tmp.91 = rem int %tmp.5.i19, 8       ; <int> [#uses=1]
      NEW:  %tmp.91 = rem uint %tmp.5.i19, 8      ; <uint> [#uses=0]
      OLD:  %tmp.88 = rem int %tmp.5.i14, 8       ; <int> [#uses=1]
      NEW:  %tmp.88 = rem uint %tmp.5.i14, 8      ; <uint> [#uses=0]
      OLD:  %tmp.85 = rem int %tmp.5.i9, 1024     ; <int> [#uses=2]
      NEW:  %tmp.85 = rem uint %tmp.5.i9, 1024    ; <uint> [#uses=0]
      OLD:  %tmp.82 = rem int %tmp.5.i, 512       ; <int> [#uses=2]
      NEW:  %tmp.82 = rem uint %tmp.5.i1, 512     ; <uint> [#uses=0]
      OLD:  %tmp.48.i = rem int %tmp.5.i.i161, 4  ; <int> [#uses=1]
      NEW:  %tmp.48.i = rem uint %tmp.5.i.i161, 4 ; <uint> [#uses=0]
      OLD:  %tmp.20.i2 = rem int %tmp.5.i.i, 4    ; <int> [#uses=1]
      NEW:  %tmp.20.i2 = rem uint %tmp.5.i.i, 4   ; <uint> [#uses=0]

    It also occurs 9 times in gcc, but with odd constant divisors (1009 and 61)
    so the payoff isn't as great.
    llvm-svn: 24189
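    To see why this change and the div change above pay off, here is a small
    C sketch (not from the commit): division and remainder by a power of two
    collapse to a single shift or mask once the operand is unsigned, whereas
    the signed forms need extra fixup code for possibly-negative values.

      #include <stdio.h>

      /* Signed division/remainder by a power of two must round toward zero,
         so the compiler emits extra fixup code for negative inputs. */
      int div_signed(int x) { return x / 8; }
      int rem_signed(int x) { return x % 8; }

      /* Once the operand is known non-negative and treated as unsigned,
         the same operations are a single shift or mask. */
      unsigned div_unsigned(unsigned x) { return x / 8; }  /* shr x, 3 */
      unsigned rem_unsigned(unsigned x) { return x % 8; }  /* and x, 7 */

      int main(void) {
          printf("%d %d %u %u\n",
                 div_signed(20), rem_signed(20),
                 div_unsigned(20), rem_unsigned(20));
          return 0;
      }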
- Nov 02, 2005
  - Andrew Lenharth authored
    llvm-svn: 24158
- Oct 31, 2005
  - Chris Lattner authored
    bad cases. This fixes Markus's second testcase in PR639, and should seal it
    for good.
    llvm-svn: 24123
- Oct 29, 2005
  - Chris Lattner authored
    infrastructure and the simple isels have been removed.
    llvm-svn: 24090
  - Chris Lattner authored
    This allows us to turn code like malloc(4*x+4) -> malloc int, (x+1)
    llvm-svn: 24081
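    In C terms the raising described here is roughly the following rewrite
    (a sketch with hypothetical function names, assuming sizeof(int) == 4):

      #include <stdlib.h>

      /* The raw byte-counted allocation as it appears in the source... */
      int *alloc_bytes(unsigned x) {
          return (int *)malloc(4 * x + 4);
      }

      /* ...and the typed array allocation it can be raised to once the
         size expression is recognized as 4*(x+1). */
      int *alloc_typed(unsigned x) {
          return (int *)malloc(sizeof(int) * (x + 1));
      }

      int main(void) {
          int *p = alloc_bytes(3), *q = alloc_typed(3);
          free(p);
          free(q);
          return 0;
      }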
  - Chris Lattner authored
    change.
    llvm-svn: 24076
- Oct 28, 2005
  - Chris Lattner authored
    llvm-svn: 24056
- Oct 27, 2005
  - Chris Lattner authored
    PR640
    llvm-svn: 24046
  - Chris Lattner authored
    llvm-svn: 24033
  - Chris Lattner authored
    into: malloc int, (2*X)
    llvm-svn: 24032