- Jan 06, 2006
Chris Lattner authored
functionality changes. llvm-svn: 25125
- Nov 05, 2005
Nate Begeman authored
Add support for specifying alignment and size of setjmp jmpbufs. No targets currently do anything with this information, nor is it preserved in the bytecode representation. That's coming up next. llvm-svn: 24196
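For background, a minimal standard-C sketch of what a jmpbuf is (not from the commit): jmp_buf is an opaque, target-defined register-save area, which is why a target has to declare its size and alignment explicitly.

    #include <setjmp.h>
    #include <stdio.h>

    /* jmp_buf's contents, size, and required alignment all vary by
     * ABI; a code generator lowering setjmp/longjmp needs to be told
     * both properties. */
    static jmp_buf env;

    int main(void) {
        if (setjmp(env) == 0) {  /* first return: state saved */
            printf("sizeof(jmp_buf) = %zu\n", sizeof(jmp_buf));
            longjmp(env, 1);     /* transfers back into setjmp */
        } else {
            printf("returned via longjmp\n");  /* second return */
        }
        return 0;
    }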
Chris Lattner authored
a few times in crafty:

    OLD:  %tmp.36 = div int %tmp.35, 8     ; <int> [#uses=1]
    NEW:  %tmp.36 = div uint %tmp.35, 8    ; <uint> [#uses=0]
    OLD:  %tmp.19 = div int %tmp.18, 8     ; <int> [#uses=1]
    NEW:  %tmp.19 = div uint %tmp.18, 8    ; <uint> [#uses=0]
    OLD:  %tmp.117 = div int %tmp.116, 8   ; <int> [#uses=1]
    NEW:  %tmp.117 = div uint %tmp.116, 8  ; <uint> [#uses=0]
    OLD:  %tmp.92 = div int %tmp.91, 8     ; <int> [#uses=1]
    NEW:  %tmp.92 = div uint %tmp.91, 8    ; <uint> [#uses=0]

Which all turn into shrs.

llvm-svn: 24190
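A minimal C sketch of why the signedness change pays off (not from the commit; assumes the usual two's-complement arithmetic right shift): unsigned division by a power of two is exactly a shift, while signed division truncates toward zero and would need a fix-up for negative dividends.

    #include <stdio.h>

    int main(void) {
        int pos = 100, neg = -100;

        /* For values known non-negative, signed div, unsigned div,
         * and a right shift all agree: */
        printf("%d %u %d\n", pos / 8, (unsigned)pos / 8, pos >> 3); /* 12 12 12 */

        /* Signed division truncates toward zero, but an arithmetic
         * shift rounds toward negative infinity, so 'div int' by 8
         * cannot be lowered to a bare shr in general: */
        printf("%d %d\n", neg / 8, neg >> 3); /* -12 -13 */
        return 0;
    }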
Chris Lattner authored
8 times in vortex, allowing the srems to be turned into shrs:

    OLD:  %tmp.104 = rem int %tmp.5.i37, 16      ; <int> [#uses=1]
    NEW:  %tmp.104 = rem uint %tmp.5.i37, 16     ; <uint> [#uses=0]
    OLD:  %tmp.98 = rem int %tmp.5.i24, 16       ; <int> [#uses=1]
    NEW:  %tmp.98 = rem uint %tmp.5.i24, 16      ; <uint> [#uses=0]
    OLD:  %tmp.91 = rem int %tmp.5.i19, 8        ; <int> [#uses=1]
    NEW:  %tmp.91 = rem uint %tmp.5.i19, 8       ; <uint> [#uses=0]
    OLD:  %tmp.88 = rem int %tmp.5.i14, 8        ; <int> [#uses=1]
    NEW:  %tmp.88 = rem uint %tmp.5.i14, 8       ; <uint> [#uses=0]
    OLD:  %tmp.85 = rem int %tmp.5.i9, 1024      ; <int> [#uses=2]
    NEW:  %tmp.85 = rem uint %tmp.5.i9, 1024     ; <uint> [#uses=0]
    OLD:  %tmp.82 = rem int %tmp.5.i, 512        ; <int> [#uses=2]
    NEW:  %tmp.82 = rem uint %tmp.5.i1, 512      ; <uint> [#uses=0]
    OLD:  %tmp.48.i = rem int %tmp.5.i.i161, 4   ; <int> [#uses=1]
    NEW:  %tmp.48.i = rem uint %tmp.5.i.i161, 4  ; <uint> [#uses=0]
    OLD:  %tmp.20.i2 = rem int %tmp.5.i.i, 4     ; <int> [#uses=1]
    NEW:  %tmp.20.i2 = rem uint %tmp.5.i.i, 4    ; <uint> [#uses=0]

It also occurs 9 times in gcc, but with odd constant divisors (1009 and 61) so the payoff isn't as great.

llvm-svn: 24189
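The same reasoning for remainders, as a hand-written C illustration (again assuming two's-complement): an unsigned rem by a power of two is a single mask, which a signed rem is not.

    #include <stdio.h>

    int main(void) {
        int pos = 100, neg = -100;

        /* Non-negative: 'rem uint' by 16 is just a mask. */
        printf("%d %d\n", pos % 16, pos & 15);   /* 4 4 */

        /* Signed remainder takes the sign of the dividend, so it
         * cannot become a bare 'and' unless the value is known
         * non-negative: */
        printf("%d %d\n", neg % 16, neg & 15);   /* -4 12 */
        return 0;
    }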
- Nov 02, 2005
Andrew Lenharth authored
llvm-svn: 24158
- Oct 31, 2005
Chris Lattner authored
bad cases. This fixes Markus's second testcase in PR639, and should seal it for good. llvm-svn: 24123
- Oct 29, 2005
Chris Lattner authored
This allows us to turn code like malloc(4*x+4) into malloc int, (x+1). llvm-svn: 24081
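In C terms the fold looks like this (a hypothetical illustration; alloc_before/alloc_after are made-up names, and it assumes sizeof(int) == 4 as the commit does):

    #include <stdlib.h>
    #include <stdio.h>

    /* what the source computed: a raw byte count */
    int *alloc_before(unsigned x) {
        return (int *)malloc(4 * x + 4);
    }

    /* what the optimizer rewrites it to: a typed allocation of x+1
     * ints, i.e. 'malloc int, (x+1)' in the old IR syntax */
    int *alloc_after(unsigned x) {
        return (int *)malloc(sizeof(int) * (x + 1));
    }

    int main(void) {
        int *p = alloc_before(9), *q = alloc_after(9); /* both 40 bytes */
        printf("%p %p\n", (void *)p, (void *)q);
        free(p);
        free(q);
        return 0;
    }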
Chris Lattner authored
change. llvm-svn: 24076
- Oct 28, 2005
Chris Lattner authored
llvm-svn: 24056
- Oct 27, 2005
Chris Lattner authored
PR640 llvm-svn: 24046
Chris Lattner authored
llvm-svn: 24033
Chris Lattner authored
into: malloc int, (2*X) llvm-svn: 24032
Chris Lattner authored
(malloc [25 x int]) directly without having to convert to (malloc [100 x sbyte]) first. llvm-svn: 24031
Chris Lattner authored
llvm-svn: 24030
- Oct 26, 2005
Chris Lattner authored
fixes a very slow compile in PR639. llvm-svn: 24011
- Oct 24, 2005
Chris Lattner authored
one use (but one is a cast). This handles the very common case of:

    X = alloc [n x byte]
    Y = cast X to somethingbetter
    seteq X, null

In order to avoid infinite looping when there are multiple casts, we only allow this if the xform is strictly increasing the alignment of the allocation.

llvm-svn: 23961
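The termination argument can be made concrete in C (a minimal sketch with made-up values; the pass itself works on IR types): rewriting the allocation to the casted-to type is only legal here when it strictly raises the allocation's alignment, and since alignments are bounded, the rewrite can fire only finitely often.

    #include <stdio.h>

    int main(void) {
        /* Before the xform: a byte allocation later cast to a wider
         * type, as in 'X = alloc [n x byte]; Y = cast X to double*'. */
        char buf[8];
        /* After: allocate as the casted-to type directly. */
        double d = 0.0;

        /* The rewrite is only allowed when alignment strictly grows
         * (typically 1 -> 8 here); since it can never decrease and is
         * bounded above, chains of casts cannot make instcombine loop
         * forever. */
        printf("%zu -> %zu\n", _Alignof(char[8]), _Alignof(double));
        (void)buf; (void)d;
        return 0;
    }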
Chris Lattner authored
where the second has less alignment required. If we had explicit alignment support in the IR, we could handle this case, but we can't until we do. llvm-svn: 23960
Chris Lattner authored
more effective at promoting these allocations, catching them earlier in the compile process. llvm-svn: 23959
Chris Lattner authored
llvm-svn: 23958
- Oct 17, 2005
Chris Lattner authored
llvm-svn: 23773
Chris Lattner authored
llvm-svn: 23772
Chris Lattner authored
llvm-svn: 23771
- Oct 10, 2005
Chris Lattner authored
llvm-svn: 23677
- Oct 09, 2005
Chris Lattner authored
llvm-svn: 23674
- Oct 07, 2005
Jeff Cohen authored
llvm-svn: 23656
- Sep 26, 2005
Chris Lattner authored
as ConstantFoldLoadThroughGEPConstantExpr. llvm-svn: 23445
- Sep 25, 2005
Chris Lattner authored
Match a bunch of idioms for sign extensions, implementing InstCombine/signext.ll llvm-svn: 23428
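One classic instance of such an idiom, sketched in C (function names are illustrative, and the conversions are implementation-defined in ISO C, though they behave as expected on two's-complement targets):

    #include <stdio.h>

    /* the shift-pair idiom: move the low byte's sign bit up into the
     * word's sign bit, then arithmetic-shift it back down */
    int sext_idiom(unsigned x) { return (int)(x << 24) >> 24; }

    /* the direct sign-extending cast it corresponds to */
    int sext_cast(unsigned x) { return (signed char)(x & 0xffu); }

    int main(void) {
        printf("%d %d\n", sext_idiom(0xfeu), sext_cast(0xfeu)); /* -2 -2 */
        return 0;
    }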
- Sep 18, 2005
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) { b.j += x; }

To:

_plus2:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        slwi r3, r3, 6
        add r3, r4, r3
        rlwimi r3, r4, 0, 26, 14
        stw r3, 0(r2)
        blr

instead of:

_plus2:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        rlwinm r5, r4, 26, 21, 31
        add r3, r5, r3
        rlwimi r4, r3, 6, 15, 25
        stw r4, 0(r2)
        blr

by eliminating an 'and'. I'm pretty sure this is as small as we can go :)

llvm-svn: 23386
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) { b.j += x; }

to:

plus2:
        mov %EAX, DWORD PTR [b]
        mov %ECX, %EAX
        and %ECX, 131008
        mov %EDX, DWORD PTR [%ESP + 4]
        shl %EDX, 6
        add %EDX, %ECX
        and %EDX, 131008
        and %EAX, -131009
        or %EDX, %EAX
        mov DWORD PTR [b], %EDX
        ret

instead of:

plus2:
        mov %EAX, DWORD PTR [b]
        mov %ECX, %EAX
        shr %ECX, 6
        and %ECX, 2047
        add %ECX, DWORD PTR [%ESP + 4]
        shl %ECX, 6
        and %ECX, 131008
        and %EAX, -131009
        or %ECX, %EAX
        mov DWORD PTR [b], %ECX
        ret

llvm-svn: 23385
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) { b.k += x; }

To:

plus3:
        mov %EAX, DWORD PTR [%ESP + 4]
        shl %EAX, 17
        add DWORD PTR [b], %EAX
        ret

instead of:

plus3:
        mov %EAX, DWORD PTR [%ESP + 4]
        shl %EAX, 17
        mov %ECX, DWORD PTR [b]
        add %EAX, %ECX
        and %EAX, -131072
        and %ECX, 131071
        or %ECX, %EAX
        mov DWORD PTR [b], %ECX
        ret

llvm-svn: 23384
Chris Lattner authored
llvm-svn: 23383
Chris Lattner authored
llvm-svn: 23382
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) { b.k += x; }

to:

_plus3:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r3, 0(r2)
        rlwinm r4, r3, 0, 0, 14
        add r4, r4, r3
        rlwimi r4, r3, 0, 15, 31
        stw r4, 0(r2)
        blr

instead of:

_plus3:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        srwi r5, r4, 17
        add r3, r5, r3
        slwi r3, r3, 17
        rlwimi r3, r4, 0, 15, 31
        stw r3, 0(r2)
        blr

llvm-svn: 23381
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus1 (unsigned int x) { b.i += x; }

as:

_plus1:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        add r3, r4, r3
        rlwimi r3, r4, 0, 0, 25
        stw r3, 0(r2)
        blr

instead of:

_plus1:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        rlwinm r5, r4, 0, 26, 31
        add r3, r5, r3
        rlwimi r3, r4, 0, 0, 25
        stw r3, 0(r2)
        blr

llvm-svn: 23379
Chris Lattner authored
llvm-svn: 23377
Chris Lattner authored
struct { unsigned int bit0:1; unsigned int ubyte:31; } sdata;
void foo() { sdata.ubyte++; }

into this:

foo:
        add DWORD PTR [sdata], 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [sdata]
        mov %ECX, %EAX
        add %ECX, 2
        and %ECX, -2
        and %EAX, 1
        or %EAX, %ECX
        mov DWORD PTR [sdata], %EAX
        ret

llvm-svn: 23376
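The arithmetic behind the one-instruction form can be checked in C (a hypothetical demo; bitfield layout is implementation-defined, and this assumes the LSB-first allocation used on x86):

    #include <stdio.h>

    union U {
        struct { unsigned int bit0 : 1; unsigned int ubyte : 31; } s;
        unsigned int word;
    };

    int main(void) {
        union U a = { .s = { 1, 1000 } };
        union U b = a;

        a.s.ubyte++;   /* the field update: load, mask, or, store */
        b.word += 2;   /* the optimized form: the field occupies bits
                        * 1..31, so +1 in field terms is +2 in the
                        * word, and the carry propagates upward, never
                        * reaching bit0 */

        printf("%u %u\n", a.word, b.word);   /* identical: 2003 2003 */
        return 0;
    }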
- Sep 14, 2005
Chris Lattner authored
llvm-svn: 23348
- Sep 13, 2005
Chris Lattner authored
This is useful for 178.galgel where resolution of dope vectors (by the optimizer) causes the scales to become apparent. llvm-svn: 23328
Chris Lattner authored
indentation, no functionality change llvm-svn: 23325
Chris Lattner authored
    if () { store A -> P; } else { store B -> P; }

into a PHI node with one store, in the most trivial case. This implements load.ll:test10.

llvm-svn: 23324
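At the source level the rewrite corresponds to this C transformation (illustrative names; the select plays the role of the PHI node in the IR):

    #include <stdio.h>

    void before(int c, int a, int b, int *p) {
        if (c) { *p = a; }   /* one store in each arm */
        else   { *p = b; }
    }

    void after(int c, int a, int b, int *p) {
        *p = c ? a : b;      /* single store fed by a merged value,
                              * i.e. the PHI node */
    }

    int main(void) {
        int x, y;
        before(1, 10, 20, &x);
        after(1, 10, 20, &y);
        printf("%d %d\n", x, y);   /* 10 10 */
        return 0;
    }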