- Sep 20, 2005
-
Chris Lattner authored
not define a value that is used outside of its block. This catches many more simplifications, e.g. 854 in 176.gcc, 137 in vpr, etc. This implements branch-phi-thread.ll:test3.ll llvm-svn: 23397
-
Chris Lattner authored
predecessors. This implements branch-phi-thread.ll::test1 llvm-svn: 23395
-
Chris Lattner authored
llvm-svn: 23393
-
Chris Lattner authored
llvm-svn: 23392
-
Chris Lattner authored
control across branches with determined outcomes. More generality to follow. This triggers a couple thousand times in specint. llvm-svn: 23391
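A hypothetical C sketch of the pattern this threading targets (names invented for illustration): a value feeding a later branch is known along one incoming path, so the branch's outcome is determined on that edge and control can be threaded straight to the right successor.

    int compute(void);
    void use(int);

    void f(int a) {
      int p;
      if (a)
        p = 1;          /* along this path, p == 1 is known   */
      else
        p = compute();
      if (p == 1)       /* outcome determined on the 'a' edge */
        use(p);         /* threading sends the 'a' path here  */
    }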
-
Nate Begeman authored
select_cc bits and then wrap it in a convenience function for use with regular select. llvm-svn: 23389
-
- Sep 19, 2005
-
Chris Lattner authored
when possible, avoiding the load (and avoiding the copy if the value is already in the right register). This patch came about when I noticed code like the following being generated:

        store R17 -> [SS1]
        ...blah...
        R4 = load [SS1]

This was causing an LSU reject on the G5. The problem was that the register allocator had folded spill code into a reg-reg copy (producing the load), which prevented the spiller from rewriting the load into a copy, despite the fact that the value was already available in a register. In the case above, we now rip out the R4 load and replace it with an R4 = R17 copy.

This speeds up several programs on X86 (which spills a lot :) ), e.g. smg2k from 22.39->20.60s, povray from 12.93->12.66s, 168.wupwise from 68.54->53.83s (!), 197.parser from 7.33->6.62s (!), etc. This may have a larger impact in some cases on the G5 (by avoiding LSU rejects), though it probably won't trigger as often (less spilling in general).

Targets that implement folding of loads/stores into copies should implement the isLoadFromStackSlot hook to get this.

llvm-svn: 23388
-
Chris Lattner authored
llvm-svn: 23387
-
- Sep 18, 2005
-
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) { b.j += x; }

To:

_plus2:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        slwi r3, r3, 6
        add r3, r4, r3
        rlwimi r3, r4, 0, 26, 14
        stw r3, 0(r2)
        blr

instead of:

_plus2:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        rlwinm r5, r4, 26, 21, 31
        add r3, r5, r3
        rlwimi r4, r3, 6, 15, 25
        stw r4, 0(r2)
        blr

by eliminating an 'and'. I'm pretty sure this is as small as we can go :)

llvm-svn: 23386
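To see where the 'and' goes, note that j occupies bits 6..16 of the word, so adding x << 6 to the whole word updates j in place; a carry may leak upward, but the masked insert that follows (the single rlwimi here, or the and/or pair on x86) restores i and k from the old word. A small self-checking C sketch of the arithmetic, assuming a layout with i in the low six bits (as on common little-endian targets; bitfield layout is implementation-defined):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    struct S { unsigned int i : 6, j : 11, k : 15; };

    int main(void) {
      struct S b = { 63, 2047, 5 };   /* j is about to wrap      */
      unsigned int x = 3;
      uint32_t w, res, check;
      memcpy(&w, &b, sizeof w);

      uint32_t mask = 0x7FFu << 6;    /* bits 6..16 hold j       */
      res = ((w + (x << 6)) & mask)   /* add updates j in place; */
            | (w & ~mask);            /* re-insert old i and k   */

      b.j += x;                       /* the field-level version */
      memcpy(&check, &b, sizeof check);
      printf("%s\n", res == check ? "equal" : "different");
      return 0;
    }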
-
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus2 (unsigned int x) { b.j += x; }

to:

plus2:
        mov %EAX, DWORD PTR [b]
        mov %ECX, %EAX
        and %ECX, 131008
        mov %EDX, DWORD PTR [%ESP + 4]
        shl %EDX, 6
        add %EDX, %ECX
        and %EDX, 131008
        and %EAX, -131009
        or %EDX, %EAX
        mov DWORD PTR [b], %EDX
        ret

instead of:

plus2:
        mov %EAX, DWORD PTR [b]
        mov %ECX, %EAX
        shr %ECX, 6
        and %ECX, 2047
        add %ECX, DWORD PTR [%ESP + 4]
        shl %ECX, 6
        and %ECX, 131008
        and %EAX, -131009
        or %ECX, %EAX
        mov DWORD PTR [b], %ECX
        ret

llvm-svn: 23385
-
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) { b.k += x; }

To:

plus3:
        mov %EAX, DWORD PTR [%ESP + 4]
        shl %EAX, 17
        add DWORD PTR [b], %EAX
        ret

instead of:

plus3:
        mov %EAX, DWORD PTR [%ESP + 4]
        shl %EAX, 17
        mov %ECX, DWORD PTR [b]
        add %EAX, %ECX
        and %EAX, -131072
        and %ECX, 131071
        or %ECX, %EAX
        mov DWORD PTR [b], %ECX
        ret

llvm-svn: 23384
-
Chris Lattner authored
llvm-svn: 23383
-
Chris Lattner authored
llvm-svn: 23382
-
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus3 (unsigned int x) { b.k += x; }

to:

_plus3:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r3, 0(r2)
        rlwinm r4, r3, 0, 0, 14
        add r4, r4, r3
        rlwimi r4, r3, 0, 15, 31
        stw r4, 0(r2)
        blr

instead of:

_plus3:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        srwi r5, r4, 17
        add r3, r5, r3
        slwi r3, r3, 17
        rlwimi r3, r4, 0, 15, 31
        stw r3, 0(r2)
        blr

llvm-svn: 23381
-
Chris Lattner authored
struct S { unsigned int i : 6, j : 11, k : 15; } b;
void plus1 (unsigned int x) { b.i += x; }

as:

_plus1:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        add r3, r4, r3
        rlwimi r3, r4, 0, 0, 25
        stw r3, 0(r2)
        blr

instead of:

_plus1:
        lis r2, ha16(L_b$non_lazy_ptr)
        lwz r2, lo16(L_b$non_lazy_ptr)(r2)
        lwz r4, 0(r2)
        rlwinm r5, r4, 0, 26, 31
        add r3, r5, r3
        rlwimi r3, r4, 0, 0, 25
        stw r3, 0(r2)
        blr

llvm-svn: 23379
-
Chris Lattner authored
llvm-svn: 23377
-
Chris Lattner authored
struct { unsigned int bit0:1; unsigned int ubyte:31; } sdata;
void foo() { sdata.ubyte++; }

into this:

foo:
        add DWORD PTR [sdata], 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [sdata]
        mov %ECX, %EAX
        add %ECX, 2
        and %ECX, -2
        and %EAX, 1
        or %EAX, %ECX
        mov DWORD PTR [sdata], %EAX
        ret

llvm-svn: 23376
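The single add works because ubyte occupies the top 31 bits of the word (bits 1..31 on the usual layout), so incrementing it is the same as adding 1 << 1 = 2 to the whole word: a carry out of bit 31 simply wraps away and bit 0 is never touched. A small self-checking sketch (bitfield layout is implementation-defined, so this assumes that layout):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    struct S { unsigned int bit0 : 1; unsigned int ubyte : 31; };

    int main(void) {
      struct S s = { 1, 0x7FFFFFFFu };   /* field about to wrap  */
      uint32_t word, check;
      memcpy(&word, &s, sizeof word);
      word += 2;                         /* the single-add form  */
      s.ubyte++;                         /* the field increment  */
      memcpy(&check, &s, sizeof check);
      printf("%s\n", word == check ? "equal" : "different");
      return 0;
    }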
-
- Sep 17, 2005
-
Chris Lattner authored
llvm-svn: 23374
-
- Sep 16, 2005
-
Nate Begeman authored
llvm-svn: 23371
-
- Sep 15, 2005
-
Chris Lattner authored
llvm-svn: 23366
-
- Sep 14, 2005
-
Chris Lattner authored
llvm-svn: 23357
-
Chris Lattner authored
llvm-svn: 23356
-
Chris Lattner authored
specified. The various *imm operands defined by PPC are really all i32, even though the actual immediate is restricted to a smaller range of values. llvm-svn: 23352
-
Chris Lattner authored
llvm-svn: 23350
-
Chris Lattner authored
llvm-svn: 23348
-
Chris Lattner authored
llvm-svn: 23347
-
Chris Lattner authored
llvm-svn: 23342
-
Chris Lattner authored
can use/define class methods llvm-svn: 23339
-
- Sep 13, 2005
-
Chris Lattner authored
with incoming arguments instead of the pregs themselves. This keeps the scheduler from causing problems when it moves a copyfromreg for an argument to after a select_cc node (now it can, and bad things won't happen). llvm-svn: 23334
-
Chris Lattner authored
llvm-svn: 23333
-
Chris Lattner authored
llvm-svn: 23332
-
Chris Lattner authored
into particular vregs, emit copies into the entry MBB. llvm-svn: 23331
-
Chris Lattner authored
llvm-svn: 23330
-
Chris Lattner authored
llvm-svn: 23329
-
Chris Lattner authored
This is useful for 178.galgel where resolution of dope vectors (by the optimizer) causes the scales to become apparent. llvm-svn: 23328
-
Chris Lattner authored
Fix an issue where LSR would miss rewriting a use of an IV expression by a PHI node that is not the original PHI. This fixes up a dot-product loop in galgel, speeding it up from 18.47s to 16.13s. llvm-svn: 23327
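A hypothetical loop of the shape these LSR changes help (reconstructed for illustration only): the stride is loop-invariant but not a compile-time constant, as happens once galgel's dope vectors are resolved, and the reduction is carried by its own PHI.

    /* s += a[i * stride] lets LSR turn the multiply into a
       pointer bumped by stride elements each iteration */
    double dot(const double *a, const double *b, int n, int stride) {
      double s = 0.0;
      for (int i = 0; i != n; ++i)
        s += a[i * stride] * b[i];
      return s;
    }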
-
Chris Lattner authored
indentation, no functionality change llvm-svn: 23325
-
Chris Lattner authored
if () { store A -> P; } else { store B -> P; } into a PHI node with one store, in the most trivial case. This implements load.ll:test10. llvm-svn: 23324
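In C terms, the trivial case looks roughly like the following (a hypothetical sketch; names invented):

    void set(int c, int *p, int a, int b) {
      if (c)
        *p = a;      /* two stores to the same pointer ...        */
      else
        *p = b;
      /* ... become one store of a PHI of the values, conceptually:
         *p = c ? a : b;                                          */
    }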
-
Chris Lattner authored
each other. This implements InstCombine/load.ll:test9 llvm-svn: 23322
-
Chris Lattner authored
load are exactly consecutive. This is picked up by other passes, but this triggers thousands of times in fortran programs that use static locals (and is thus a compile-time speedup). llvm-svn: 23320
-