- Sep 24, 2005
- Chris Lattner authored: …prefix to a symbol name (llvm-svn: 23421)
- Chris Lattner authored (llvm-svn: 23420)
- Chris Lattner authored (llvm-svn: 23419)
- Chris Lattner authored: …generated isel now tries li then lis, then lis+ori. (llvm-svn: 23418)
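  In context, the tiers are: li materializes any sign-extended 16-bit value in one instruction, lis handles values whose low 16 bits are zero, and lis+ori covers the general 32-bit case. A minimal C++ sketch of that choice (the emitLoadImm helper is invented for illustration, not the actual generated isel):

      #include <cstdint>
      #include <cstdio>

      // Invented helper: print the shortest PPC sequence that materializes
      // a 32-bit immediate into register rd.
      void emitLoadImm(int rd, int32_t imm) {
        int16_t lo = static_cast<int16_t>(imm & 0xFFFF);
        if (imm == static_cast<int32_t>(lo)) {
          std::printf("li r%d, %d\n", rd, lo);           // fits in signed 16 bits
        } else if ((imm & 0xFFFF) == 0) {
          std::printf("lis r%d, %d\n", rd, imm >> 16);   // low half is zero
        } else {
          std::printf("lis r%d, %d\n", rd, imm >> 16);   // high half first...
          std::printf("ori r%d, r%d, %d\n", rd, rd, imm & 0xFFFF); // ...then OR in the low half
        }
      }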
- Chris Lattner authored: Fix a few corner cases parsing things like (i32 imm:$foo). (llvm-svn: 23417)
- Chris Lattner authored: …(e.g. things like rotates). (llvm-svn: 23416)
- Sep 23, 2005
- Chris Lattner authored (llvm-svn: 23415)
- Chris Lattner authored: This does not check that types match yet, but PPC only has one integer type ;-). This also doesn't have the code to build the resultant dag. (llvm-svn: 23414)
- Chris Lattner authored (llvm-svn: 23413)
- Chris Lattner authored (llvm-svn: 23412)
- Chris Lattner authored (llvm-svn: 23411)
- Chris Lattner authored: This implements SimplifyCFG/branch-fold.ll, and is useful on ?:/min/max heavy code. (llvm-svn: 23410)
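  An invented example of the ?:/min/max-heavy code this helps (not from the commit): each select lowers to a compare and branch, and successive branches share destinations that the pass can fold.

      // Invented example: clamp() chains two selects, which lower to
      // conditional branches with common destinations that can be folded.
      static int maxi(int a, int b) { return a > b ? a : b; }
      static int mini(int a, int b) { return a < b ? a : b; }

      int clamp(int x, int lo, int hi) {
        return mini(maxi(x, lo), hi);
      }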
- Chris Lattner authored (llvm-svn: 23409)
- Chris Lattner authored (llvm-svn: 23408)
- Chris Lattner authored (llvm-svn: 23407)
- Chris Lattner authored: …an llvm-ranlib symtab. This speeds up gccld -native on an almost empty .o file from 1.63s to 0.18s. (llvm-svn: 23406)
- Chris Lattner authored: …not completely painful to use. Once we decide a directory has a bytecode library, we know it and this function returns true; no need to scan entire directories. (llvm-svn: 23405)
- Chris Lattner authored:
  …
  2. Concatenate -lfoo and -L/bar options into a single option instead of passing "-L /bar" (for example), which doesn't work on Darwin.
  3. Send -v output to stderr instead of stdout.
  (llvm-svn: 23404)
- Chris Lattner authored: This happens all the time on PPC for bool values, e.g. eliminating a xori in inverted-bool-compares.ll. This should be added to the dag combiner as well. (llvm-svn: 23403)
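  In source terms, the fold replaces "invert the result of a compare" with "emit the inverse compare", so no xori is needed (invented example, not from the commit):

      // Both functions compute the same value; a selector applying the fold
      // emits the second form directly, with no xori.
      bool notLess(int a, int b) { return (a < b) ^ true; } // setcc then xori
      bool geInstead(int a, int b) { return a >= b; }       // inverse setcc only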
- Chris Lattner authored (llvm-svn: 23402)
- Sep 21, 2005
- Chris Lattner authored (llvm-svn: 23401)
- Chris Lattner authored (llvm-svn: 23400)
- Chris Lattner authored (llvm-svn: 23399)
- Chris Lattner authored (llvm-svn: 23398)
- Sep 20, 2005
- Chris Lattner authored: …not define a value that is used outside of its block. This catches many more simplifications, e.g. 854 in 176.gcc, 137 in vpr, etc. This implements branch-phi-thread.ll:test3.ll. (llvm-svn: 23397)
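  An invented before/after pair showing the threading in source terms: when one predecessor makes the branch outcome certain, that predecessor can jump straight to the destination, which is why the block being skipped must not define a value used elsewhere.

      // before(): along the p-edge, cond is known true, so re-testing it in
      // the merge block is redundant.
      int before(bool p, bool q) {
        bool cond = p ? true : q;
        if (cond) return 1;
        return 0;
      }
      // after(): conceptually what threading produces; the p-edge branches
      // straight to the known destination.
      int after(bool p, bool q) {
        if (p) return 1;
        return q ? 1 : 0;
      }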
- Chris Lattner authored: …threaded over… (llvm-svn: 23396)
- Chris Lattner authored: …predecessors. This implements branch-phi-thread.ll::test1. (llvm-svn: 23395)
- Chris Lattner authored (llvm-svn: 23394)
- Chris Lattner authored (llvm-svn: 23393)
- Chris Lattner authored (llvm-svn: 23392)
- Chris Lattner authored: …control across branches with determined outcomes. More generality to follow. This triggers a couple thousand times in specint. (llvm-svn: 23391)
- Chris Lattner authored (llvm-svn: 23390)
- Nate Begeman authored: …select_cc bits and then wrap it in a convenience function for use with regular select. (llvm-svn: 23389)
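  A rough sketch of the refactoring pattern described (all names invented, not the actual SelectionDAG code): the select_cc path owns the condition-code handling, and a plain select reuses it by comparing its boolean operand against zero with "not equal".

      #include <cstdint>

      enum CondCode { SETEQ, SETNE, SETLT, SETGT };

      // Invented stand-in for condition-code-based select lowering.
      int64_t selectCC(int64_t lhs, int64_t rhs, int64_t t, int64_t f,
                       CondCode cc) {
        bool take = false;
        switch (cc) {
        case SETEQ: take = lhs == rhs; break;
        case SETNE: take = lhs != rhs; break;
        case SETLT: take = lhs <  rhs; break;
        case SETGT: take = lhs >  rhs; break;
        }
        return take ? t : f;
      }

      // The convenience wrapper: select(C, T, F) == select_cc(C, 0, T, F, SETNE).
      int64_t selectVal(bool cond, int64_t t, int64_t f) {
        return selectCC(cond ? 1 : 0, 0, t, f, SETNE);
      }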
- Sep 19, 2005
- Chris Lattner authored:
  …when possible, avoiding the load (and avoiding the copy if the value is already in the right register). This patch came about when I noticed code like the following being generated:

      store R17 -> [SS1]
      ...blah...
      R4 = load [SS1]

  This was causing an LSU reject on the G5. This problem was due to the register allocator folding spill code into a reg-reg copy (producing the load), which prevented the spiller from being able to rewrite the load into a copy, despite the fact that the value was already available in a register.

  In the case above, we now rip out the R4 load and replace it with a R4 = R17 copy. This speeds up several programs on X86 (which spills a lot :) ), e.g. smg2k from 22.39->20.60s, povray from 12.93->12.66s, 168.wupwise from 68.54->53.83s (!), 197.parser from 7.33->6.62s (!), etc. This may have a larger impact in some cases on the G5 (by avoiding LSU rejects), though it probably won't trigger as often (less spilling in general).

  Targets that implement folding of loads/stores into copies should implement the isLoadFromStackSlot hook to get this.
  (llvm-svn: 23388)
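  The isLoadFromStackSlot hook named above is real; the sketch below only approximates its shape with invented types (the actual hook is a TargetInstrInfo virtual whose exact signature varies by LLVM version): report the register a stack-slot reload defines so the spiller can rewrite the reload as a register-register copy when the value is still live elsewhere.

      // Approximate, self-contained sketch with invented types.
      struct Operand { bool isFI; int frameIndex; unsigned reg; };
      struct MachInstr { unsigned opcode; Operand ops[2]; };
      enum { LOAD_WORD = 1 };  // hypothetical stack-load opcode

      unsigned isLoadFromStackSlot(const MachInstr &mi, int &frameIndex) {
        if (mi.opcode == LOAD_WORD && mi.ops[1].isFI) {
          frameIndex = mi.ops[1].frameIndex; // which slot is being reloaded
          return mi.ops[0].reg;              // register the value lands in
        }
        return 0;                            // not a stack-slot reload
      }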
- Chris Lattner authored (llvm-svn: 23387)
- Sep 18, 2005
- Chris Lattner authored:
  …

      struct S { unsigned int i : 6, j : 11, k : 15; } b;
      void plus2 (unsigned int x) { b.j += x; }

  To:

      _plus2:
              lis r2, ha16(L_b$non_lazy_ptr)
              lwz r2, lo16(L_b$non_lazy_ptr)(r2)
              lwz r4, 0(r2)
              slwi r3, r3, 6
              add r3, r4, r3
              rlwimi r3, r4, 0, 26, 14
              stw r3, 0(r2)
              blr

  instead of:

      _plus2:
              lis r2, ha16(L_b$non_lazy_ptr)
              lwz r2, lo16(L_b$non_lazy_ptr)(r2)
              lwz r4, 0(r2)
              rlwinm r5, r4, 26, 21, 31
              add r3, r5, r3
              rlwimi r4, r3, 6, 15, 25
              stw r4, 0(r2)
              blr

  by eliminating an 'and'. I'm pretty sure this is as small as we can go :)
  (llvm-svn: 23386)
- Chris Lattner authored:
  …

      struct S { unsigned int i : 6, j : 11, k : 15; } b;
      void plus2 (unsigned int x) { b.j += x; }

  to:

      plus2:
              mov %EAX, DWORD PTR [b]
              mov %ECX, %EAX
              and %ECX, 131008
              mov %EDX, DWORD PTR [%ESP + 4]
              shl %EDX, 6
              add %EDX, %ECX
              and %EDX, 131008
              and %EAX, -131009
              or %EDX, %EAX
              mov DWORD PTR [b], %EDX
              ret

  instead of:

      plus2:
              mov %EAX, DWORD PTR [b]
              mov %ECX, %EAX
              shr %ECX, 6
              and %ECX, 2047
              add %ECX, DWORD PTR [%ESP + 4]
              shl %ECX, 6
              and %ECX, 131008
              and %EAX, -131009
              or %ECX, %EAX
              mov DWORD PTR [b], %ECX
              ret

  (llvm-svn: 23385)
- Chris Lattner authored:
  …

      struct S { unsigned int i : 6, j : 11, k : 15; } b;
      void plus3 (unsigned int x) { b.k += x; }

  To:

      plus3:
              mov %EAX, DWORD PTR [%ESP + 4]
              shl %EAX, 17
              add DWORD PTR [b], %EAX
              ret

  instead of:

      plus3:
              mov %EAX, DWORD PTR [%ESP + 4]
              shl %EAX, 17
              mov %ECX, DWORD PTR [b]
              add %EAX, %ECX
              and %EAX, -131072
              and %ECX, 131071
              or %ECX, %EAX
              mov DWORD PTR [b], %ECX
              ret

  (llvm-svn: 23384)
- Chris Lattner authored (llvm-svn: 23383)
- Chris Lattner authored (llvm-svn: 23382)