- Jun 02, 2004
  - Chris Lattner authored (llvm-svn: 13953)
  - Chris Lattner authored (llvm-svn: 13952)
  - Chris Lattner authored (llvm-svn: 13951)
  - Chris Lattner authored (llvm-svn: 13949)

- May 30, 2004
  - Brian Gaeke authored (llvm-svn: 13911)
  - Brian Gaeke authored (llvm-svn: 13908)
  - Brian Gaeke authored: state. Also, save the state for the incoming register of each phi node. (llvm-svn: 13906)
  - Brian Gaeke authored: Call it at a more appropriate point. (llvm-svn: 13905)
  - Brian Gaeke authored: corresponding MachineCodeForInstruction vectors. I need to be able to get the register allocated for the thing which is called PhiCpRes in this code; this should make that task easier. Plus, Phi nodes are no longer "special" in the sense that their MachineCodeForInstruction is empty. (llvm-svn: 13904)
  - Brian Gaeke authored (llvm-svn: 13899)
  - Brian Gaeke authored (llvm-svn: 13898)
  - Brian Gaeke authored (llvm-svn: 13897)
  - Brian Gaeke authored (llvm-svn: 13896)

- May 28, 2004
  - Brian Gaeke authored: Simplify InsertPhiElimInstructions(), and give it a better doxygen comment. (llvm-svn: 13880)
  - Brian Gaeke authored: the transformed LLVM code which is the input to the instruction selector. (llvm-svn: 13879)
  - Chris Lattner authored: few days. Apparently the old symbol table used to auto-rename collisions in the type symbol table and the new one does not. It doesn't really make sense for the new one to do so, so we just make the client do it. (llvm-svn: 13877)
  - Chris Lattner authored (llvm-svn: 13874)

- May 27, 2004
  - Brian Gaeke authored (llvm-svn: 13858)

- May 26, 2004
  - Chris Lattner authored (llvm-svn: 13790)

- May 25, 2004
  - Brian Gaeke authored (llvm-svn: 13773)
  - Reid Spencer authored (llvm-svn: 13754)
  - Reid Spencer authored (llvm-svn: 13748)

- May 23, 2004
  - Chris Lattner authored (llvm-svn: 13696)
  - Chris Lattner authored (llvm-svn: 13695)
  - Chris Lattner authored (llvm-svn: 13694)
  - Brian Gaeke authored (llvm-svn: 13643)

- May 20, 2004
  - Chris Lattner authored: fix the really bad code we're getting on PPC. (llvm-svn: 13609)
  - Brian Gaeke authored: a full 64-bit address, it must be adjusted to fit in the branch instruction's immediate field. (This is only used in the reoptimizer, for now.) (llvm-svn: 13608)

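The llvm-svn: 13608 entry describes checking whether a PC-relative target still fits in a branch's immediate field once a full 64-bit address is involved. A minimal sketch of that range check, assuming a SPARC-style 22-bit word-scaled displacement purely for illustration (the field width, the word scaling, and both function names are assumptions, not the reoptimizer's actual code):

```python
def fits_in_simm(value, bits):
    # True if value fits in a signed two's-complement field of the given width.
    return -(1 << (bits - 1)) <= value < (1 << (bits - 1))

def branch_displacement(pc, target, bits=22):
    # PC-relative displacement in words, as a SPARC-style branch would encode it.
    # Both the 22-bit default width and the >>2 word scaling are illustrative
    # assumptions; a real backend reads these from the instruction encoding.
    disp = (target - pc) >> 2
    if not fits_in_simm(disp, bits):
        raise ValueError("target out of range for this branch's immediate field")
    return disp
```

When the check fails, a backend typically falls back to a longer sequence (e.g. loading the address into a register and using an indirect jump).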
- May 19, 2004
  - Brian Gaeke authored: Fix a typo in a debug message. (llvm-svn: 13607)

- May 14, 2004
  - Brian Gaeke authored: MachineBasicBlocks instead. (llvm-svn: 13568)
  - Brian Gaeke authored: Get rid of separate numbering for LLVM BasicBlocks; use the automatically generated MachineBasicBlock numbering. (llvm-svn: 13567)
  - Brian Gaeke authored: LLVM BasicBlock operands. (llvm-svn: 13566)

- May 13, 2004
  - Chris Lattner authored: and passing a null pointer into a function. For this testcase:

        void %test(int** %X) {
          store int* null, int** %X
          call void %test(int** null)
          ret void
        }

    we now generate this:

        test:
              sub %ESP, 12
              mov %EAX, DWORD PTR [%ESP + 16]
              mov DWORD PTR [%EAX], 0
              mov DWORD PTR [%ESP], 0
              call test
              add %ESP, 12
              ret

    instead of this:

        test:
              sub %ESP, 12
              mov %EAX, DWORD PTR [%ESP + 16]
              mov %ECX, 0
              mov DWORD PTR [%EAX], %ECX
              mov %EAX, 0
              mov DWORD PTR [%ESP], %EAX
              call test
              add %ESP, 12
              ret

    (llvm-svn: 13558)

  - Chris Lattner authored: the alloca address into common operations like loads/stores. In a simple testcase like this (which is just designed to exercise the alloca A, nothing more):

        int %test(int %X, bool %C) {
          %A = alloca int
          store int %X, int* %A
          store int* %A, int** %G
          br bool %C, label %T, label %F
        T:
          call int %test(int 1, bool false)
          %V = load int* %A
          ret int %V
        F:
          call int %test(int 123, bool true)
          %V2 = load int* %A
          ret int %V2
        }

    we now generate (the *** lines changed):

        test:
              sub %ESP, 12
              mov %EAX, DWORD PTR [%ESP + 16]
              mov %CL, BYTE PTR [%ESP + 20]
        ***   mov DWORD PTR [%ESP + 8], %EAX
              mov %EAX, OFFSET G
              lea %EDX, DWORD PTR [%ESP + 8]
              mov DWORD PTR [%EAX], %EDX
              test %CL, %CL
              je .LBB2          # PC rel: F
        .LBB1:                  # T
              mov DWORD PTR [%ESP], 1
              mov DWORD PTR [%ESP + 4], 0
              call test
        ***   mov %EAX, DWORD PTR [%ESP + 8]
              add %ESP, 12
              ret
        .LBB2:                  # F
              mov DWORD PTR [%ESP], 123
              mov DWORD PTR [%ESP + 4], 1
              call test
        ***   mov %EAX, DWORD PTR [%ESP + 8]
              add %ESP, 12
              ret

    instead of:

        test:
              sub %ESP, 20
              mov %EAX, DWORD PTR [%ESP + 24]
              mov %CL, BYTE PTR [%ESP + 28]
        ***   lea %EDX, DWORD PTR [%ESP + 16]
        ***   mov DWORD PTR [%EDX], %EAX
              mov %EAX, OFFSET G
              mov DWORD PTR [%EAX], %EDX
              test %CL, %CL
        ***   mov DWORD PTR [%ESP + 12], %EDX
              je .LBB2          # PC rel: F
        .LBB1:                  # T
              mov DWORD PTR [%ESP], 1
              mov %EAX, 0
              mov DWORD PTR [%ESP + 4], %EAX
              call test
        ***   mov %EAX, DWORD PTR [%ESP + 12]
        ***   mov %EAX, DWORD PTR [%EAX]
              add %ESP, 20
              ret
        .LBB2:                  # F
              mov DWORD PTR [%ESP], 123
              mov %EAX, 1
              mov DWORD PTR [%ESP + 4], %EAX
              call test
        ***   mov %EAX, DWORD PTR [%ESP + 12]
        ***   mov %EAX, DWORD PTR [%EAX]
              add %ESP, 20
              ret

    (llvm-svn: 13557)

  - Chris Lattner authored: sized allocas in the entry block). Instead of generating code like this:

        entry:
          reg1024 = ESP+1234
        ... (much later)
          *reg1024 = 17

    generate code that looks like this:

        entry:
          (no code generated)
        ... (much later)
          t = ESP+1234
          *t = 17

    The advantage being that we DRAMATICALLY reduce the register pressure for these silly temporaries (they were all being spilled to the stack, resulting in very silly code). This is actually a manual implementation of rematerialization :) I have a patch to fold the alloca address computation into loads & stores, which will make this much better still, but just getting this right took way too much time and I'm sleepy. (llvm-svn: 13554)

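The rematerialization trick in the llvm-svn: 13554 entry (delete the single early definition of a cheap-to-compute frame address and recompute it right before each use, so there is no long live range to spill) can be sketched on a toy instruction list. This is an illustrative model only, not LLVM's actual code; the helper name and the string-based "IR" are assumptions:

```python
def rematerialize(instrs, temp, compute):
    # Drop the long-lived definition of `temp` and re-emit the (cheap)
    # computation immediately before each instruction that uses it,
    # shrinking the temporary's live range to a single instruction.
    defn = f"{temp} = {compute}"
    out = []
    for op in instrs:
        if op == defn:
            continue  # delete the entry-block definition
        if temp in op:
            out.append(defn)  # recompute at the use: nothing to keep live
        out.append(op)
    return out

before = [
    "reg1024 = ESP+1234",  # defined in the entry block...
    "call foo",            # (hypothetical intervening code)
    "call bar",
    "*reg1024 = 17",       # ...but only used much later
]
after = rematerialize(before, "reg1024", "ESP+1234")
```

The recomputation is profitable exactly because an ESP-relative address is a single cheap instruction, while keeping it live across calls forces a spill and reload.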
- May 12, 2004
  - Chris Lattner authored:

        mov DWORD PTR [%ESP + 4], 1

    instead of:

        mov %EAX, 1
        mov DWORD PTR [%ESP + 4], %EAX

    (llvm-svn: 13494)

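The llvm-svn: 13494 improvement is a classic fold-immediate-into-store peephole: an immediate loaded into a register and immediately stored can be stored directly. A minimal textual sketch of just that two-instruction pattern (the function name and the string matching are illustrative assumptions, not the real instruction selector):

```python
def fold_imm_stores(instrs):
    # Peephole: "mov %EAX, <imm>" followed by a store of %EAX becomes a
    # direct store of the immediate, freeing the register entirely.
    out, i = [], 0
    prefix = "mov %EAX, "
    while i < len(instrs):
        cur = instrs[i]
        if (i + 1 < len(instrs)
                and cur.startswith(prefix)
                and cur[len(prefix):].isdigit()        # operand is an immediate
                and instrs[i + 1].endswith(", %EAX")):  # next instr stores it
            imm = cur[len(prefix):]
            out.append(instrs[i + 1].replace("%EAX", imm))  # fold into store
            i += 2
        else:
            out.append(cur)
            i += 1
    return out
```

A real selector does this on its instruction DAG rather than on text, but the pattern matched is the same.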
- May 10, 2004
  - Chris Lattner authored: compiling things like 'add long %X, 1'. The problem is that we were switching the order of the operands for longs even though we can't fold them yet. (llvm-svn: 13451)
  - Chris Lattner authored (llvm-svn: 13440)
  - Chris Lattner authored (llvm-svn: 13439)

- May 09, 2004
  - Chris Lattner authored: syntactically loopify natural loops so that the GCC loop optimizer can find them. This should *dramatically* improve the performance of CBE-compiled code on targets that depend on GCC's loop optimizations (like PPC). (llvm-svn: 13438)