- May 23, 2004

Chris Lattner authored
llvm-svn: 13696

Chris Lattner authored
llvm-svn: 13695

Chris Lattner authored
llvm-svn: 13694

Brian Gaeke authored
llvm-svn: 13643

- May 20, 2004

Chris Lattner authored
fix the really bad code we're getting on PPC.
llvm-svn: 13609

Brian Gaeke authored
a full 64-bit address, it must be adjusted to fit in the branch instruction's immediate field. (This is only used in the reoptimizer, for now.)
llvm-svn: 13608

- May 19, 2004

Brian Gaeke authored
Fix a typo in a debug message.
llvm-svn: 13607

- May 14, 2004

Brian Gaeke authored
MachineBasicBlocks instead.
llvm-svn: 13568

Brian Gaeke authored
Get rid of separate numbering for LLVM BasicBlocks; use the automatically generated MachineBasicBlock numbering.
llvm-svn: 13567

Brian Gaeke authored
LLVM BasicBlock operands.
llvm-svn: 13566

- May 13, 2004

Chris Lattner authored
and passing a null pointer into a function. For this testcase:

  void %test(int** %X) {
    store int* null, int** %X
    call void %test(int** null)
    ret void
  }

we now generate this:

  test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov DWORD PTR [%EAX], 0
        mov DWORD PTR [%ESP], 0
        call test
        add %ESP, 12
        ret

instead of this:

  test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %ECX, 0
        mov DWORD PTR [%EAX], %ECX
        mov %EAX, 0
        mov DWORD PTR [%ESP], %EAX
        call test
        add %ESP, 12
        ret

llvm-svn: 13558

Chris Lattner authored
the alloca address into common operations like loads/stores. In a simple testcase like this (which is just designed to exercise the alloca A, nothing more):

  int %test(int %X, bool %C) {
    %A = alloca int
    store int %X, int* %A
    store int* %A, int** %G
    br bool %C, label %T, label %F
  T:
    call int %test(int 1, bool false)
    %V = load int* %A
    ret int %V
  F:
    call int %test(int 123, bool true)
    %V2 = load int* %A
    ret int %V2
  }

We now generate:

  test:
        sub %ESP, 12
        mov %EAX, DWORD PTR [%ESP + 16]
        mov %CL, BYTE PTR [%ESP + 20]
  ***   mov DWORD PTR [%ESP + 8], %EAX
        mov %EAX, OFFSET G
        lea %EDX, DWORD PTR [%ESP + 8]
        mov DWORD PTR [%EAX], %EDX
        test %CL, %CL
        je .LBB2 # PC rel: F
  .LBB1: # T
        mov DWORD PTR [%ESP], 1
        mov DWORD PTR [%ESP + 4], 0
        call test
  ***   mov %EAX, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret
  .LBB2: # F
        mov DWORD PTR [%ESP], 123
        mov DWORD PTR [%ESP + 4], 1
        call test
  ***   mov %EAX, DWORD PTR [%ESP + 8]
        add %ESP, 12
        ret

Instead of:

  test:
        sub %ESP, 20
        mov %EAX, DWORD PTR [%ESP + 24]
        mov %CL, BYTE PTR [%ESP + 28]
  ***   lea %EDX, DWORD PTR [%ESP + 16]
  ***   mov DWORD PTR [%EDX], %EAX
        mov %EAX, OFFSET G
        mov DWORD PTR [%EAX], %EDX
        test %CL, %CL
  ***   mov DWORD PTR [%ESP + 12], %EDX
        je .LBB2 # PC rel: F
  .LBB1: # T
        mov DWORD PTR [%ESP], 1
        mov %EAX, 0
        mov DWORD PTR [%ESP + 4], %EAX
        call test
  ***   mov %EAX, DWORD PTR [%ESP + 12]
  ***   mov %EAX, DWORD PTR [%EAX]
        add %ESP, 20
        ret
  .LBB2: # F
        mov DWORD PTR [%ESP], 123
        mov %EAX, 1
        mov DWORD PTR [%ESP + 4], %EAX
        call test
  ***   mov %EAX, DWORD PTR [%ESP + 12]
  ***   mov %EAX, DWORD PTR [%EAX]
        add %ESP, 20
        ret

llvm-svn: 13557

Chris Lattner authored
sized allocas in the entry block). Instead of generating code like this:

  entry:
    reg1024 = ESP+1234
  ... (much later)
    *reg1024 = 17

Generate code that looks like this:

  entry: (no code generated)
  ... (much later)
    t = ESP+1234
    *t = 17

The advantage being that we DRAMATICALLY reduce the register pressure for these silly temporaries (they were all being spilled to the stack, resulting in very silly code). This is actually a manual implementation of rematerialization :)

I have a patch to fold the alloca address computation into loads & stores, which will make this much better still, but just getting this right took way too much time and I'm sleepy.

llvm-svn: 13554
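The trade-off is easy to see at the source level. A minimal C sketch of the idea (the buffer and offset are hypothetical stand-ins for the stack frame; the real transformation happens on machine code, not C):

  /* Rematerialization: recompute a cheap value at each use instead of
   * keeping it live (or spilled) across the whole function. */
  static char frame[4096];

  void keep_live(void) {
      char *reg1024 = frame + 1234;  /* computed once in the entry block */
      /* ... much later: reg1024 had to survive all the way to here */
      *reg1024 = 17;
  }

  void rematerialize(void) {
      /* ... much later: recompute the address right at the use, so
       * nothing stays live in a register in between */
      *(frame + 1234) = 17;
  }
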
- May 12, 2004

Chris Lattner authored
  mov DWORD PTR [%ESP + 4], 1

instead of:

  mov %EAX, 1
  mov DWORD PTR [%ESP + 4], %EAX

llvm-svn: 13494

- May 10, 2004

Chris Lattner authored
compiling things like 'add long %X, 1'. The problem is that we were switching the order of the operands for longs even though we can't fold them yet.
llvm-svn: 13451

Chris Lattner authored
llvm-svn: 13440

Chris Lattner authored
llvm-svn: 13439

- May 09, 2004

Chris Lattner authored
syntactically loopify natural loops so that the GCC loop optimizer can find them. This should *dramatically* improve the performance of CBE-compiled code on targets that depend on GCC's loop optimizations (like PPC).
llvm-svn: 13438
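The structural change, sketched in C (schematic output shapes, not the CBE's literal text): GCC's loop optimizer recognizes syntactic loops, not arbitrary goto cycles.

  extern void do_work(int);

  /* Flat, goto-based output: the cycle is invisible to GCC's
   * loop optimizer. */
  void flat(int n) {
      int i = 0;
  l_header:
      if (i >= n) goto l_exit;
      do_work(i);
      i++;
      goto l_header;
  l_exit:;
  }

  /* Syntactically loopified output: the same CFG expressed as a
   * loop statement GCC can optimize. */
  void loopified(int n) {
      int i = 0;
      while (1) {
          if (i >= n) break;
          do_work(i);
          i++;
      }
  }
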
Chris Lattner authored
llvm-svn: 13437

Chris Lattner authored
FindUsedTypes manipulation stuff out to be a separate pass, and make the main CWriter be a function pass now!
llvm-svn: 13435

Chris Lattner authored
llvm-svn: 13433

Chris Lattner authored
Print all PHI copies for successor blocks before the terminator, whether it be a conditional branch or switch.
llvm-svn: 13430

- May 08, 2004

Tanya Lattner authored
llvm-svn: 13425

Brian Gaeke authored
Flesh out the SetCC support... which currently ends in a little bit of unfinished code (which is probably completely hilarious) for generating the condition value, splitting the basic block up into 4 blocks, like this (clearly a better API is needed for this!):

          BB
      cond. branch
       /        \
    R1=1        R2=0
       \        /
      R=phi(R1,R2)

Other minor edits.
llvm-svn: 13423
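In source terms, the diamond computes a boolean through two single-assignment arms. A C sketch of the shape being built (illustrative only; the commit constructs this directly out of basic blocks):

  int setcc_lt(int a, int b) {
      int r;          /* becomes R = phi(R1, R2) at the join block */
      if (a < b)
          r = 1;      /* the R1 = 1 arm */
      else
          r = 0;      /* the R2 = 0 arm */
      return r;
  }
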
Brian Gaeke authored
llvm-svn: 13422

Brian Gaeke authored
llvm-svn: 13421

Brian Gaeke authored
llvm-svn: 13420

Brian Gaeke authored
llvm-svn: 13419

Brian Gaeke authored
Add support for branches (based loosely on X86/InstSelectSimple). Add support for not visiting phi nodes in the first pass. Add support for loading bools. Flesh out support for stores.
llvm-svn: 13418

- May 07, 2004

Brian Gaeke authored
Disable the code that copies long constants to registers - it looks fishy. Implement some simple casts: integral, smaller than longs, and equal-width or narrowing only.
llvm-svn: 13413

Chris Lattner authored
allows us to compile:

  store float 10.0, float* %P

into:

  mov DWORD PTR [%EAX], 1092616192

instead of:

  .CPItest_0:             # float 0x4024000000000000
        .long 1092616192  # float 10
  ...
        fld DWORD PTR [.CPItest_0]
        fstp DWORD PTR [%EAX]

llvm-svn: 13409
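The magic immediate is just the IEEE-754 single-precision bit pattern of 10.0. A standalone C check of the arithmetic (not part of the commit):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main(void) {
      float f = 10.0f;
      uint32_t bits;
      memcpy(&bits, &f, sizeof bits);       /* reinterpret the raw bits */
      printf("%u 0x%08X\n", bits, bits);    /* 1092616192 0x41200000 */
      return 0;
  }
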
Chris Lattner authored
against zero. In particular, don't emit:

  mov %ESI, 0
  cmp %ECX, %ESI

instead, emit:

  test %ECX, %ECX

llvm-svn: 13407
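This works because "test %ECX, %ECX" ANDs the register with itself, and x & x == x, so the resulting flags match a compare against zero with no scratch register or immediate. A throwaway C check of that identity (not from the commit):

  #include <assert.h>

  int main(void) {
      for (int x = -100; x <= 100; x++)
          assert(((x & x) == 0) == (x == 0));   /* test r,r <=> cmp r,0 */
      return 0;
  }
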
- May 04, 2004

Brian Gaeke authored
llvm-svn: 13362

Brian Gaeke authored
constant pool member's name. This is intended to address Bug 333. Also, fix an anachronistic usage of "M" as a parameter of type Function *.
llvm-svn: 13357

Chris Lattner authored
llvm-svn: 13355

Chris Lattner authored
div:
        mov %EDX, DWORD PTR [%ESP + 4]
        mov %ECX, 64
        mov %EAX, %EDX
        sar %EDX, 31
        idiv %ECX
        ret

to this:

div:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        sar %ECX, 5
        shr %ECX, 26
        mov %EDX, %EAX
        add %EDX, %ECX
        sar %EAX, 6
        ret

Note that the intel compiler is currently making this:

div:
        movl 4(%esp), %edx      #3.5
        movl %edx, %eax         #4.14
        sarl $5, %eax           #4.14
        shrl $26, %eax          #4.14
        addl %edx, %eax         #4.14
        sarl $6, %eax           #4.14
        ret                     #4.14

Which has one less register->register copy. (hint hint alkis :)

llvm-svn: 13354
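The trick: an arithmetic right shift alone rounds toward negative infinity, so a bias of divisor-1 (= 63) is first extracted from the sign bits; idiv truncates toward zero. A standalone C check mirroring the shift amounts above (assumes the usual arithmetic shift for negative signed ints, as on x86; not part of the commit):

  #include <assert.h>

  int div64(int x) {
      /* the top six bits of (x >> 5) all equal the sign bit of x,
       * so bias is 63 when x < 0 and 0 otherwise */
      int bias = (int)((unsigned)(x >> 5) >> 26);
      return (x + bias) >> 6;   /* now truncates toward zero, like idiv */
  }

  int main(void) {
      for (int x = -1000; x <= 1000; x++)
          assert(div64(x) == x / 64);
      return 0;
  }
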
Chris Lattner authored
llvm-svn: 13342

- May 01, 2004

Chris Lattner authored
llvm-svn: 13304

Chris Lattner authored
Look at all of the pretty minuses. :)
llvm-svn: 13303

Chris Lattner authored
llvm-svn: 13297