- Jan 18, 2005
-
Chris Lattner authored
llvm-svn: 19656
-
Chris Lattner authored
llvm-svn: 19651
-
- Jan 17, 2005
-
Chris Lattner authored
X86/reg-pressure.ll again, and allows us to do nice things in other cases.
For example, we now codegen this sort of thing:

    int %loadload(int *%X, int* %Y) {
        %Z = load int* %Y
        %Y = load int* %X      ;; load between %Z and store
        %Q = add int %Z, 1
        store int %Q, int* %Y
        ret int %Y
    }

Into this:

    loadload:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%EAX]
        mov %ECX, DWORD PTR [%ESP + 8]
        inc DWORD PTR [%ECX]
        ret

where we weren't able to form the 'inc [mem]' before.  This also lets the
instruction selector emit loads in any order it wants to, which can be good
for register pressure as well.

llvm-svn: 19644
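In C++ terms, the testcase above is roughly the following (a hypothetical
rendering for illustration, not from the commit; names mirror the IR):

    // The two loads are independent, so once the selector may emit loads
    // in any order, the read-modify-write of *Y folds into one inc [mem].
    int loadload(int *X, int *Y) {
        int Z = *Y;      // %Z = load int* %Y
        int A = *X;      // the load sitting between %Z and the store
        *Y = Z + 1;      // %Q = add int %Z, 1; store -> becomes inc [mem]
        return A;        // the value loaded from %X is what gets returned
    }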
-
Chris Lattner authored
llvm-svn: 19642
-
Chris Lattner authored
the basic block that uses them if possible.  This is a big win on X86, as it
lets us fold the argument loads into instructions and reduce register
pressure (by not loading all of the arguments in the entry block).

For this (contrived to show the optimization) testcase:

    int %argtest(int %A, int %B) {
        %X = sub int 12345, %A
        br label %L
    L:
        %Y = add int %X, %B
        ret int %Y
    }

we used to produce:

    argtest:
        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EAX, 12345
        sub %EAX, %ECX
        mov %EDX, DWORD PTR [%ESP + 8]
    .LBBargtest_1:      # L
        add %EAX, %EDX
        ret

now we produce:

    argtest:
        mov %EAX, 12345
        sub %EAX, DWORD PTR [%ESP + 4]
    .LBBargtest_1:      # L
        add %EAX, DWORD PTR [%ESP + 8]
        ret

This also fixes the FIXME in the code.

BTW, this occurs in real code.  164.gzip shrinks from 8623 to 8608 lines of
.s file.  The stack frame in huft_build shrinks from 1644->1628 bytes,
inflate_codes shrinks from 116->108 bytes, and inflate_block from
2620->2612, due to fewer spills.

Take that alkis. :-)

llvm-svn: 19639
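A hypothetical C++ rendering of the testcase (for illustration only; the
unconditional branch becomes fall-through):

    // %A is only used before the branch and %B only after it, so sinking
    // each argument load into the block that uses it avoids loading both
    // eagerly in the entry block.
    int argtest(int A, int B) {
        int X = 12345 - A;    // entry block: only A is needed here
    L:                        // br label %L falls through
        return X + B;         // block L: only B is needed here
    }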
-
Chris Lattner authored
llvm-svn: 19635
-
- Jan 16, 2005
-
Chris Lattner authored
llvm-svn: 19617
-
Chris Lattner authored
track of how to deal with it, and provide the target with a hook that they
can use to legalize arbitrary operations in arbitrary ways.  Implement
custom lowering for a couple of ops, implement promotion for select
operations (which x86 needs).

llvm-svn: 19613
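A minimal sketch of the shape of such a hook, with invented names (this
interface grew into LLVM's TargetLowering; none of the identifiers below
are from the commit itself):

    #include <cstdio>

    // Each (operation, value type) pair gets an action: Custom hands the
    // node back to the target, Promote widens it to a supported type.
    enum LegalizeAction { Legal, Promote, Expand, Custom };

    struct SketchTargetLowering {
        LegalizeAction Actions[64][8] = {};   // defaults to Legal

        void setAction(unsigned Op, unsigned VT, LegalizeAction A) {
            Actions[Op][VT] = A;
        }
        LegalizeAction getAction(unsigned Op, unsigned VT) const {
            return Actions[Op][VT];
        }
    };

    int main() {
        enum { OP_SELECT = 1 };   // illustrative opcode
        enum { VT_i8 = 0 };       // illustrative value type
        SketchTargetLowering TLI;
        TLI.setAction(OP_SELECT, VT_i8, Promote);  // what x86 wants
        std::printf("%d\n", TLI.getAction(OP_SELECT, VT_i8) == Promote);
    }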
-
Chris Lattner authored
llvm-svn: 19612
-
Chris Lattner authored
llvm-svn: 19611
-
Chris Lattner authored
llvm-svn: 19606
-
Chris Lattner authored
llvm-svn: 19597
-
Chris Lattner authored
llvm-svn: 19596
-
Chris Lattner authored
llvm-svn: 19595
-
Chris Lattner authored
llvm-svn: 19583
-
Chris Lattner authored
llvm-svn: 19582
-
Chris Lattner authored
llvm-svn: 19580
-
Chris Lattner authored
llvm-svn: 19579
-
Chris Lattner authored
llvm-svn: 19578
-
Chris Lattner authored
llvm-svn: 19577
-
- Jan 15, 2005
-
Chris Lattner authored
basically everything.

llvm-svn: 19576
-
Chris Lattner authored
llvm-svn: 19575
-
Chris Lattner authored
ZERO_EXTEND_INREG for targets that don't support them.

llvm-svn: 19573
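A sketch of the standard expansion idiom in C++ (my paraphrase, not the
legalizer code): sign-extend-in-reg of the low byte of a 32-bit value is a
shift-left/arithmetic-shift-right pair, and zero-extend-in-reg is a mask:

    #include <cstdint>

    // Expanding the in-register extensions on a target with no native
    // support: shl+sra for sign, and-mask for zero.
    int32_t expand_sext_inreg_i8(int32_t x) {
        return (int32_t)((uint32_t)x << 24) >> 24;   // shl 24; sar 24
    }
    uint32_t expand_zext_inreg_i8(uint32_t x) {
        return x & 0xFFu;                            // and 0xFF
    }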
-
Chris Lattner authored
llvm-svn: 19572
-
Chris Lattner authored
Add support for new SIGN_EXTEND_INREG, ZERO_EXTEND_INREG, and
FP_ROUND_INREG operators.

Realize that if we do any promotions, we need to iterate SelectionDAG
construction.

llvm-svn: 19569
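What the new operators compute, sketched as C++ identities (operator names
are from the commit message; the bodies are my paraphrase): each keeps its
operand in the wide type but makes it behave as if it had been truncated to
a narrower type and extended back:

    #include <cstdint>

    // SIGN_EXTEND_INREG on the low 8 bits of an i32
    int32_t sign_extend_inreg_i8(int32_t x) { return (int8_t)x; }
    // ZERO_EXTEND_INREG on the low 8 bits of an i32
    uint32_t zero_extend_inreg_i8(uint32_t x) { return (uint8_t)x; }
    // FP_ROUND_INREG: round to float precision, staying in a double
    double fp_round_inreg_f32(double x) { return (float)x; }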
-
Chris Lattner authored
llvm-svn: 19568
-
Chris Lattner authored
llvm-svn: 19565
-
- Jan 14, 2005
-
Chris Lattner authored
stores/loads.

llvm-svn: 19562
-
Chris Lattner authored
llvm-svn: 19559
-
- Jan 13, 2005
-
Chris Lattner authored
llvm-svn: 19535
-
Chris Lattner authored
llvm-svn: 19531
-
Chris Lattner authored
llvm-svn: 19528
-
Chris Lattner authored
llvm-svn: 19527
-
Chris Lattner authored
llvm-svn: 19526
-
- Jan 12, 2005
-
Chris Lattner authored
This fixes llvm-test/SingleSource/Regression/C/casts.c

llvm-svn: 19519
-
Chris Lattner authored
llvm-svn: 19517
-
Chris Lattner authored
    movsbl 4(%esp), %eax
    movl %eax, %edx
    sarl $7, %edx

Now we generate:

    movsbl 4(%esp), %eax
    movl %eax, %edx
    sarl $31, %edx

Which is right.

llvm-svn: 19515
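The bug in miniature, as a hypothetical C++ reduction (not the original
testcase): splatting the sign of a value that has been sign-extended into a
32-bit register requires shifting by 31, the register width minus one, not
by 7, the sign-bit position within the original byte:

    #include <cstdint>
    #include <cstdio>

    int32_t sign_splat(int8_t c) {
        int32_t x = c;       // movsbl: sign-extend the byte into a register
        return x >> 31;      // sarl $31: yields 0 for c >= 0, -1 for c < 0
    }

    int main() {
        std::printf("%d %d\n", sign_splat(-5), sign_splat(5));   // -1 0
    }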
-
Reid Spencer authored
llvm-svn: 19512
-
Chris Lattner authored
llvm-svn: 19498
-
- Jan 11, 2005
-
Chris Lattner authored
llvm-svn: 19485
-