- Apr 12, 2004
-
-
Chris Lattner authored
llvm-svn: 12849
-
Chris Lattner authored
llvm-svn: 12848
-
Chris Lattner authored
generator bug if multiple loops are extracted from a function. llvm-svn: 12847
-
Chris Lattner authored
If the source of the cast is a load, we can just use the source memory location, without having to create a temporary stack slot entry. Before we code generated this:

    double %int(int* %P) {
        %V = load int* %P
        %V2 = cast int %V to double
        ret double %V2
    }

into:

    int:
        sub %ESP, 4
        mov %EAX, DWORD PTR [%ESP + 8]
        mov %EAX, DWORD PTR [%EAX]
        mov DWORD PTR [%ESP], %EAX
        fild DWORD PTR [%ESP]
        add %ESP, 4
        ret

Now we produce this:

    int:
        mov %EAX, DWORD PTR [%ESP + 4]
        fild DWORD PTR [%EAX]
        ret

... which is nicer. llvm-svn: 12846
-
Chris Lattner authored
llvm-svn: 12845
-
Chris Lattner authored
test/Regression/CodeGen/X86/fp_load_fold.llx llvm-svn: 12844
-
Chris Lattner authored
llvm-svn: 12843
-
- Apr 11, 2004
-
-
Chris Lattner authored
llvm-svn: 12842
-
Chris Lattner authored
for mul and div. Instead of generating this:

    test_divr:
        fld QWORD PTR [%ESP + 4]
        fld QWORD PTR [.CPItest_divr_0]
        fdivrp %ST(1)
        ret

We now generate this:

    test_divr:
        fld QWORD PTR [%ESP + 4]
        fdivr QWORD PTR [.CPItest_divr_0]
        ret

This code desperately needs refactoring, which will come in the next patch. llvm-svn: 12841
-
Chris Lattner authored
instructions use. This doesn't change any functionality except that long constant expressions of these operations will now magically start working. llvm-svn: 12840
-
Chris Lattner authored
    fld QWORD PTR [%ESP + 4]
    fadd QWORD PTR [.CPItest_add_0]

instead of:

    fld QWORD PTR [%ESP + 4]
    fld QWORD PTR [.CPItest_add_0]
    faddp %ST(1)

I also intend to do this for mul & div, but it appears that I have to refactor a bit of code before I can do so. This is tested by: test/Regression/CodeGen/X86/fp_constant_op.llx llvm-svn: 12839
-
Chris Lattner authored
llvm-svn: 12838
-
Chris Lattner authored
llvm-svn: 12837
-
Chris Lattner authored
llvm-svn: 12836
-
Chris Lattner authored
1. If an incoming argument is dead, don't load it from the stack.
2. Do not code gen noop copies at all (i.e., cast int -> uint), not even to a move. This should reduce register pressure for allocators that are unable to coalesce away these copies in some cases.

llvm-svn: 12835
-
Chris Lattner authored
llvm-svn: 12834
-
Chris Lattner authored
llvm-svn: 12833
-
Chris Lattner authored
llvm-svn: 12832
-
Chris Lattner authored
llvm-svn: 12831
-
Chris Lattner authored
llvm-svn: 12830
-
Chris Lattner authored
llvm-svn: 12829
-
Chris Lattner authored
llvm-svn: 12826
-
Chris Lattner authored
llvm-svn: 12825
-
Chris Lattner authored
llvm-svn: 12824
-
Chris Lattner authored
llvm-svn: 12823
-
Chris Lattner authored
llvm-svn: 12822
-
Chris Lattner authored
llvm-svn: 12821
-
Chris Lattner authored
llvm-svn: 12820
-
Chris Lattner authored
Canonicalize add of sign bit constant into a xor llvm-svn: 12819
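As an aside (not part of the commit itself): this canonicalization is sound because, in fixed-width two's-complement arithmetic, adding the sign-bit constant and xoring it produce the same result. The addition can only flip the top bit, and any carry out of that bit is discarded by the wraparound. A minimal sketch in Python, using 32-bit masking to model the wraparound:

```python
SIGN = 0x80000000          # the 32-bit sign-bit constant
MASK = 0xFFFFFFFF          # truncate to 32 bits, mimicking hardware overflow

def add_sign_bit(x):
    # add the sign bit, keeping only the low 32 bits
    return (x + SIGN) & MASK

def xor_sign_bit(x):
    # flip the sign bit directly
    return (x ^ SIGN) & MASK

# the carry out of the top bit is discarded, so the two agree on every input
for x in (0, 1, 0x7FFFFFFF, SIGN, 0xDEADBEEF, MASK):
    assert add_sign_bit(x) == xor_sign_bit(x)
```

Rewriting the add as an xor exposes the operation to bitwise simplifications that an add would hide.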
-
Chris Lattner authored
llvm-svn: 12818
-
- Apr 10, 2004
-
-
Chris Lattner authored
and a bit more powerful llvm-svn: 12817
-
Chris Lattner authored
llvm-svn: 12816
-
Chris Lattner authored
llvm-svn: 12815
-
Chris Lattner authored
llvm-svn: 12814
-
Chris Lattner authored
llvm-svn: 12813
-
Chris Lattner authored
llvm-svn: 12811
-
Chris Lattner authored
llvm-svn: 12810
-
Chris Lattner authored
llvm-svn: 12809
-
Chris Lattner authored
don't write to memory llvm-svn: 12808
-
Chris Lattner authored
call and invoke instructions that are known to not write to memory. llvm-svn: 12807
-