- Apr 12, 2004
-
Chris Lattner authored
llvm-svn: 12849
-
Chris Lattner authored
llvm-svn: 12848
-
Chris Lattner authored
If the source of the cast is a load, we can just use the source memory location, without having to create a temporary stack slot entry.

Before we code generated this:

    double %int(int* %P) {
      %V = load int* %P
      %V2 = cast int %V to double
      ret double %V2
    }

into:

    int:
      sub %ESP, 4
      mov %EAX, DWORD PTR [%ESP + 8]
      mov %EAX, DWORD PTR [%EAX]
      mov DWORD PTR [%ESP], %EAX
      fild DWORD PTR [%ESP]
      add %ESP, 4
      ret

Now we produce this:

    int:
      mov %EAX, DWORD PTR [%ESP + 4]
      fild DWORD PTR [%EAX]
      ret

... which is nicer.

llvm-svn: 12846
-
Chris Lattner authored
test/Regression/CodeGen/X86/fp_load_fold.llx

llvm-svn: 12844
-
- Apr 11, 2004
-
Chris Lattner authored
llvm-svn: 12842
-
Chris Lattner authored
for mul and div. Instead of generating this:

    test_divr:
      fld QWORD PTR [%ESP + 4]
      fld QWORD PTR [.CPItest_divr_0]
      fdivrp %ST(1)
      ret

We now generate this:

    test_divr:
      fld QWORD PTR [%ESP + 4]
      fdivr QWORD PTR [.CPItest_divr_0]
      ret

This code desperately needs refactoring, which will come in the next patch.

llvm-svn: 12841
-
Chris Lattner authored
instructions use. This doesn't change any functionality except that long constant expressions of these operations will now magically start working.

llvm-svn: 12840
-
Chris Lattner authored
    fld QWORD PTR [%ESP + 4]
    fadd QWORD PTR [.CPItest_add_0]

instead of:

    fld QWORD PTR [%ESP + 4]
    fld QWORD PTR [.CPItest_add_0]
    faddp %ST(1)

I also intend to do this for mul & div, but it appears that I have to refactor a bit of code before I can do so. This is tested by:
test/Regression/CodeGen/X86/fp_constant_op.llx

llvm-svn: 12839
-
Chris Lattner authored
llvm-svn: 12838
-
Chris Lattner authored
llvm-svn: 12836
-
Chris Lattner authored
1. If an incoming argument is dead, don't load it from the stack.
2. Do not code gen noop copies at all (ie, cast int -> uint), not even to a move.

This should reduce register pressure for allocators that are unable to coalesce away these copies in some cases.

llvm-svn: 12835
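Point 2 works because `cast int -> uint` merely reinterprets the same bits; no machine instruction is needed at all. A small Python sketch (illustrative only, using `struct` to model a 32-bit little-endian machine word; not part of the commit):

```python
import struct

def reinterpret_int_as_uint(x):
    """Reinterpret a signed 32-bit value's bits as unsigned: a no-op copy."""
    return struct.unpack('<I', struct.pack('<i', x))[0]

# Same bit pattern, different interpretation: no data movement is required,
# which is why the code generator can skip emitting even a register move.
assert reinterpret_int_as_uint(-1) == 0xFFFFFFFF
assert reinterpret_int_as_uint(5) == 5
```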
-
Chris Lattner authored
llvm-svn: 12833
-
Chris Lattner authored
llvm-svn: 12831
-
Chris Lattner authored
llvm-svn: 12826
-
Chris Lattner authored
llvm-svn: 12825
-
Chris Lattner authored
llvm-svn: 12824
-
Chris Lattner authored
llvm-svn: 12821
-
Chris Lattner authored
Canonicalize add of sign bit constant into a xor

llvm-svn: 12819
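This canonicalization relies on a two's-complement identity: adding the sign-bit constant can only flip the top bit, because any carry out of bit 31 is discarded, so the add and the xor are bit-for-bit identical. A Python sketch of the equivalence (assuming 32-bit wrap-around arithmetic; not the InstCombine code itself):

```python
SIGN_BIT = 0x80000000
MASK = 0xFFFFFFFF  # model 32-bit wrap-around arithmetic

def add_sign_bit(x):
    return (x + SIGN_BIT) & MASK

def xor_sign_bit(x):
    return x ^ SIGN_BIT

# Any carry out of bit 31 is discarded, so add == xor for the sign bit.
for x in range(0, 1 << 32, 0x10001):
    assert add_sign_bit(x) == xor_sign_bit(x)
```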
-
- Apr 10, 2004
-
Chris Lattner authored
and a bit more powerful

llvm-svn: 12817
-
Chris Lattner authored
llvm-svn: 12816
-
Chris Lattner authored
llvm-svn: 12815
-
Chris Lattner authored
llvm-svn: 12814
-
Chris Lattner authored
llvm-svn: 12813
-
Chris Lattner authored
llvm-svn: 12811
-
Chris Lattner authored
llvm-svn: 12810
-
Chris Lattner authored
don't write to memory

llvm-svn: 12808
-
Chris Lattner authored
call and invoke instructions that are known to not write to memory.

llvm-svn: 12807
-
Chris Lattner authored
This transforms code like this:

    %C = or %A, %B
    %D = select %cond, %C, %A

into:

    %C = select %cond, %B, 0
    %D = or %A, %C

Since B is often a constant, the select can often be eliminated. In any case, this reduces the usage count of A, allowing subsequent optimizations to happen.

This xform applies when the operator is any of: add, sub, mul, or, xor, and, shl, shr

llvm-svn: 12800
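The identity behind this xform can be checked directly: selecting the identity element of the operator (0 for `or`) on the false arm makes the unconditional `or` a no-op there. A short Python sketch of the equivalence (illustrative only, not the C++ InstCombine implementation):

```python
def before(cond, a, b):
    c = a | b
    return c if cond else a      # %D = select %cond, %C, %A

def after(cond, a, b):
    c = b if cond else 0         # %C = select %cond, %B, 0
    return a | c                 # %D = or %A, %C

# Exhaustively check a few operand patterns on both arms of the select.
for cond in (True, False):
    for a in (0, 1, 0b1010, 0xFF):
        for b in (0, 1, 0b0101, 0x0F):
            assert before(cond, a, b) == after(cond, a, b)
```

The same check works for the other listed operators by substituting their identity element (0 for add, sub, xor, shl, shr; 1 for mul).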
-
Chris Lattner authored
if (C) V1 |= V2;

into:

    Vx = V1 | V2;
    V1 = select C, Vx, V1

when the expression can be evaluated unconditionally and is *cheap* to execute. This limited form of if conversion is quite handy in lots of cases. For example, it turns this testcase into straight-line code:

    int in0;  int in1;  int in2;  int in3;
    int in4;  int in5;  int in6;  int in7;
    int in8;  int in9;  int in10; int in11;
    int in12; int in13; int in14; int in15;
    long output;

    void mux(void) {
      output = (in0  ? 0x00000001 : 0) |
               (in1  ? 0x00000002 : 0) |
               (in2  ? 0x00000004 : 0) |
               (in3  ? 0x00000008 : 0) |
               (in4  ? 0x00000010 : 0) |
               (in5  ? 0x00000020 : 0) |
               (in6  ? 0x00000040 : 0) |
               (in7  ? 0x00000080 : 0) |
               (in8  ? 0x00000100 : 0) |
               (in9  ? 0x00000200 : 0) |
               (in10 ? 0x00000400 : 0) |
               (in11 ? 0x00000800 : 0) |
               (in12 ? 0x00001000 : 0) |
               (in13 ? 0x00002000 : 0) |
               (in14 ? 0x00004000 : 0) |
               (in15 ? 0x00008000 : 0);
    }

llvm-svn: 12798
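This form of if conversion is sound because `|` has no side effects: computing `V1 | V2` unconditionally and then selecting is observably identical to the branchy form. A Python sketch of the equivalence (hypothetical function names, illustrative only):

```python
def branchy(c, v1, v2):
    if c:
        v1 |= v2
    return v1

def if_converted(c, v1, v2):
    vx = v1 | v2            # evaluated unconditionally: cheap, no side effects
    return vx if c else v1  # the select picks the OR'd value only when C holds

# The two forms agree for every combination of condition and operands.
for c in (True, False):
    for v1 in (0, 3, 0xF0):
        for v2 in (0, 5, 0x0F):
            assert branchy(c, v1, v2) == if_converted(c, v1, v2)
```

The "cheap and side-effect free" restriction matters: speculating an expression that could trap or write memory would change program behavior on the not-taken path.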
-
- Apr 09, 2004
-
John Criswell authored
is listed first and the address is listed second.

llvm-svn: 12795
-
Chris Lattner authored
that have a constant operand. This implements add.ll:test19, shift.ll:test15*, and others that are not tested.

llvm-svn: 12794
-
Chris Lattner authored
llvm-svn: 12793
-
Alkis Evlogimenos authored
llvm-svn: 12791
-
John Criswell authored
llvm-svn: 12787
-
John Criswell authored
InstSelectSimple.cpp:
Change the checks for proper I/O port address size into an exit() instead of an assertion. Assertions aren't used in Release builds, and handling this error should be graceful (not that this counts as graceful, but it's more graceful). Modified the generation of the IN/OUT instructions to have 0 arguments.

X86InstrInfo.td:
Added the OpSize attribute to the 16 bit IN and OUT instructions.

llvm-svn: 12786
-
- Apr 08, 2004
-
Chris Lattner authored
llvm-svn: 12784
-
John Criswell authored
I/O port instructions on x86. The specific code sequence is tailored to the parameters and return value of the intrinsic call.

Added the ability for implicit definitions to be printed in the Instruction Printer.

Added the ability for RawFrm instructions to print implicit uses and definitions with correct comma output. This required adjustment to some methods so that a leading comma would or would not be printed.

llvm-svn: 12782
-
John Criswell authored
The Verifier ensures that their parameters are of integral types and have the correct sign, but it does not enforce any size restrictions because such restrictions are platform dependent.

llvm-svn: 12781
-
Chris Lattner authored
llvm-svn: 12779
-
Chris Lattner authored
Now we collect all of the call sites we are interested in inlining, then inline them. This entirely avoids issues with trying to inline a call site we got by inlining another call site. This also eliminates iterator invalidation issues.

llvm-svn: 12770
-