- Jan 27, 2009
-
-
Evan Cheng authored
llvm-svn: 63090
-
Dan Gohman authored
llvm-svn: 63088
-
Dan Gohman authored
Don't use the Red Zone when dynamic stack realignment is needed. This could be implemented, but most x86-64 ABIs don't require dynamic stack realignment so it isn't urgent. llvm-svn: 63074
-
- Jan 26, 2009
-
-
Dan Gohman authored
disabled by default; I'll enable it when I hook it up with the llvm-gcc flag which controls it. llvm-svn: 63056
-
Evan Cheng authored
Enhance the logic in X86DAGToDAGISel::PreprocessForRMW that moves a load inside callseq_start so it can be folded into a call. It was not considering the case where a token factor sits between the load and the callseq_start. llvm-svn: 63022
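For illustration only, a minimal sketch of the kind of chain-walking the fix implies; the helper name and structure are assumptions, not the actual PreprocessForRMW code:

    // Hypothetical helper: find a plain load feeding a CALLSEQ_START's chain,
    // looking through a single intervening TokenFactor (the newly handled case).
    static SDValue findFoldableLoad(SDValue Chain) {
      if (ISD::isNON_EXTLoad(Chain.getNode()))
        return Chain;                              // load directly on the chain
      if (Chain.getOpcode() == ISD::TokenFactor)   // look through a token factor
        for (unsigned i = 0, e = Chain.getNumOperands(); i != e; ++i)
          if (ISD::isNON_EXTLoad(Chain.getOperand(i).getNode()))
            return Chain.getOperand(i);
      return SDValue();                            // nothing to fold
    }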
-
Dan Gohman authored
tidy up SDUse and related code.
- Replace the operator= member functions with a set method, like LLVM Use has, and variants setInitial and setNode, which take care of updating use lists, as LLVM Use's do. This simplifies code that calls these functions.
- getSDValue() is renamed to get(), as in LLVM Use, though most places can either use the implicit conversion to SDValue or the convenience functions instead.
- Fix some more node vs. value terminology issues.
Also, eliminate the one remaining use of SDOperandPtr, and SDOperandPtr itself. llvm-svn: 62995
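As a rough sketch of the resulting interface (the member names follow the commit text; the field layout and bodies here are illustrative assumptions, not the actual SDUse definition):

    class SDUse {
      SDValue Val;    // the value (node + result number) being used
      SDNode *User;   // the node that owns this use
    public:
      const SDValue &get() const { return Val; }        // formerly getSDValue()
      operator const SDValue &() const { return Val; }  // implicit conversion
      void set(const SDValue &V);        // re-point at V, updating use lists
      void setInitial(const SDValue &V); // first assignment; no prior use to drop
      void setNode(SDNode *N);           // swap only the node being used
      // operator= is gone; callers use set()/setInitial()/setNode() instead.
    };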
-
Nate Begeman authored
llvm-svn: 62988
-
Nate Begeman authored
other x86 segments. Address space 0 is the stack/default; address spaces 1-255 are reserved for client use. llvm-svn: 62980
-
Nate Begeman authored
llvm-svn: 62979
-
- Jan 25, 2009
-
-
Torok Edwin authored
llvm-svn: 62973
-
Torok Edwin authored
for example in the case of va-args. XFAIL associated tests. llvm-svn: 62972
-
Torok Edwin authored
llvm-svn: 62967
-
- Jan 24, 2009
-
-
Nate Begeman authored
llvm-svn: 62940
-
- Jan 23, 2009
-
-
Chris Lattner authored
llvm-svn: 62887
-
- Jan 22, 2009
-
-
Bob Wilson authored
corresponding to the "not" and "vnot" PatFrags. Use the new method in some places where it seems appropriate. llvm-svn: 62768
-
Evan Cheng authored
Eliminate a couple of fields from TargetRegisterClass: SubRegClasses and SuperRegClasses. These are not necessary. Also eliminate getSubRegisterRegClass and getSuperRegisterRegClass. These are slow and their results can change if register file names change. Just use TargetLowering::getRegClassFor() to get the right TargetRegisterClass instead. llvm-svn: 62762
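A hedged usage sketch of the replacement API (the value type chosen here is arbitrary, and TLI is assumed to be a TargetLowering reference already in scope; this is not code from the commit):

    // Instead of walking sub/super register class tables, ask the lowering
    // object which register class is used for values of a given type.
    const TargetRegisterClass *RC = TLI.getRegClassFor(MVT::v2i64);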
-
Dan Gohman authored
to be supported in the JIT. llvm-svn: 62730
-
- Jan 21, 2009
-
-
Evan Cheng authored
llvm-svn: 62710
-
Dan Gohman authored
we want to clear %ah to zero before a division, just use a zero-extending mov to %al. This fixes PR3366. llvm-svn: 62691
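For illustration (an assumed example of the scenario, not taken from the commit or from PR3366): 8-bit division on x86 divides %ax by its operand, so the high half %ah must hold zero for an unsigned divide; widening the dividend with a single zero-extending mov achieves that without a separate clear.

    // Hypothetical source that exercises the pattern: an i8 unsigned divide.
    unsigned char quot(unsigned char a, unsigned char b) {
      // 'a' is zero-extended into %al/%ax (clearing %ah) before the divb.
      return a / b;
    }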
-
Evan Cheng authored
unsigned test(unsigned a) { return ~a; }

llvm used to generate:
  movl $4294967295, %eax
  xorl 4(%esp), %eax

Now it generates:
  movl 4(%esp), %eax
  notl %eax

It's 3 bytes shorter. llvm-svn: 62661
-
- Jan 20, 2009
-
-
Evan Cheng authored
llvm-svn: 62600
-
- Jan 19, 2009
-
-
Evan Cheng authored
DIVREM isel deficiency: If sign bit is known zero, zero out DX/EDX/RDX instead of sign extending the low part (in AX/EAX/RAX) into it. llvm-svn: 62519
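A concrete illustration of when this applies (an assumed example, not from the commit): if the dividend's sign bit is provably clear, the sign-extension of the low part into the high part is just zero, so the high register can be cleared with an xor instead of a sign-extension instruction.

    int div_masked(int a, int b) {
      // (a & 0x7fffffff) is known non-negative, so the high half of the
      // dividend (%edx in the 32-bit case) can simply be zeroed.
      return (a & 0x7fffffff) / b;
    }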
-
Evan Cheng authored
Minor tweak to LowerUINT_TO_FP_i32. Bias (after scalar_to_vector) has two uses, so we should make it the second source operand of ISD::OR so the 2-address pass won't have to be smart about commuting.

  %reg1024<def> = MOVSDrm %reg0, 1, %reg0, <cp#0>, Mem:LD(8,8) [ConstantPool + 0]
  %reg1025<def> = MOVSD2PDrr %reg1024
  %reg1026<def> = MOVDI2PDIrm <fi#-1>, 1, %reg0, 0, Mem:LD(4,16) [FixedStack-1 + 0]
  %reg1027<def> = ORPSrr %reg1025<kill>, %reg1026<kill>
  %reg1028<def> = MOVPD2SDrr %reg1027<kill>
  %reg1029<def> = SUBSDrr %reg1028<kill>, %reg1024<kill>
  %reg1030<def> = CVTSD2SSrr %reg1029<kill>
  MOVSSmr <fi#0>, 1, %reg0, 0, %reg1030<kill>, Mem:ST(4,4) [FixedStack0 + 0]
  %reg1031<def> = LD_Fp32m80 <fi#0>, 1, %reg0, 0, Mem:LD(4,16) [FixedStack0 + 0]
  RET %reg1031<kill>, %ST0<imp-use,kill>

The reason the 2-addr pass isn't smart enough to commute the ORPSrr is that it can't look past the MOVSD2PDrr instruction. llvm-svn: 62505
-
Evan Cheng authored
optimize it to a SINT_TO_FP when the sign bit is known zero. X86 isel should perform the optimization itself. llvm-svn: 62504
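The shape of such a check, as a simplified sketch against the generic SelectionDAG API (not the actual X86 isel code; the function name is made up and API details may differ by version):

    // Turn an unsigned int-to-fp conversion into a signed one when the
    // source's sign bit is known zero; the two are then equivalent.
    SDValue combineUINT_TO_FP(SDNode *N, SelectionDAG &DAG) {
      SDValue Src = N->getOperand(0);
      if (DAG.SignBitIsZero(Src))
        return DAG.getNode(ISD::SINT_TO_FP, SDLoc(N), N->getValueType(0), Src);
      return SDValue(); // no change
    }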
-
- Jan 17, 2009
-
-
Bill Wendling authored
llvm-svn: 62415
-
Evan Cheng authored
llvm-svn: 62413
-
Bill Wendling authored
llvm-svn: 62405
-
Bill Wendling authored
X86. This code:

  void f() {
    uint32_t x;
    float y = (float)x;
  }

used to be:

  movl %eax, -8(%ebp)
  movl [2^52 double], -4(%ebp)
  movsd -8(%ebp), %xmm0
  subsd [2^52 double], %xmm0
  cvtsd2ss %xmm0, %xmm0

Is now:

  movsd [2^52 double], %xmm0
  movsd %xmm0, %xmm1
  movd %ecx, %xmm2
  orps %xmm2, %xmm1
  subsd %xmm0, %xmm1
  cvtsd2ss %xmm1, %xmm0

This is faster on X86. Note that there's an extra load of %xmm0 into %xmm1. That will be fixed in a later coalescer fix. llvm-svn: 62404
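The trick used here, shown as a stand-alone source-level illustration (this function is an assumption for exposition, not code from the commit): OR the 32-bit integer into the low mantissa bits of the double 2^52, then subtract 2^52; the difference is exactly (double)x, which is then rounded to float.

    #include <stdint.h>
    #include <string.h>

    float u32_to_float(uint32_t x) {
      // 0x4330000000000000 is the bit pattern of the double 2^52. Placing x in
      // the low 32 mantissa bits yields the value 2^52 + x exactly.
      uint64_t bits = 0x4330000000000000ULL | x;
      double d;
      memcpy(&d, &bits, sizeof d);
      return (float)(d - 4503599627370496.0);  // subtract 2^52
    }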
-
- Jan 16, 2009
-
-
Bill Wendling authored
llvm-svn: 62338
-
- Jan 15, 2009
-
-
Mon P Wang authored
llvm-svn: 62281
-
Rafael Espindola authored
llvm-svn: 62279
-
Dan Gohman authored
and into the ScheduleDAGInstrs class, so that they don't get destructed and re-constructed for each block. This fixes a compile-time hot spot in the post-pass scheduler. To help facilitate this, tidy and do some minor reorganization in the scheduler constructor functions. llvm-svn: 62275
-
Dan Gohman authored
llvm-svn: 62267
-
Dan Gohman authored
llvm-svn: 62265
-
- Jan 14, 2009
-
-
Dan Gohman authored
llvm-svn: 62196
-
Dan Gohman authored
llvm-svn: 62195
-
Dan Gohman authored
to Eli for pointing out that these forms don't ignore the high bits of their index operands, and as such are not immediately suitable for use by isel. llvm-svn: 62194
-
- Jan 13, 2009
-
-
Dan Gohman authored
llvm-svn: 62180
-
Dan Gohman authored
llvm-svn: 62179
-
Devang Patel authored
Use DebugInfo interface to lower dbg_* intrinsics. llvm-svn: 62127
-