- Jan 13, 2008
Duncan Sands authored
llvm-svn: 45940
- Jan 12, 2008
Evan Cheng authored
llvm-svn: 45898
- Jan 11, 2008
Arnold Schwaighofer authored
Actually we're not riding any arguments. Sadly, there is no semantic spell checker that is going to save you from such a mistake. llvm-svn: 45868
Arnold Schwaighofer authored
Before this commit, all arguments were moved to the stack slot where they would reside on a normal function call before the lowering to the tail call stack slot. This was done to prevent arguments from overwriting each other. Now only arguments sourced from a FORMAL_ARGUMENTS node, or from a CopyFromReg node with a virtual register (which could also be a caller's argument), are lowered indirectly. llvm-svn: 45867
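
A minimal standalone sketch of the hazard this staging guards against (hypothetical functions, not code from the patch): with tail call optimization the callee's outgoing arguments are written into the caller's own incoming argument slots, so copying them slot by slot can clobber one argument before the other has been read.

    // Hypothetical illustration only; caller/callee are made up.
    int callee(int a, int b) { return a - b; }

    int caller(int a, int b) {
      // Lowered as a tail call, callee() reuses caller()'s incoming
      // argument stack slots. Because the arguments are swapped here,
      // storing 'b' into the first slot before 'a' has been read out
      // of it would corrupt the call -- hence staging such values
      // (FORMAL_ARGUMENTS / CopyFromReg sources) through safe slots.
      return callee(b, a);
    }
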
Arnold Schwaighofer authored
llvm-svn: 45865
- Jan 10, 2008
Evan Cheng authored
llvm-svn: 45813
Evan Cheng authored
Do not use the stack pointer directly; issue a copyfromreg instead. Otherwise we can end up with something like ADD32ri %esp, x, which the two-address pass won't like. llvm-svn: 45798
Evan Cheng authored
llvm-svn: 45792
- Jan 08, 2008
Evan Cheng authored
llvm-svn: 45725
- Jan 05, 2008
Nate Begeman authored
the target independent legalizer. llvm-svn: 45631
Gordon Henriksen authored
unifying the copied algorithms and saving over 500 LOC. There should be no functionality change, but please test on your favorite x86 target. llvm-svn: 45627
- Jan 03, 2008
Gordon Henriksen authored
llvm-svn: 45536
- Dec 31, 2007
Chris Lattner authored
that "machine" classes are used to represent the current state of the code being compiled. Given this expanded name, we can start moving other stuff into it. For now, move the UsedPhysRegs and LiveIn/LoveOuts vectors from MachineFunction into it. Update all the clients to match. This also reduces some needless #includes, such as MachineModuleInfo from MachineFunction. llvm-svn: 45467
Chris Lattner authored
e.g. MO.isMBB() instead of MO.isMachineBasicBlock(). I don't plan on switching everything over, so new clients should just start using the shorter names. Remove old long accessors, switching everything over to use the short accessor: getMachineBasicBlock() -> getMBB(), getConstantPoolIndex() -> getIndex(), setMachineBasicBlock -> setMBB(), etc. llvm-svn: 45464
- Dec 29, 2007
Chris Lattner authored
llvm-svn: 45418
Chris Lattner authored
as:

    _bar:
        pushl %esi
        subl $8, %esp
        movl 16(%esp), %esi
        call L_foo$stub
        fstps (%esi)
        addl $8, %esp
        popl %esi
        #FP_REG_KILL
        ret

instead of:

    _bar:
        pushl %esi
        subl $8, %esp
        movl 16(%esp), %esi
        call L_foo$stub
        fstpl (%esi)
        cvtsd2ss (%esi), %xmm0
        movss %xmm0, (%esi)
        addl $8, %esp
        popl %esi
        #FP_REG_KILL
        ret

llvm-svn: 45401
Chris Lattner authored
if we are just going to store it back anyway. This improves things like:

    double foo();
    void bar(double *P) { *P = foo(); }

llvm-svn: 45399
- Dec 16, 2007
Chris Lattner authored
llvm-svn: 45075
- Dec 15, 2007
Evan Cheng authored
llvm-svn: 45058
- Dec 14, 2007
Evan Cheng authored
Fix ctlz and cttz. The LLVM definition requires them to return the number of bits in the src type when the value is zero. llvm-svn: 45029
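
As a standalone sketch of the required semantics (plain C++, not the backend code): on a zero input both operations must return the bit width of the source type, here 32.

    #include <cassert>
    #include <cstdint>

    // Count trailing zeros; defined to return 32 for a zero input.
    static unsigned cttz32(uint32_t V) {
      if (V == 0) return 32;
      unsigned N = 0;
      for (; (V & 1u) == 0; V >>= 1) ++N;
      return N;
    }

    // Count leading zeros; defined to return 32 for a zero input.
    static unsigned ctlz32(uint32_t V) {
      if (V == 0) return 32;
      unsigned N = 0;
      for (; (V & 0x80000000u) == 0; V <<= 1) ++N;
      return N;
    }

    int main() {
      assert(cttz32(0) == 32 && ctlz32(0) == 32); // the case being fixed
      assert(cttz32(8) == 3 && ctlz32(8) == 28);
      return 0;
    }
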
Evan Cheng authored
llvm-svn: 45024
- Dec 12, 2007
Dan Gohman authored
SelectionDAG::getConstant, in the same way as vector floating-point constants. This allows the legalize expansion code for @llvm.ctpop and friends to be usable with vector types. llvm-svn: 44954
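
A hedged standalone sketch of why this helps (plain C++, not LLVM code): once a vector constant is represented as a build_vector of scalar constants, the existing scalar ctpop expansion can be applied element by element.

    #include <array>
    #include <cstdint>

    // The classic scalar ctpop expansion that legalization emits.
    static uint32_t ctpop32(uint32_t V) {
      V = V - ((V >> 1) & 0x55555555u);
      V = (V & 0x33333333u) + ((V >> 2) & 0x33333333u);
      V = (V + (V >> 4)) & 0x0F0F0F0Fu;
      return (V * 0x01010101u) >> 24;
    }

    // "Expanding" a v4i32 ctpop: reuse the scalar code per element.
    static std::array<uint32_t, 4> ctpop_v4i32(std::array<uint32_t, 4> V) {
      for (uint32_t &E : V)
        E = ctpop32(E);
      return V;
    }
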
Evan Cheng authored
llvm-svn: 44929
Evan Cheng authored
Lower a build_vector with all constants into a constpool load unless it can be done with a move to low part. llvm-svn: 44921
- Dec 11, 2007
Evan Cheng authored
possible before resorting to pextrw and pinsrw.
- Better codegen for v4i32 shuffles masquerading as v8i16 or v16i8 shuffles.
- Improves (i16 extract_vector_element 0) codegen by recognizing (i32 extract_vector_element 0) does not require a pextrw.
llvm-svn: 44836
Nate Begeman authored
llvm-svn: 44835
- Dec 07, 2007
Evan Cheng authored
llvm-svn: 44686
Evan Cheng authored
llvm-svn: 44676
- Dec 06, 2007
Evan Cheng authored
Remove a bogus optimization. It's not possible to do a move to the low element of a <8 x i16> or <16 x i8> vector. llvm-svn: 44669
- Nov 27, 2007
Duncan Sands authored
the function type; instead they belong to functions and function calls. This is an updated and slightly corrected version of Reid Spencer's original patch. The only known problem is that auto-upgrading of bitcode files doesn't seem to work properly (see test/Bitcode/AutoUpgradeIntrinsics.ll). Hopefully a bitcode guru (who might that be? :) ) will fix it. llvm-svn: 44359
- Nov 25, 2007
Chris Lattner authored
sometimes emit "zero" and "all one" vectors multiple times, for example: _test2: pcmpeqd %mm0, %mm0 movq %mm0, _M1 pcmpeqd %mm0, %mm0 movq %mm0, _M2 ret instead of: _test2: pcmpeqd %mm0, %mm0 movq %mm0, _M1 movq %mm0, _M2 ret This patch fixes this by always arranging for zero/one vectors to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be any random type. This ensures they get trivially CSE'd on the dag. This fix is also important for LegalizeDAGTypes, as it gets unhappy when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when 'i64' isn't legal. This patch makes the following changes: 1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into their canonical types. 2) The now-dead patterns are removed from the SSE/MMX .td files. 3) All the patterns in the .td file that referred to immAllOnesV or immAllZerosV in the wrong form now use *_bc to match them with a bitcast wrapped around them. 4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle bitcast'd zero vectors, which simplifies the code actually. 5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that is legal, instead of generating one that is illegal and expecting a later legalize pass to clean it up. 6) isZeroShuffle is generalized to handle bitcast of zeros. 7) several other minor tweaks. This patch is definite goodness, but has the potential to cause random code quality regressions. Please be on the lookout for these and let me know if they happen. llvm-svn: 44310
- Nov 24, 2007
Chris Lattner authored
among others. llvm-svn: 44302
Chris Lattner authored
1) Change the interface to TargetLowering::ExpandOperationResult to take and return entire NODES that need a result expanded, not just the value. This allows us to handle things like READCYCLECOUNTER, which returns two values.
2) Implement (extremely limited) support in LegalizeDAG::ExpandOp for MERGE_VALUES.
3) Reimplement custom lowering in LegalizeDAGTypes in terms of the new ExpandOperationResult. This makes the result simpler and fully general.
4) Implement (fully general) expand support for MERGE_VALUES in LegalizeDAGTypes.
5) Implement ExpandOperationResult support for ARM f64->i64 bitconvert and ARM i64 shifts, allowing them to work with LegalizeDAGTypes.
6) Implement ExpandOperationResult support for X86 READCYCLECOUNTER and FP_TO_SINT, allowing them to work with LegalizeDAGTypes.

LegalizeDAGTypes now passes several more X86 codegen tests when enabled and when type legalization in LegalizeDAG is ifdef'd out. llvm-svn: 44300
- Nov 16, 2007
Anton Korobeynikov authored
llvm-svn: 44183
- Nov 13, 2007
Bill Wendling authored
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If not, then there is the potential for the stack to be changed while the stack's being used by another instruction (like a call). This can only result in tears... llvm-svn: 44037
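
For context, a minimal source-level sketch (hypothetical code, not from the patch) of where a dynamic_stackalloc meets a call sequence; if the allocation is not kept outside the CALLSEQ_START/CALLSEQ_END bracket, the stack pointer could move while the outgoing arguments are being laid out.

    #include <alloca.h>  // assumption: POSIX-style alloca header
    #include <cstring>

    static void callee(char *Buf, unsigned N) { std::memset(Buf, 0, N); }

    void caller(unsigned N) {
      // alloca lowers to a dynamic_stackalloc node: it moves the stack
      // pointer at runtime. The subsequent call's argument setup
      // (CALLSEQ_START ... CALLSEQ_END) must not interleave with it.
      char *Buf = static_cast<char *>(alloca(N));
      callee(Buf, N);
    }
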
- Nov 10, 2007
Arnold Schwaighofer authored
llvm-svn: 43978
- Nov 09, 2007
Evan Cheng authored
Then: call "L1$pb" "L1$pb": popl %eax ... LBB1_1: # entry imull $4, %ecx, %ecx leal LJTI1_0-"L1$pb"(%eax), %edx addl LJTI1_0-"L1$pb"(%ecx,%eax), %edx jmpl *%edx .align 2 .set L1_0_set_3,LBB1_3-LJTI1_0 .set L1_0_set_2,LBB1_2-LJTI1_0 .set L1_0_set_5,LBB1_5-LJTI1_0 .set L1_0_set_4,LBB1_4-LJTI1_0 LJTI1_0: .long L1_0_set_3 .long L1_0_set_2 Now: call "L1$pb" "L1$pb": popl %eax ... LBB1_1: # entry addl LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax jmpl *%eax .align 2 .set L1_0_set_3,LBB1_3-"L1$pb" .set L1_0_set_2,LBB1_2-"L1$pb" .set L1_0_set_5,LBB1_5-"L1$pb" .set L1_0_set_4,LBB1_4-"L1$pb" LJTI1_0: .long L1_0_set_3 .long L1_0_set_2 llvm-svn: 43924
- Nov 06, 2007
Rafael Espindola authored
Thanks for the suggestions Bill :-) llvm-svn: 43742
- Nov 04, 2007
Chris Lattner authored
regs on x86-64. llvm-svn: 43669
- Nov 02, 2007
Evan Cheng authored
llvm-svn: 43646