- Dec 12, 2008
  - Mon P Wang authored: Added support for TRUNC v8i16 to v8i8 for X86 (MMX). llvm-svn: 60916
- Dec 03, 2008
  - Evan Cheng authored. llvm-svn: 60499
  - Dan Gohman authored. llvm-svn: 60487
- Nov 05, 2008
  - Evan Cheng authored. llvm-svn: 58752
- Aug 27, 2008
  - Bill Wendling authored: SSE2 registers as well as the MMX registers. llvm-svn: 55436
- Aug 25, 2008
  - Bill Wendling authored. llvm-svn: 55318
  - Bill Wendling authored: instructions on having SSE2. llvm-svn: 55317
- Aug 23, 2008
  - Anton Korobeynikov authored: Is there a way to avoid an explicit target check? llvm-svn: 55238
- Jul 25, 2008
  - Nate Begeman authored. llvm-svn: 54026
- Jun 25, 2008
  - Dale Johannesen authored: load, store, call, return, bitcast. This is enough to make call and return work. llvm-svn: 52691
- May 29, 2008
  - Evan Cheng authored. llvm-svn: 51667
- May 09, 2008
  - Evan Cheng authored: Note: some of the code will be moved into the target-independent part of the DAG combiner in a subsequent patch. llvm-svn: 50918
- May 08, 2008
  - Evan Cheng authored: Handle vector moves / loads which zero the destination register's top bits (i.e. movd, movq, movss (addr), movsd (addr)) with an X86-specific DAG combine. llvm-svn: 50838
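    For context only (an illustration, not code from this commit): the zero-extending behavior of these instructions is visible from the user-level Intel intrinsics as well. A minimal sketch, assuming the standard SSE/SSE2 headers; the helper names are made up for the example.

    ```cpp
    #include <xmmintrin.h>  // SSE intrinsics
    #include <emmintrin.h>  // SSE2 intrinsics

    // movd: move a 32-bit integer into the low lane, zeroing the upper lanes.
    __m128i load_low_int(int x) {
        return _mm_cvtsi32_si128(x);
    }

    // movss (addr): load one float into the low lane, zeroing the upper lanes.
    __m128 load_low_float(const float *p) {
        return _mm_load_ss(p);
    }
    ```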
- May 03, 2008
  - Evan Cheng authored: Add separate intrinsics for MMX / SSE shifts with i32 integer operands. This allows us to simplify the horribly complicated matching code. llvm-svn: 50601
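    For context only (not code from this commit, which concerns LLVM's internal intrinsics rather than the user-level ones below): the two shift-count forms being distinguished are also visible at the source level. A minimal sketch, assuming the standard SSE2 header; the helper names are made up for the example.

    ```cpp
    #include <emmintrin.h>  // SSE2 intrinsics

    // Shift count given as a plain integer (an immediate when it is a constant).
    __m128i shl_by_int(__m128i v, int count) {
        return _mm_slli_epi16(v, count);
    }

    // Shift count given in the low 64 bits of another vector register.
    __m128i shl_by_vec(__m128i v, __m128i count) {
        return _mm_sll_epi16(v, count);
    }
    ```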
- Apr 25, 2008
  - Evan Cheng authored. llvm-svn: 50291
  - Evan Cheng authored. llvm-svn: 50289
  - Evan Cheng authored. llvm-svn: 50278
- Apr 21, 2008
  - Dan Gohman authored. llvm-svn: 50053
- Apr 16, 2008
  - Dan Gohman authored: to 64-bit GPR registers on x86-64. llvm-svn: 49757
- Mar 21, 2008
  - Evan Cheng authored. llvm-svn: 48627
- Mar 20, 2008
  - Evan Cheng authored. llvm-svn: 48569
- Mar 15, 2008
  - Evan Cheng authored: Replace all target-specific implicit def instructions with a target-independent one: TargetInstrInfo::IMPLICIT_DEF. llvm-svn: 48380
- Mar 12, 2008
  - Evan Cheng authored: X86 lowering normalizes vector 0 to v4i32. However, DAGCombine can fold (sub x, x) -> 0 after legalization, which can create a zero vector of a type that is not expected (e.g. v8i16). We don't want to disable the optimization, since leaving a (sub x, x) around is really bad. Add isel patterns for the other vector-0 types to ensure correctness; this is highly unlikely to happen other than in bugpoint-reduced test cases. llvm-svn: 48279
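    To illustrate the scenario described above (an illustration only, not code from the commit): subtracting a vector from itself is folded to an all-zero vector of that type, so a v8i16 zero can appear even though explicit zero vectors are canonicalized to v4i32. A minimal sketch, assuming the standard SSE2 header; the helper name is made up.

    ```cpp
    #include <emmintrin.h>  // SSE2 intrinsics

    // x - x folds to an all-zero vector of 8 x i16 during DAG combining, so
    // the backend needs isel patterns for a zero of this type as well.
    __m128i zero_from_sub(__m128i x) {
        return _mm_sub_epi16(x, x);
    }
    ```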
- Feb 29, 2008
  - Anders Carlsson authored. llvm-svn: 47740
- Feb 19, 2008
  - Evan Cheng authored:
    - When the DAG combiner folds a bit convert into a BUILD_VECTOR, it should check whether it is essentially a SCALAR_TO_VECTOR. Avoid turning (v8i16) <10, u, u, u> into <10, 0, u, u, u, u, u, u>; instead, simply convert it to a SCALAR_TO_VECTOR of the proper type.
    - X86 now normalizes SCALAR_TO_VECTOR to (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC. llvm-svn: 47290
- Jan 10, 2008
  - Chris Lattner authored: x86 backend where instructions were not marked mayStore/mayLoad, and perf issues where instructions were not marked neverHasSideEffects. It would be really nice if we could write patterns for copy instructions. I have audited all the x86 instructions down to MOVDQAmr. The flags on others and on other targets are probably not right in all cases, but no clients that are enabled by default currently use this info. llvm-svn: 45829
  - Chris Lattner authored: inferred from the instr patterns. llvm-svn: 45824
- Jan 07, 2008
  - Chris Lattner authored. llvm-svn: 45667
- Dec 29, 2007
  - Chris Lattner authored. llvm-svn: 45418
- Dec 18, 2007
  - Bill Wendling authored: based what flag to set on whether it was already marked as "isRematerializable". If there was a further check to determine whether it's "really" rematerializable, then I marked it as "mayHaveSideEffects" and created a check in the X86 back-end similar to the remat one. llvm-svn: 45132
- Dec 13, 2007
  - Evan Cheng authored: Implicit def instructions, e.g. X86::IMPLICIT_DEF_GR32, are always re-materializable and they should not be spilled. llvm-svn: 44960
- Nov 25, 2007
  - Chris Lattner authored: sometimes emit "zero" and "all one" vectors multiple times, for example:

        _test2:
            pcmpeqd %mm0, %mm0
            movq    %mm0, _M1
            pcmpeqd %mm0, %mm0
            movq    %mm0, _M2
            ret

    instead of:

        _test2:
            pcmpeqd %mm0, %mm0
            movq    %mm0, _M1
            movq    %mm0, _M2
            ret

    This patch fixes this by always arranging for zero/one vectors to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be any random type, which ensures they get trivially CSE'd on the DAG. The fix is also important for LegalizeDAGTypes, which gets unhappy when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when 'i64' isn't legal. This patch makes the following changes:
    1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into their canonical types.
    2) The now-dead patterns are removed from the SSE/MMX .td files.
    3) All the patterns in the .td files that referred to immAllOnesV or immAllZerosV in the wrong form now use *_bc to match them with a bitcast wrapped around them.
    4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle bitcast'd zero vectors, which actually simplifies the code.
    5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that is legal, instead of generating one that is illegal and expecting a later legalize pass to clean it up.
    6) isZeroShuffle is generalized to handle bitcast of zeros.
    7) Several other minor tweaks.
    This patch is definite goodness, but it has the potential to cause random code quality regressions. Please be on the lookout for these and let me know if they happen. llvm-svn: 44310
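    For context only (not code from the patch): the "zero" and "all one" vectors in question are the usual SSE idioms of clearing a register and comparing a register with itself. A minimal sketch, assuming the standard SSE2 header; the helper names are made up.

    ```cpp
    #include <emmintrin.h>  // SSE2 intrinsics

    // All-zero vector, typically selected as a pxor of a register with itself.
    __m128i zeros() {
        return _mm_setzero_si128();
    }

    // All-ones vector: x == x is true in every lane, setting every bit,
    // typically selected as a pcmpeqd of a register with itself.
    __m128i ones() {
        __m128i x = _mm_setzero_si128();
        return _mm_cmpeq_epi32(x, x);
    }
    ```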
- Sep 11, 2007
  - Evan Cheng authored. llvm-svn: 41863
- Aug 30, 2007
  - Evan Cheng authored. llvm-svn: 41595
- Aug 02, 2007
  - Dan Gohman authored: X86InstrInfo::isReallyTriviallyReMaterializable knows how to handle with the isReMaterializable flag so that it is given a chance to handle them. Without hoisting constant-pool loads from loops this isn't very visible, though it does keep CodeGen/X86/constant-pool-remat-0.ll from making a copy of the constant pool on the stack. llvm-svn: 40736
- Jul 31, 2007
  - Dan Gohman authored: mnemonics from their operands instead of single spaces. This makes the assembly output a little more consistent with various other compilers (e.g. GCC) and slightly easier to read. Also, update the regression tests accordingly. llvm-svn: 40648
  - Evan Cheng authored: Redo and generalize the previously removed opt for pinsrw: (vextract (v4i32 bc (v4f32 s2v (f32 load))), 0) -> (i32 load). llvm-svn: 40628
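    As an illustration of the folded pattern above (not code from the commit): at the intrinsics level it corresponds to loading a float into lane 0, bitcasting the vector to integers, and extracting lane 0, which can be folded into a plain 32-bit integer load. A minimal sketch, assuming the standard SSE/SSE2 headers; the helper name is made up.

    ```cpp
    #include <xmmintrin.h>  // SSE intrinsics
    #include <emmintrin.h>  // SSE2 intrinsics

    // (vextract (v4i32 bitcast (v4f32 scalar_to_vector (f32 load))), 0)
    // can be folded into a single i32 load of the same address.
    int low_bits_of_float(const float *p) {
        __m128  v = _mm_load_ss(p);       // f32 load into lane 0
        __m128i i = _mm_castps_si128(v);  // bitcast v4f32 -> v4i32
        return _mm_cvtsi128_si32(i);      // extract lane 0 as i32
    }
    ```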
- Jul 19, 2007
  - Evan Cheng authored: InOperandList. This gives one piece of important information: the number of results produced by an instruction. An example of the change:

        def ADD32rr : I<0x01, MRMDestReg, (ops GR32:$dst, GR32:$src1, GR32:$src2),
                        "add{l} {$src2, $dst|$dst, $src2}",
                        [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;
    =>
        def ADD32rr : I<0x01, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
                        "add{l} {$src2, $dst|$dst, $src2}",
                        [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;

    llvm-svn: 40033
- Jul 04, 2007
  - Bill Wendling authored. llvm-svn: 37866
  - Bill Wendling authored: Still need to have JIT generate this code. llvm-svn: 37863