  1. Apr 24, 2009
    • Nate Begeman · PR2957 · bb881d66
      ISD::VECTOR_SHUFFLE now stores the shuffle mask as an array of integers
      inside the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
      as the shuffle mask.  A value of -1 represents UNDEF.
      
      In addition to eliminating the creation of illegal BUILD_VECTORs just to
      represent shuffle masks, we now do a better job of canonicalizing the
      shuffle mask, resulting in substantially better code for some classes of
      shuffles.
      
      A cleanup of the x86 shuffle code, and some canonicalization in DAGCombiner, is next.
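      
      A minimal sketch of what the new representation looks like to a client,
      assuming the post-patch SelectionDAG interface (getVectorShuffle and
      ShuffleVectorSDNode::getMaskElt are the real entry points; the DAG, dl,
      V1, and V2 variables are assumed to be in scope):
      
      	// Build a v4i32 shuffle; -1 marks an UNDEF lane directly in the
      	// mask, with no BUILD_VECTOR of ConstantSDNodes involved.
      	int Mask[4] = {0, 4, -1, 2};
      	SDValue Shuf = DAG.getVectorShuffle(MVT::v4i32, dl, V1, V2, Mask);
      
      	// Reading the mask back off the node is a plain array access.
      	ShuffleVectorSDNode *SVN = cast<ShuffleVectorSDNode>(Shuf);
      	for (unsigned i = 0; i != 4; ++i)
      	  if (SVN->getMaskElt(i) < 0) {
      	    // lane i is UNDEF
      	  }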
      
      llvm-svn: 69952
  2. Feb 23, 2009
  3. Dec 12, 2008
  4. Dec 03, 2008
  5. Nov 05, 2008
  6. Aug 27, 2008
  7. Aug 25, 2008
  8. Aug 23, 2008
  9. Jul 25, 2008
  10. Jun 25, 2008
  11. May 29, 2008
  12. May 09, 2008
  13. May 08, 2008
  14. May 03, 2008
  15. Apr 25, 2008
  16. Apr 21, 2008
  17. Apr 16, 2008
  18. Mar 21, 2008
  19. Mar 20, 2008
  20. Mar 15, 2008
  21. Mar 12, 2008
    • Evan Cheng · Clean up my own mess. · 99ee78ef
      X86 lowering normalizes the vector zero to v4i32. However, DAGCombine can fold (sub x, x) -> 0 after legalization, which can create a zero vector of a type that is not expected (e.g. v8i16). We don't want to disable the optimization, since leaving a (sub x, x) around is really bad. Instead, add isel patterns for the other vector-zero types to ensure correctness. This is highly unlikely to happen outside of bugpoint-reduced test cases.
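      
      The interaction can be pictured with a hedged sketch of the combine in
      question (paraphrased shape, not the literal DAGCombiner source; N0, N1,
      VT, dl, and DAG are assumed to be in scope):
      
      	// (sub x, x) --> 0 still fires after legalization; for vector x
      	// this asks for a splat-zero of x's own type, which may be v8i16,
      	// v16i8, etc., not the v4i32 form the X86 patterns expected.
      	if (N0 == N1)
      	  return DAG.getConstant(0, dl, VT);
      
      The fix is the conservative one: provide isel patterns for the remaining
      vector-zero types rather than suppressing the fold.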
      
      llvm-svn: 48279
  22. Feb 29, 2008
  23. Feb 19, 2008
    • Evan Cheng · 6200c225
      - When the DAG combiner folds a bit convert into a BUILD_VECTOR, it should check whether the result is essentially a SCALAR_TO_VECTOR. Avoid turning (v8i16) <10, u, u, u> into <10, 0, u, u, u, u, u, u>; instead, simply convert it to a SCALAR_TO_VECTOR of the proper type.
      - X86 now normalizes SCALAR_TO_VECTOR to (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC.
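      
      A hedged sketch of the check described above, assuming a DAG-combiner
      context (BV is the BUILD_VECTOR node, VT the destination vector type;
      type-legality and element-count checks are omitted for brevity):
      
      	// If every lane but lane 0 is undef, the node is really a
      	// SCALAR_TO_VECTOR; don't manufacture explicit zero lanes.
      	bool OnlyElt0Defined = true;
      	for (unsigned i = 1, e = BV->getNumOperands(); i != e; ++i)
      	  if (BV->getOperand(i).getOpcode() != ISD::UNDEF) {
      	    OnlyElt0Defined = false;
      	    break;
      	  }
      	if (OnlyElt0Defined)
      	  return DAG.getNode(ISD::SCALAR_TO_VECTOR, dl, VT, BV->getOperand(0));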
      
      llvm-svn: 47290
  24. Jan 10, 2008
  25. Jan 07, 2008
  26. Dec 29, 2007
  27. Dec 18, 2007
  28. Dec 13, 2007
  29. Nov 25, 2007
    • Chris Lattner · 5728bdd4
      Fix a long-standing deficiency in the X86 backend: we would
      sometimes emit "zero" and "all one" vectors multiple times,
      for example:
      
      _test2:
      	pcmpeqd	%mm0, %mm0
      	movq	%mm0, _M1
      	pcmpeqd	%mm0, %mm0
      	movq	%mm0, _M2
      	ret
      
      instead of:
      
      _test2:
      	pcmpeqd	%mm0, %mm0
      	movq	%mm0, _M1
      	movq	%mm0, _M2
      	ret
      
      This patch fixes this by always arranging for zero/one vectors
      to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
      any random type.  This ensures they get trivially CSE'd on the dag.
      This fix is also important for LegalizeDAGTypes, as it gets unhappy
      when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
      'i64' isn't legal.
      
      This patch makes the following changes:
      
      1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
         their canonical types.
      2) The now-dead patterns are removed from the SSE/MMX .td files.
      3) All the patterns in the .td file that referred to immAllOnesV or
         immAllZerosV in the wrong form now use *_bc to match them with a
         bitcast wrapped around them.
      4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle
         bitcast'd zero vectors, which actually simplifies the code.
      5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
         is legal, instead of generating one that is illegal and expecting
         a later legalize pass to clean it up.
      6) isZeroShuffle is generalized to handle bitcast of zeros.
      7) several other minor tweaks.
      
      This patch is definite goodness, but has the potential to cause random
      code quality regressions.  Please be on the lookout for these and let 
      me know if they happen.
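      
      The CSE argument in point 1 can be sketched as follows; getZeroVector is
      the helper this commit touches in X86ISelLowering, but the body here is
      illustrative rather than the literal source (BIT_CONVERT was the era's
      spelling of what modern LLVM calls ISD::BITCAST):
      
      	// Canonicalize every SSE zero vector to v4i32, then bitcast to the
      	// requested type. Callers asking for different VTs still share the
      	// single v4i32 zero node, so it is materialized (xorps/pcmpeqd)
      	// exactly once and trivially CSE'd by the DAG's node uniquing.
      	static SDValue getZeroVector(EVT VT, SelectionDAG &DAG, SDLoc dl) {
      	  SDValue Zero = DAG.getConstant(0, dl, MVT::v4i32); // splat of zeros
      	  return DAG.getNode(ISD::BITCAST, dl, VT, Zero);
      	}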
      
      llvm-svn: 44310
  30. Sep 11, 2007
  31. Aug 30, 2007
  32. Aug 02, 2007
    • Dan Gohman · fa3eeeed
      Mark the SSE and MMX load instructions that
      X86InstrInfo::isReallyTriviallyReMaterializable knows how to handle
      with the isReMaterializable flag so that it is given a chance to handle
      them. Without hoisting constant-pool loads from loops this isn't very
      visible, though it does keep CodeGen/X86/constant-pool-remat-0.ll from
      making a copy of the constant pool on the stack.
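      
      The reason the flag matters can be seen from the shape of the generic
      query in TargetInstrInfo (paraphrased from memory, not the literal
      source): the target hook is only consulted for instructions whose .td
      definition sets isReMaterializable, so unflagged loads never reach
      X86InstrInfo::isReallyTriviallyReMaterializable at all.
      
      	// Paraphrased dispatch: check the .td flag first, then give the
      	// target a chance to confirm the instruction is safe to re-execute.
      	bool TargetInstrInfo::isTriviallyReMaterializable(const MachineInstr &MI) const {
      	  return MI.getDesc().isRematerializable() &&
      	         isReallyTriviallyReMaterializable(MI);
      	}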
      
      llvm-svn: 40736
  33. Jul 31, 2007
  34. Jul 19, 2007
    • Evan Cheng · 94b5a80b
      Change instruction description to split OperandList into OutOperandList and
      InOperandList. This gives one piece of important information: the number
      of results produced by an instruction.
      An example of the change:
      def ADD32rr  : I<0x01, MRMDestReg, (ops GR32:$dst, GR32:$src1, GR32:$src2),
                       "add{l} {$src2, $dst|$dst, $src2}",
                       [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;
      =>
      def ADD32rr  : I<0x01, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
                       "add{l} {$src2, $dst|$dst, $src2}",
                       [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;
      
      llvm-svn: 40033