  1. Oct 01, 2010
      Massive rewrite of MMX: · dd224d23
      Dale Johannesen authored
      The x86_mmx type is used for MMX intrinsics, parameters and
      return values where these use MMX registers, and is also
      supported in load, store, and bitcast.
      
      Only the above operations generate MMX instructions, and optimizations
      do not operate on or produce MMX intrinsics. 
      
      MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
      smaller pieces.  Optimizations may occur on these forms and the
      result cast back to x86_mmx, provided the result feeds into a
      pre-existing x86_mmx operation.
      
      The point of all this is to prevent optimizations from introducing
      MMX operations, which is unsafe due to the EMMS problem.
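      In LLVM IR terms, the rule above looks roughly like the sketch below
      (the function name is made up for illustration; llvm.x86.mmx.padd.d is
      one of the existing MMX intrinsics). Optimizations may operate on the
      <2 x i32> values, but x86_mmx itself appears only in bitcasts feeding
      an existing MMX operation:

      ```llvm
      ; Sketch only: vector values are cast to x86_mmx at the boundary
      ; of an MMX intrinsic call, never created by the optimizer.
      define x86_mmx @add_pixels(<2 x i32> %a, <2 x i32> %b) {
        %ma = bitcast <2 x i32> %a to x86_mmx
        %mb = bitcast <2 x i32> %b to x86_mmx
        %r  = call x86_mmx @llvm.x86.mmx.padd.d(x86_mmx %ma, x86_mmx %mb)
        ret x86_mmx %r
      }
      declare x86_mmx @llvm.x86.mmx.padd.d(x86_mmx, x86_mmx)
      ```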
      
      llvm-svn: 115243
  2. Sep 22, 2010
  3. Sep 21, 2010
      fix a long-standing wart: all the ComplexPatterns were being · 0e023ea0
      Chris Lattner authored
      passed the root of the match, even though only a few patterns
      actually needed this (one in X86, several in ARM [which should
      be refactored anyway], and some in CellSPU that I don't feel 
      like detangling).   Instead of requiring all ComplexPatterns to
      take the dead root, have targets opt into getting the root by
      putting SDNPWantRoot on the ComplexPattern.
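      A minimal TableGen sketch of the opt-in (the pattern name and selector
      function here are illustrative, not taken from a specific target):

      ```tablegen
      // Illustrative only: a ComplexPattern that still wants the match
      // root opts in via SDNPWantRoot; patterns without the property no
      // longer receive the (usually dead) root argument.
      def my_addr : ComplexPattern<iPTR, 5, "SelectMyAddr", [], [SDNPWantRoot]>;
      ```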
      
      llvm-svn: 114471
  4. Sep 13, 2010
  5. Sep 10, 2010
  6. Sep 07, 2010
  7. Sep 01, 2010
  8. Aug 31, 2010
  9. Aug 21, 2010
      This is the first step towards refactoring the x86 vector shuffle code. The · 6f3b38a8
      Bruno Cardoso Lopes authored
      general idea here is to have a group of x86 target specific nodes which are
      going to be selected during lowering and then directly matched in isel.
      
      The commit includes the addition of those specific nodes and a *bunch* of
      patterns, and incrementally we're going to switch between them and what we
      have right now. Both the patterns and target specific nodes can change as
      we move forward with this work.
      
      llvm-svn: 111691
  10. Aug 13, 2010
  11. Aug 11, 2010
      Add AVX matching patterns to Packed Bit Test intrinsics. · 91d61df3
      Bruno Cardoso Lopes authored
      Apply the same approach as the SSE4.1 ptest intrinsics, but
      create a new x86 node "testp" since AVX introduces
      vtest{ps}{pd} instructions which set ZF and CF depending
      on sign bit AND and ANDN of packed floating-point sources.
      
      This is slightly different from what "ptest" does.
      Tests are coming with the other 256-bit intrinsics tests.
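      As a sketch in LLVM IR (the function name is hypothetical;
      llvm.x86.avx.vtestz.ps.256 is one of the AVX vtest intrinsics that
      should select through the new "testp" node):

      ```llvm
      ; Sketch: the AVX vtest family sets ZF from the sign-bit AND and
      ; CF from the sign-bit ANDN of the two packed float sources; the
      ; vtestz variant returns the resulting ZF as an i32.
      define i32 @sign_bits_all_clear(<8 x float> %a, <8 x float> %mask) {
        %z = call i32 @llvm.x86.avx.vtestz.ps.256(<8 x float> %a, <8 x float> %mask)
        ret i32 %z
      }
      declare i32 @llvm.x86.avx.vtestz.ps.256(<8 x float>, <8 x float>)
      ```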
      
      llvm-svn: 110744
  12. Aug 06, 2010
  13. Jul 22, 2010
  14. Jul 20, 2010
  15. Jul 13, 2010
      · 03264efe
      David Greene authored
      Move some SIMD fragment code into X86InstrFragmentsSIMD so that the
      utility classes can be used from multiple files.  This will aid
      transitioning to a new refactored x86 SIMD specification.
      
      llvm-svn: 108213
  16. Feb 10, 2010
      · 509be1fe
      David Greene authored
      TableGen fragment refactoring.
      
      Move some utility TableGen defs, classes, etc. into a common file so
      they may be used by multiple pattern files.  We will use this for
      the AVX specification to help with the transition from the current
      SSE specification.
      
      llvm-svn: 95727