  7. Oct 01, 2010
      Massive rewrite of MMX: · dd224d23
      Dale Johannesen authored
      The x86_mmx type is used for MMX intrinsics, parameters and
      return values where these use MMX registers, and is also
      supported in load, store, and bitcast.
      
      Only the above operations generate MMX instructions, and optimizations
      do not operate on or produce MMX intrinsics. 
      
MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces.  Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
pre-existing x86_mmx operation.
      
The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
      
      llvm-svn: 115243
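The casting rule described above can be pictured with a minimal LLVM IR sketch (the function name and arguments are hypothetical; `@llvm.x86.mmx.padd.d` is the MMX paddd intrinsic of this era): optimizers are free to rewrite the generic `<2 x i32>` form, and a value only becomes `x86_mmx` through an explicit bitcast whose result feeds an existing MMX operation.

```llvm
; Sketch only: illustrates the x86_mmx casting rule, not code from the commit.
declare x86_mmx @llvm.x86.mmx.padd.d(x86_mmx, x86_mmx)

define x86_mmx @f(<2 x i32> %v, x86_mmx %m) {
  ; %v is a generic vector, so optimizations may operate on it freely.
  %x = bitcast <2 x i32> %v to x86_mmx
  ; The bitcast result feeds a pre-existing MMX operation, so the
  ; optimizer never introduces new MMX instructions on its own.
  %r = call x86_mmx @llvm.x86.mmx.padd.d(x86_mmx %x, x86_mmx %m)
  ret x86_mmx %r
}
```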
  13. Sep 08, 2010
      x86 vector shuffle lowering now relies only on target specific · f7fee1c1
      Bruno Cardoso Lopes authored
nodes to emit shuffles and no longer does isel mask matching.
- Add the selection of the remaining shuffle opcode (movddup).
- Introduce two new functions to "recognize" where we may get
potential folds, and add several comments to them explaining why
they are not yet in the desired shape.
- Add more patterns to fall back in the case where we select
a specific shuffle opcode as if it could fold a load, but it
can't, so remap to a valid instruction.
- Add a couple of FIXMEs to address in the following days once
there's a good solution to the current folding problem.
      
      llvm-svn: 113369