  1. Oct 01, 2010
    • Owen Anderson authored · 13a642da
      Now that the profitable bits of EnableFullLoadPRE have been enabled by default, rip out the remainder.
      Anyone interested in more general PRE would be better served by implementing it separately, to get a real
      anticipation calculation, etc.
      
      llvm-svn: 115337
    • Eric Christopher authored · 3ad2f3a2
      Fix the other half of the alignment-changing issue by making sure that the
      memcpy alignment is the minimum of the incoming alignments.
      
      Fixes PR 8266.
      
      llvm-svn: 115305
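      A minimal sketch of the rule in plain C++ (the helper is my framing,
      not LLVM's code): when a transform rewrites a memcpy so that it reads
      from a different pointer, the alignment the new call may claim is the
      minimum of the alignments actually known for the two pointers involved.

      ```cpp
      #include <algorithm>
      #include <cstdint>

      // Hypothetical helper, not LLVM's API: the rewritten memcpy may only
      // claim the weakest guarantee among the pointers it now touches.
      std::uint32_t safeMemcpyAlign(std::uint32_t dstAlign, std::uint32_t srcAlign) {
        return std::min(dstAlign, srcAlign);
      }

      int main() {
        // dst known 16-byte aligned, forwarded source only 4-byte aligned:
        // the combined memcpy may claim alignment 4, not 16.
        return safeMemcpyAlign(16, 4) == 4 ? 0 : 1;
      }
      ```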
    • Dale Johannesen authored · dd224d23
      Massive rewrite of MMX:
      The x86_mmx type is used for MMX intrinsics, parameters and
      return values where these use MMX registers, and is also
      supported in load, store, and bitcast.
      
      Only the above operations generate MMX instructions, and optimizations
      do not operate on or produce MMX intrinsics. 
      
      MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
      smaller pieces.  Optimizations may occur on these forms and the
      result cast back to x86_mmx, provided the result feeds into a
      previously existing x86_mmx operation.
      
      The point of all this is to prevent optimizations from introducing
      MMX operations, which is unsafe due to the EMMS problem.
      
      llvm-svn: 115243
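      For context, a small illustration of the EMMS problem the commit refers
      to (my example, not from the commit): MMX registers alias the x87
      floating-point stack, so any stretch of MMX code must end with EMMS
      before floating-point code runs again. An optimizer that introduced MMX
      operations on its own could plant them in code with no EMMS to clean up.

      ```cpp
      #include <mmintrin.h>  // MMX intrinsics (x86 only)

      int main() {
        __m64 a = _mm_set_pi32(1, 2);
        __m64 b = _mm_set_pi32(3, 4);
        __m64 s = _mm_add_pi32(a, b);  // packed 32-bit add: a real MMX op
        int lo = _mm_cvtsi64_si32(s);  // extract the low 32-bit lane (2+4)
        _mm_empty();                   // EMMS: restore x87 state
        double ok = 1.0;               // only now is FP code safe again
        return (lo == 6 && ok > 0.0) ? 0 : 1;
      }
      ```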
  2. Sep 25, 2010
    • Owen Anderson authored · b590a927
      LoadPRE was not properly checking that the load it was PRE'ing post-dominated the block it was being hoisted to.
      Splitting critical edges at the merge point only addressed part of the issue; it is also possible for non-post-domination
      to occur when the path from the load to the merge has branches in it.  Unfortunately, full anticipation analysis is
      time-consuming, so for now we approximate it.  This is strictly more conservative than real anticipation, so we will miss
      some cases that real PRE would allow, but we also no longer insert loads into paths where they didn't exist before. :-)
      
      This is a very slight net positive on SPEC for me (0.5% on average).  Most of the benchmarks are largely unaffected, but
      when it pays off it pays off decently: 181.mcf improves by 4.5% on my machine.
      
      llvm-svn: 114785
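      A sketch of the shape of the bug (my own example, not a test case from
      the commit): the final load below is partially redundant, but a branch
      between the merge point and that load means it does not post-dominate
      the merge, so hoisting a load into the other predecessor would execute
      *p on a path that never dereferenced it.

      ```cpp
      int example(int *p, bool c1, bool c2) {
        int v = 0;
        if (c1)
          v = *p;      // load available only on this path
        // merge point: LoadPRE would insert a load of *p on the other edge
        if (c2)
          return v;    // this path never touches *p
        return v + *p; // partially redundant, but does NOT post-dominate
                       // the merge, so inserting the load above is unsafe
      }
      ```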
    • Eric Christopher authored · ebacd2b0
      If we're changing the source of a memcpy, we need to use the alignment
      of the new source, not the original alignment, since it may no longer
      be valid.
      
      Fixes rdar://8400094
      
      llvm-svn: 114781
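      A small sketch of the situation (the function and buffers are my
      invention): memcpyopt can forward the original source through an
      intermediate copy, and once it does, the alignment it records for the
      source must be the new source's, not the intermediate buffer's.

      ```cpp
      #include <cstring>

      // tmp is 16-byte aligned, but src may be only byte-aligned. If the
      // optimizer rewrites the second memcpy to read straight from src,
      // it must describe the source with src's alignment; reusing tmp's
      // 16-byte alignment would assert something untrue about src.
      void forward(char *dst, const char *src, std::size_t n) {
        alignas(16) char tmp[64];
        if (n > sizeof(tmp)) n = sizeof(tmp);
        std::memcpy(tmp, src, n);  // first copy: src -> tmp
        std::memcpy(dst, tmp, n);  // may be rewritten as memcpy(dst, src, n)
      }
      ```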