  1. Oct 13, 2012
    • X86: Promote i8 cmov when both operands are coming from truncates of the same width. · d6b9362f
      Benjamin Kramer authored
      X86 doesn't have an i8 cmov, so isel would emit a branch. Emitting branches at
      this level is often not a good idea because it's too late for many optimizations
      to kick in. This solution doesn't add any extensions (truncs are free) and tries
      to avoid introducing partial-register stalls by filtering out direct CopyFromReg
      operands.
      
      I'm seeing a ~10% speedup on reading a random .png file with libpng15 via
      graphicsmagick on x86_64/westmere, but YMMV depending on the microarchitecture.
      
      llvm-svn: 165868
  2. Oct 11, 2012
  3. Oct 10, 2012
    • Patch by Shuxin Yang <shuxin.llvm@gmail.com>. · 17418964
      Nadav Rotem authored
      Original message:
      
      The attached is the fix to radar://11663049. The optimization can be outlined by the following rules:
      
         (select (x != c), e, c) -> (select (x != c), e, x)
         (select (x == c), c, e) -> (select (x == c), x, e)
      where <c> is an integer constant.
      
      The reason for this change is that, on x86, a conditional move from a constant
      needs two instructions, whereas a conditional move from a register needs only one.
      
        While LowerSELECT() seems to be the most convenient place for this optimization, it turns out to be a bad place. By replacing the constant <c> with a symbolic value, the transform obscures some instruction-combining opportunities which would otherwise be very easy to spot. For that reason, I have to postpone the change to the last instruction-combining phase.
      
        The change passes the tests of "make check-all -C <build-root>/test" and "make -C project/test-suite/SingleSource".
      
      llvm-svn: 165661
    • Add support for FP_ROUND from v2f64 to v2f32 · e999b865
      Michael Liao authored
      - Due to the current matching-vector-element constraint in ISD::FP_ROUND,
        rounding from v2f64 to v4f32 (v2f32 widened to v4f32 by legalization) is
        scalarized. Add a customized v2f32 widening to convert it into a
        target-specific X86ISD::VFPROUND to work around this constraint.
      
      llvm-svn: 165631
    • Add alternative support for FP_EXTEND from v2f32 to v2f64 · effae0c8
      Michael Liao authored
      - Due to the current matching-vector-element constraint in ISD::FP_EXTEND,
        extending from v2f32 to v2f64 is scalarized. Add a customized v2f32 widening
        to convert it into a target-specific X86ISD::VFPEXT to work around this
        constraint. This patch also reverts a previous attempt to fix this issue by
        recovering the scalarized ISD::FP_EXTEND pattern, and thus significantly
        reduces the overhead of supporting non-power-of-2 vector FP extends.
      
      llvm-svn: 165625
  4. Oct 09, 2012
  5. Oct 08, 2012
  6. Oct 04, 2012
  7. Sep 30, 2012
  8. Sep 26, 2012
  9. Sep 25, 2012
  10. Sep 21, 2012
  11. Sep 20, 2012
    • Re-work X86 code generation of atomic ops with spin-loop · 3237662b
      Michael Liao authored
      - Rewrite/merge pseudo-atomic instruction emitters to address the
        following issues:
        * Reduce one unnecessary load in spin-loop
      
            Previously, the spin-loop looked like:
      
              thisMBB:
              newMBB:
                ld  t1 = [bitinstr.addr]
                op  t2 = t1, [bitinstr.val]
                not t3 = t2  (if Invert)
                mov EAX = t1
                lcs dest = [bitinstr.addr], t3  [EAX is implicit]
                bz  newMBB
                fallthrough -->nextMBB
      
          the 'ld' at the beginning of newMBB should be lifted out of the loop,
          as lcs (CMPXCHG on x86) will load the current memory value into
          EAX. The loop is refined as:
      
              thisMBB:
                EAX = LOAD [MI.addr]
              mainMBB:
                t1 = OP [MI.val], EAX
                LCMPXCHG [MI.addr], t1, [EAX is implicitly used & defined]
                JNE mainMBB
              sinkMBB:
      
        * Remove immopc as, so far, all pseudo-atomic instructions have an
          all-register form only; there is no immediate operand.
      
        * Remove unnecessary attributes/modifiers in the pseudo-atomic
          instruction .td definitions.
      
        * Fix issues in PR13458
      
      - Add comprehensive tests on atomic ops on various data types.
        NOTE: Some of them are turned off due to missing functionality.
      
      - Revise tests due to the new spin-loop generated.
      
      llvm-svn: 164281
  12. Sep 15, 2012
  13. Sep 13, 2012
  14. Sep 12, 2012
    • Fix PR11985 · abb87d48
      Michael Liao authored
          
      - BlockAddress has no support for the BA + offset form, and there is no way to
        propagate that offset into a machine operand;
      - Add BA + offset support and a new interface 'getTargetBlockAddress' to
        simplify target block address formation;
      - All targets are modified to use the new interface, and the X86 backend is
        enhanced to support BA + offset addressing.
      
      llvm-svn: 163743
    • Indentation fixes. No functional change. · ad495964
      Craig Topper authored
      llvm-svn: 163682
  15. Sep 11, 2012
  16. Sep 10, 2012
  17. Sep 08, 2012
  18. Sep 06, 2012