  1. Dec 17, 2012
  2. Dec 11, 2012
  3. Dec 03, 2012
    • Chandler Carruth's avatar
      Use the new script to sort the includes of every file under lib. · ed0881b2
      Chandler Carruth authored
      Sooooo many of these had incorrect or strange main module includes.
      I have manually inspected all of these, and fixed the main module
      include to be the nearest plausible thing I could find. If you own or
      care about any of these source files, I encourage you to take some time
      and check that these edits were sensible. I can't have broken anything
      (I strictly added headers, and reordered them, never removed), but they
      may not be the headers you'd really like to identify as containing the
      API being implemented.
      
      Many forward declarations and missing includes were added to header
      files to allow them to parse cleanly when included first. The main
      module rule does in fact have its merits. =]
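      
      A minimal sketch of the resulting layout, with illustrative headers rather
      than the actual contents of any file touched by the commit: the main module
      include comes first so that gaps in it surface immediately, followed by the
      sorted LLVM headers and then system headers.
      
        #include "llvm/CodeGen/Passes.h"   // main module include: the API this
                                           // file implements (assumed example)
        #include "llvm/ADT/Statistic.h"    // remaining LLVM headers, sorted
        #include "llvm/Support/Debug.h"
        #include <utility>                 // system headers last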
      
      llvm-svn: 169131
      ed0881b2
  4. Oct 15, 2012
  5. Aug 17, 2012
    • Jakob Stoklund Olesen's avatar
      Use standard pattern for iterate+erase. · 714f595c
      Jakob Stoklund Olesen authored
      Increment the MBB iterator at the top of the loop to properly handle the
      current (and previous) instructions getting erased.
      
      This fixes PR13625.
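      
      A minimal sketch of the pattern in generic C++ (std::list here, not the
      MachineBasicBlock iterator types from the commit): the iterator is advanced
      at the top of the loop, so erasing the current element never invalidates
      the loop's position.
      
        #include <list>
        
        void eraseNegatives(std::list<int> &L) {
          for (std::list<int>::iterator I = L.begin(), E = L.end(); I != E;) {
            std::list<int>::iterator Cur = I++; // advance before the body runs
            if (*Cur < 0)
              L.erase(Cur);                     // erasing Cur leaves I valid
          }
        }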
      
      llvm-svn: 162099
      714f595c
    • Jakob Stoklund Olesen's avatar
      Add an MCID::Select flag and TII hooks for optimizing selects. · 2382d320
      Jakob Stoklund Olesen authored
      Select instructions pick one of two virtual registers based on a
      condition, like x86 cmov. On targets like ARM that support predication,
      selects can sometimes be eliminated by predicating the instruction
      defining one of the operands.
      
      Teach PeepholeOptimizer to recognize select instructions, and ask the
      target to optimize them.
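      
      A rough source-level illustration (an assumed example, not code from the
      commit): the ternary below typically lowers to a select/cmov, and on a
      predicated target the add defining the "true" operand can instead be
      executed conditionally, making the select unnecessary.
      
        int pick(int x, int b, bool cond) {
          int a = x + 1;       // single instruction defining one select operand
          return cond ? a : b; // select; predication folds it into the add
        }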
      
      llvm-svn: 162059
      2382d320
  6. Aug 02, 2012
  7. Jul 29, 2012
  8. Jul 28, 2012
  9. Jun 29, 2012
  10. Jun 19, 2012
  11. Jun 07, 2012
    • Manman Ren's avatar
      Revert r157755. · 9c964181
      Manman Ren authored
      The reverted commit was intended to fix rdar://11540023 and was implemented
      as part of the peephole optimization. We can instead implement this in the
      SelectionDAG lowering phase.
      
      llvm-svn: 158122
      9c964181
  12. May 31, 2012
    • Manman Ren's avatar
      X86: replace SUB with CMP if possible · 9bccb64e
      Manman Ren authored
      This patch will optimize the following
              movq    %rdi, %rax
              subq    %rsi, %rax
              cmovsq  %rsi, %rdi
              movq    %rdi, %rax
      to
              cmpq    %rsi, %rdi
              cmovsq  %rsi, %rdi
              movq    %rdi, %rax
      
      Perform this optimization if the actual result of SUB is not used.
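      
      A hedged sketch of source code that can produce the pattern above (an
      assumed shape, not taken from the radar): the difference is computed only
      for its sign, so its value is dead and a CMP setting the same flags
      suffices.
      
        long pickLarger(long a, long b) {
          long d = a - b;       // result of the subtraction is never used...
          return d < 0 ? b : a; // ...only its sign feeds the cmov
        }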
      
      rdar: 11540023
      llvm-svn: 157755
      9bccb64e
  13. May 20, 2012
  14. May 11, 2012
    • Manman Ren's avatar
      ARM: peephole optimization to remove cmp instruction · dc8ad005
      Manman Ren authored
      This patch will optimize the following cases:
        sub r1, r3 | sub r1, imm
        cmp r3, r1 or cmp r1, r3 | cmp r1, imm
        bge L1
      
      TO
        subs r1, r3
        bge  L1 or ble L1
      
      If the branch instruction can use the flags from the "sub", then we can
      replace the "sub" with "subs" and eliminate the "cmp" instruction.
      
      rdar: 10734411
      llvm-svn: 156599
      dc8ad005
  15. May 10, 2012
  16. May 02, 2012
  17. Feb 25, 2012
  18. Feb 08, 2012
    • Andrew Trick's avatar
      Codegen pass definition cleanup. No functionality. · 1fa5bcbe
      Andrew Trick authored
      Moving toward a uniform style of pass definition to allow easier target configuration.
      Globally declare Pass ID.
      Globally declare pass initializer.
      Use INITIALIZE_PASS consistently.
      Add a call to the initializer from CodeGen.cpp.
      Remove redundant "createPass" functions and "getPassName" methods.
      
      While cleaning up declarations, I also cleaned up comments (sorry for the
      large diff).
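      
      A hedged fragment showing the uniform shape described above; the names and
      the registration strings are illustrative, not copied from the diff.
      
        char PeepholeOptimizer::ID = 0;      // globally declared pass ID
        
        INITIALIZE_PASS(PeepholeOptimizer, "peephole-opt",
                        "Peephole Optimizations", false, false)
        
        // CodeGen.cpp then calls the generated initializer, e.g.
        //   initializePeepholeOptimizerPass(Registry);
        // which makes per-pass createPass wrappers and getPassName overrides
        // redundant.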
      
      llvm-svn: 150100
      1fa5bcbe
    • Andrew Trick's avatar
      whitespace · 9e761997
      Andrew Trick authored
      llvm-svn: 150094
      9e761997
  19. Dec 07, 2011
    • Evan Cheng's avatar
      Add bundle aware API for querying instruction properties and switch the code · 7f8e563a
      Evan Cheng authored
      generator to it. For non-bundle instructions, these behave exactly the same
      as the MC layer API.
      
      For properties like mayLoad / mayStore, look into the bundle and return true
      if any of the bundled instructions has the property.
      For properties like isPredicable, only return true if *all* of the bundled
      instructions have the property.
      For properties like canFoldAsLoad and isCompare, conservatively return false
      for bundles.
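      
      A generic sketch of those three query policies using hypothetical standalone
      types, not the actual MachineInstr bundle API:
      
        #include <algorithm>
        #include <vector>
        
        struct Instr { bool MayLoad; bool Predicable; };
        
        // "May"-style property: true if any bundled instruction has it.
        bool bundleMayLoad(const std::vector<Instr> &Bundle) {
          return std::any_of(Bundle.begin(), Bundle.end(),
                             [](const Instr &I) { return I.MayLoad; });
        }
        
        // "Is"-style requirement: true only if all bundled instructions have it.
        bool bundleIsPredicable(const std::vector<Instr> &Bundle) {
          return std::all_of(Bundle.begin(), Bundle.end(),
                             [](const Instr &I) { return I.Predicable; });
        }
        
        // canFoldAsLoad / isCompare style: conservatively false for bundles.
        bool bundleCanFoldAsLoad(const std::vector<Instr> &) { return false; }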
      
      llvm-svn: 146026
      7f8e563a
  20. Oct 13, 2011
  21. Jul 26, 2011
  22. Jun 28, 2011
  23. Mar 15, 2011
    • Evan Cheng's avatar
      Add a peephole optimization to optimize pairs of bitcasts. e.g. · e4b8ac9f
      Evan Cheng authored
      v2 = bitcast v1
      ...
      v3 = bitcast v2
      ...
         = v3
      =>
      v2 = bitcast v1
      ...
         = v1
      if v1 and v3 are in the same register class.
      
      Bitcasts between i32 and fp (and other such pairs) are often not no-ops since
      the values live in different register classes. These bitcast instructions are
      often left behind because they are in different basic blocks and cannot be
      eliminated by DAG combine.
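      
      A source-level analogy (an assumed example, not from the commit): when the
      bits round-trip back into the original register class, the pair of casts
      cancels and the original value can be used directly.
      
        #include <cstdint>
        #include <cstring>
        
        float roundTrip(float V1) {
          uint32_t V2;                       // v2 = bitcast v1 (FP -> integer)
          std::memcpy(&V2, &V1, sizeof(V2));
          float V3;                          // v3 = bitcast v2 (back to FP)
          std::memcpy(&V3, &V2, sizeof(V3));
          return V3;                         // V3 and V1 share a class, so uses
        }                                    // of V3 can be replaced with V1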
      
      rdar://9104514
      
      llvm-svn: 127668
      e4b8ac9f
  24. Feb 15, 2011
  25. Feb 14, 2011
  26. Jan 10, 2011
  27. Jan 08, 2011
    • Evan Cheng's avatar
      Do not model all INLINEASM instructions as having unmodelled side effects. · 6eb516db
      Evan Cheng authored
      Instead, encode the LLVM IR-level property "HasSideEffects" in an operand
      (shared with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to
      check the operand when the instruction is an INLINEASM.
      
      This allows memory instructions to be moved around INLINEASM instructions.
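      
      A generic sketch of packing the two booleans into a single immediate
      operand; the bit assignments and the helper below are illustrative, not
      necessarily LLVM's actual encoding.
      
        #include <cstdint>
        
        enum : uint64_t {
          HasSideEffectsBit = 1u << 0, // illustrative bit positions
          IsAlignStackBit   = 1u << 1,
        };
        
        // Checked only when the instruction is an INLINEASM.
        bool hasUnmodeledSideEffects(uint64_t ExtraInfoOperand) {
          return (ExtraInfoOperand & HasSideEffectsBit) != 0;
        }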
      
      llvm-svn: 123044
      6eb516db
  28. Jan 07, 2011
  29. Nov 17, 2010
    • Evan Cheng's avatar
      Remove ARM isel hacks that fold large immediates into a pair of add, sub, and, · 7f8ab6ee
      Evan Cheng authored
      and xor. The 32-bit move immediates can be hoisted out of loops by machine
      LICM, but the isel hacks were preventing that.
      
      Instead, let the peephole optimization pass recognize registers that are
      defined by immediates, and let the ARM target hook fold the immediates in.
      
      Other changes include: 1) Do not fold and / xor into cmp to isel TST / TEQ
      instructions if there are multiple uses. This happens when the 'and' is live
      out; machine sinking would have sunk the computation, and that ends up
      pessimizing the code. The peephole pass would recognize situations where the
      'and' can be toggled to define CPSR and eliminate the comparison anyway.
      
      2) Move the peephole pass to after machine LICM, sinking, and CSE to avoid
      blocking important optimizations.
      
      rdar://8663787, rdar://8241368
      
      llvm-svn: 119548
      7f8ab6ee
  30. Nov 15, 2010
  31. Nov 01, 2010
  32. Oct 31, 2010
    • Eric Christopher's avatar
      Revert r117876 for now; it's causing more testsuite failures. · ef5a1c3e
      Eric Christopher authored
      llvm-svn: 117879
      ef5a1c3e
    • Bill Wendling's avatar
      Disable the peephole optimizer until 186.crafty on armv6 is fixed. This is what · 0392f1b4
      Bill Wendling authored
      appears to be happening:
      
      Without the peephole optimizer:
        (1)   sub     r6, r6, #32
              orr     r12, r12, lr, lsl r9
              orr     r2, r2, r3, lsl r10
        (x)   cmp     r6, #0
              ldr     r9, LCPI2_10
              ldr     r10, LCPI2_11
        (2)   sub     r8, r8, #32
        (a)   movge   r12, lr, lsr r6
        (y)   cmp     r8, #0
      LPC2_10:
              ldr     lr, [pc, r10]
        (b)   movge   r2, r3, lsr r8
      
      With the peephole optimizer:
              ldr     r9, LCPI2_10
              ldr     r10, LCPI2_11
        (1*)  subs    r6, r6, #32
        (2*)  subs    r8, r8, #32
        (a*)  movge   r12, lr, lsr r6
        (b*)  movge   r2, r3, lsr r8
      
      (1) is used by (x) for the conditional move at (a). (2) is used by (y) for the
      conditional move at (b). After the peephole optimizer, the flags resulting
      from (1*) are ignored and only the flags from (2*) are considered for both
      conditional moves.
      
      llvm-svn: 117876
      0392f1b4