Aug 30, 2012
Aug 29, 2012
    • Preserve branch profile metadata during switch formation. · 3051aa1c
      Andrew Trick authored
      Patch by Michael Ilseman!
      This fixes SimplifyCFGOpt::FoldValueComparisonIntoPredecessors to preserve metadata when folding conditional branches into switches.
      
      void foo(int x) {
        if (x == 0)
          bar(1);
        else if (__builtin_expect(x == 10, 1))
          bar(2);
        else if (x == 20)
          bar(3);
      }
      
      CFG:
      
      B0
      |  \
      |   X0
      B10
      |  \
      |   X10
      B20
      |  \
      E   X20
      
      Merge B0-B10:
      w(B0-X0) = w(B0-X0)*sum-weights(B10) = w(B0-X0) * (w(B10-X10) + w(B10-B20))
      w(B0-X10) = w(B0-B10) * w(B10-X10)
      w(B0-B20) = w(B0-B10) * w(B10-B20)
      
      B0 __
      | \  \
      | X10 X0
      B20
      |  \
      E  X20
      
      Merge B0-B20:
      w(B0-X0) = w(B0-X0) * sum-weights(B20) = w(B0-X0) * (w(B20-E) + w(B20-X20))
      w(B0-X10) = w(B0-X10) * sum-weights(B20) = ...
      w(B0-X20) = w(B0-B20) * w(B20-X20)
      w(B0-E) = w(B0-B20) * w(B20-E)
      
      llvm-svn: 162868
    • whitespace · f3cf1932
      Andrew Trick authored
      llvm-svn: 162867
    • Rename hasVolatileMemoryRef() to hasOrderedMemoryRef(). · cea3e774
      Jakob Stoklund Olesen authored
      Ordered memory operations are more constrained than volatile loads and
      stores because they must be ordered with respect to all other memory
      operations.
      
      llvm-svn: 162861
    • Add MachineMemOperand::isUnordered(). · 23793141
      Jakob Stoklund Olesen authored
      This means the same as LoadInst/StoreInst::isUnordered(), and implies
      !isVolatile().
      
      Atomic loads and stores are also ordered, and this is the right method
      to check if it is safe to reorder memory operations. Ordered atomics
      can't be reordered wrt normal loads and stores, which is a stronger
      constraint than volatile.
      
      llvm-svn: 162859
    • Don't move normal loads across volatile/atomic loads. · 813a109f
      Jakob Stoklund Olesen authored
      It is technically allowed to move a normal load across a volatile load,
      but probably not a good idea.
      
      It is not allowed to move a load across an atomic load with
      Ordering > Monotonic, and we model those with MOVolatile as well.
      
      I recently removed the mayStore flag from atomic load instructions, so
      they don't need a pseudo-opcode. This patch makes up for the difference.
      
      llvm-svn: 162857
    • fix C++ comment in C header · 84ee8bf3
      Michael Liao authored
      llvm-svn: 162856
    • Use the full path to output the .gcda file. · 11e61b95
      Bill Wendling authored
      This lets the user run the program from a different directory and still have the
      .gcda files show up in the correct place.
      <rdar://problem/12179524>
      
      llvm-svn: 162855
    • Reserve space for the mandatory traceback fields on PPC64. · 1859d265
      Hal Finkel authored
      We need to reserve space for the mandatory traceback fields,
      though leaving them as zero is appropriate for now.
      
      Although the ABI calls for these fields to be filled in fully, no
      compiler on Linux currently does this, and GDB does not read these
      fields.  GDB uses the first word of zeroes during exception handling to
      find the end of the function and the size field, allowing it to compute
      the beginning of the function.  DWARF information is used for everything
      else.  We need the extra 8 bytes of pad so the size field is found in
      the right place.
      
      As a comparison, GCC fills in a few of the fields -- language, number
      of saved registers -- but ignores the rest.  IBM's proprietary OSes do
      make use of the full traceback table facility.
      
      Patch by Bill Schmidt.
      
      llvm-svn: 162854
    • e8aee6b8
      Bill Wendling authored
    • Verify the consistency of inline asm operands. · 7a837b9a
      Jakob Stoklund Olesen authored
      The operands on an INLINEASM machine instruction are divided into groups
      headed by immediate flag operands. Verify this structure.
      
      Extract verifyTiedOperands(), and only call it for non-inlineasm
      instructions.
      
      llvm-svn: 162849