  1. Aug 09, 2012
  2. Aug 08, 2012
  3. Aug 07, 2012
    • Bill Wendling's avatar
      For non-Darwin platforms, we want to generate stack protectors only for · 61396b81
      Bill Wendling authored
      character arrays. This is in line with what GCC does.
      <rdar://problem/10529227>
      
      llvm-svn: 161446
      61396b81
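      A minimal illustration of the heuristic, assuming -fstack-protector on a
      non-Darwin target (function names are made up; this is not code from the
      patch):

        #include <cstring>

        // The first frame contains a character array and is expected to get a
        // stack protector; the second contains only an int array and is
        // expected to be left alone, matching GCC's behavior.
        char first_char(const char *S) {
          char Buf[32];                    // character array => protector
          std::strncpy(Buf, S, sizeof(Buf) - 1);
          Buf[sizeof(Buf) - 1] = '\0';
          return Buf[0];
        }

        int first_int(const int *P) {
          int Buf[32];                     // no character array => no protector
          std::memcpy(Buf, P, sizeof(Buf));
          return Buf[0];
        }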
    • Jakob Stoklund Olesen's avatar
      Add a new kind of MachineOperand: MO_TargetIndex. · 84689b0d
      Jakob Stoklund Olesen authored
      A target index operand looks a lot like a constant pool reference, but
      it is completely target-defined. It contains the 8-bit TargetFlags, a
      32-bit index, and a 64-bit offset. It is preserved by all code generator
      passes.
      
      TargetIndex operands can be used to carry target-specific information in
      cases where immediate operands won't suffice.
      
      llvm-svn: 161441
      84689b0d
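      A minimal sketch of how a backend might attach such an operand, assuming the
      MachineOperand::CreateTargetIndex factory described above (the index, offset,
      and flag values are illustrative only):

        #include "llvm/CodeGen/MachineInstrBuilder.h"
        #include "llvm/CodeGen/MachineOperand.h"

        using namespace llvm;

        // Hypothetical backend helper: append a target-defined index operand to
        // an instruction under construction.  Only the owning target knows how
        // to interpret the index, offset, and flags.
        static void addMyTargetIndex(MachineInstrBuilder &MIB) {
          MIB.addOperand(MachineOperand::CreateTargetIndex(/*Idx=*/0,
                                                           /*Offset=*/16,
                                                           /*TargetFlags=*/0));
        }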
    • Andrew Kaylor's avatar
      Enable lazy compilation in MCJIT · 1a568c3a
      Andrew Kaylor authored
      llvm-svn: 161438
      1a568c3a
    • Jakob Stoklund Olesen's avatar
      Fix a couple of typos. · 296448b2
      Jakob Stoklund Olesen authored
      llvm-svn: 161437
      296448b2
    • Jakob Stoklund Olesen's avatar
      Add trace accessor methods, implement primitive if-conversion heuristic. · 75d9d515
      Jakob Stoklund Olesen authored
      Compare the critical paths of the two traces through an if-conversion
      candidate. If the difference is larger than the branch prediction
      penalty, reject the if-conversion. It would never pay.
      
      llvm-svn: 161433
      75d9d515
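      A back-of-envelope version of that check (made-up names, not the actual
      trace-metrics interface): if the two critical paths differ by more than
      the misprediction penalty, the longer path would always be executed after
      if-conversion, so removing the branch could never pay for itself.

        // Illustrative heuristic only.
        static bool ifConversionMayPay(unsigned TrueCriticalPath,
                                       unsigned FalseCriticalPath,
                                       unsigned MispredictPenalty) {
          unsigned Diff = TrueCriticalPath > FalseCriticalPath
                              ? TrueCriticalPath - FalseCriticalPath
                              : FalseCriticalPath - TrueCriticalPath;
          return Diff <= MispredictPenalty;
        }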
    • Jim Grosbach's avatar
      Tidy up a bit. · af9aec0c
      Jim Grosbach authored
      llvm-svn: 161430
      af9aec0c
    • Rafael Espindola's avatar
      The dominance computation already has logic for computing if an edge dominates · 59564079
      Rafael Espindola authored
      a use or a BB, but it is written inline in the handling of the invoke instruction.
      
      This patch refactors it so that it can be used in other cases. For example, in
      
      define i32 @f(i32 %x) {
      bb0:
        %cmp = icmp eq i32 %x, 0
        br i1 %cmp, label %bb2, label %bb1
      bb1:
        br label %bb2
      bb2:
        %cond = phi i32 [ %x, %bb0 ], [ 0, %bb1 ]
        %foo = add i32 %cond, %x
        ret i32 %foo
      }
      
      GVN should be able to replace %x with 0 in any use that is dominated by the
      true edge out of bb0. In the above example the only such use is the one in
      the phi.
      
      llvm-svn: 161429
      59564079
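      A sketch, using present-day API names, of the kind of query the refactoring
      enables (the helper itself is hypothetical): ask whether a specific CFG edge
      dominates a use and, if so, rewrite that use.

        #include "llvm/ADT/SmallVector.h"
        #include "llvm/IR/BasicBlock.h"
        #include "llvm/IR/Dominators.h"
        #include "llvm/IR/Value.h"

        using namespace llvm;

        // Replace every use of From that is dominated by the edge Start->End.
        // In the example above, Start = bb0, End = bb2 (the true edge),
        // From = %x, To = 0, and the phi's incoming use of %x is the one that
        // gets rewritten.
        static void replaceUsesDominatedByEdge(Value &From, Value &To,
                                               BasicBlock *Start, BasicBlock *End,
                                               const DominatorTree &DT) {
          BasicBlockEdge Edge(Start, End);
          SmallVector<Use *, 8> ToRewrite;
          for (Use &U : From.uses())   // collect first; rewriting mutates the list
            if (DT.dominates(Edge, U))
              ToRewrite.push_back(&U);
          for (Use *U : ToRewrite)
            U->set(&To);
        }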
    • Hal Finkel's avatar
      Add a comment about mftb vs. mfspr on PPC. · 895a5f5d
      Hal Finkel authored
      Thanks to Alex Rosenberg for the suggestion.
      
      llvm-svn: 161428
      895a5f5d
    • Alexey Samsonov's avatar
      Fix the representation of debug line table in DebugInfo LLVM library, · 947228c4
      Alexey Samsonov authored
      and "instruction address -> file/line" lookup.
      
      Instead of a plain collection of rows, the debug line table for a compilation unit is
      now treated as a set of row ranges describing sequences (series of contiguous machine
      instructions). The sequences are not always listed in order of increasing
      address, so the previously used std::lower_bound() sometimes produced wrong results.
      Now the instruction address lookup consists of two stages: finding the correct
      sequence, and searching for the address within that sequence's range of rows.
      
      llvm-svn: 161414
      947228c4
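      Schematically, the two-stage lookup looks like this (illustrative types, not
      the actual DebugInfo classes). Because sequences are not necessarily stored
      in address order, a single lower_bound over all rows can land in the wrong
      sequence; searching within one sequence is safe because its rows are sorted.

        #include <algorithm>
        #include <cstdint>
        #include <iterator>
        #include <vector>

        struct Row { uint64_t Address; unsigned File; unsigned Line; };

        struct Sequence {
          uint64_t LowPC, HighPC;   // contiguous address range of this sequence
          std::vector<Row> Rows;    // sorted by Address within the sequence
        };

        const Row *lookupAddress(const std::vector<Sequence> &Seqs, uint64_t Addr) {
          for (const Sequence &Seq : Seqs) {        // stage 1: find the sequence
            if (Addr < Seq.LowPC || Addr >= Seq.HighPC)
              continue;
            auto It = std::upper_bound(             // stage 2: search its rows
                Seq.Rows.begin(), Seq.Rows.end(), Addr,
                [](uint64_t A, const Row &R) { return A < R.Address; });
            return It == Seq.Rows.begin() ? nullptr : &*std::prev(It);
          }
          return nullptr;
        }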
    • Benjamin Kramer's avatar
      PR13095: Give an inline cost bonus to functions using byval arguments. · c99d0e91
      Benjamin Kramer authored
      We give a bonus for every argument because the argument setup is not needed
      anymore when the function is inlined. With this patch we interpret byval
      arguments as a compact representation of many arguments. The byval argument
      setup is implemented in the backend as an inline memcpy, so to model the
      cost as accurately as possible we take the number of pointer-sized elements
      in the byval argument and give a bonus of 2 instructions for every one of
      those. The bonus is capped at 8 elements, which is the number of stores
      at which the x86 backend switches from an expanded inline memcpy to a real
      memcpy. It would be better to use the real memcpy threshold from the backend,
      but it's not available via TargetData.
      
      This change brings the performance of c-ray in line with gcc 4.7. The included
      test case tries to reproduce the c-ray problem to catch regressions for this
      benchmark early; its performance is dominated by the inline decision of a
      specific call.
      
      This only has a small impact on most code, more on x86 and arm than on x86_64
      due to the way the ABI works. When building LLVM for x86 it gives a small
      inline cost boost to virtually any function using StringRef or STL allocators,
      but only a 0.01% increase in overall binary size. The size of gcc compiled by
      clang actually shrank by a couple of bytes with this patch applied, but not
      significantly.
      
      llvm-svn: 161413
      c99d0e91
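      A back-of-envelope version of the bonus calculation (constant names are made
      up; the real logic lives in the inline cost analysis):

        #include <algorithm>
        #include <cstdint>

        // Two instructions of bonus per pointer-sized element of the byval
        // argument, capped at eight elements -- the point where the x86 backend
        // stops expanding the copy inline and emits a real memcpy call.
        unsigned byvalArgumentBonus(uint64_t ByValSizeInBytes,
                                    uint64_t PointerSizeInBytes) {
          const uint64_t InstrBonusPerElement = 2;
          const uint64_t MaxElements = 8;
          uint64_t NumElements =
              (ByValSizeInBytes + PointerSizeInBytes - 1) / PointerSizeInBytes;
          return static_cast<unsigned>(InstrBonusPerElement *
                                       std::min(NumElements, MaxElements));
        }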
    • Chandler Carruth's avatar
      Fix PR13412, a nasty miscompile due to the interleaved · 2f6cf488
      Chandler Carruth authored
      instsimplify+inline strategy.
      
      The crux of the problem is that instsimplify was reasonably relying on
      an invariant that is true within any single function, but is no longer
      true mid-inline the way we use it. This invariant is that an argument
      pointer != a local (alloca) pointer.
      
      The fix is really lightweight though, and allows instsimplify to be
      resilient to these situations: when checking the relationships to
      function arguments, ensure that the arguments come from the same
      function. If they come from different functions, then none of these
      assumptions hold. All credit to Benjamin Kramer for coming up with this
      clever solution to the problem.
      
      llvm-svn: 161410
      2f6cf488
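      The shape of the guard, roughly (an illustrative helper using present-day
      header names, not the actual InstructionSimplify change):

        #include "llvm/IR/Argument.h"
        #include "llvm/IR/Instructions.h"

        using namespace llvm;

        // It is only safe to assume "argument pointer != alloca pointer" when
        // both values belong to the same function.  Mid-inline, an Argument of
        // the callee can be compared against an alloca of the caller, and then
        // no such assumption holds.
        static bool mayAssumeDistinct(const Argument *Arg, const AllocaInst *AI) {
          return Arg->getParent() == AI->getParent()->getParent();
        }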
    • Chandler Carruth's avatar
      Add a much more conservative strategy for aligning branch targets. · 881d0a79
      Chandler Carruth authored
      Previously, MBP essentially aligned every branch target it could. This
      bloats code quite a bit, especially non-looping code which has no real
      reason to prefer aligned branch targets so heavily.
      
      As Andy said in review, it's still a bit odd to do this without a real
      cost model, but this at least has much more plausible heuristics.
      
      Fixes PR13265.
      
      llvm-svn: 161409
      881d0a79
    • Manman Ren's avatar
      MachineCSE: Update the heuristics for isProfitableToCSE. · cb36b8c2
      Manman Ren authored
      If the result of the common subexpression is already used at every use of the candidate
      expression, CSE will not increase the live range of the common subexpression.
      
      rdar://11393714 and rdar://11819721
      
      llvm-svn: 161396
      cb36b8c2
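      An abstract restatement of that condition (not the actual MachineCSE code):
      if the set of places where the candidate's result is used is contained in
      the set of places where the existing expression's result is already used,
      reusing the existing result cannot extend its live range.

        #include <algorithm>
        #include <set>

        // Illustrative only: "places" could be blocks or instruction indices.
        bool cseCannotExtendLiveRange(const std::set<unsigned> &ExistingUses,
                                      const std::set<unsigned> &CandidateUses) {
          return std::includes(ExistingUses.begin(), ExistingUses.end(),
                               CandidateUses.begin(), CandidateUses.end());
        }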
    • Bill Wendling's avatar
      Revert r161371. Removing the 'const' before Type is a "good thing". · 0acd0c0a
      Bill Wendling authored
      --- Reverse-merging r161371 into '.':
      U    include/llvm/Target/TargetData.h
      U    lib/Target/TargetData.cpp
      
      llvm-svn: 161394
      0acd0c0a