  1. May 07, 2020
    • SplitIndirectBrCriticalEdges: Fix Branch Probability update · b921543c
      Yevgeny Rouban authored
      When splitting critical edges for indirect branches, the
      SplitIndirectBrCriticalEdges() function may break branch
      probabilities if the target basic block happens to have an
      unset probability for any of its successors. That is because
      in such cases getEdgeProbability(Target) returns a probability
      of 1/NumOfSuccessors, and it is called after Target has been
      split (so Target has a single successor). As a result, the
      corresponding successor of the split block gets probability
      100%, whereas 1/NumOfSuccessors is expected (or the probability
      is better left unset).
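      
      For illustration, here is a minimal hypothetical IR sketch of the
      situation described above (the function and value names are
      invented, not taken from the patch or its tests): the conditional
      branch in %target carries no !prof metadata, so its successor
      probabilities are unset and getEdgeProbability() falls back to
      1/NumOfSuccessors, i.e. 1/2 here.
      
          define void @example(i8* %addr, i1 %guard, i1 %cond) {
          entry:
            br i1 %guard, label %dispatch, label %target
      
          dispatch:                                         ; preds = %entry
            ; %addr is assumed to hold a blockaddress of %target or %other.
            ; %dispatch -> %target is a critical edge coming from an
            ; indirectbr, so SplitIndirectBrCriticalEdges() splits %target.
            indirectbr i8* %addr, [label %target, label %other]
      
          target:                                           ; preds = %entry, %dispatch
            ; No !prof here: both outgoing probabilities are unset, so each
            ; edge should keep 1/2. Querying getEdgeProbability() only after
            ; the split, when the block has a single successor, instead
            ; yields 1/1, i.e. 100%, for the corresponding successor.
            br i1 %cond, label %left, label %right
      
          other:                                            ; preds = %dispatch
            ret void
      
          left:                                             ; preds = %target
            ret void
      
          right:                                            ; preds = %target
            ret void
          }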
      
      Reviewers: yamauchi
      Differential Revision: https://reviews.llvm.org/D78806
  2. May 03, 2020
    • [ICP] Handling must tail calls in indirect call promotion · 911e06f5
      Hongtao Yu authored
      Per the IR convention, a musttail call must immediately precede a ret, with at most an optional bitcast in between. This was violated by the indirect call promotion optimization, which could result in IR like:
      
          ; <label>:2192:
            br i1 %2198, label %2199, label %2201, !dbg !226012, !prof !229483
      
          ; <label>:2199:                                   ; preds = %2192
            musttail call fastcc void @foo(i8* %2195), !dbg !226012
            br label %2202, !dbg !226012
      
          ; <label>:2201:                                   ; preds = %2192
            musttail call fastcc void %2197(i8* %2195), !dbg !226012
            br label %2202, !dbg !226012
      
          ; <label>:2202:                                   ; preds = %605, %2201, %2199
            ret void, !dbg !229485
      
      This change fixes that by emitting the return statement together with the promoted indirect call. The generated code looks like:
      
          ; <label>:2192:
            br i1 %2198, label %2199, label %2201, !dbg !226012, !prof !229483
      
          ; <label>:2199:                                   ; preds = %2192
            musttail call fastcc void @foo(i8* %2195), !dbg !226012
            ret void, !dbg !229485
      
          ; <label>:2201:                                   ; preds = %2192
            musttail call fastcc void %2197(i8* %2195), !dbg !226012
            ret void, !dbg !229485
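      
      For reference, the convention stated at the top of the message,
      in a minimal standalone form (the function names below are
      hypothetical, not taken from the patch): the musttail call
      immediately precedes a ret that returns its result, with an
      optional pointer bitcast allowed in between for pointer return
      values.
      
          define i32 @caller(i32 %x) {
          entry:
            ; The musttail call is the last real computation; its result
            ; feeds the ret directly, as the convention requires.
            %r = musttail call i32 @callee(i32 %x)
            ret i32 %r
          }
      
          declare i32 @callee(i32)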
      
      Differential Revision: https://reviews.llvm.org/D79258
  3. Apr 30, 2020
    • [LoopVersioning] Update setAliasChecks to take ArrayRef argument (NFC). · 19ab53f1
      Florian Hahn authored
      This cleanup was suggested as part of D78458.
    • [InlineFunction] Disable emission of alignment assumptions by default · b74c6d2c
      Nikita Popov authored
      In D74183 clang started emitting alignment for sret parameters
      unconditionally. This caused a 1.5% compile-time regression on
      tramp3d-v4. The reason is that we now generate many instances of IR like
      
          %ptrint = ptrtoint %class.GuardLayers* %guards_m to i64
          %maskedptr = and i64 %ptrint, 3
          %maskcond = icmp eq i64 %maskedptr, 0
          tail call void @llvm.assume(i1 %maskcond)
      
      to preserve the alignment information during inlining. Based on IR
      analysis, these assumptions also regress optimization. The attached
      phase ordering test case illustrates two issues: one is instruction
      count based optimization heuristics, which are affected by the four
      additional instructions of the assumption; the other is the blocking
      of SROA due to ptrtoint casts (PR45763).
      
      We already encountered the same problem in Rust, where we (unlike
      Clang) generally prefer to emit alignment information absolutely
      everywhere it is available. We were only able to do this after
      hardcoding -preserve-alignment-assumptions-during-inlining=false,
      because we were seeing significant optimization and compile-time
      regressions otherwise.
      
      This patch disables -preserve-alignment-assumptions-during-inlining
      by default, because we should not be punishing people for adding
      more alignment annotations.
      
      Once the assume bundle work shakes out and we can represent (and use)
      alignment assumptions using assume bundles, it should be possible to
      re-enable this with reduced overhead.
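      
      For reference, the bundle form alluded to above looks roughly
      like the sketch below; this is an illustration using the "align"
      operand bundle on llvm.assume, reusing the value name from the
      example above, not code from this patch.
      
          ; The same fact as the four-instruction sequence above
          ; (%guards_m is at least 4-byte aligned), expressed as a single
          ; assume carrying an "align" operand bundle.
          call void @llvm.assume(i1 true) [ "align"(%class.GuardLayers* %guards_m, i64 4) ]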
      
      Differential Revision: https://reviews.llvm.org/D76886
    • [NFC] Rename *ByValOrInalloca* to *PassPointeeByValue* · a90948fd
      Arthur Eubanks authored
      Summary: In preparation for preallocated.
      
      Subscribers: hiraditya, llvm-commits
      
      Tags: #llvm
      
      Differential Revision: https://reviews.llvm.org/D79152
    • [llvm][NFC] Use CallBase explicitly instead of Instruction in FunctionComparator · 3ab319b2
      Mircea Trofin authored
      Reviewers: dblaikie, craig.topper
      
      Subscribers: hiraditya, llvm-commits
      
      Tags: #llvm
      
      Differential Revision: https://reviews.llvm.org/D79098