  Sep 12, 2011
    • Remove the -compact-regions flag. · 45df7e0f
      Jakob Stoklund Olesen authored
It has been enabled by default for a while; the flag only remained to
allow performance comparisons.
      
      llvm-svn: 139501
    • Add an interface for SplitKit complement spill modes. · eecb2fb1
      Jakob Stoklund Olesen authored
      SplitKit always computes a complement live range to cover the places
      where the original live range was live, but no explicit region has been
      allocated.
      
      Currently, the complement live range is created to be as small as
      possible - it never overlaps any of the regions.  This minimizes
      register pressure, but if the complement is going to be spilled anyway,
      that is not very important.  The spiller will eliminate redundant
      spills, and hoist others by making the spill slot live range overlap
      some of the regions created by splitting.  Stack slots are cheap.
      
      This patch adds the interface to enable spill modes in SplitKit.  In
      spill mode, SplitKit will assume that the complement is going to spill,
      so it will allow it to overlap regions in order to avoid back-copies.
      By doing some of the spiller's work early, the complement live range
      becomes simpler.  In some cases, it can become much simpler because no
      extra PHI-defs are required.  This will speed up both splitting and
      spilling.
      
      This is only the interface to enable spill modes, no implementation yet.
      
      llvm-svn: 139500
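The interface described above can be pictured as a mode switch on the split editor. The sketch below is illustrative, modeled loosely on what this commit describes; the names and members are assumptions, not the exact LLVM API:

```cpp
#include <cassert>

// Hypothetical sketch of a complement spill-mode switch. In the non-default
// modes, the complement live range is assumed to spill, so it may overlap
// the split regions instead of forcing back-copies at region boundaries.
enum ComplementSpillMode {
  SM_Partition, // default: complement never overlaps the regions
  SM_Size,      // assume the complement spills; optimize for code size
  SM_Speed      // assume the complement spills; optimize for speed
};

struct SplitEditorSketch {
  ComplementSpillMode SpillMode = SM_Partition;
  void reset(ComplementSpillMode SM) { SpillMode = SM; }
  // In SM_Partition, a value leaving a region must be copied back into the
  // complement; in the spill modes, overlapping makes that copy unnecessary.
  bool needsBackCopy() const { return SpillMode == SM_Partition; }
};
```

Skipping the back-copies is what keeps the complement simple: fewer copies means fewer extra PHI-defs when the complement's live range is recomputed.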
  Aug 04, 2011
    • Enable compact region splitting by default. · 11b788d5
      Jakob Stoklund Olesen authored
      This helps generate better code in functions with high register
      pressure.
      
      The previous version of compact region splitting caused regressions
      because the regions were a bit too large. A stronger negative bias
      applied in r136832 fixed this problem.
      
      llvm-svn: 136836
    • Be more conservative when forming compact regions. · 86954520
      Jakob Stoklund Olesen authored
      Apply twice the negative bias on transparent blocks when computing the
      compact regions. This excludes loop backedges from the region when only
      one of the loop blocks uses the register.
      
      Previously, we would include the backedge in the region if the loop
      preheader and the loop latch both used the register, but the loop header
      didn't.
      
      When both the header and latch blocks use the register, we still keep it
      live on the backedge.
      
      llvm-svn: 136832
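The effect of doubling the bias can be shown with a toy model. This is not the actual spill placement code, only an illustration of the arithmetic: use-blocks pull themselves into the region with their frequency, transparent blocks push themselves out, and this commit doubles the push:

```cpp
#include <cassert>

// Toy model (illustrative, not LLVM's Hopfield-network spill placement):
// positive bias pulls a block into the compact region, negative bias pushes
// it out. Transparent live-through blocks get the doubled negative bias.
double regionBias(bool BlockUsesReg, double BlockFreq) {
  return BlockUsesReg ? BlockFreq : -2.0 * BlockFreq;
}

// With the doubled bias, a backedge through a transparent loop header stays
// out of the region unless the neighboring use-blocks outweigh it. Under the
// old single bias (-1.0 * freq), the same header would often be included.
bool backedgeInRegion(double PreheaderFreq, double LatchFreq,
                      double HeaderFreq, bool HeaderUsesReg) {
  double Total = regionBias(true, PreheaderFreq) +
                 regionBias(true, LatchFreq) +
                 regionBias(HeaderUsesReg, HeaderFreq);
  return Total > 0.0;
}
```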
    • Fix some warnings from Clang in release builds: · 77eb5a0a
      Chandler Carruth authored
      lib/CodeGen/RegAllocGreedy.cpp:1176:18: warning: unused variable 'B' [-Wunused-variable]
          if (unsigned B = Cand.getBundles(BundleCand, BestCand)) {
                       ^
      lib/CodeGen/RegAllocGreedy.cpp:1188:18: warning: unused variable 'B' [-Wunused-variable]
          if (unsigned B = Cand.getBundles(BundleCand, 0)) {
                       ^
      
      llvm-svn: 136831
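This warning pattern arises when a variable declared in an `if` condition is only consumed by debug-only code, so release builds see it as unused. A common fix is sketched below; the stub function is hypothetical, and the actual change in r136831 may differ:

```cpp
#include <cassert>

unsigned getBundlesStub() { return 3; } // stand-in for Cand.getBundles(...)

unsigned splitSketch() {
  if (unsigned B = getBundlesStub()) {
    // Casting to void keeps the flow control and the name for debug builds
    // while silencing -Wunused-variable in release builds.
    (void)B;
    return B;
  }
  return 0;
}
```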
  Jul 28, 2011
    • Reverse order of RS_Split live ranges under -compact-regions. · cad845f4
      Jakob Stoklund Olesen authored
      There are two conflicting strategies in play:
      
      - Under high register pressure, we want to assign large live ranges
        first. Smaller live ranges are easier to place afterwards.
      
      - Live range splitting is guided by interference, so splitting should be
        deferred until interference is as realistic as possible.
      
      With the recent changes to the live range stages, and with compact
      regions enabled, it is less traumatic to split a live range too early.
      If some of the split products were too big, they can often be split
      again.
      
      By reversing the RS_Split order, we get this queue order:
      
      1. Normal live ranges, large to small.
      2. RS_Split live ranges, large to small.
      
      The large-to-small order improves RAGreedy's puzzle solving skills under
      high register pressure. It may cause a bit more iterated splitting, but
      we handle that better now.
      
      With this change, -compact-regions is mostly an improvement on SPEC.
      
      llvm-svn: 136388
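The queue order above can be modeled with a simple comparator. This is a toy sketch, not RAGreedy's actual priority computation; the struct and size metric are assumptions:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Toy model of the dequeue order: normal live ranges come before RS_Split
// ranges, and within each group larger ranges come first.
struct LiveRangeSketch {
  bool IsRSSplit;
  unsigned Size; // stand-in for whatever size metric orders the queue
};

bool dequeuedBefore(const LiveRangeSketch &A, const LiveRangeSketch &B) {
  if (A.IsRSSplit != B.IsRSSplit)
    return !A.IsRSSplit; // 1. normal live ranges first
  return A.Size > B.Size; // 2. within a group: large to small
}
```

Sorting a work list with this comparator yields exactly the two-tier, large-to-small order the commit message describes.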
  Jul 27, 2011
    • Add support for multi-way live range splitting. · dab4b9a4
      Jakob Stoklund Olesen authored
      When splitting global live ranges, it is now possible to split for
      multiple destination intervals at once. Previously, we only had the main
      and stack intervals.
      
      Each edge bundle is assigned to a split candidate, and splitAroundRegion
      will insert copies between the candidate intervals and the stack
      interval as needed.
      
      The multi-way splitting is used to split around compact regions when
      enabled with -compact-regions. The best candidate register still gets
      all the bundles it wants, but everything outside the main interval is
      first split around compact regions before we create single-block
      intervals.
      
      Compact region splitting still causes some regressions, so it is not
      enabled by default.
      
      llvm-svn: 136186
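The bundle-to-candidate assignment can be sketched as a lookup table. The data layout below is illustrative, not SplitKit's; the sentinel and helper are hypothetical:

```cpp
#include <cassert>
#include <vector>

// Toy model: each edge bundle maps to a split candidate index, with a
// sentinel meaning "no candidate here" - there the value lives in the stack
// interval instead.
const unsigned NoCand = ~0u;

// A copy (possibly through the stack interval) is needed wherever two
// adjacent bundles are assigned to different candidates.
bool needsCopyBetweenBundles(const std::vector<unsigned> &BundleCand,
                             unsigned BundleA, unsigned BundleB) {
  return BundleCand[BundleA] != BundleCand[BundleB];
}
```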
  Jul 26, 2011
    • Revert to RS_Assign when a virtreg separates into components. · 5387bd34
      Jakob Stoklund Olesen authored
      When dead code elimination deletes a PHI value, the virtual register may
      split into multiple connected components. In that case, revert each
      component to the RS_Assign stage.
      
      The new components are guaranteed to be smaller (the original value
      numbers are distributed among the components), so this will always be
      making progress. The components are now allowed to evict other live
      ranges or be split again.
      
      llvm-svn: 136034
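The progress argument rests on counting: value numbers are partitioned among the components, so each component has strictly fewer than the original. A small union-find sketch illustrates this invariant (this is a toy model, not LLVM's ConnectedVNInfoEqClasses):

```cpp
#include <cassert>
#include <numeric>
#include <utility>
#include <vector>

// Union-find over value numbers; links are def-use connections that survive
// dead code elimination.
struct DSU {
  std::vector<unsigned> P;
  explicit DSU(unsigned N) : P(N) { std::iota(P.begin(), P.end(), 0u); }
  unsigned find(unsigned X) { return P[X] == X ? X : P[X] = find(P[X]); }
  void join(unsigned A, unsigned B) { P[find(A)] = find(B); }
};

// Deleting a PHI removes links; the remaining value numbers fall into
// connected components, each re-queued at RS_Assign.
unsigned countComponents(
    unsigned NumValues,
    const std::vector<std::pair<unsigned, unsigned>> &Links) {
  DSU D(NumValues);
  for (auto &L : Links)
    D.join(L.first, L.second);
  unsigned Roots = 0;
  for (unsigned I = 0; I < NumValues; ++I)
    if (D.find(I) == I)
      ++Roots;
  return Roots;
}
```

With four value numbers and only one surviving link, three components remain, and each holds fewer value numbers than the original virtual register did.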
  Jul 25, 2011
    • Add an RS_Split2 stage used for loop prevention. · 45011171
      Jakob Stoklund Olesen authored
      This mechanism already exists, but the RS_Split2 stage makes it clearer.
      
      When live range splitting creates ranges that may not be making
      progress, they are marked RS_Split2 instead of RS_New. These ranges may
      be split again, but only in a way that can be proven to make progress.
      
      For local ranges, that means they must be split into ranges used by
      strictly fewer instructions.
      
      For global ranges, region splitting is bypassed and the RS_Split2
      ranges go straight to per-block splitting.
      
      llvm-svn: 135912
    • Rename live range stages to better reflect how they are used. · 3ef8cf13
      Jakob Stoklund Olesen authored
      The stage is used to control where a live range is going, not where it
      is coming from. Live ranges created by splitting will usually be marked
      RS_New, but some are marked RS_Spill to avoid wasting time trying to
      split them again.
      
      The old RS_Global and RS_Local stages are merged - they are really the
      same thing for local and global live ranges.
      
      llvm-svn: 135911
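Together with the RS_Split2 commit above, the renamed stages form a small forward-only state machine. The enum below is modeled on RAGreedy's LiveRangeStage of this era; the exact members and comments are a best-effort reconstruction, not a verbatim copy:

```cpp
#include <cassert>

// Each stage records what will be tried NEXT for a live range, not how the
// range was created.
enum LiveRangeStage {
  RS_New,    // created by splitting; not yet queued
  RS_Assign, // try assignment and eviction
  RS_Split,  // try region / block splitting
  RS_Split2, // split only in ways that provably make progress
  RS_Spill,  // don't waste time splitting again; spill
  RS_Done    // assigned or spilled; no further processing
};

// Stages only move forward, which guarantees termination: every live range
// eventually reaches RS_Done.
LiveRangeStage nextStage(LiveRangeStage S) {
  return S == RS_Done ? RS_Done : static_cast<LiveRangeStage>(S + 1);
}
```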
  Jul 23, 2011
    • Add RAGreedy::calcCompactRegion. · ecad62f9
      Jakob Stoklund Olesen authored
      This method computes the edge bundles that should be live when splitting
      around a compact region. This is independent of interference.
      
      The function returns false if the live range was already a compact
      region, or the compact region doesn't have any live bundles - it would
      be the same as splitting around basic blocks.
      
      Compact regions are computed using the normal spill placement code. We
      pretend there is interference in all live-through blocks that don't use
      the live range. This removes all edges from the Hopfield network used
      for spill placement, so it converges instantly.
      
      llvm-svn: 135847
    • Prepare RAGreedy::growRegion for compact regions. · a953bf13
      Jakob Stoklund Olesen authored
      A split candidate can have a null PhysReg which means that it doesn't
      map to a real interference pattern. Instead, pretend that all through
      blocks have interference.
      
      This makes it possible to generate compact regions where the live range
      doesn't go through blocks that don't use it. The live range will still
      be live between directly connected blocks with uses.
      
      Splitting around a compact region tends to produce a live range with a
      high spill weight, so it may evict a less dense live range.
      
      llvm-svn: 135845
  Jul 15, 2011
    • Extract parts of RAGreedy::splitAroundRegion as SplitKit methods. · 795da1c1
      Jakob Stoklund Olesen authored
      This gets rid of some of the gory splitting details in RAGreedy and
      makes them available to future SplitKit clients.
      
      Slightly generalize the functionality to support multi-way splitting.
      Specifically, SplitEditor::splitLiveThroughBlock() supports switching
      between different register intervals in a block.
      
      llvm-svn: 135307
  Jul 14, 2011
    • Reapply r135121 with a fixed copy constructor. · a153ca58
      Jakob Stoklund Olesen authored
      Original commit message:
      
      Count references to interference cache entries.
      
      Each InterferenceCache::Cursor instance references a cache entry. A
      non-zero reference count guarantees that the entry won't be reused for a
      new register.
      
      This makes it possible to have multiple live cursors examining
      interference for different physregs.
      
      The total number of live cursors into a cache must be kept below
      InterferenceCache::getMaxCursors().
      
      Code generation should be unaffected by this change, and it doesn't seem
      to affect the cache replacement strategy either.
      
      llvm-svn: 135130
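The reference-counting scheme, and the copy-constructor bug that forced the earlier revert, can be sketched in a few lines. The names are modeled on InterferenceCache::Cursor but the details are illustrative:

```cpp
#include <cassert>

// A cache entry holds interference data for one physreg. It may only be
// reused for a different register when its reference count is zero.
struct Entry {
  unsigned PhysReg = 0;
  unsigned RefCount = 0;
};

class Cursor {
  Entry *E = nullptr;
  void setEntry(Entry *NewE) {
    if (E) --E->RefCount;
    E = NewE;
    if (E) ++E->RefCount;
  }

public:
  Cursor() = default;
  explicit Cursor(Entry &Ent) { setEntry(&Ent); }
  // The fix: copying a cursor must also take a reference. A compiler-
  // generated copy constructor would duplicate E without bumping RefCount,
  // letting the entry be reused while a live cursor still points at it.
  Cursor(const Cursor &O) { setEntry(O.E); }
  Cursor &operator=(const Cursor &O) { setEntry(O.E); return *this; }
  ~Cursor() { setEntry(nullptr); }
};

bool canReuse(const Entry &E) { return E.RefCount == 0; }
```

Releasing every cursor drops the count back to zero, at which point the entry is again a candidate for replacement.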
    • Revert r135121 which broke a gcc-4.2 builder. · 1d4badae
      Jakob Stoklund Olesen authored
      llvm-svn: 135122
    • Count references to interference cache entries. · c270cb6e
      Jakob Stoklund Olesen authored
      Each InterferenceCache::Cursor instance references a cache entry. A
      non-zero reference count guarantees that the entry won't be reused for a
      new register.
      
      This makes it possible to have multiple live cursors examining
      interference for different physregs.
      
      The total number of live cursors into a cache must be kept below
      InterferenceCache::getMaxCursors().
      
      Code generation should be unaffected by this change, and it doesn't seem
      to affect the cache replacement strategy either.
      
      llvm-svn: 135121
    • Reapply r135074 and r135080 with a fix. · d7e99371
      Jakob Stoklund Olesen authored
      The cache entry referenced by the best split candidate could become
      clobbered by an unsuccessful candidate.
      
      The correct fix here is to use reference counts on the cache entries.
      Coming up.
      
      llvm-svn: 135113