  1. Mar 12, 2011
    • Speculatively revert commit 127478 (jsjodin) in an attempt to fix the llvm-gcc-i386-linux-selfhost and llvm-x86_64-linux-checks buildbots. · b847bf54
      Duncan Sands authored
      The original log entry:
      Remove optimization emitting a reference instead of a label difference, since it can create more relocations. Removed the isBaseAddressKnownZero method, because it is no longer used.

      llvm-svn: 127540
    • Include snippets in the live stack interval. · e77005ef
      Jakob Stoklund Olesen authored
      llvm-svn: 127530
    • Spill multiple registers at once. · a86595e0
      Jakob Stoklund Olesen authored
      Live range splitting can create a number of small live ranges containing only a single real use. Spill these small live ranges along with the large range they are connected to by copies. This enables memory operand folding and maximizes the spill-to-fill distance.

      Work in progress with known bugs.

      llvm-svn: 127529
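      A rough illustration of the strategy (a toy sketch, not LLVM's spiller; the LiveRange type and helpers below are invented for this example): identify copy-connected ranges with at most one real use and spill them into the same stack slot as the main range.

          #include <vector>

          // Toy stand-ins for the allocator's data structures; not LLVM's types.
          struct LiveRange {
            int Reg;            // virtual register
            int RealUses;       // uses that are not copies
            bool CopyConnected; // linked to the spilled range by a copy
            int StackSlot = -1; // assigned spill slot, -1 if in a register
          };

          // A "snippet" is a small copy-connected range with at most one real use.
          static bool isSnippet(const LiveRange &LR) {
            return LR.CopyConnected && LR.RealUses <= 1;
          }

          // Spill the main range and every connected snippet to the same slot.
          // Sharing one slot makes the connecting copies redundant, so the
          // snippet's single real use can fold the memory operand directly.
          static void spillWithSnippets(LiveRange &Main,
                                        std::vector<LiveRange> &Connected,
                                        int Slot) {
            Main.StackSlot = Slot;
            for (LiveRange &LR : Connected)
              if (isSnippet(LR))
                LR.StackSlot = Slot;
          }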
    • That's it, I am declaring this a failure of the C++03 STL. · dae1dc1f
      Jakob Stoklund Olesen authored
      There are too many compatibility problems with using mixed types in std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up a call to a 10-line function. Binary search is not /that/ hard to implement correctly.

      I tried terminating the binary search with a linear search, but contrary to my expectation that actually made the algorithm slower. Most live intervals have fewer than 4 segments. The early test against endIndex() does pay off, and this version is 25% faster than plain std::upper_bound().

      llvm-svn: 127522
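      The search described here amounts to an upper_bound over segment end points with a cheap early-exit test. A minimal sketch of the idea, using invented Segment and SlotIndex stand-ins rather than the real LiveInterval types (the check against the last segment's end plays the role of the endIndex() test):

          #include <vector>

          using SlotIndex = unsigned;
          struct Segment {
            SlotIndex start, end; // half-open [start, end)
          };

          // Return the first segment whose end is after Pos, or nullptr.
          // The caller still checks start <= Pos to see if Pos lies inside it.
          const Segment *findSegment(const std::vector<Segment> &Segs,
                                     SlotIndex Pos) {
            // Early exit: Pos is at or past the end of the whole interval,
            // so no segment can contain it. This cheap test is what pays off.
            if (Segs.empty() || Pos >= Segs.back().end)
              return nullptr;
            // Plain binary search for the first segment with end > Pos.
            size_t Lo = 0, Hi = Segs.size() - 1;
            while (Lo < Hi) {
              size_t Mid = Lo + (Hi - Lo) / 2;
              if (Segs[Mid].end <= Pos)
                Lo = Mid + 1;
              else
                Hi = Mid;
            }
            return &Segs[Lo];
          }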
  2. Mar 09, 2011
    • Change the definition of TargetRegisterInfo::getCrossCopyRegClass to be more flexible. · ca9a9363
      Evan Cheng authored
      If it returns a register class that's different from the input, then that's the register class used for cross-register-class copies.
      If it returns a register class that's the same as the input, then no cross-register-class copies are needed (normal copies would do).
      If it returns null, then it's not possible to copy registers of the specified register class at all.

      llvm-svn: 127368
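      The three cases map naturally onto an implementation shaped like the sketch below. The register classes here are invented toy stand-ins, not a real target's; only the shape of the contract comes from the commit message.

          // Toy register-class object; LLVM's real TargetRegisterClass is richer.
          struct TargetRegisterClass { const char *Name; };

          static const TargetRegisterClass GPRClass   = {"GPR"};
          static const TargetRegisterClass FlagsClass = {"Flags"};

          // Hypothetical hook illustrating the three-way contract.
          const TargetRegisterClass *
          getCrossCopyRegClass(const TargetRegisterClass *RC) {
            if (RC == &GPRClass)
              return RC;        // same class back: ordinary copies suffice
            if (RC == &FlagsClass)
              return &GPRClass; // different class: copy through a GPR instead
            return nullptr;     // null: this class cannot be copied at all
          }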
    • Make physreg coalescing independent of the number of uses of the virtual register. · d0db7052
      Jakob Stoklund Olesen authored
      The damage done by physreg coalescing depends only on the number of instructions the extended physreg live range covers. This fixes PR9438.

      The heuristic is still luck-based, and physreg coalescing really should be disabled completely. We need a register allocator with better hinting support before that is possible.

      Convert a test to FileCheck and force spilling by inserting an extra call. The previous spilling behavior depended on misguided physreg coalescing decisions.

      llvm-svn: 127351
    • Improve pre-RA-sched register pressure tracking for duplicate operands. · 072ed2ee
      Andrew Trick authored
      This helps cases like 2008-07-19-movups-spills.ll, but doesn't have an obvious impact on benchmarks.

      llvm-svn: 127347