Feb 23, 2012
    • BitVectorize loop. · ef8bf395
      Benjamin Kramer authored
      llvm-svn: 151274
    • post-ra-sched: Turn the KillIndices vector into a bitvector; it only stored two meaningful states. · 796fd469
      Benjamin Kramer authored
      Rename it to LiveRegs to make it more clear what's stored inside.
      
      llvm-svn: 151273
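
      The change is easy to see in miniature. A hedged sketch in plain C++
      (not the actual LLVM patch; std::vector<bool> stands in for
      llvm::BitVector, and NumRegs and the register numbers are invented):

        #include <cstdio>
        #include <vector>

        int main() {
          const unsigned NumRegs = 256; // invented register count

          // Before: an index per register, yet callers only ever asked the
          // yes/no question "is this register live?".
          std::vector<unsigned> KillIndices(NumRegs, ~0u);
          KillIndices[5] = 0;

          // After: one bit per register, renamed so the container's name
          // says what it stores.
          std::vector<bool> LiveRegs(NumRegs, false);
          LiveRegs[5] = true;

          std::printf("reg 5 live? %d\n", LiveRegs[5] ? 1 : 0);
          std::printf("unsigned vector: %zu bytes, bitvector: ~%u bytes\n",
                      KillIndices.size() * sizeof(unsigned), NumRegs / 8);
          return 0;
        }
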
    • post-ra-sched: Replace a std::set of regs with a bitvector. · 21974b1f
      Benjamin Kramer authored
      Assuming that a single std::set node adds 3 control words, a bitvector
      can store (3*8+4)*8=224 registers in the allocated memory of a single
      element in the std::set (x86_64). Also, we don't have to call malloc
      for every register added.
      
      llvm-svn: 151269
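
      The arithmetic, spelled out under the commit's own assumptions (a
      3-control-word std::set node and an 8-byte word are the commit's
      stated x86_64 assumptions, not a guarantee of any particular
      standard library):

        #include <cstdio>

        int main() {
          const unsigned ControlWords = 3;                 // per std::set node
          const unsigned NodeBytes = ControlWords * 8 + 4; // 3*8 + 4 = 28 bytes
          const unsigned BitsPerNode = NodeBytes * 8;      // 28 * 8 = 224 bits

          // The memory one std::set<unsigned> element costs can instead hold
          // a liveness bit for 224 registers, with no malloc per insertion.
          std::printf("(3*8+4)*8 = %u registers\n", BitsPerNode);
          return 0;
        }
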
    • Make calls scheduling boundaries post-ra. · a793a59f
      Jakob Stoklund Olesen authored
      Before register allocation, instructions can be moved across calls in
      order to reduce register pressure.  After register allocation, we don't
      gain a lot by moving callee-saved defs across calls.  In fact, since the
      scheduler doesn't have a good idea how registers are used in the callee,
      it can't really make good scheduling decisions.
      
      This changes the schedule in two ways: 1. Latencies to call uses and
      defs are no longer accounted for, causing some random shuffling around
      calls.  This isn't really a problem since those uses and defs are
      inaccurate proxies for what happens inside the callee.  They don't
      represent registers used by the call instruction itself.
      
      2. Instructions are no longer moved across calls.  This didn't happen
      very often, and the scheduling decision was made on dubious information
      anyway.
      
      As with any scheduling change, benchmark numbers shift around a bit,
      but there is no positive or negative trend from this change.
      
      This makes the post-ra scheduler 5% faster for ARM targets.
      
      The secret motivation for this patch is the introduction of register
      mask operands representing call clobbers.  The most efficient way of
      handling regmasks in ScheduleDAGInstrs is to model them as barriers for
      physreg live ranges, but not for virtreg live ranges.  That's fine
      pre-ra, but post-ra it would have the same effect as this patch.
      
      llvm-svn: 151265
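
      What "calls are scheduling boundaries" means is easiest to see in a
      toy model (illustrative only; the Instr struct and instruction names
      are invented, not LLVM's MachineInstr):

        #include <cstdio>
        #include <vector>

        struct Instr {
          const char *Name;
          bool IsCall;
        };

        int main() {
          std::vector<Instr> Block = {
              {"load", false}, {"add", false}, {"call foo", true},
              {"mul", false},  {"store", false}};

          // The scheduler reorders instructions only within a region, never
          // across one. Ending the region at every call therefore keeps
          // each instruction on its own side of the call.
          unsigned Region = 0;
          std::printf("region %u:\n", Region);
          for (const Instr &I : Block) {
            if (I.IsCall) {
              std::printf("  %s  <- boundary, not scheduled\n", I.Name);
              std::printf("region %u:\n", ++Region);
              continue;
            }
            std::printf("  %s\n", I.Name);
          }
          return 0;
        }
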
    • Fix to make sure that a comdat group gets generated correctly for a static member of instantiated C++ templates. · a22828e0
      Anton Korobeynikov authored
      
      Patch by Kristof Beyls!
      
      llvm-svn: 151250
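
      The C++ construct in question, with invented names (Counter and use
      are made up; that such a member's per-TU copies must be folded by the
      linker is standard C++ linkage behavior, while the exact sections and
      comdat group names are compiler-specific):

        // A static data member of a class template is implicitly
        // instantiated in every translation unit that uses it.
        template <typename T>
        struct Counter {
          static int Count;
        };

        // Out-of-line definition: each using TU emits a copy, so the
        // compiler must place it in a comdat group for the linker to
        // deduplicate.
        template <typename T>
        int Counter<T>::Count = 0;

        int use() {
          return ++Counter<int>::Count; // forces instantiation of the member
        }
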