  1. Jun 15, 2009
  2. Jun 03, 2009
  3. Jun 02, 2009
    • 448641d8
      Evan Cheng authored
    • Revert 72707 and 72709, for the moment. · 5234d379
      Dale Johannesen authored
      llvm-svn: 72712
    • Make the implicit inputs and outputs of target-independent · 0b8ca792
      Dale Johannesen authored
      ADDC/ADDE use MVT::i1 (later, whatever it gets legalized to)
      instead of MVT::Flag.  Remove CARRY_FALSE in favor of 0; adjust
      all target-independent code to use this format.
      
      Most targets will still produce a Flag-setting target-dependent
      version when selection is done.  X86 is converted to use i32
      instead, which means TableGen needs to produce different code
      in xxxGenDAGISel.inc.  This keys off the new supportsHasI1 bit
      in xxxInstrInfo, currently set only for X86; in principle this
      is temporary and should go away when all other targets have
      been converted.  All relevant X86 instruction patterns are
      modified to represent setting and using EFLAGS explicitly.  The
      same can be done on other targets.
      
      The immediate behavior change is that an ADC/ADD pair is no
      longer tightly coupled in the X86 scheduler; the two can be
      separated by instructions that don't clobber the flags (e.g., MOV).
      I will soon add some peephole optimizations based on using
      other instructions that set the flags to feed into ADC.
      
      llvm-svn: 72707
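      A minimal sketch of the new carry representation (illustrative only, not
      code from this commit; the helper name and the exact getNode signature
      are assumptions): expanding a 64-bit add into an ADDC/ADDE pair where
      the carry flowing between them is an ordinary MVT::i1 value rather than
      MVT::Flag.

      #include "llvm/CodeGen/SelectionDAG.h"
      using namespace llvm;

      // Expand a 64-bit add into low/high 32-bit halves; the carry-out of the
      // low add is a normal i1 value that feeds the carry-in of the high add.
      static SDValue ExpandAdd64(SelectionDAG &DAG, DebugLoc dl,
                                 SDValue LHSLo, SDValue LHSHi,
                                 SDValue RHSLo, SDValue RHSHi,
                                 SDValue &Hi) {
        SDVTList VTs = DAG.getVTList(MVT::i32, MVT::i1);   // result + i1 carry
        SDValue Lo = DAG.getNode(ISD::ADDC, dl, VTs, LHSLo, RHSLo);
        SDValue Carry = Lo.getValue(1);                     // carry-out as i1
        Hi = DAG.getNode(ISD::ADDE, dl, VTs, LHSHi, RHSHi, Carry);
        return Lo;
      }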
  4. May 31, 2009
  5. May 30, 2009
  6. May 29, 2009
  7. May 05, 2009
  8. Apr 27, 2009
  9. Apr 24, 2009
  10. Apr 21, 2009
  11. Apr 17, 2009
  12. Apr 13, 2009
    • Rename COPY_TO_SUBCLASS to COPY_TO_REGCLASS, and generalize · 6c142630
      Dan Gohman authored
      it accordingly. Thanks to Jakob Stoklund Olesen for pointing
      out how this might be useful.
      
      llvm-svn: 68986
    • Implement x86 h-register extract support. · 57d6bd36
      Dan Gohman authored
       - Add patterns for h-register extract, which avoids a shift and mask,
         and in some cases a temporary register.
       - Add address-mode matching for turning (X>>(8-n))&(255<<n), where
         n is a valid address-mode scale value, into an h-register extract
         and a scaled-offset address.
       - Replace X86's MOV32to32_ and related instructions with the new
         target-independent COPY_TO_SUBCLASS instruction.
      
      On x86-64 there are complicated constraints on h registers, and
      CodeGen doesn't currently provide a high-level way to express all of them,
      so they are handled with a bunch of special code. This code currently only
      supports extracts where the result is used by a zero-extend or a store,
      though these are fairly common.
      
      These transformations are not always beneficial; since there are only
      4 h registers, they sometimes require extra move instructions, and
      this sometimes increases register pressure because it can force out
      values that would otherwise be in one of those registers. However,
      this appears to be relatively uncommon.
      
      llvm-svn: 68962
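      For illustration (not part of the commit; the C++ functions are
      hypothetical), these are the kinds of source patterns the new matching
      targets:

      #include <cstdint>

      // Extracting bits 8-15 is a candidate for an h-register extract
      // (e.g. reading %ah) instead of a shift-and-mask sequence.
      std::uint32_t second_byte(std::uint32_t x) {
        return (x >> 8) & 0xff;
      }

      // (X >> (8 - n)) & (255 << n) with n == 2: the byte offset
      // ((x >> 8) & 0xff) * 4 can become an h-register extract feeding a
      // scaled-offset address into a table of 4-byte elements.
      std::uint32_t table_lookup(const std::uint32_t *table, std::uint32_t x) {
        return table[(x >> 8) & 0xff];
      }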
    • Add a comment about MOVSX64rr8. · c5c2fc45
      Dan Gohman authored
      llvm-svn: 68950
  13. Apr 08, 2009
    • Re-apply 68552. · 3b2df10c
      Rafael Espindola authored
      Tested by bootstrapping llvm-gcc and using that to build llvm.
      
      llvm-svn: 68645
    • Implement support for modeling implicit zero-extension on x86-64 · ad3e549a
      Dan Gohman authored
      with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce
      SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG
      instructions), and teach the DAGCombiner to take advantage of this on
      targets which support it. This eliminates many redundant
      zero-extension operations on x86-64.
      
      This adds a new TargetLowering hook, isZExtFree. It's similar to
      isTruncateFree, except it only applies to actual definitions, and not
      no-op truncates which may not zero the high bits.
      
      Also, this adds a new optimization to SimplifyDemandedBits: transform
      operations like x+y into (zext (add (trunc x), (trunc y))) on targets
      where all the casts are no-ops. In contexts where the high part of the
      add is explicitly masked off, this allows the mask operation to be
      eliminated. Fix the DAGCombiner to avoid undoing these transformations
      to eliminate casts on targets where the casts are no-ops.
      
      Also, this adds a new two-address lowering heuristic. Since
      two-address lowering runs before coalescing, it helps to be able to
      look through copies when deciding whether commuting and/or
      three-address conversion are profitable.
      
      Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle
      the case that a clobber range extended both before and beyond an
      existing live range. In that case, multiple live ranges need to be
      added. This was exposed by the new subreg coalescing code.
      
      Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the
      spiller behavior it was looking for no longer occurs with the new
      instruction selection.
      
      llvm-svn: 68576
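      A small illustration (not from the commit; the function is hypothetical)
      of the code the SimplifyDemandedBits change helps on x86-64, where
      32-bit operations implicitly zero-extend their results to 64 bits:

      #include <cstdint>

      // Once the add is rewritten as zext(add(trunc x, trunc y)), the
      // explicit mask of the high bits can be dropped: the 32-bit add
      // already zeroes bits 32-63, so all of the casts are no-ops.
      std::uint64_t add_low32(std::uint64_t x, std::uint64_t y) {
        return (x + y) & 0xffffffffULL;
      }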
    • Temporarily revert r68552. This was causing a failure in the self-hosting LLVM · 4aa25b79
      Bill Wendling authored
      builds.
      
      --- Reverse-merging (from foreign repository) r68552 into '.':
      U    test/CodeGen/X86/tls8.ll
      U    test/CodeGen/X86/tls10.ll
      U    test/CodeGen/X86/tls2.ll
      U    test/CodeGen/X86/tls6.ll
      U    lib/Target/X86/X86Instr64bit.td
      U    lib/Target/X86/X86InstrSSE.td
      U    lib/Target/X86/X86InstrInfo.td
      U    lib/Target/X86/X86RegisterInfo.cpp
      U    lib/Target/X86/X86ISelLowering.cpp
      U    lib/Target/X86/X86CodeEmitter.cpp
      U    lib/Target/X86/X86FastISel.cpp
      U    lib/Target/X86/X86InstrInfo.h
      U    lib/Target/X86/X86ISelDAGToDAG.cpp
      U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
      U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
      U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
      U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
      U    lib/Target/X86/X86ISelLowering.h
      U    lib/Target/X86/X86InstrInfo.cpp
      U    lib/Target/X86/X86InstrBuilder.h
      U    lib/Target/X86/X86RegisterInfo.td
      
      llvm-svn: 68560
  14. Apr 07, 2009
  15. Mar 30, 2009
  16. Mar 18, 2009
    • Disable the "call to immediate" optimization on x86-64. It is · a6bed3e9
      Chris Lattner authored
      not safe in general because the immediate could be an arbitrary
      value that does not fit in a 32-bit pcrel displacement.  
      Conservatively fall back to loading the value into a register
      and calling through it.
      
      We still do the optimization on X86-32.
      
      llvm-svn: 67142
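      A hypothetical example (not from the commit) of why the fallback is
      needed: a direct call only has a signed 32-bit pc-relative displacement,
      so on x86-64 an arbitrary constant target address may be unreachable and
      must instead be loaded into a register and called indirectly.

      #include <cstdint>

      typedef void (*fn_t)();

      // The constant address below is made up. On x86-32 any address fits in
      // the 32-bit displacement, so a direct call is still emitted there; on
      // x86-64 this one does not fit, so the conservative lowering loads the
      // address into a register and calls through it.
      void call_fixed_address() {
        fn_t f = reinterpret_cast<fn_t>(
            static_cast<std::uintptr_t>(0x123456789ABCULL));
        f();
      }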
  17. Mar 12, 2009
  18. Mar 05, 2009
  19. Mar 04, 2009
  20. Mar 03, 2009
  21. Feb 10, 2009
  22. Feb 05, 2009
  23. Jan 26, 2009
  24. Jan 21, 2009
  25. Jan 14, 2009
  26. Jan 13, 2009
  27. Jan 07, 2009
  28. Dec 25, 2008