  1. Apr 12, 2008
    • This patch corrects the handling of byval arguments for tailcall · 634fc9a3
      Arnold Schwaighofer authored
      optimized x86-64 (and x86) calls so that they work (... at least for
      my test cases).
      
      Should fix the following problems:
      
      Problem 1: When I introduced the optimized handling of arguments for
      tail-called functions (using a sequence of copyto/copyfrom virtual
      registers instead of always lowering to the top of the stack), I did
      not handle byval arguments correctly, e.g. they did not work at all :).
      
      Problem 2: On x86-64, after the arguments of the tail-called function
      are moved to their registers (which include ESI/RSI, etc.), tail call
      optimization performs byval lowering, which causes the xSI, xDI, and
      xCX registers to be overwritten. This patch handles that by moving
      the arguments to virtual registers first; after the byval lowering,
      the arguments are moved from those virtual registers back to
      RSI/RDI/RCX.
      
      llvm-svn: 49584
    • Factor some libcall code. · 844d55a4
      Duncan Sands authored
      llvm-svn: 49583
    • Drop ISD::MEMSET, ISD::MEMMOVE, and ISD::MEMCPY, which are not Legal · 544ab2c5
      Dan Gohman authored
      on any current target and aren't optimized in DAGCombiner. Instead
      of using intermediate nodes, expand the operations, choosing between
      simple loads/stores, target-specific code, and library calls,
      immediately.
      
      Previously, the code to emit optimized code for these operations
      was only used at initial SelectionDAG construction time; now it is
      used at all times. This fixes some cases where rep;movs was being
      used for small copies where simple loads/stores would be better.
      
      This also cleans up code that checks for alignments less than 4;
      let the targets make that decision instead of doing it in
      target-independent code. This allows x86 to use rep;movs in
      low-alignment cases.
      
      Also, this fixes a bug that resulted in the use of rep;stos for
      memsets of 0 with non-constant memory size when the alignment was
      at least 4. It's better to use the library in this case, which
      can be significantly faster when the size is large.
      
      This also preserves more SourceValue information when memory
      intrinsics are lowered into simple loads/stores.
      
      llvm-svn: 49572
    • Fix a bug that prevented x86-64 from using rep.movsq for · 8c7cf88f
      Dan Gohman authored
      8-byte-aligned data.
      
      llvm-svn: 49571
    • 80 col fix · 7417348a
      Nate Begeman authored
      llvm-svn: 49569
    • Restore code to disable crash catcher on older OS X systems · 4840515d
      Nate Begeman authored
      llvm-svn: 49568