Feb 02, 2008
      Don't use uninitialized values. Fixes vec_align.ll on X86 Linux. · f5b9938e
      Nick Lewycky authored
      llvm-svn: 46666
      SDIsel processes llvm.dbg.declare by recording the variable debug information... · efd142a9
      Evan Cheng authored
SDIsel processes llvm.dbg.declare by recording the variable debug information descriptor and its corresponding stack frame index in MachineModuleInfo. This only works if the local variable is "homed" in the stack frame. It does not work for byval parameters, etc.
Added ISD::DECLARE node type to represent the llvm.dbg.declare intrinsic. Now the intrinsic call is lowered into an SDNode and lives on throughout the codegen passes.
For now, since all the debugging information recording is done at isel time, when an ISD::DECLARE node is selected, it has the side effect of also recording the variable. This is a short-term solution that should be fixed in time.
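As a hedged illustration (the struct and function names below are hypothetical, not from the commit), this C shows the two cases the message distinguishes: a local variable homed in the callee's stack frame, which the frame-index recording handles, and a byval aggregate parameter, which it does not:

```c
#include <assert.h>

struct pair { int a, b; };

/* At -O0 the frontend emits an llvm.dbg.declare for 'local', and
   SDIsel can record its debug descriptor together with the stack
   frame index that homes it.  The byval parameter 'p' has no such
   frame index of its own, which is the gap the new ISD::DECLARE
   node is meant to cover. */
int sum_pair(struct pair p) {
    int local = p.a + p.b;   /* stack-homed local variable */
    return local;
}
```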
      
      llvm-svn: 46659
Jan 24, 2008
      Significantly simplify and improve handling of FP function results on x86-32. · a91f77ea
      Chris Lattner authored
      This case returns the value in ST(0) and then has to convert it to an SSE
      register.  This causes significant codegen ugliness in some cases.  For 
      example in the trivial fp-stack-direct-ret.ll testcase we used to generate:
      
      _bar:
      	subl	$28, %esp
      	call	L_foo$stub
      	fstpl	16(%esp)
      	movsd	16(%esp), %xmm0
      	movsd	%xmm0, 8(%esp)
      	fldl	8(%esp)
      	addl	$28, %esp
      	ret
      
      because we move the result of foo() into an XMM register, then have to
      move it back for the return of bar.
      
      Instead of hacking ever-more special cases into the call result lowering code
      we take a much simpler approach: on x86-32, fp return is modeled as always 
      returning into an f80 register which is then truncated to f32 or f64 as needed.
      Similarly for a result, we model it as an extension to f80 + return.
      
This exposes the truncates and extensions to the dag combiner, allowing target-independent code to hack on them, eliminating them in this case.  This gives 
      us this code for the example above:
      
      _bar:
      	subl	$12, %esp
      	call	L_foo$stub
      	addl	$12, %esp
      	ret
      
      The nasty aspect of this is that these conversions are not legal, but we want
      the second pass of dag combiner (post-legalize) to be able to hack on them.
      To handle this, we lie to legalize and say they are legal, then custom expand
      them on entry to the isel pass (PreprocessForFPConvert).  This is gross, but
      less gross than the code it is replacing :)
      
      This also allows us to generate better code in several other cases.  For 
      example on fp-stack-ret-conv.ll, we now generate:
      
      _test:
      	subl	$12, %esp
      	call	L_foo$stub
      	fstps	8(%esp)
      	movl	16(%esp), %eax
      	cvtss2sd	8(%esp), %xmm0
      	movsd	%xmm0, (%eax)
      	addl	$12, %esp
      	ret
      
      where before we produced (incidentally, the old bad code is identical to what
      gcc produces):
      
      _test:
      	subl	$12, %esp
      	call	L_foo$stub
      	fstpl	(%esp)
      	cvtsd2ss	(%esp), %xmm0
      	cvtss2sd	%xmm0, %xmm0
      	movl	16(%esp), %eax
      	movsd	%xmm0, (%eax)
      	addl	$12, %esp
      	ret
      
      Note that we generate slightly worse code on pr1505b.ll due to a scheduling 
      deficiency that is unrelated to this patch.
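A hedged C sketch (not part of the commit) of why the round/extend pair that the f80 model introduces is removable in the _bar example: when the 80-bit value was itself produced by extending a double, rounding it back to double changes nothing, so the combiner can delete the pair:

```c
#include <assert.h>

static double foo(void) { return 1.5; }   /* stand-in for the callee */

/* Under the f80 model, foo's result is treated as an f80 that bar
   truncates to f64; bar's own return then extends back to f80.  The
   truncate/extend pair is value-preserving here, which is what lets
   the dag combiner produce the short _bar listing above. */
double bar(void) {
    long double st0 = (long double)foo(); /* result widened, as ST(0) holds it */
    double as_f64 = (double)st0;          /* fp_round: exact, value unchanged */
    return as_f64;                        /* fp_extend again on return */
}
```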
      
      llvm-svn: 46307
      Let each target decide byval alignment. For X86, it's 4-byte unless the... · 35abd840
      Evan Cheng authored
Let each target decide byval alignment. For X86, it's 4-byte unless the aggregate contains SSE vector(s). For x86-64, it's the maximum of 8 and the alignment of the type.
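A minimal sketch of the stated rule (the helper name is hypothetical, not LLVM's actual API; for the x86-32 SSE case we assume the aggregate's own vector-imposed alignment applies, since the message doesn't give a number):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical helper computing byval stack alignment per the rule
   in the message above. */
static unsigned byval_alignment(bool is_x86_64, bool has_sse_vector,
                                unsigned type_align) {
    if (is_x86_64)
        return type_align > 8 ? type_align : 8; /* max(8, type alignment) */
    return has_sse_vector ? type_align : 4;     /* x86-32: 4 unless SSE vectors */
}
```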
      
      llvm-svn: 46286
Jan 23, 2008
      The last pieces needed for loading arbitrary · 95d46ef8
      Duncan Sands authored
precision integers.  This won't actually work (and most of the code is dead) unless the new legalization machinery is turned on.  While there, I rationalized the handling of i1 and removed some bogus (and unused) sextload patterns.  For i1, this could result in microscopically better code for some architectures (not X86).  It might also result in worse code if annotating with AssertZExt nodes turns out to be more harmful than helpful.
      
      llvm-svn: 46280
Jan 17, 2008
      This commit changes: · 1ea55cf8
      Chris Lattner authored
      1. Legalize now always promotes truncstore of i1 to i8. 
      2. Remove patterns and gunk related to truncstore i1 from targets.
      3. Rename the StoreXAction stuff to TruncStoreAction in TLI.
      4. Make the TLI TruncStoreAction table a 2d table to handle from/to conversions.
      5. Mark a wide variety of invalid truncstores as such in various targets, e.g.
         X86 currently doesn't support truncstore of any of its integer types.
      6. Add legalize support for truncstores with invalid value input types.
      7. Add a dag combine transform to turn store(truncate) into truncstore when
         safe.
      
The latter allows us to compile CodeGen/X86/storetrunc-fp.ll to:
      
      _foo:
      	fldt	20(%esp)
      	fldt	4(%esp)
      	faddp	%st(1)
      	movl	36(%esp), %eax
      	fstps	(%eax)
      	ret
      
      instead of:
      
      _foo:
      	subl	$4, %esp
      	fldt	24(%esp)
      	fldt	8(%esp)
      	faddp	%st(1)
      	fstps	(%esp)
      	movl	40(%esp), %eax
      	movss	(%esp), %xmm0
      	movss	%xmm0, (%eax)
      	addl	$4, %esp
      	ret
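A C equivalent of the storetrunc-fp.ll pattern (reconstructed from the assembly above, not the actual test file): the x87 sum is truncated and stored as a float, and the new store(truncate) -> truncstore combine lets a single fstps do both steps:

```c
#include <assert.h>

/* With transform 7 above, the fptrunc feeding the store folds into a
   truncating store: the x87 sum in ST(0) is stored directly with
   fstps instead of bouncing through the stack and an XMM register. */
void store_sum(long double a, long double b, float *out) {
    *out = (float)(a + b);
}
```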
      
      llvm-svn: 46140
      * Introduce a new SelectionDAG::getIntPtrConstant method · 72733e57
      Chris Lattner authored
        and switch various codegen pieces and the X86 backend over
        to using it.
      
      * Add some comments to SelectionDAGNodes.h
      
* Introduce a second argument to FP_ROUND, which indicates
  whether the FP_ROUND changes the value of its input. If
  not, it is safe to transform things like fp_extend(fp_round(x)) -> x.
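A hedged C illustration (the function name is made up) of the condition the new flag encodes: the pair fp_extend(fp_round(x)) is removable exactly when rounding does not change x's value:

```c
#include <assert.h>

/* Returns nonzero iff rounding x to float and extending back yields
   the same value, i.e. the case where fp_extend(fp_round(x)) -> x
   is a safe transform. */
static int round_preserves_value(double x) {
    return (double)(float)x == x;
}
```

Values that already fit the narrower type (like 1.5) satisfy the condition; values carrying extra double precision (like 0.1) do not, and there the pair must be kept.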
      
      llvm-svn: 46125