  Apr 21, 2010
    • Implement -disable-non-leaf-fp-elim which disables frame pointer elimination · 4158a0ff
      Evan Cheng authored
      optimization for non-leaf functions. This will be hooked up to gcc's
      -momit-leaf-frame-pointer option. rdar://7886181
      
      llvm-svn: 101984
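      A minimal sketch for context (editorial, not part of the commit; the
      function names are illustrative): a leaf function makes no calls, a
      non-leaf function does, and gcc's -momit-leaf-frame-pointer keeps the
      frame pointer only in the latter.

        /* Leaf: calls nothing, so its frame pointer can still be
           eliminated, freeing %rbp for general use on x86-64. */
        static int leaf(int x) { return x * 2; }

        /* Non-leaf: calls leaf().  With -disable-non-leaf-fp-elim,
           frame pointer elimination is disabled here, so this function
           keeps the conventional push %rbp / mov %rsp,%rbp prologue. */
        int non_leaf(int x) { return leaf(x) + 1; }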
    • 9c8cd8c0
      Evan Cheng authored
    • Trim include. · 873310f6
      Evan Cheng authored
      llvm-svn: 101978
    • Handle a displacement location in 64-bit mode as an RIP-relative displacement. It · 11740305
      Bill Wendling authored
      fixes a bug (<rdar://problem/7880900>) in the JIT. This code wouldn't work:
      
      target triple = "x86_64-apple-darwin"
      
      define double @func(double %a) {
        %tmp1 = fmul double %a, 5.000000e-01            ; <double> [#uses=1]
        ret double %tmp1
      }
      
      define i32 @main() nounwind {
        %1 = call double @func(double 4.770000e-04) ; <double> [#uses=0]
        ret i32 0
      }
      
      llvm-svn: 101965
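      For context (an editorial sketch of the C equivalent, not from the
      commit): the interesting instruction is the multiply by 0.5, because
      x86-64 codegen typically loads the floating-point constant from the
      constant pool with a RIP-relative access, so the JIT must resolve
      that displacement relative to the instruction pointer rather than as
      an absolute address.

        /* C equivalent of @func above.  The 0.5 is usually materialized
           as something like "mulsd LCPI0_0(%rip), %xmm0", i.e. a
           RIP-relative constant-pool load. */
        double func(double a) {
            return a * 0.5;
        }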
      teach the x86 address matching stuff to handle · 84776786
      Chris Lattner authored
      (shl (or x, c), 3) the same as (shl (add x, c), 3)
      when x doesn't have any bits from c set.
      
      This finishes off PR1135.  Before, we compiled the block to:
      
      LBB0_3:                                 ## %bb
      	cmpb	$4, %dl
      	sete	%dl
      	addb	%dl, %cl
      	movb	%cl, %dl
      	shlb	$2, %dl
      	addb	%r8b, %dl
      	shlb	$2, %dl
      	movzbl	%dl, %edx
      	movl	%esi, (%rdi,%rdx,4)
      	leaq	2(%rdx), %r9
      	movl	%esi, (%rdi,%r9,4)
      	leaq	1(%rdx), %r9
      	movl	%esi, (%rdi,%r9,4)
      	addq	$3, %rdx
      	movl	%esi, (%rdi,%rdx,4)
      	incb	%r8b
      	decb	%al
      	movb	%r8b, %dl
      	jne	LBB0_1
      
      Now we produce:
      
      LBB0_3:                                 ## %bb
      	cmpb	$4, %dl
      	sete	%dl
      	addb	%dl, %cl
      	movb	%cl, %dl
      	shlb	$2, %dl
      	addb	%r8b, %dl
      	shlb	$2, %dl
      	movzbl	%dl, %edx
      	movl	%esi, (%rdi,%rdx,4)
      	movl	%esi, 8(%rdi,%rdx,4)
      	movl	%esi, 4(%rdi,%rdx,4)
      	movl	%esi, 12(%rdi,%rdx,4)
      	incb	%r8b
      	decb	%al
      	movb	%r8b, %dl
      	jne	LBB0_1
      
      llvm-svn: 101958
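      The identity behind the fold (an editorial sketch, not part of the
      commit): when x and c share no set bits, or-ing them can never
      produce a carry, so x | c and x + c are the same value, which is
      what lets the address matcher fold the "or" into an addressing-mode
      displacement just like an "add".

        #include <assert.h>

        int main(void) {
            /* x is a multiple of 4, so its low two bits are clear and it
               shares no bits with c = 0..3 ... */
            for (unsigned x = 0; x < 64; x += 4) {
                for (unsigned c = 0; c < 4; c++) {
                    /* ... and with no carry possible, OR equals ADD. */
                    assert((x | c) == (x + c));
                }
            }
            return 0;
        }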
    • Because of the EMMS problem, right now we have to support · 0522b90c
      Dale Johannesen authored
      user-defined operations that use MMX register types, but
      the compiler shouldn't generate them on its own.  This adds
      a Synthesizable abstraction to represent this, and changes
      the vector widening computation so it won't produce MMX types.
      (The motivation is to remove noise from the ABI compatibility
      part of the gcc test suite, which has some breakage right now.)
      
      llvm-svn: 101951
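      For context (an editorial sketch with illustrative names, not from
      the commit): MMX registers alias the x87 floating-point stack, so
      code that uses them must execute emms before any x87 code runs; the
      compiler cannot safely insert that bookkeeping on its own, while
      explicit user intrinsics leave it as the user's responsibility.

        #include <mmintrin.h>

        /* User-written MMX: uses MMX register types explicitly, so the
           compiler must keep supporting it; the user is expected to call
           _mm_empty() (emms) before any x87 floating-point code. */
        __m64 user_add(__m64 a, __m64 b) {
            return _mm_add_pi32(a, b);
        }

        /* Compiler-chosen vectors: if an operation like this is widened,
           the widened type should now be an SSE type (e.g. <4 x i32>),
           never an MMX one. */
        void add2(const int *a, const int *b, int *out) {
            out[0] = a[0] + b[0];
            out[1] = a[1] + b[1];
        }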