Apr 21, 2010
    • 9c8cd8c0
      Evan Cheng authored
    • Trim include. · 873310f6
      Evan Cheng authored
      llvm-svn: 101978
    • Add more const qualifiers on TargetMachine and friends. · 57c732b0
      Dan Gohman authored
      llvm-svn: 101977
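
      A hedged illustration of the kind of change this describes; the class and member
      names below are invented stand-ins, not the real TargetMachine API. Const-qualifying
      accessors that never mutate state lets code holding a const reference still query
      the target:

      #include <string>

      // Invented stand-in for a TargetMachine-like class.
      class TargetDesc {
        std::string ABIName;
      public:
        // Before: not const-qualified, so a `const TargetDesc &` caller
        // could not call it even though it never mutates the object:
        //   const std::string &getABIName();
        // After: const-qualified.
        const std::string &getABIName() const { return ABIName; }
      };
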
    • Thumb instructions which have reglist operands at the end and predicate operands · dd56c405
      Johnny Chen authored
      before the reglist were not properly handled with respect to the IT block.  Fix
      that by creating a new method ARMBasicMCBuilder::DoPredicateOperands(), used by
      those instructions during disassembly.  Add a test case.

      llvm-svn: 101974
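
      A minimal self-contained sketch of the decode-order issue; the types and helpers
      below are toy stand-ins, not the actual ARMBasicMCBuilder code. The predicate
      operands (a condition code plus a flag-register slot) must be appended before the
      trailing register list, and inside an IT block the condition comes from the IT
      state instead of defaulting to AL:

      // Toy model, not LLVM's disassembler: operands are tagged values.
      #include <cstdint>
      #include <vector>

      enum OpKind { Imm, Reg, RegList };
      struct Operand { OpKind Kind; unsigned Val; };

      constexpr unsigned CondAL = 14;  // ARM "always" condition code

      // Append the two predicate operands (condition code, then the
      // predication flag register; 0 means none) to the operand list.
      void doPredicateOperands(std::vector<Operand> &Ops, bool InITBlock,
                               unsigned ITCond) {
        Ops.push_back({Imm, InITBlock ? ITCond : CondAL});
        Ops.push_back({Reg, 0});
      }

      // A PUSH/POP-style Thumb instruction: the predicate operands must be
      // emitted before the trailing register list, which is the ordering
      // the original code got wrong.
      void decodeRegListInsn(std::vector<Operand> &Ops, uint16_t RegListBits,
                             bool InITBlock, unsigned ITCond) {
        doPredicateOperands(Ops, InITBlock, ITCond);
        Ops.push_back({RegList, RegListBits});
      }
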
    • Handle a displacement location in 64-bit mode as an RIP-relative displacement. It · 11740305
      Bill Wendling authored
      fixes a bug (<rdar://problem/7880900>) in the JIT. This code wouldn't work:

      target triple = "x86_64-apple-darwin"

      define double @func(double %a) {
        %tmp1 = fmul double %a, 5.000000e-01            ; <double> [#uses=1]
        ret double %tmp1
      }

      define i32 @main() nounwind {
        %1 = call double @func(double 4.770000e-04) ; <double> [#uses=0]
        ret i32 0
      }

      llvm-svn: 101965
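
      For context, a hedged sketch of the encoding detail behind the fix; the helper
      below is invented, not the JIT's actual code. In 64-bit mode a mod=00, r/m=101
      memory operand is RIP-relative, so the stored disp32 must be the distance from
      the end of the instruction to the target, not an absolute address:

      #include <cassert>
      #include <cstdint>
      #include <limits>

      // Hypothetical helper: compute the disp32 field for a RIP-relative
      // operand. RIP points past the whole instruction when it executes,
      // so the displacement is measured from the end of the instruction
      // (the 4-byte field plus any trailing immediate bytes).
      int32_t ripRelDisp32(uint64_t Target, uint64_t DispFieldAddr,
                           unsigned ImmBytesAfterDisp) {
        uint64_t InstrEnd = DispFieldAddr + 4 + ImmBytesAfterDisp;
        int64_t Delta = static_cast<int64_t>(Target - InstrEnd);
        assert(Delta >= std::numeric_limits<int32_t>::min() &&
               Delta <= std::numeric_limits<int32_t>::max() &&
               "target not reachable with a rel32 displacement");
        return static_cast<int32_t>(Delta);
      }
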
    • Teach the x86 address matching stuff to handle · 84776786
      Chris Lattner authored
      (shl (or x, c), 3) the same as (shl (add x, c), 3)
      when x doesn't have any bits from c set.

      This finishes off PR1135.  Before, we compiled the block to:
      
      LBB0_3:                                 ## %bb
      	cmpb	$4, %dl
      	sete	%dl
      	addb	%dl, %cl
      	movb	%cl, %dl
      	shlb	$2, %dl
      	addb	%r8b, %dl
      	shlb	$2, %dl
      	movzbl	%dl, %edx
      	movl	%esi, (%rdi,%rdx,4)
      	leaq	2(%rdx), %r9
      	movl	%esi, (%rdi,%r9,4)
      	leaq	1(%rdx), %r9
      	movl	%esi, (%rdi,%r9,4)
      	addq	$3, %rdx
      	movl	%esi, (%rdi,%rdx,4)
      	incb	%r8b
      	decb	%al
      	movb	%r8b, %dl
      	jne	LBB0_1
      
      Now we produce:
      
      LBB0_3:                                 ## %bb
      	cmpb	$4, %dl
      	sete	%dl
      	addb	%dl, %cl
      	movb	%cl, %dl
      	shlb	$2, %dl
      	addb	%r8b, %dl
      	shlb	$2, %dl
      	movzbl	%dl, %edx
      	movl	%esi, (%rdi,%rdx,4)
      	movl	%esi, 8(%rdi,%rdx,4)
      	movl	%esi, 4(%rdi,%rdx,4)
      	movl	%esi, 12(%rdi,%rdx,4)
      	incb	%r8b
      	decb	%al
      	movb	%r8b, %dl
      	jne	LBB0_1
      
      llvm-svn: 101958
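
      A quick self-contained check of the identity behind this match (illustrative
      only, not compiler code): when x and c share no set bits, the or produces no
      carries, so it equals the add and the shifted results agree:

      #include <cassert>

      int main() {
        // Exhaustive check over 8-bit values: if x and c share no bits,
        // (x | c) == (x + c), hence ((x | c) << 3) == ((x + c) << 3).
        for (unsigned x = 0; x < 256; ++x)
          for (unsigned c = 0; c < 256; ++c)
            if ((x & c) == 0)
              assert(((x | c) << 3) == ((x + c) << 3));
        return 0;
      }
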
    • Because of the EMMS problem, right now we have to support · 0522b90c
      Dale Johannesen authored
      user-defined operations that use MMX register types, but
      the compiler shouldn't generate them on its own.  This adds
      a Synthesizable abstraction to represent this, and changes
      the vector widening computation so it won't produce MMX types.
      (The motivation is to remove noise from the ABI compatibility
      part of the gcc test suite, which has some breakage right now.)

      llvm-svn: 101951
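
      A hedged sketch of what such an abstraction can look like; the names and type
      table below are stand-ins, not necessarily the hook this commit added. Type
      widening consults a target predicate and skips types the compiler may use when
      the user wrote them but must not create on its own:

      // Toy stand-in, not LLVM's type system.
      struct VT { const char *Name; unsigned Bits; bool UserOnly; };

      // MMX-sized vector types are usable when the user wrote them, but
      // marked "user only" so the compiler never synthesizes them itself.
      constexpr VT Types[] = {
        {"v2i32",  64, /*UserOnly=*/true},   // MMX
        {"v1i64",  64, /*UserOnly=*/true},   // MMX
        {"v4i32", 128, /*UserOnly=*/false},  // SSE
        {"v2i64", 128, /*UserOnly=*/false},  // SSE
      };

      // Widening: return the first strictly larger type the compiler is
      // allowed to synthesize, skipping the user-only (MMX) entries.
      const VT *widenTo(unsigned Bits) {
        for (const VT &T : Types)
          if (T.Bits > Bits && !T.UserOnly)
            return &T;
        return nullptr;  // nothing suitable; caller must handle this
      }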