  1. Dec 20, 2010
    • Chris Lattner authored · 846c20d4
      Change the X86 backend to stop using the evil ADDC/ADDE/SUBC/SUBE nodes (which
      model their carry dependencies with MVT::Flag operands) and use clean and
      beautiful EFLAGS dependences instead.
      
      We do this by changing the modelling of SBB/ADC to have EFLAGS inputs and outputs
      (which is what requires the previous scheduler change) and by changing X86 ISelLowering
      to custom lower ADDC and friends down to X86ISD::ADD/ADC/SUB/SBB nodes.
      
      With the previous series of changes, this causes no changes in the testsuite, woo.
      
      llvm-svn: 122213
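
      A minimal sketch (not part of the commit) of the kind of source that exercises
      this path: multi-word addition is the classic producer of ADDC/ADDE, and after
      this change it lowers to X86ISD::ADD/ADC nodes tied together by an explicit
      EFLAGS dependence, which shows up as an add/adc pair in the final assembly
      (using the GCC/Clang __int128 extension).

      // 128-bit add: the low halves use add, the high halves use adc, with the
      // carry flowing through EFLAGS (expected x86-64 codegen: addq + adcq).
      unsigned __int128 add128(unsigned __int128 a, unsigned __int128 b) {
        return a + b;
      }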
  2. Dec 10, 2010
    • Nate Begeman authored · 8b08f523
      Formalize the notion that AVX and SSE are non-overlapping extensions from the
      compiler's point of view.  Per email discussion, we either want to always use
      VEX-prefixed instructions or never use them, and are taking "HasAVX" to mean
      "Always use VEX".  Passing -mattr=-avx,+sse42 should serve to restore legacy
      SSE support when desirable.
      
      llvm-svn: 121439
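
      A minimal sketch (not part of the commit) of what the distinction means in
      practice: with HasAVX the vector add below is emitted with a VEX prefix
      (vaddps), while -mattr=-avx,+sse42 keeps the legacy SSE encoding (addps).

      #include <xmmintrin.h>

      // Same intrinsic, two encodings: addps under plain SSE4.2, vaddps once the
      // target reports HasAVX and every instruction is VEX-prefixed.
      __m128 add4(__m128 a, __m128 b) {
        return _mm_add_ps(a, b);
      }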
  3. Dec 05, 2010
    • Chris Lattner authored · 364bb0a0
      It turns out that when the ".with.overflow" intrinsics were added to the X86
      backend, they were all implemented except umul.  That one fell back to the
      default implementation, which did a hi/lo multiply and compared the high half
      against zero.  Fix this to check the overflow flag that the 'mul' instruction
      sets, so we can avoid an explicit test.  Now we compile:
      
      void *func(long count) {
        return new int[count];
      }
      
      into:
      
      __Z4funcl:                              ## @_Z4funcl
      	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
      	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
      	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
      	seto	%cl                     ## encoding: [0x0f,0x90,0xc1]
      	testb	%cl, %cl                ## encoding: [0x84,0xc9]
      	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
      	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
      	jmp	__Znam                  ## TAILCALL
      
      instead of:
      
      __Z4funcl:                              ## @_Z4funcl
      	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
      	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
      	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
      	testq	%rdx, %rdx              ## encoding: [0x48,0x85,0xd2]
      	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
      	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
      	jmp	__Znam                  ## TAILCALL
      
      Other than the silly seto+test, this is using the o bit directly, so it's going in the right
      direction.
      
      llvm-svn: 120935
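
      A minimal sketch (not part of the commit): the overflow check the front end
      emits for new int[count] boils down to llvm.umul.with.overflow on the size
      computation, and it is that multiply-and-check which now maps onto the OF flag
      set by mulq instead of a separate test of the high half.  Written by hand with
      the GCC/Clang overflow builtin it looks roughly like this:

      #include <new>

      void *func_by_hand(unsigned long count) {
        unsigned long bytes;
        // true when count * sizeof(int) does not fit in 64 bits
        if (__builtin_umull_overflow(count, sizeof(int), &bytes))
          bytes = ~0ul;                // overflow: request an impossible allocation size
        return operator new[](bytes);  // __Znam in the assembly above
      }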