  1. Jul 28, 2010
    • Do GEP offset calculations with unsigned math rather than signed math · 32f889e5
      Dan Gohman authored
      to avoid undefined behavior on overflow, noticed by John Regehr.
      
      llvm-svn: 109594
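
      A minimal C++ illustration of the hazard this commit addresses (a sketch with hypothetical names, not the committed LLVM code): signed integer overflow is undefined behavior in C++, while unsigned arithmetic wraps modulo 2^64 to exactly the two's-complement bit pattern that offset math wants.

      #include <cstdint>

      // Hypothetical sketch: compute a GEP-style byte offset.
      // Doing index * elementSize directly in int64_t would be UB
      // if it overflows; uint64_t wraps instead.
      std::int64_t gepByteOffset(std::int64_t index, std::int64_t elementSize) {
        std::uint64_t raw = static_cast<std::uint64_t>(index) *
                            static_cast<std::uint64_t>(elementSize);
        // Converting back to signed is well defined (two's complement)
        // since C++20; before that, implementation-defined but
        // universally two's complement in practice.
        return static_cast<std::int64_t>(raw);
      }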
    • Implement a vectorized algorithm for <16 x i8> << <16 x i8> · 53afc8f0
      Nate Begeman authored
      This is about 4x faster and smaller than the existing scalarization.
      
      llvm-svn: 109566
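
      The commit message doesn't show the algorithm, so the following is only a sketch of a standard SSE4.1 lowering for per-byte variable shifts; it may differ from the committed instruction selection. Since x86 has no byte-granularity vector shift, handle the three useful bits of each shift amount one at a time, and use pblendvb, which selects per byte on the sign bit, to keep or take each partially shifted value.

      #include <smmintrin.h> // SSE4.1: _mm_blendv_epi8

      // Sketch: per-lane shl of 16 bytes, for amounts 0-7.
      static __m128i shl_v16i8(__m128i v, __m128i amt) {
        // Move bit 2 of each shift amount into each byte's sign bit.
        __m128i sel = _mm_slli_epi16(amt, 5);

        // Shift by 4: psllw works on 16-bit lanes, so mask off the
        // bits that leaked in from the neighboring byte.
        __m128i s4 = _mm_and_si128(_mm_slli_epi16(v, 4),
                                   _mm_set1_epi8((char)0xF0));
        v = _mm_blendv_epi8(v, s4, sel);

        sel = _mm_add_epi8(sel, sel); // amount bit 1 -> sign bit
        __m128i s2 = _mm_and_si128(_mm_slli_epi16(v, 2),
                                   _mm_set1_epi8((char)0xFC));
        v = _mm_blendv_epi8(v, s2, sel);

        sel = _mm_add_epi8(sel, sel); // amount bit 0 -> sign bit
        __m128i s1 = _mm_add_epi8(v, v); // shift by 1 via paddb
        return _mm_blendv_epi8(v, s1, sel);
      }

      Everything stays in vector registers, which is why an approach like this beats extracting each byte, shifting it in a scalar register, and reinserting it.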
    • ~40% faster vector shl <4 x i32> on SSE 4.1 · 269a6da0
      Nate Begeman authored
      ~40% faster vector shl <4 x i32> on SSE 4.1. Larger improvements for smaller types are coming in future patches.
      
      For:
      
      define <2 x i64> @shl(<4 x i32> %r, <4 x i32> %a) nounwind readnone ssp {
      entry:
        %shl = shl <4 x i32> %r, %a                     ; <<4 x i32>> [#uses=1]
        %tmp2 = bitcast <4 x i32> %shl to <2 x i64>     ; <<2 x i64>> [#uses=1]
        ret <2 x i64> %tmp2
      }
      
      We get:
      
      _shl:                                   ## @shl
      	pslld	$23, %xmm1
      	paddd	LCPI0_0, %xmm1
      	cvttps2dq	%xmm1, %xmm1
      	pmulld	%xmm1, %xmm0
      	ret
      
      Instead of:
      
      _shl:                                   ## @shl
      	pshufd	$3, %xmm0, %xmm2
      	movd	%xmm2, %eax
      	pshufd	$3, %xmm1, %xmm2
      	movd	%xmm2, %ecx
      	shll	%cl, %eax
      	movd	%eax, %xmm2
      	pshufd	$1, %xmm0, %xmm3
      	movd	%xmm3, %eax
      	pshufd	$1, %xmm1, %xmm3
      	movd	%xmm3, %ecx
      	shll	%cl, %eax
      	movd	%eax, %xmm3
      	punpckldq	%xmm2, %xmm3
      	movd	%xmm0, %eax
      	movd	%xmm1, %ecx
      	shll	%cl, %eax
      	movd	%eax, %xmm2
      	movhlps	%xmm0, %xmm0
      	movd	%xmm0, %eax
      	movhlps	%xmm1, %xmm1
      	movd	%xmm1, %ecx
      	shll	%cl, %eax
      	movd	%eax, %xmm0
      	punpckldq	%xmm0, %xmm2
      	movdqa	%xmm2, %xmm0
      	punpckldq	%xmm3, %xmm0
      	ret
      
      llvm-svn: 109549
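
      What the four-instruction sequence above is doing, as a C++ intrinsics sketch (the function name is illustrative, and LCPI0_0 is presumably the bit pattern of 1.0f in each lane): build the float 2^a by shifting each shift amount into the exponent field and adding the exponent bias, truncate back to integer, and multiply, since r << a == r * 2^a in the low 32 bits.

      #include <smmintrin.h> // SSE4.1: _mm_mullo_epi32 (pmulld)

      // Sketch of the lowering shown above for <4 x i32> shl.
      static __m128i shl_v4i32(__m128i r, __m128i a) {
        // pslld $23: put each amount a in the float exponent field,
        // then paddd the bias 127 << 23 (the bits of 1.0f), so each
        // lane now holds the bit pattern of the float 2^a.
        __m128i bits = _mm_add_epi32(_mm_slli_epi32(a, 23),
                                     _mm_set1_epi32(0x3F800000));
        // cvttps2dq: truncate the float 2^a back to the integer 2^a.
        __m128i pow2 = _mm_cvttps_epi32(_mm_castsi128_ps(bits));
        // pmulld: r * 2^a keeps the low 32 bits, i.e. r << a.
        return _mm_mullo_epi32(r, pow2);
      }

      Amounts 0-30 convert exactly; a = 31 also happens to work, because cvttps2dq's out-of-range result is 0x80000000, which is exactly 1 << 31.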