  1. Apr 22, 2013
    • Tidy. · 44c6aa67
      Eric Christopher authored
      llvm-svn: 180000
    • Update comment. Whitespace. · 25e3509c
      Eric Christopher authored
      llvm-svn: 179999
    • Revert "Revert "PR14606: Debug info for using directives/DW_TAG_imported_module"" · 9f88fe86
      David Blaikie authored
      This reverts commit 179839 now that the corresponding LLVM patch has been fixed.
      
      llvm-svn: 179997
    • Revert "Revert "PR14606: debug info imported_module support"" · f55abeaf
      David Blaikie authored
      This reverts commit r179840 with a fix to test/DebugInfo/two-cus-from-same-file.ll
      
      I'm not sure why that test only failed on ARM & MIPS and not X86 Linux, even
      though the debug info was clearly invalid on all of them, but this ought to fix
      it.
      
      llvm-svn: 179996
    • Convert windows line endings to linux/unix line endings. · 7af39d7d
      Craig Topper authored
      llvm-svn: 179995
    • Fix indentation. No functional change. · 2172ad64
      Craig Topper authored
      llvm-svn: 179994
    • Add a triple to make a test resilient to non-TLS hosts (eg: darwin10) · 8ddc2b50
      David Blaikie authored
      Making the test introduced in r179962 resilient to being run on darwin10 hosts.
      
      llvm-svn: 179992
    • Remove an unreachable 'break' following a 'return'. · b5ba3d3b
      Craig Topper authored
      llvm-svn: 179991
    • Improve performance of file I/O. · 9a9141ae
      Bill Wendling authored
      The fread / fwrite calls were happening for each timer. However, that could be
      pretty expensive for a large number of timers. Instead, read and write the
      timers in one call.
      
      This gives ~10% speedup in compilation time.
      
      llvm-svn: 179990
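A minimal sketch of the batching idea described in this commit (hypothetical Python, not LLVM's actual C++ timer code; the two-field record layout is an assumption for illustration):

```python
# Instead of one read/write call per timer record, pack all records into a
# single buffer and perform one I/O call. Names here are illustrative.
import io
import struct

RECORD = struct.Struct("<dd")  # e.g. (wall_time, user_time) per timer


def write_timers_batched(stream, timers):
    # One write call for all timers instead of len(timers) calls.
    buf = b"".join(RECORD.pack(w, u) for (w, u) in timers)
    stream.write(buf)


def read_timers_batched(stream, count):
    # One read call, then unpack the records in memory.
    buf = stream.read(RECORD.size * count)
    return [RECORD.unpack_from(buf, i * RECORD.size) for i in range(count)]


timers = [(1.5, 0.5), (2.0, 1.0), (0.25, 0.125)]
out = io.BytesIO()
write_timers_batched(out, timers)
out.seek(0)
assert read_timers_batched(out, len(timers)) == timers
```

The speedup comes purely from reducing the number of buffered-I/O calls; the bytes on disk are identical either way.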
    • Legalize vector truncates by parts rather than just splitting. · 563983c8
      Jim Grosbach authored
      Rather than just splitting the input type and hoping for the best, apply
      a bit more cleverness. Just splitting the types until the source is
      legal often leads to an illegal result type, which is then widened and a
      scalarization step is introduced which leads to truly horrible code
      generation. With the loop vectorizer, these sorts of operations are much
      more common, and so it's worth extra effort to do them well.
      
      Add a legalization hook for the operands of a TRUNCATE node, which will
      be encountered after the result type has been legalized but while the
      operand type is still illegal. If simple splitting of both types
      ends up with the result type of each half still being legal, just
      do that (v16i16 -> v16i8 on ARM, for example). If, however, that would
      result in an illegal result type (v8i32 -> v8i8 on ARM, for example),
      we can get more clever with power-two vectors. Specifically,
      split the input type, but also widen the result element size, then
      concatenate the halves and truncate again. For example, on ARM,
      to perform a "%res = v8i8 trunc v8i32 %in" we transform to:
        %inlo = v4i32 extract_subvector %in, 0
        %inhi = v4i32 extract_subvector %in, 4
        %lo16 = v4i16 trunc v4i32 %inlo
        %hi16 = v4i16 trunc v4i32 %inhi
        %in16 = v8i16 concat_vectors v4i16 %lo16, v4i16 %hi16
        %res = v8i8 trunc v8i16 %in16
      
      This allows instruction selection to generate three VMOVN instructions
      instead of a sequence of moves, stores, and loads.
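The split-widen-concatenate transform above can be checked with a plain Python simulation (hypothetical code, not part of LLVM; lists of integers stand in for vector lanes):

```python
# Simulate "v8i8 trunc v8i32" done in two steps, mirroring the IR above:
# split the v8i32 input, truncate each half to v4i16, concatenate to
# v8i16, then truncate once more to v8i8.
def trunc(vec, bits):
    # Element-wise integer truncation to the low 'bits' bits.
    return [x & ((1 << bits) - 1) for x in vec]


def trunc_v8i32_to_v8i8(v_in):
    assert len(v_in) == 8
    inlo, inhi = v_in[:4], v_in[4:]  # extract_subvector %in, 0 / 4
    lo16 = trunc(inlo, 16)           # v4i16 trunc v4i32 %inlo
    hi16 = trunc(inhi, 16)           # v4i16 trunc v4i32 %inhi
    in16 = lo16 + hi16               # v8i16 concat_vectors %lo16, %hi16
    return trunc(in16, 8)            # v8i8 trunc v8i16 %in16


v = [0x12345678, 0x1FF, 7, 256, 0xABCD, 1, 2, 300]
# Two-step truncation agrees with truncating each lane directly to 8 bits.
assert trunc_v8i32_to_v8i8(v) == [x & 0xFF for x in v]
```

Each `trunc` call in the simulation corresponds to one VMOVN in the generated code, which is why three VMOVNs suffice for the v8i32 case.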
      
      Update the ARMTargetTransformInfo to take this improved legalization
      into account.
      
      Consider the simplified IR:
      
      define <16 x i8> @test1(<16 x i32>* %ap) {
        %a = load <16 x i32>* %ap
        %tmp = trunc <16 x i32> %a to <16 x i8>
        ret <16 x i8> %tmp
      }
      
      define <8 x i8> @test2(<8 x i32>* %ap) {
        %a = load <8 x i32>* %ap
        %tmp = trunc <8 x i32> %a to <8 x i8>
        ret <8 x i8> %tmp
      }
      
      Previously, we would generate the truly hideous:
      	.syntax unified
      	.section	__TEXT,__text,regular,pure_instructions
      	.globl	_test1
      	.align	2
      _test1:                                 @ @test1
      @ BB#0:
      	push	{r7}
      	mov	r7, sp
      	sub	sp, sp, #20
      	bic	sp, sp, #7
      	add	r1, r0, #48
      	add	r2, r0, #32
      	vld1.64	{d24, d25}, [r0:128]
      	vld1.64	{d16, d17}, [r1:128]
      	vld1.64	{d18, d19}, [r2:128]
      	add	r1, r0, #16
      	vmovn.i32	d22, q8
      	vld1.64	{d16, d17}, [r1:128]
      	vmovn.i32	d20, q9
      	vmovn.i32	d18, q12
      	vmov.u16	r0, d22[3]
      	strb	r0, [sp, #15]
      	vmov.u16	r0, d22[2]
      	strb	r0, [sp, #14]
      	vmov.u16	r0, d22[1]
      	strb	r0, [sp, #13]
      	vmov.u16	r0, d22[0]
      	vmovn.i32	d16, q8
      	strb	r0, [sp, #12]
      	vmov.u16	r0, d20[3]
      	strb	r0, [sp, #11]
      	vmov.u16	r0, d20[2]
      	strb	r0, [sp, #10]
      	vmov.u16	r0, d20[1]
      	strb	r0, [sp, #9]
      	vmov.u16	r0, d20[0]
      	strb	r0, [sp, #8]
      	vmov.u16	r0, d18[3]
      	strb	r0, [sp, #3]
      	vmov.u16	r0, d18[2]
      	strb	r0, [sp, #2]
      	vmov.u16	r0, d18[1]
      	strb	r0, [sp, #1]
      	vmov.u16	r0, d18[0]
      	strb	r0, [sp]
      	vmov.u16	r0, d16[3]
      	strb	r0, [sp, #7]
      	vmov.u16	r0, d16[2]
      	strb	r0, [sp, #6]
      	vmov.u16	r0, d16[1]
      	strb	r0, [sp, #5]
      	vmov.u16	r0, d16[0]
      	strb	r0, [sp, #4]
      	vldmia	sp, {d16, d17}
      	vmov	r0, r1, d16
      	vmov	r2, r3, d17
      	mov	sp, r7
      	pop	{r7}
      	bx	lr
      
      	.globl	_test2
      	.align	2
      _test2:                                 @ @test2
      @ BB#0:
      	push	{r7}
      	mov	r7, sp
      	sub	sp, sp, #12
      	bic	sp, sp, #7
      	vld1.64	{d16, d17}, [r0:128]
      	add	r0, r0, #16
      	vld1.64	{d20, d21}, [r0:128]
      	vmovn.i32	d18, q8
      	vmov.u16	r0, d18[3]
      	vmovn.i32	d16, q10
      	strb	r0, [sp, #3]
      	vmov.u16	r0, d18[2]
      	strb	r0, [sp, #2]
      	vmov.u16	r0, d18[1]
      	strb	r0, [sp, #1]
      	vmov.u16	r0, d18[0]
      	strb	r0, [sp]
      	vmov.u16	r0, d16[3]
      	strb	r0, [sp, #7]
      	vmov.u16	r0, d16[2]
      	strb	r0, [sp, #6]
      	vmov.u16	r0, d16[1]
      	strb	r0, [sp, #5]
      	vmov.u16	r0, d16[0]
      	strb	r0, [sp, #4]
      	ldm	sp, {r0, r1}
      	mov	sp, r7
      	pop	{r7}
      	bx	lr
      
      Now, however, we generate the much more straightforward:
      	.syntax unified
      	.section	__TEXT,__text,regular,pure_instructions
      	.globl	_test1
      	.align	2
      _test1:                                 @ @test1
      @ BB#0:
      	add	r1, r0, #48
      	add	r2, r0, #32
      	vld1.64	{d20, d21}, [r0:128]
      	vld1.64	{d16, d17}, [r1:128]
      	add	r1, r0, #16
      	vld1.64	{d18, d19}, [r2:128]
      	vld1.64	{d22, d23}, [r1:128]
      	vmovn.i32	d17, q8
      	vmovn.i32	d16, q9
      	vmovn.i32	d18, q10
      	vmovn.i32	d19, q11
      	vmovn.i16	d17, q8
      	vmovn.i16	d16, q9
      	vmov	r0, r1, d16
      	vmov	r2, r3, d17
      	bx	lr
      
      	.globl	_test2
      	.align	2
      _test2:                                 @ @test2
      @ BB#0:
      	vld1.64	{d16, d17}, [r0:128]
      	add	r0, r0, #16
      	vld1.64	{d18, d19}, [r0:128]
      	vmovn.i32	d16, q8
      	vmovn.i32	d17, q9
      	vmovn.i16	d16, q8
      	vmov	r0, r1, d16
      	bx	lr
      
      llvm-svn: 179989
    • ARM: Split out cost model vcvt testcases. · fb08e55c
      Jim Grosbach authored
      They had a separate RUN line already, so may as well be in a separate file.
      
      llvm-svn: 179988
  2. Apr 21, 2013