  1. Apr 23, 2013
    • DAGCombine should not aggressively fold SEXT(VSETCC(...)) into a wider VSETCC... · 2d4cca35
      Owen Anderson authored
      DAGCombine should not aggressively fold SEXT(VSETCC(...)) into a wider VSETCC without first checking the target's vector boolean contents.
      This exposed an issue with PowerPC AltiVec, where it appears the target was setting the wrong vector
      boolean contents. The included change fixes the PowerPC tests and was OK'd by Hal.
      
      llvm-svn: 180129
    • Add some constraints to use of 'returned': · 6c70dc78
      Stephen Lin authored
      1) Disallow 'returned' on parameter that is also 'sret' (no sensible semantics, as far as I can tell).
      2) Conservatively disallow tail calls through 'returned' parameters that also are 'zext' or 'sext' (for consistency with treatment of other zero-extending and sign-extending operations in tail call position detection...can be revised later to handle situations that can be determined to be safe).
      
      This is a new attribute that is not yet used, so there is no impact.
      
      llvm-svn: 180118
    • Remove unused DwarfSectionOffsetDirective string · 034ca0fe
      Matt Arsenault authored
      The value isn't actually used, and setting it emits a COFF-specific
      directive.
      
      llvm-svn: 180064
    • Move C++ code out of the C headers and into either C++ headers or the C++ files themselves · 04d4e931
      Eric Christopher authored
      Move C++ code out of the C headers and into either C++ headers or the
      C++ files themselves. This enables people to use just a C compiler to
      interoperate with LLVM.
      
      llvm-svn: 180063
  2. Apr 22, 2013
    • Optimize MachineBasicBlock::getSymbol by caching the symbol · 58b04b7e
      Eli Bendersky authored
      Since the symbol name computation is expensive, caching it helps save
      about 25% of the time spent in this function.
      
      llvm-svn: 180049
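The caching pattern this commit describes can be sketched as follows. This is a minimal illustration with hypothetical names (the real MachineBasicBlock code differs); the counter only exists to make the caching observable:

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for the expensive symbol-name computation;
// the counter makes it visible how often the expensive path runs.
static int gComputeCount = 0;

static std::string computeSymbolName(int ID) {
  ++gComputeCount;
  return "BB#" + std::to_string(ID);
}

// Sketch of the caching idea: compute the symbol on first request,
// then return the cached copy on every later call.
class Block {
  int ID;
  mutable std::string CachedSymbol; // empty string means "not computed yet"
public:
  explicit Block(int ID) : ID(ID) {}
  const std::string &getSymbol() const {
    if (CachedSymbol.empty())
      CachedSymbol = computeSymbolName(ID);
    return CachedSymbol;
  }
};
```

Repeated calls then pay the computation cost only once, which is where the reported ~25% saving in this function comes from.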
    • Clarify that llvm.used can contain aliases. · 74f2e46e
      Rafael Espindola authored
      Also add a check for llvm.used in the verifier and simplify clients now that
      they can assume they have a ConstantArray.
      
      llvm-svn: 180019
    • Tidy. · 44c6aa67
      Eric Christopher authored
      llvm-svn: 180000
    • Update comment. Whitespace. · 25e3509c
      Eric Christopher authored
      llvm-svn: 179999
    • Revert "Revert "PR14606: debug info imported_module support"" · f55abeaf
      David Blaikie authored
      This reverts commit r179840 with a fix to test/DebugInfo/two-cus-from-same-file.ll
      
      I'm not sure why that test only failed on ARM & MIPS and not X86 Linux, even
      though the debug info was clearly invalid on all of them, but this ought to fix
      it.
      
      llvm-svn: 179996
    • Legalize vector truncates by parts rather than just splitting. · 563983c8
      Jim Grosbach authored
      Rather than just splitting the input type and hoping for the best, apply
      a bit more cleverness. Just splitting the types until the source is
      legal often leads to an illegal result type, which is then widened, and
      a scalarization step is introduced, which leads to truly horrible code
      generation. With the loop vectorizer, these sorts of operations are much
      more common, and so it's worth extra effort to do them well.
      
      Add a legalization hook for the operands of a TRUNCATE node, which will
      be encountered after the result type has been legalized, but if the
      operand type is still illegal. If simple splitting of both types
      ends up with the result type of each half still being legal, just
      do that (v16i16 -> v16i8 on ARM, for example). If, however, that would
      result in an illegal result type (v8i32 -> v8i8 on ARM, for example),
      we can get more clever with power-of-two vectors. Specifically,
      split the input type, but also widen the result element size, then
      concatenate the halves and truncate again. For example, on ARM,
      to perform a "%res = v8i8 trunc v8i32 %in" we transform to:
        %inlo = v4i32 extract_subvector %in, 0
        %inhi = v4i32 extract_subvector %in, 4
        %lo16 = v4i16 trunc v4i32 %inlo
        %hi16 = v4i16 trunc v4i32 %inhi
        %in16 = v8i16 concat_vectors v4i16 %lo16, v4i16 %hi16
        %res = v8i8 trunc v8i16 %in16
      
      This allows instruction selection to generate three VMOVN instructions
      instead of a sequence of moves, stores and loads.
      
      Update the ARMTargetTransformInfo to take this improved legalization
      into account.
      
      Consider the simplified IR:
      
      define <16 x i8> @test1(<16 x i32>* %ap) {
        %a = load <16 x i32>* %ap
        %tmp = trunc <16 x i32> %a to <16 x i8>
        ret <16 x i8> %tmp
      }
      
      define <8 x i8> @test2(<8 x i32>* %ap) {
        %a = load <8 x i32>* %ap
        %tmp = trunc <8 x i32> %a to <8 x i8>
        ret <8 x i8> %tmp
      }
      
      Previously, we would generate the truly hideous:
      	.syntax unified
      	.section	__TEXT,__text,regular,pure_instructions
      	.globl	_test1
      	.align	2
      _test1:                                 @ @test1
      @ BB#0:
      	push	{r7}
      	mov	r7, sp
      	sub	sp, sp, #20
      	bic	sp, sp, #7
      	add	r1, r0, #48
      	add	r2, r0, #32
      	vld1.64	{d24, d25}, [r0:128]
      	vld1.64	{d16, d17}, [r1:128]
      	vld1.64	{d18, d19}, [r2:128]
      	add	r1, r0, #16
      	vmovn.i32	d22, q8
      	vld1.64	{d16, d17}, [r1:128]
      	vmovn.i32	d20, q9
      	vmovn.i32	d18, q12
      	vmov.u16	r0, d22[3]
      	strb	r0, [sp, #15]
      	vmov.u16	r0, d22[2]
      	strb	r0, [sp, #14]
      	vmov.u16	r0, d22[1]
      	strb	r0, [sp, #13]
      	vmov.u16	r0, d22[0]
      	vmovn.i32	d16, q8
      	strb	r0, [sp, #12]
      	vmov.u16	r0, d20[3]
      	strb	r0, [sp, #11]
      	vmov.u16	r0, d20[2]
      	strb	r0, [sp, #10]
      	vmov.u16	r0, d20[1]
      	strb	r0, [sp, #9]
      	vmov.u16	r0, d20[0]
      	strb	r0, [sp, #8]
      	vmov.u16	r0, d18[3]
      	strb	r0, [sp, #3]
      	vmov.u16	r0, d18[2]
      	strb	r0, [sp, #2]
      	vmov.u16	r0, d18[1]
      	strb	r0, [sp, #1]
      	vmov.u16	r0, d18[0]
      	strb	r0, [sp]
      	vmov.u16	r0, d16[3]
      	strb	r0, [sp, #7]
      	vmov.u16	r0, d16[2]
      	strb	r0, [sp, #6]
      	vmov.u16	r0, d16[1]
      	strb	r0, [sp, #5]
      	vmov.u16	r0, d16[0]
      	strb	r0, [sp, #4]
      	vldmia	sp, {d16, d17}
      	vmov	r0, r1, d16
      	vmov	r2, r3, d17
      	mov	sp, r7
      	pop	{r7}
      	bx	lr
      
      	.globl	_test2
      	.align	2
      _test2:                                 @ @test2
      @ BB#0:
      	push	{r7}
      	mov	r7, sp
      	sub	sp, sp, #12
      	bic	sp, sp, #7
      	vld1.64	{d16, d17}, [r0:128]
      	add	r0, r0, #16
      	vld1.64	{d20, d21}, [r0:128]
      	vmovn.i32	d18, q8
      	vmov.u16	r0, d18[3]
      	vmovn.i32	d16, q10
      	strb	r0, [sp, #3]
      	vmov.u16	r0, d18[2]
      	strb	r0, [sp, #2]
      	vmov.u16	r0, d18[1]
      	strb	r0, [sp, #1]
      	vmov.u16	r0, d18[0]
      	strb	r0, [sp]
      	vmov.u16	r0, d16[3]
      	strb	r0, [sp, #7]
      	vmov.u16	r0, d16[2]
      	strb	r0, [sp, #6]
      	vmov.u16	r0, d16[1]
      	strb	r0, [sp, #5]
      	vmov.u16	r0, d16[0]
      	strb	r0, [sp, #4]
      	ldm	sp, {r0, r1}
      	mov	sp, r7
      	pop	{r7}
      	bx	lr
      
      Now, however, we generate the much more straightforward:
      	.syntax unified
      	.section	__TEXT,__text,regular,pure_instructions
      	.globl	_test1
      	.align	2
      _test1:                                 @ @test1
      @ BB#0:
      	add	r1, r0, #48
      	add	r2, r0, #32
      	vld1.64	{d20, d21}, [r0:128]
      	vld1.64	{d16, d17}, [r1:128]
      	add	r1, r0, #16
      	vld1.64	{d18, d19}, [r2:128]
      	vld1.64	{d22, d23}, [r1:128]
      	vmovn.i32	d17, q8
      	vmovn.i32	d16, q9
      	vmovn.i32	d18, q10
      	vmovn.i32	d19, q11
      	vmovn.i16	d17, q8
      	vmovn.i16	d16, q9
      	vmov	r0, r1, d16
      	vmov	r2, r3, d17
      	bx	lr
      
      	.globl	_test2
      	.align	2
      _test2:                                 @ @test2
      @ BB#0:
      	vld1.64	{d16, d17}, [r0:128]
      	add	r0, r0, #16
      	vld1.64	{d18, d19}, [r0:128]
      	vmovn.i32	d16, q8
      	vmovn.i32	d17, q9
      	vmovn.i16	d16, q8
      	vmov	r0, r1, d16
      	bx	lr
      
      llvm-svn: 179989
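Numerically, the staged narrowing in this commit is just modular truncation done in two steps, which a scalar model can illustrate. The following is a sketch of the value semantics only (hypothetical helper name, not the SelectionDAG code):

```cpp
#include <array>
#include <cstdint>

// Model of truncating a v8i32 to v8i8 in two stages, mirroring the DAG
// transform: split into halves, truncate each half i32 -> i16, concatenate
// the halves into a v8i16, then truncate that to v8i8.
static std::array<uint8_t, 8> truncByParts(const std::array<uint32_t, 8> &In) {
  std::array<uint16_t, 4> Lo16, Hi16;
  for (int i = 0; i < 4; ++i) {
    Lo16[i] = static_cast<uint16_t>(In[i]);     // %lo16 = trunc v4i32 %inlo
    Hi16[i] = static_cast<uint16_t>(In[i + 4]); // %hi16 = trunc v4i32 %inhi
  }
  std::array<uint8_t, 8> Res;
  for (int i = 0; i < 4; ++i) {                 // concat_vectors + final trunc
    Res[i] = static_cast<uint8_t>(Lo16[i]);
    Res[i + 4] = static_cast<uint8_t>(Hi16[i]);
  }
  return Res;
}
```

Because truncation just keeps the low bits, truncating 32 -> 16 -> 8 yields the same result as truncating 32 -> 8 directly, which is why the two-step lowering is safe.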
  11. Apr 11, 2013
    • Add braces around || in && to pacify GCC. · e7c45bc6
      Benjamin Kramer authored
      llvm-svn: 179275
    • Manually remove successors in if conversion when CopyAndPredicateBlock is used · 95081bff
      Hal Finkel authored
      In the simple and triangle if-conversion cases, when CopyAndPredicateBlock is
      used because the to-be-predicated block has other predecessors, we need to
      explicitly remove the old copied block from the successors list. Normally,
      if-conversion relies on TII->AnalyzeBranch combined with BB->CorrectExtraCFGEdges
      to clean up the successors list, but if the predicated block contained an
      un-analyzable branch (such as a now-predicated return), then this will fail.
      
      These extra successors were causing a problem on PPC because they were causing
      later passes (such as PPCEarlyReturn) to leave dead return-only basic blocks in
      the code.
      
      llvm-svn: 179227
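The cleanup this commit adds amounts to erasing one entry from a block's successor list. A minimal sketch of that operation (hypothetical types; not the MachineBasicBlock API):

```cpp
#include <algorithm>
#include <vector>

// Minimal CFG block: just an ID and a successor list.
struct Block {
  int ID;
  std::vector<Block *> Succs;
};

// After CopyAndPredicateBlock, the predicated copy replaces the original
// block on this path, so the original must be removed from the predecessor's
// successor list explicitly; AnalyzeBranch cannot recover the edge past an
// un-analyzable (e.g. now-predicated) branch.
static void removeSuccessor(Block &Pred, Block *Old) {
  Pred.Succs.erase(std::remove(Pred.Succs.begin(), Pred.Succs.end(), Old),
                   Pred.Succs.end());
}
```

Without this, the stale edge keeps the copied block reachable in the CFG, which is how the dead return-only blocks survived on PPC.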
  12. Apr 10, 2013
    • Generalize the PassConfig API and remove addFinalizeRegAlloc(). · e220323c
      Andrew Trick authored
      The target hooks are getting out of hand. What does it mean to run
      before or after regalloc anyway? Allowing either Pass* or AnalysisID
      pass identification should make it much easier for targets to use the
      substitutePass and insertPass APIs, and create less need for badly
      named target hooks.
      
      llvm-svn: 179140