  Aug 28, 2010
    • remove unions from LLVM IR. They are severely buggy and not being actively maintained, improved, or extended. · 13ee795c
      Chris Lattner authored
      
      llvm-svn: 112356
    • remove the ABCD and SSI passes. They don't have any clients that I'm aware of, aren't maintained, and LVI will be replacing their value. nlewycky approved this on IRC. · 504e5100
      Chris Lattner authored
      
      llvm-svn: 112355
    • for completeness, allow undef also. · 50df36ac
      Chris Lattner authored
      llvm-svn: 112351
    • squish dead code. · 95bb297c
      Chris Lattner authored
      llvm-svn: 112350
    • handle the constant case of vector insertion. · d0214f3e
      Chris Lattner authored
      For something like this:
      
      struct S { float A, B, C, D; };
      
      struct S g;
      struct S bar() { 
        struct S A = g;
        ++A.B;
        A.A = 42;
        return A;
      }
      
      we now generate:
      
      _bar:                                   ## @bar
      ## BB#0:                                ## %entry
      	movq	_g@GOTPCREL(%rip), %rax
      	movss	12(%rax), %xmm0
      	pshufd	$16, %xmm0, %xmm0
      	movss	4(%rax), %xmm2
      	movss	8(%rax), %xmm1
      	pshufd	$16, %xmm1, %xmm1
      	unpcklps	%xmm0, %xmm1
      	addss	LCPI1_0(%rip), %xmm2
      	pshufd	$16, %xmm2, %xmm2
      	movss	LCPI1_1(%rip), %xmm0
      	pshufd	$16, %xmm0, %xmm0
      	unpcklps	%xmm2, %xmm0
      	ret
      
      instead of:
      
      _bar:                                   ## @bar
      ## BB#0:                                ## %entry
      	movq	_g@GOTPCREL(%rip), %rax
      	movss	12(%rax), %xmm0
      	pshufd	$16, %xmm0, %xmm0
      	movss	4(%rax), %xmm2
      	movss	8(%rax), %xmm1
      	pshufd	$16, %xmm1, %xmm1
      	unpcklps	%xmm0, %xmm1
      	addss	LCPI1_0(%rip), %xmm2
      	movd	%xmm2, %eax
      	shlq	$32, %rax
      	addq	$1109917696, %rax       ## imm = 0x42280000
      	movd	%rax, %xmm0
      	ret
      
      llvm-svn: 112345
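
      A minimal LLVM IR sketch of the constant case this commit handles (value
      names are illustrative, not from the commit). Before, the i64 word is
      built from a shifted field plus the constant bit pattern of 42.0
      (0x42280000, the immediate visible in the old assembly above):

        %b.bits = bitcast float %b to i32
        %hi = zext i32 %b.bits to i64
        %shifted = shl i64 %hi, 32
        %word = add i64 %shifted, 1109917696   ; 0x42280000 = bits of float 42.0
        %vec = bitcast i64 %word to <2 x float>

      which can now become plain element insertions, with the constant piece
      turned directly into a float constant:

        %tmp = insertelement <2 x float> undef, float 4.200000e+01, i32 0
        %vec = insertelement <2 x float> %tmp, float %b, i32 1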
    • optimize bitcasts from large integers to vector into vector element insertion from the pieces that feed into the vector. · dd660104
      Chris Lattner authored
      This handles a pattern that occurs frequently due to code
      generated for the x86-64 ABI. We now compile something like
      this:
      
      struct S { float A, B, C, D; };
      struct S g;
      struct S bar() { 
        struct S A = g;
        ++A.A;
        ++A.C;
        return A;
      }
      
      into all nice vector operations:
      
      _bar:                                   ## @bar
      ## BB#0:                                ## %entry
      	movq	_g@GOTPCREL(%rip), %rax
      	movss	LCPI1_0(%rip), %xmm1
      	movss	(%rax), %xmm0
      	addss	%xmm1, %xmm0
      	pshufd	$16, %xmm0, %xmm0
      	movss	4(%rax), %xmm2
      	movss	12(%rax), %xmm3
      	pshufd	$16, %xmm2, %xmm2
      	unpcklps	%xmm2, %xmm0
      	addss	8(%rax), %xmm1
      	pshufd	$16, %xmm1, %xmm1
      	pshufd	$16, %xmm3, %xmm2
      	unpcklps	%xmm2, %xmm1
      	ret
      
      instead of icky integer operations:
      
      _bar:                                   ## @bar
      	movq	_g@GOTPCREL(%rip), %rax
      	movss	LCPI1_0(%rip), %xmm1
      	movss	(%rax), %xmm0
      	addss	%xmm1, %xmm0
      	movd	%xmm0, %ecx
      	movl	4(%rax), %edx
      	movl	12(%rax), %esi
      	shlq	$32, %rdx
      	addq	%rcx, %rdx
      	movd	%rdx, %xmm0
      	addss	8(%rax), %xmm1
      	movd	%xmm1, %eax
      	shlq	$32, %rsi
      	addq	%rax, %rsi
      	movd	%rsi, %xmm1
      	ret
      
      This resolves rdar://8360454
      
      llvm-svn: 112343
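
      As an illustrative sketch (names invented, not taken from the commit),
      the kind of IR this targets builds a two-float return value through an
      i64, as the x86-64 ABI lowering does:

        %a.bits = bitcast float %a to i32
        %b.bits = bitcast float %b to i32
        %lo = zext i32 %a.bits to i64
        %hi = zext i32 %b.bits to i64
        %hi.shl = shl i64 %hi, 32
        %word = or i64 %hi.shl, %lo
        %vec = bitcast i64 %word to <2 x float>

      and can now be rewritten as direct element insertion:

        %v0 = insertelement <2 x float> undef, float %a, i32 0
        %vec = insertelement <2 x float> %v0, float %b, i32 1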
    • Update CMake build. Add newline at end of file. · 83f9ff04
      Benjamin Kramer authored
      llvm-svn: 112332
    • Add a prototype of a new peephole optimizing pass that uses LazyValue info to simplify PHIs and selects. · cf7f9411
      Owen Anderson authored
      This pass addresses the missed optimizations from PR2581 and PR4420.
      
      llvm-svn: 112325
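
      An illustrative sketch (not taken from the PRs) of a select that value
      range information can fold. In a block reachable only when %x is nonzero:

        %cmp = icmp eq i32 %x, 0
        %r = select i1 %cmp, i32 %a, i32 %b

      LazyValue info proves %cmp is false there, so uses of %r can simply be
      replaced by %b.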
    • Enhance the shift propagator to handle the case when you have: · 6c1395f6
      Chris Lattner authored
      A = shl x, 42
      ...
      B = lshr ..., 38
      
      which can be transformed into:
      A = shl x, 4
      ...
      
      iff we can prove that the would-be-shifted-in bits
      are already zero. This eliminates two shifts in the testcase
      and allows elimination of the whole i128 chain in the real example.
      
      llvm-svn: 112314
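
      A concrete (illustrative) i64 instance of the same shape, mirroring the
      commit's A/B naming:

        %A = shl i64 %x, 42
        ; ... intervening logical ops ...
        %B = lshr i64 %A, 38

      When the would-be-shifted-in bits are provably zero, the pair collapses
      to a single smaller shift:

        %A = shl i64 %x, 4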
    • Implement a pretty general logical shift propagation framework, which is good at ripping through bitfield operations. · 18d7fc8f
      Chris Lattner authored
      This generalizes a bunch of the existing
      xforms that instcombine does, such as
        (x << c) >> c -> and
      to handle intermediate logical nodes. This is useful for
      ripping up the "promote to large integer" code produced by
      SRoA.
      
      llvm-svn: 112304
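
      For example, the (x << c) >> c xform at i32 (a sketch, not a test from
      the commit):

        %t = shl i32 %x, 8
        %r = lshr i32 %t, 8

      folds both shifts into a single mask of the low 24 bits:

        %r = and i32 %x, 16777215   ; 0x00FFFFFF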