  Apr 10, 2008
    • Teach InstCombine's ComputeMaskedBits to handle pointer expressions · 99b7b3f0
      Dan Gohman authored
      in addition to integer expressions. Rewrite GetOrEnforceKnownAlignment
      as a ComputeMaskedBits problem, moving all of its special alignment
      knowledge to ComputeMaskedBits as low-zero-bits knowledge.
      
      Also, teach ComputeMaskedBits a few basic things about Mul and PHI
      instructions.
      
      This improves ComputeMaskedBits-based simplifications in a few cases,
      but more noticeably it significantly improves instcombine's alignment
      detection for loads, stores, and memory intrinsics.
      
      llvm-svn: 49492
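The alignment-as-known-bits idea above can be sketched outside LLVM. This is a simplified model, not the actual ComputeMaskedBits API; the function names and the 64-bit clamp are illustrative only.

```cpp
#include <algorithm>

// Simplified model (not LLVM's ComputeMaskedBits API) of tracking
// alignment as known low zero bits: an 8-byte-aligned pointer has its
// low 3 bits known zero, and arithmetic propagates that knowledge.

// Product: trailing-zero counts add, since
// (a * 2^m) * (b * 2^n) = (a * b) * 2^(m + n).
unsigned knownLowZerosMul(unsigned LHS, unsigned RHS) {
  return std::min(LHS + RHS, 63u); // clamp below the 64-bit width
}

// Sum (e.g. pointer + offset) or PHI of several incoming values:
// only the smallest count is guaranteed to survive.
unsigned knownLowZerosMin(unsigned LHS, unsigned RHS) {
  return std::min(LHS, RHS);
}

// The alignment recoverable from a known low-zero-bit count.
unsigned long long alignmentFromLowZeros(unsigned Zeros) {
  return 1ULL << Zeros;
}
```

With this framing, a load's alignment falls out of whatever low-zero-bit count the analysis can prove for its pointer operand, which is why folding the old GetOrEnforceKnownAlignment logic into the known-bits computation helps loads, stores, and memory intrinsics alike.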
    • Disable an xform we've had for a long time, pow(x,0.5) -> sqrt. · a29d2536
      Chris Lattner authored
      This is not safe for all inputs.
      
      llvm-svn: 49458
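The unsafe input is easy to demonstrate with plain `<cmath>` (standard C++, not LLVM code): C99 defines pow(-inf, 0.5) as +inf, while sqrt of any negative value is NaN, so folding pow(x, 0.5) to sqrt(x) changes the result at x = -inf.

```cpp
#include <cmath>

// pow(x, 0.5) and sqrt(x) disagree on at least one input:
// C99 Annex F defines pow(-INFINITY, 0.5) == +INFINITY,
// but sqrt(-INFINITY) is a domain error yielding NaN.
// (They also differ on -0.0: pow gives +0.0, sqrt gives -0.0.)
double viaPow(double X)  { return std::pow(X, 0.5); }
double viaSqrt(double X) { return std::sqrt(X); }
```

For ordinary non-negative finite inputs the two agree, which is why the transform survived so long before being disabled.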
  Apr 02, 2008
    • · 586740f4
      David Greene authored
      Iterators following an erased SmallVector element are invalidated, so
      don't access cached iterators from after the erased element.
      
      Re-apply 49056 with SmallVector support.
      
      llvm-svn: 49106
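The rule this commit works around can be shown with std::vector, which has the same invalidation behavior as SmallVector on erase (illustrative code, not the patched LLVM pass): after erase(), iterators at or past the erased element are invalid, so continue from the iterator erase() returns rather than one cached before the call.

```cpp
#include <vector>

// Removes even elements in place, returning how many were erased.
// The safe idiom: never advance or reuse a cached iterator across an
// erase(); resume from the iterator that erase() returns.
int eraseEvens(std::vector<int> &V) {
  int Erased = 0;
  for (auto It = V.begin(); It != V.end(); /* advance below */) {
    if (*It % 2 == 0) {
      It = V.erase(It); // erase() returns the next valid iterator
      ++Erased;
    } else {
      ++It;
    }
  }
  return Erased;
}
```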
    • 1. Drop default inline threshold back down to 200. · ac38d444
      Evan Cheng authored
      2. Do not use the number of basic blocks as part of the cost computation, since it doesn't really figure into function size.
      3. More aggressively inline functions with vector code.
      
      llvm-svn: 49061
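A hypothetical sketch of the heuristic changes listed above; the per-instruction weight and the vector bonus are invented for illustration and are not LLVM's actual numbers:

```cpp
// Illustrative inline-cost model (names and weights are hypothetical):
// the basic-block count no longer contributes to the cost, and
// functions containing vector code get a bonus that makes inlining
// more likely.
struct FunctionInfo {
  unsigned NumInsts;
  unsigned NumBlocks; // tracked, but no longer part of the cost
  bool HasVectorCode;
};

constexpr int InlineThreshold = 200; // dropped back down to 200

int inlineCost(const FunctionInfo &F) {
  int Cost = int(F.NumInsts) * 5; // illustrative per-instruction weight
  if (F.HasVectorCode)
    Cost -= 100;                  // illustrative vector-code bonus
  return Cost;
}

bool shouldInline(const FunctionInfo &F) {
  return inlineCost(F) < InlineThreshold;
}
```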
    • Reverting 49056 due to the build being broken. · 052838c5
      Tanya Lattner authored
      llvm-svn: 49060
    • · 7f7edc38
      David Greene authored
      Iterators following an erased SmallVector element are invalidated, so
      don't access cached iterators from after the erased element.
      
      llvm-svn: 49056
  Mar 29, 2008
    • change iterator invalidation avoidance to just move the iterator backward · 4311ad2d
      Chris Lattner authored
      when something changes, instead of moving forward.  This allows us to 
      simplify memset lowering, inserting the memset at the end of the range of 
      stuff we're touching instead of at the start.
      
      This, in turn, allows us to make use of the addressing instructions already
      used in the function instead of inserting our own.  For example, we now
      codegen:
      
      	%tmp41 = getelementptr [8 x i8]* %ref_idx, i32 0, i32 0		; <i8*> [#uses=2]
      	call void @llvm.memset.i64( i8* %tmp41, i8 -1, i64 8, i32 1 )
      
      instead of:
      
      	%tmp20 = getelementptr [8 x i8]* %ref_idx, i32 0, i32 7		; <i8*> [#uses=1]
      	%ptroffset = getelementptr i8* %tmp20, i64 -7		; <i8*> [#uses=1]
      	call void @llvm.memset.i64( i8* %ptroffset, i8 -1, i64 8, i32 1 )
      
      llvm-svn: 48940
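The precondition for forming the memset can be sketched as follows (an illustrative model, not the pass itself): a run of adjacent same-value byte stores collapses into one memset, and emitting it at the end of the run lets it reuse the address already computed for the last store, as in the first codegen sample above.

```cpp
#include <cstddef>
#include <vector>

// Illustrative model of the memset-forming check: the candidate run
// must cover contiguous byte offsets, all storing the same value.
// The memset is then emitted at the end of the range, reusing the
// address computation of the last store rather than building a new
// pointer to the start.
struct ByteStore {
  std::size_t Offset;
  unsigned char Value;
};

bool canFormMemset(const std::vector<ByteStore> &Run) {
  if (Run.empty())
    return false;
  for (std::size_t I = 1; I < Run.size(); ++I)
    if (Run[I].Offset != Run[I - 1].Offset + 1 ||
        Run[I].Value != Run[0].Value)
      return false;
  return true;
}
```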
    • make the common case of a single store (which clearly shouldn't be turned · ac955157
      Chris Lattner authored
      into a memset!) faster by avoiding an allocation of an std::list node.
      
      llvm-svn: 48939