  1. Apr 02, 2006
    • vector casts of casts are eliminable. Transform this: · caba72b6
      Chris Lattner authored
              %tmp = cast <4 x uint> %tmp to <4 x int>                ; <<4 x int>> [#uses=1]
              %tmp = cast <4 x int> %tmp to <4 x float>               ; <<4 x float>> [#uses=1]
      
      into:
      
              %tmp = cast <4 x uint> %tmp to <4 x float>              ; <<4 x float>> [#uses=1]
      
      llvm-svn: 27355
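
      A standalone C spot-check of the identity behind this fold (not from the LLVM sources; the helper names below are made up): when the intermediate cast only reinterprets bits at the same width, dropping it and casting the original value directly yields the same bits.

      #include <assert.h>
      #include <stdint.h>
      #include <string.h>

      /* Bit-preserving 32-bit reinterpretations, written with memcpy so the sketch stays portable C. */
      static int32_t uint_bits_to_int(uint32_t u)   { int32_t i; memcpy(&i, &u, sizeof i); return i; }
      static float   int_bits_to_float(int32_t i)   { float f;   memcpy(&f, &i, sizeof f); return f; }
      static float   uint_bits_to_float(uint32_t u) { float f;   memcpy(&f, &u, sizeof f); return f; }

      int main(void) {
        uint32_t u = 0x7FC00000u;                                  /* one lane of a <4 x uint> */
        float two_casts = int_bits_to_float(uint_bits_to_int(u));  /* uint -> int -> float */
        float one_cast  = uint_bits_to_float(u);                   /* uint -> float */
        assert(memcmp(&two_casts, &one_cast, sizeof(float)) == 0); /* same bits (== would mishandle the NaN pattern) */
        return 0;
      }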
    • Allow transforming this: · ebca476b
      Chris Lattner authored
              %tmp = cast <4 x uint>* %testData to <4 x int>*         ; <<4 x int>*> [#uses=1]
              %tmp = load <4 x int>* %tmp             ; <<4 x int>> [#uses=1]
      
      to this:
      
              %tmp = load <4 x uint>* %testData               ; <<4 x uint>> [#uses=1]
              %tmp = cast <4 x uint> %tmp to <4 x int>                ; <<4 x int>> [#uses=1]
      
      llvm-svn: 27353
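
      The same idea at the C level, as an illustrative sketch only: loading through a same-size casted pointer gives the same bits as loading the original type and then reinterpreting the loaded value, so the cast can be pushed from the pointer onto the value.

      #include <assert.h>
      #include <stdint.h>
      #include <string.h>

      int main(void) {
        uint32_t testData[4] = { 0x7FC00000u, 0, 0, 0x7FC00000u };

        /* Before: cast the pointer, then load through it.  (Signed and unsigned
           variants of the same width may alias in C, so this is well defined.) */
        int32_t before[4];
        for (int i = 0; i < 4; ++i)
          before[i] = ((const int32_t *)testData)[i];

        /* After: load the original <4 x uint>, then reinterpret the loaded value. */
        uint32_t loaded[4];
        memcpy(loaded, testData, sizeof loaded);
        int32_t after[4];
        memcpy(after, loaded, sizeof after);

        assert(memcmp(before, after, sizeof before) == 0);
        return 0;
      }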
    • Turn altivec lvx/stvx intrinsics into loads and stores. This allows the · f42d0aed
      Chris Lattner authored
      elimination of one load from this:
      
      int AreSecondAndThirdElementsBothNegative( vector float *in ) {
      #define QNaN 0x7FC00000
        const vector unsigned int testData = (vector unsigned int)( QNaN, 0, 0, QNaN );
        vector float test = vec_ld( 0, (float*) &testData );
        return ! vec_any_ge( test, *in );
      }
      
      Now generating:
      
      _AreSecondAndThirdElementsBothNegative:
              mfspr r2, 256
              oris r4, r2, 49152
              mtspr 256, r4
              li r4, lo16(LCPI1_0)
              lis r5, ha16(LCPI1_0)
              addi r6, r1, -16
              lvx v0, r5, r4
              stvx v0, 0, r6
              lvx v1, 0, r3
              vcmpgefp. v0, v0, v1
              mfcr r3, 2
              rlwinm r3, r3, 27, 31, 31
              xori r3, r3, 1
              cntlzw r3, r3
              srwi r3, r3, 5
              mtspr 256, r2
              blr
      
      llvm-svn: 27352
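
      For reference, a hedged source-level sketch of why the intrinsic can be modelled as a plain load (needs an AltiVec-enabled compiler and target, e.g. GCC with -maltivec; not taken from the LLVM sources): with a 16-byte aligned pointer, vec_ld(0, p) and a direct dereference read the same 16 bytes, and once the intrinsic is an ordinary load, redundant loads like the one above become visible to later passes.

      #include <altivec.h>

      /* Both functions load the same 16 aligned bytes. */
      vector float load_via_intrinsic(const float *p)          { return vec_ld(0, p); }
      vector float load_via_dereference(const vector float *p) { return *p; }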
    • Fix InstCombine/2006-04-01-InfLoop.ll · 6cf4914f
      Chris Lattner authored
      llvm-svn: 27330
  2. Mar 16, 2006
    • For each loop, keep track of all the IV expressions inserted indexed by · 3df447d3
      Evan Cheng authored
      stride. For a set of uses of the IV of a stride which is a multiple
      of another stride, do not insert a new IV expression. Rather, reuse the
      previous IV and rewrite the uses as uses of IV expression multiplied by
      the factor.
      
      e.g.
      x = 0 ...; x ++
      y = 0 ...; y += 4
      then use of y can be rewritten as use of 4*x for x86.
      
      llvm-svn: 26803
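
      A C-level sketch of the effect, with made-up function names (the pass itself works on the IR, not on C source): the stride-4 induction variable is not materialized; every use of it becomes 4*x, which x86 can fold straight into the addressing mode.

      #include <assert.h>

      /* Before: one induction variable per stride. */
      static long sum_two_ivs(const int *a, int n) {
        long s = 0;
        for (int x = 0, y = 0; x < n; x++, y += 4)
          s += a[y];
        return s;
      }

      /* After: the stride-4 IV is gone; its uses are rewritten as 4*x. */
      static long sum_one_iv(const int *a, int n) {
        long s = 0;
        for (int x = 0; x < n; x++)
          s += a[4 * x];
        return s;
      }

      int main(void) {
        int a[40];
        for (int i = 0; i < 40; ++i) a[i] = i;
        assert(sum_two_ivs(a, 10) == sum_one_iv(a, 10));
        return 0;
      }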
  3. Feb 27, 2006
    • Merge two almost-identical pieces of code. · c7bfed0f
      Chris Lattner authored
      Make this code more powerful by using ComputeMaskedBits instead of looking
      for an AND operand.  This lets us fold this:
      
      int %test23(int %a) {
              %tmp.1 = and int %a, 1
              %tmp.2 = seteq int %tmp.1, 0
              %tmp.3 = cast bool %tmp.2 to int  ;; xor tmp1, 1
              ret int %tmp.3
      }
      
      into: xor (and a, 1), 1
      llvm-svn: 26396
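
      A standalone C spot-check of the fold (illustrative, not LLVM code): because (a & 1) can only be 0 or 1, testing it against 0 and widening the bool is the same as flipping its low bit.

      #include <assert.h>

      int main(void) {
        for (int a = -8; a <= 8; ++a) {
          int before = ((a & 1) == 0) ? 1 : 0;  /* seteq + cast bool to int */
          int after  = (a & 1) ^ 1;             /* xor (and a, 1), 1 */
          assert(before == after);
        }
        return 0;
      }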
    • Fold (A^B) == A -> B == 0 · f5c8a0b8
      Chris Lattner authored
      and  (A-B) == A  ->  B == 0
      
      llvm-svn: 26394
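
      A standalone C spot-check of both identities (illustrative only); each holds even with wraparound, since a ^ b == a and a - b == a are each true exactly when b is 0.

      #include <assert.h>
      #include <stdint.h>

      int main(void) {
        for (int32_t a = -4; a <= 4; ++a)
          for (int32_t b = -4; b <= 4; ++b) {
            assert(((a ^ b) == a) == (b == 0));   /* (A^B) == A  ->  B == 0 */
            assert(((a - b) == a) == (b == 0));   /* (A-B) == A  ->  B == 0 */
          }
        return 0;
      }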
  4. Feb 24, 2006
    • Fix a problem that Nate noticed, which boils down to an over-conservative check · b580d26e
      Chris Lattner authored
      in the code that does "select C, (X+Y), (X-Y) --> (X+(select C, Y, (-Y)))".
      We now compile this loop:
      
      LBB1_1: ; no_exit
              add r6, r2, r3
              subf r3, r2, r3
              cmpwi cr0, r2, 0
              addi r7, r5, 4
              lwz r2, 0(r5)
              addi r4, r4, 1
              blt cr0, LBB1_4 ; no_exit
      LBB1_3: ; no_exit
              mr r3, r6
      LBB1_4: ; no_exit
              cmpwi cr0, r4, 16
              mr r5, r7
              bne cr0, LBB1_1 ; no_exit
      
      into this instead:
      
      LBB1_1: ; no_exit
              srawi r6, r2, 31
              add r2, r2, r6
              xor r6, r2, r6
              addi r7, r5, 4
              lwz r2, 0(r5)
              addi r4, r4, 1
              add r3, r3, r6
              cmpwi cr0, r4, 16
              mr r5, r7
              bne cr0, LBB1_1 ; no_exit
      
      llvm-svn: 26356
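
      A standalone C spot-check of the underlying identity (illustrative only): selecting between X+Y and X-Y is the same as adding a conditionally negated Y to X, which is why the branchy first loop above can be rewritten into the branch-free second one.

      #include <assert.h>

      int main(void) {
        int xs[] = { -7, 0, 3, 100 };
        int ys[] = { -5, 0, 2, 41 };
        for (int i = 0; i < 4; ++i)
          for (int j = 0; j < 4; ++j)
            for (int c = 0; c <= 1; ++c) {
              int x = xs[i], y = ys[j];
              int before = c ? (x + y) : (x - y);  /* select C, (X+Y), (X-Y) */
              int after  = x + (c ? y : -y);       /* X + (select C, Y, (-Y)) */
              assert(before == after);
            }
        return 0;
      }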