  1. Apr 02, 2006
    • vector casts of casts are eliminable. Transform this: · caba72b6
      Chris Lattner authored
              %tmp1 = cast <4 x uint> %tmp to <4 x int>               ; <<4 x int>> [#uses=1]
              %tmp2 = cast <4 x int> %tmp1 to <4 x float>             ; <<4 x float>> [#uses=1]
      
      into:
      
              %tmp2 = cast <4 x uint> %tmp to <4 x float>             ; <<4 x float>> [#uses=1]
      
      llvm-svn: 27355
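      A minimal C-level sketch of where such a chain arises (hypothetical
      function, not from the commit): two same-size AltiVec vector casts in a
      row, which this change lets the optimizer fold into a single cast when
      the intermediate value has one use.

              #include <altivec.h>

              vector float cast_twice(vector unsigned int v) {
                      /* two reinterpreting casts; now folded into one uint -> float cast */
                      vector signed int s = (vector signed int) v;
                      return (vector float) s;
              }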
    • Allow transforming this: · ebca476b
      Chris Lattner authored
              %tmp1 = cast <4 x uint>* %testData to <4 x int>*        ; <<4 x int>*> [#uses=1]
              %tmp2 = load <4 x int>* %tmp1           ; <<4 x int>> [#uses=1]
      
      to this:
      
              %tmp1 = load <4 x uint>* %testData              ; <<4 x uint>> [#uses=1]
              %tmp2 = cast <4 x uint> %tmp1 to <4 x int>              ; <<4 x int>> [#uses=1]
      
      llvm-svn: 27353
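      In C terms this pattern comes from loading through a cast pointer, as in
      this hedged sketch (hypothetical function name):

              #include <altivec.h>

              vector signed int load_as_int(vector unsigned int *p) {
                      /* the pointer cast previously forced a <4 x int> load; the
                         load now happens as <4 x uint> and the cast moves to the value */
                      return *(vector signed int *) p;
              }

      Moving the cast off the pointer and onto the loaded value is what lets
      constant loads like the testData example in the lvx/stvx commit below
      fold away.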
    • Turn altivec lvx/stvx intrinsics into loads and stores. · f42d0aed
      Chris Lattner authored
      This allows the elimination of one load from this:
      
      int AreSecondAndThirdElementsBothNegative( vector float *in ) {
      #define QNaN 0x7FC00000
              const vector unsigned int testData = (vector unsigned int)( QNaN, 0, 0, QNaN );
              vector float test = vec_ld( 0, (float*) &testData );
              return ! vec_any_ge( test, *in );
      }
      
      Now generating:
      
      _AreSecondAndThirdElementsBothNegative:
              mfspr r2, 256
              oris r4, r2, 49152
              mtspr 256, r4
              li r4, lo16(LCPI1_0)
              lis r5, ha16(LCPI1_0)
              addi r6, r1, -16
              lvx v0, r5, r4
              stvx v0, 0, r6
              lvx v1, 0, r3
              vcmpgefp. v0, v0, v1
              mfcr r3, 2
              rlwinm r3, r3, 27, 31, 31
              xori r3, r3, 1
              cntlzw r3, r3
              srwi r3, r3, 5
              mtspr 256, r2
              blr
      
      llvm-svn: 27352
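      A hedged source-level sketch of what the change means (hypothetical
      functions, not from the commit): because the lvx behind vec_ld is now
      modeled as an ordinary load rather than an opaque intrinsic call, these
      two functions look the same to the optimizer, so redundant load/store
      pairs like the stack round-trip above can be eliminated.

              #include <altivec.h>

              vector float via_intrinsic(vector float *p) {
                      return vec_ld(0, (float *) p);  /* lvx, now treated as a load */
              }

              vector float via_deref(vector float *p) {
                      return *p;                      /* plain vector load */
              }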
    • Fix InstCombine/2006-04-01-InfLoop.ll · 6cf4914f
      Chris Lattner authored
      llvm-svn: 27330
  2. Mar 16, 2006
    • For each loop, keep track of all the IV expressions inserted, indexed by stride. · 3df447d3
      Evan Cheng authored
      For a set of uses of an IV whose stride is a multiple of another
      stride, do not insert a new IV expression. Rather, reuse the previous
      IV and rewrite the uses as uses of that IV multiplied by the factor.
      
      e.g.
              x = 0 ...; x++
              y = 0 ...; y += 4
      then a use of y can be rewritten as a use of 4*x on x86.
      
      llvm-svn: 26803
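      A small C illustration of the idea (hypothetical code, assuming x86
      scaled addressing): x and y advance in lockstep with strides 1 and 4,
      so y is always 4*x. Instead of keeping a second induction variable,
      uses of y become 4*x, which [base + x*4] addressing handles at no
      extra cost.

              void copy_low_bytes(char *dst, const int *src, int n) {
                      int x, y;
                      for (x = 0, y = 0; x < n; x++, y += 4) {
                              /* two IVs in lockstep; dst[y] can be rewritten as
                                 dst[4*x], leaving x as the only induction variable */
                              dst[y] = (char) src[x];
                      }
              }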