  1. Dec 31, 2006
    • For PR950: · c635f47d
      Reid Spencer authored
      This patch replaces signed integer types with signless ones:
      1. [US]Byte -> Int8
      2. [U]Short -> Int16
      3. [U]Int   -> Int32
      4. [U]Long  -> Int64.
       5. Removal of isSigned, isUnsigned, getSignedVersion, getUnsignedVersion,
          and other signedness-related methods. In a few places this meant the
          signedness information had to be recovered from other sources.
      
      llvm-svn: 32785
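      The mapping above is, in effect, a collapse of signed/unsigned type pairs
      into width-only types, with signedness moving from the type to the
      operation that uses it. A minimal, self-contained sketch of that idea
      (a toy model for illustration; the OldTy/NewTy enums and dropSign helper
      are hypothetical names, not the LLVM API of the time):

        #include <cassert>
        #include <cstdint>

        // Toy model: before the patch the type carried signedness; afterwards only
        // the bit width is a property of the type, and the operation (e.g. a signed
        // vs. unsigned division) decides how the bits are interpreted.
        enum class OldTy { SByte, UByte, Short, UShort, Int, UInt, Long, ULong };
        enum class NewTy { Int8, Int16, Int32, Int64 };

        // The mapping listed in the commit message: [US]Byte -> Int8, ..., [U]Long -> Int64.
        NewTy dropSign(OldTy T) {
          switch (T) {
          case OldTy::SByte: case OldTy::UByte:  return NewTy::Int8;
          case OldTy::Short: case OldTy::UShort: return NewTy::Int16;
          case OldTy::Int:   case OldTy::UInt:   return NewTy::Int32;
          case OldTy::Long:  case OldTy::ULong:  return NewTy::Int64;
          }
          return NewTy::Int64; // unreachable
        }

        // With signless types, point 5 above is forced: there is no isSigned() to ask,
        // so the caller must pick the signed or unsigned operation from context.
        int32_t  sdiv(int32_t a, int32_t b)   { return a / b; }
        uint32_t udiv(uint32_t a, uint32_t b) { return a / b; }

        int main() {
          assert(dropSign(OldTy::UShort) == NewTy::Int16);
          assert(sdiv(-8, 2) == -4);                    // signedness chosen by the op
          assert(udiv(0xFFFFFFF8u, 2u) == 0x7FFFFFFCu); // same bits, unsigned semantics
          return 0;
        }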
  2. Dec 23, 2006
    • For PR950: · 266e42b3
      Reid Spencer authored
      This patch removes the SetCC instructions and replaces them with the ICmp
      and FCmp instructions. The SetCondInst class has been removed and replaced
      with ICmpInst and FCmpInst.
      
      llvm-svn: 32751
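      Splitting comparisons this way makes the predicate, rather than the operand
      type, say how the bits are compared. A hedged sketch of the distinction (a
      toy model with hypothetical ICmpPred/FCmpPred names, not the real
      ICmpInst/FCmpInst interface):

        #include <cassert>
        #include <cstdint>

        // Toy model: integer comparisons spell out signedness in the predicate
        // (slt vs. ult); floating-point comparisons spell out NaN handling
        // (ordered vs. unordered).
        enum class ICmpPred { EQ, NE, SLT, ULT };
        enum class FCmpPred { OEQ, OLT, UNO };

        bool icmp(ICmpPred P, uint32_t A, uint32_t B) {
          switch (P) {
          case ICmpPred::EQ:  return A == B;
          case ICmpPred::NE:  return A != B;
          case ICmpPred::SLT: return (int32_t)A < (int32_t)B; // signedness lives in the predicate
          case ICmpPred::ULT: return A < B;
          }
          return false;
        }

        bool fcmp(FCmpPred P, double A, double B) {
          switch (P) {
          case FCmpPred::OEQ: return A == B;           // ordered: false if either side is NaN
          case FCmpPred::OLT: return A < B;
          case FCmpPred::UNO: return A != A || B != B; // true only when a NaN is involved
          }
          return false;
        }

        int main() {
          // 0xFFFFFFFF reads as -1 when signed, so the two "less than" predicates disagree.
          assert(icmp(ICmpPred::SLT, 0xFFFFFFFFu, 1u) == true);
          assert(icmp(ICmpPred::ULT, 0xFFFFFFFFu, 1u) == false);
          assert(fcmp(FCmpPred::OLT, 1.0, 2.0));
          return 0;
        }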
  3. Dec 12, 2006
    • Get even more accurate on the casting. · 1ac0ab08
      Reid Spencer authored
      llvm-svn: 32478
    • Change inferred getCast into specific getCast. Passes all tests. · b341b086
      Reid Spencer authored
      llvm-svn: 32469
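      Presumably (an inference, not stated in the one-line message) the point of
      replacing an inferred cast with a specific one is that inferring the cast
      kind needs signedness information, which the signless-type work in the
      Dec 31 entry above removes. A hypothetical sketch of the difference (toy
      code, not the ConstantExpr/CastInst interface of the time):

        #include <cassert>

        // Toy model: an "inferred" cast derives its kind from the source and
        // destination types, which only works while types still carry a sign;
        // a "specific" cast is told the kind (Trunc, ZExt, SExt, BitCast) up front.
        enum class CastKind { Trunc, ZExt, SExt, BitCast };

        CastKind inferCast(unsigned SrcBits, unsigned DstBits, bool SrcIsSigned) {
          if (DstBits < SrcBits) return CastKind::Trunc;
          if (DstBits > SrcBits) return SrcIsSigned ? CastKind::SExt : CastKind::ZExt;
          return CastKind::BitCast;
        }

        int main() {
          // With signless types the widths alone cannot pick SExt over ZExt,
          // so the caller has to state the kind explicitly.
          assert(inferCast(16, 32, /*SrcIsSigned=*/true)  == CastKind::SExt);
          assert(inferCast(16, 32, /*SrcIsSigned=*/false) == CastKind::ZExt);
          return 0;
        }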
    • Teach SCEV to analyze X*4|1 like X*4+c. This allows us to produce: · 49b090ed
      Chris Lattner authored
      LBB1_1: #bb
              movdqa (%esi), %xmm2
              movaps %xmm2, %xmm3
              punpcklbw %xmm0, %xmm3
              movaps %xmm3, %xmm4
              punpcklwd %xmm0, %xmm4
              cvtdq2ps %xmm4, %xmm4
              mulps %xmm1, %xmm4
              movaps %xmm4, (%edi)
              leal 1(,%eax,4), %ebx
              shll $4, %ebx
              punpckhwd %xmm0, %xmm3
              cvtdq2ps %xmm3, %xmm3
              mulps %xmm1, %xmm3
              movaps %xmm3, (%edx,%ebx)
              leal 2(,%eax,4), %ebx
              shll $4, %ebx
              punpckhbw %xmm0, %xmm2
              movaps %xmm2, %xmm3
              punpcklwd %xmm0, %xmm3
              cvtdq2ps %xmm3, %xmm3
              mulps %xmm1, %xmm3
              movaps %xmm3, (%edx,%ebx)
              leal 3(,%eax,4), %ebx
              shll $4, %ebx
              punpckhwd %xmm0, %xmm2
              cvtdq2ps %xmm2, %xmm2
              mulps %xmm1, %xmm2
              movaps %xmm2, (%edx,%ebx)
              addl $64, %edi
              incl %eax
              addl $16, %esi
              cmpl %ecx, %eax
              jne LBB1_1      #bb
      
      instead of the code previously generated for this testcase.
      
      llvm-svn: 32463
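      The transformation rests on a small arithmetic fact: X*4 always has its two
      low bits clear, so OR-ing in 1 is the same as adding 1, and X*4|1 can be
      analyzed as the affine expression X*4+1 and folded into the address
      computation (the leal 1(,%eax,4) above). A self-contained check of that
      identity (illustration only, not SCEV code):

        #include <cassert>
        #include <cstdint>

        int main() {
          // X*4 ends in two zero bits, so (X*4)|1 == X*4 + 1 for every X.
          for (uint32_t X = 0; X < 100000; ++X)
            assert(((X * 4u) | 1u) == X * 4u + 1u);
          return 0;
        }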