  1. Mar 10, 2005
      I didn't mean to check this in. :( · 6f6ecad9
      Chris Lattner authored
      llvm-svn: 20555
    •
      Fix a bug where we would incorrectly do a sign ext instead of a zero ext · 85e71639
      Chris Lattner authored
      because we were checking the wrong thing.  Thanks to andrew for pointing
      this out!
      
      llvm-svn: 20554
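      The distinction the fix hinges on can be shown with a small standalone
      C++ sketch (illustration only, not the selector code this commit
      touches): the same 16-bit value widens to different 32-bit values
      depending on whether it is zero-extended or sign-extended.

        #include <cstdint>
        #include <cstdio>

        // Illustration only: widening a 16-bit value to 32 bits.  Zero
        // extension always pads with zero bits, while sign extension
        // replicates the sign bit, which changes the value whenever the top
        // bit of the narrow value is set.
        static uint32_t zeroExtend16(uint16_t V) {
          return static_cast<uint32_t>(V);
        }

        static uint32_t signExtend16(uint16_t V) {
          return static_cast<uint32_t>(
              static_cast<int32_t>(static_cast<int16_t>(V)));
        }

        int main() {
          uint16_t V = 0x8001;
          std::printf("zext: 0x%08x  sext: 0x%08x\n",
                      (unsigned)zeroExtend16(V), (unsigned)signExtend16(V));
          // prints: zext: 0x00008001  sext: 0xffff8001
          return 0;
        }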
    •
      Allow the live interval analysis pass to be a bit more aggressive about · 76aa8e07
      Chris Lattner authored
      numbering values in live ranges for physical registers.
      
      The alpha backend currently generates code that looks like this:
      
        vreg = preg
      ...
        preg = vreg
        use preg
      ...
        preg = vreg
        use preg
      
      etc.  Because vreg contains the value of preg coming in, each of the
      copies back into preg contains that initial value as well.
      
      In the case of the Alpha, this allows this testcase:
      
      void "foo"(int %blah) {
              store int 5, int *%MyVar
              store int 12, int* %MyVar2
              ret void
      }
      
      to compile to:
      
      foo:
              ldgp $29, 0($27)
              ldiq $0,5
              stl $0,MyVar
              ldiq $0,12
              stl $0,MyVar2
              ret $31,($26),1
      
      instead of:
      
      foo:
              ldgp $29, 0($27)
              bis $29,$29,$0
              ldiq $1,5
              bis $0,$0,$29
              stl $1,MyVar
              ldiq $1,12
              bis $0,$0,$29
              stl $1,MyVar2
              ret $31,($26),1
      
      This does not seem to have any noticeable effect on X86 code.
      
      This fixes PR535.
      
      llvm-svn: 20536
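      A rough sketch of the reasoning, in plain C++ rather than LLVM's
      LiveIntervals code (the map and names below are invented for
      illustration): a copy propagates its source's value number, so a later
      copy whose source and destination already hold the same value number is
      an identity copy and can be coalesced, which is what eliminates the
      "bis" copies of $29 above.

        #include <iostream>
        #include <map>
        #include <string>

        int main() {
          // Invented model: one value number per register; a copy reuses the
          // source's value number instead of creating a new one.
          std::map<std::string, int> ValNo;
          int NextValNo = 0;

          ValNo["preg"] = NextValNo++;    // preg is live-in with some value
          ValNo["vreg"] = ValNo["preg"];  // vreg = preg (copy shares the value)

          // ... preg is not redefined in between ...

          // preg = vreg: both registers already carry the same value number,
          // so the copy is redundant and the coalescer can delete it.
          bool Redundant = (ValNo["preg"] == ValNo["vreg"]);
          std::cout << (Redundant ? "identity copy, coalesce it\n"
                                  : "real copy, keep it\n");
          return 0;
        }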
  2. Mar 09, 2005
    •
      constant fold FP_ROUND_INREG, ZERO_EXTEND_INREG, and SIGN_EXTEND_INREG · 7f269467
      Chris Lattner authored
      This allows the alpha backend to compile:
      
      bool %test(uint %P) {
              %c = seteq uint %P, 0
              ret bool %c
      }
      
      into:
      
      test:
              ldgp $29, 0($27)
              ZAP $16,240,$0
              CMPEQ $0,0,$0
              AND $0,1,$0
              ret $31,($26),1
      
      instead of:
      
      test:
              ldgp $29, 0($27)
              ZAP $16,240,$0
              ldiq $1,0
              ZAP $1,240,$1
              CMPEQ $0,$1,$0
              AND $0,1,$0
              ret $31,($26),1
      
      ... and fixes PR534.
      
      llvm-svn: 20534
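      What constant folding an "extend in register" node amounts to can be
      sketched outside the SelectionDAG code (the helper names below are
      invented for illustration): when the operand is a constant, the result
      is computable at compile time, so there is no need to materialize the
      constant and re-ZAP it as in the longer sequence above.

        #include <cstdint>
        #include <cstdio>

        // Invented helpers, not LLVM's: fold an extend-in-register of a
        // constant.  Only the low SrcBits of C are meaningful; zero-extension
        // clears the bits above them, sign-extension replicates bit SrcBits-1
        // into them.
        static uint64_t foldZeroExtendInReg(uint64_t C, unsigned SrcBits) {
          if (SrcBits >= 64)
            return C;
          return C & ((1ULL << SrcBits) - 1);
        }

        static uint64_t foldSignExtendInReg(uint64_t C, unsigned SrcBits) {
          unsigned Shift = 64 - SrcBits;
          return static_cast<uint64_t>(
              static_cast<int64_t>(C << Shift) >> Shift);
        }

        int main() {
          // The folded form of the Alpha example: zero-extending the constant
          // 0 from 32 bits is still 0, so no "ldiq $1,0 / ZAP $1,240,$1"
          // sequence is needed.
          std::printf("%llu\n",
                      (unsigned long long)foldZeroExtendInReg(0, 32));
          std::printf("0x%llx\n",
                      (unsigned long long)foldSignExtendInReg(0xFFFFFFFFULL, 32));
          return 0;
        }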
  3. Feb 22, 2005
    •
      Fix a bug in the 'store fpimm, ptr' -> 'store intimm, ptr' handling code. · a4743139
      Chris Lattner authored
      Changing 'op' here caused us to not enter the store into a map, causing
      re-emission of the code!!  In practice, a simple loop like this:
      
      no_exit:                ; preds = %no_exit, %entry
              %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]            ; <uint> [#uses=3]
              %tmp.4 = getelementptr "complex long double"* %P, uint %indvar, uint 0          ; <double*> [#uses=1]
              store double 0.000000e+00, double* %tmp.4
              %indvar.next = add uint %indvar, 1              ; <uint> [#uses=2]
              %exitcond = seteq uint %indvar.next, %N         ; <bool> [#uses=1]
              br bool %exitcond, label %return, label %no_exit
      
      was being code gen'd to:
      
      .LBBtest_1:     # no_exit
              movl %edx, %esi
              shll $4, %esi
              movl $0, 4(%eax,%esi)
              movl $0, (%eax,%esi)
              incl %edx
              movl $0, (%eax,%esi)
              movl $0, 4(%eax,%esi)
              cmpl %ecx, %edx
              jne .LBBtest_1  # no_exit
      
      Note that we are doing 4 32-bit stores instead of 2.  Now we generate:
      
      .LBBtest_1:     # no_exit
              movl %edx, %esi
              incl %esi
              shll $4, %edx
              movl $0, (%eax,%edx)
              movl $0, 4(%eax,%edx)
              cmpl %ecx, %esi
              movl %esi, %edx
              jne .LBBtest_1  # no_exit
      
      This is much happier, though it would be even better if the increment of ESI
      was scheduled after the compare :-/
      
      llvm-svn: 20265
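      The underlying pattern, sketched here with an invented Selector class
      rather than the real instruction selector, is a memo map: the first
      selection of a node emits code and records the result, and later uses
      must hit that map.  Skipping the insertion, as the buggy path did after
      rewriting 'op', makes every later use emit the store again, giving the
      four stores in the loop body above.

        #include <iostream>
        #include <map>
        #include <string>

        // Invented sketch of the memoization involved, not LLVM's selector.
        struct Selector {
          std::map<std::string, int> Selected; // node -> already-emitted id
          int NextId = 0;
          int StoresEmitted = 0;

          int select(const std::string &Node) {
            auto It = Selected.find(Node);
            if (It != Selected.end())
              return It->second;      // cached result: do not re-emit
            ++StoresEmitted;          // "emit" the code for this node once
            return Selected[Node] = NextId++;
          }
        };

        int main() {
          Selector S;
          S.select("store double 0.0, double* %tmp.4"); // first use emits it
          S.select("store double 0.0, double* %tmp.4"); // reuse hits the map
          std::cout << "emitted " << S.StoresEmitted << " store(s)\n"; // 1
          return 0;
        }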