  May 13, 2006
    • Add/Sub/Mul are safe to promote here as well. Incrementing a single-bit · 3987a853
      Chris Lattner authored
      bitfield now gives this code:
      
      _plus:
              lwz r2, 0(r3)
              rlwimi r2, r2, 0, 1, 31
              xoris r2, r2, 32768
              stw r2, 0(r3)
              blr
      
      instead of this:
      
      _plus:
              lwz r2, 0(r3)
              srwi r4, r2, 31
              slwi r4, r4, 31
              addis r4, r4, -32768
              rlwimi r2, r4, 0, 0, 0
              stw r2, 0(r3)
              blr
      
      This can obviously still be improved.
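      
      For reference, a minimal C reproducer of the kind of source presumably
      behind this output (an assumption on my part, mirroring the xor example
      in r28273 below):
      
      struct B { unsigned bit : 1; };
      void plus(struct B *b) { b->bit++; }  /* a 1-bit field wraps, so ++ is bit ^= 1 */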
      
      llvm-svn: 28275
    • Implement simple promotion for cast elimination in instcombine. This is · 1ebbe6a2
      Chris Lattner authored
      currently very limited, but can be extended in the future.  For example,
      we now compile:
      
      uint %test30(uint %c1) {
              %c2 = cast uint %c1 to ubyte
              %c3 = xor ubyte %c2, 1
              %c4 = cast ubyte %c3 to uint
              ret uint %c4
      }
      
      to:
      
      _xor:
              movzbl 4(%esp), %eax
              xorl $1, %eax
              ret
      
      instead of:
      
      _xor:
              movb $1, %al
              xorb 4(%esp), %al
              movzbl %al, %eax
              ret
      
      More impressively, we now compile:
      
      struct B { unsigned bit : 1; };
      void xor(struct B *b) { b->bit = b->bit ^ 1; }
      
      To (X86/PPC):
      
      _xor:
              movl 4(%esp), %eax
              xorl $-2147483648, (%eax)
              ret
      _xor:
              lwz r2, 0(r3)
              xoris r2, r2, 32768
              stw r2, 0(r3)
              blr
      
      instead of (X86/PPC):
      
      _xor:
              movl 4(%esp), %eax
              movl (%eax), %ecx
              movl %ecx, %edx
              shrl $31, %edx
              # TRUNCATE movb %dl, %dl
              xorb $1, %dl
              movzbl %dl, %edx
              andl $2147483647, %ecx
              shll $31, %edx
              orl %ecx, %edx
              movl %edx, (%eax)
              ret
      
      _xor:
              lwz r2, 0(r3)
              srwi r4, r2, 31
              xori r4, r4, 1
              rlwimi r2, r4, 31, 0, 0
              stw r2, 0(r3)
              blr
      
      This implements InstCombine/cast.ll:test30.
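      
      One hedged way to check that the promotion is sound (my restatement, not
      part of the commit): truncate-xor-zext equals mask-then-xor at full
      width, because xor never carries across bit positions.  The helper names
      below are mine:
      
      #include <assert.h>
      
      /* the original shape: trunc to a byte, xor at 8 bits, zext back */
      static unsigned narrow(unsigned c1) {
              unsigned char c2 = (unsigned char)c1;
              unsigned char c3 = c2 ^ 1;
              return (unsigned)c3;
      }
      
      /* the promoted shape: mask + xor done entirely at 32 bits */
      static unsigned wide(unsigned c1) {
              return (c1 & 0xffu) ^ 1u;
      }
      
      int main(void) {
              unsigned c1;
              for (c1 = 0; c1 < 0x20000u; ++c1)
                      assert(narrow(c1) == wide(c1));
              return 0;
      }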
      
      llvm-svn: 28273
    • Remove some dead variables. · cd60d38b
      Chris Lattner authored
      Fix a nasty bug in the memcmp optimizer where we used the wrong variable!
      
      llvm-svn: 28269
    • Remove dead stuff · 94acc476
      Chris Lattner authored
      llvm-svn: 28268
  May 11, 2006
    • Refactor some code, making it simpler. · 1443bc52
      Chris Lattner authored
      When doing the initial pass of constant folding, if we get a constantexpr,
      simplify the constant expr just as we would if the constant were folded in
      the normal loop.
      
      This fixes the missed-optimization regression in
      Transforms/InstCombine/getelementptr.ll last night.
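      
      As a hedged illustration of the kind of constant expression involved (my
      example; the actual testcase may differ), taking the address of a fixed
      element of a global yields a getelementptr constantexpr that the initial
      folding pass should now simplify outright:
      
      struct S { int a[4]; };
      struct S g;
      
      /* &g.a[1] is computable at compile time: it is a getelementptr
         constant expression rather than a run-time instruction */
      int *addr(void) { return &g.a[1]; }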
      
      llvm-svn: 28224
  May 10, 2006
    • Two changes: · a36ee4ea
      Chris Lattner authored
      1. Implement InstCombine/deadcode.ll by not adding instructions in unreachable
         blocks (due to constants in conditional branches/switches) to the worklist.
         This causes them to be deleted before instcombine starts up, leading to
         better optimization (see the C sketch after this list).
      
      2. In the prepass over instructions, do trivial constprop/dce as we go.  This
         improves the effectiveness of #1.  In addition, it
         *significantly* speeds up instcombine on test cases with large amounts of
         constant folding code (for example, that produced by code specialization
         or partial evaluation).  In one example, it speeds up instcombine from
         0.0589s to 0.0224s with a release build (a 2.6x speedup).
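      
      A minimal C illustration of point #1 (an assumed shape, not the actual
      deadcode.ll testcase): a constant branch condition makes one arm
      unreachable, so its instructions never reach the worklist:
      
      int f(int x) {
              if (1)            /* constant condition: the other arm is dead */
                      return x + 1;
              return x * 2;     /* unreachable; deleted before instcombine runs */
      }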
      
      llvm-svn: 28215
  May 06, 2006
    • Move some code around. · 1d441adf
      Chris Lattner authored
      Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
      only apply when both casts really will cause code to be generated.  If one or
      both don't, then this xform doesn't remove a cast.
      
      This fixes Transforms/InstCombine/2006-05-06-Infloop.ll
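      
      A hedged C illustration of the distinction (my example, not the infloop
      testcase): a same-width sign cast is a no-op, so pushing the and through
      it removes nothing, while truncating casts really do cost code:
      
      /* no-op casts: int -> unsigned at the same width generates no code, so
         (and (cast A), (cast B)) -> (cast (and A, B)) saves nothing here */
      unsigned both(int a, int b) {
              return (unsigned)a & (unsigned)b;
      }
      
      /* truncating casts do generate code; merging the two casts into one
         after the and is a real win: trunc(a) & trunc(b) == trunc(a & b) */
      unsigned char low(int a, int b) {
              return (unsigned char)a & (unsigned char)b;
      }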
      
      llvm-svn: 28141