  Jan 17, 2005
    • Non-volatile loads can be freely reordered against each other · 4d9651c7
      Chris Lattner authored
      This fixes X86/reg-pressure.ll again, and allows us to do nice things in
      other cases.  For example, we now codegen this sort of thing:
      
      int %loadload(int *%X, int* %Y) {
        %Z = load int* %Y
        %A = load int* %X      ;; load between %Z and the store
        %Q = add int %Z, 1
        store int %Q, int* %Y
        ret int %A
      }
      
      Into this:
      
      loadload:
              mov %EAX, DWORD PTR [%ESP + 4]
              mov %EAX, DWORD PTR [%EAX]
              mov %ECX, DWORD PTR [%ESP + 8]
              inc DWORD PTR [%ECX]
              ret
      
      where we weren't able to form the 'inc [mem]' before.  This also lets the
      instruction selector emit loads in any order it wants to, which can be good
      for register pressure as well.
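
      For readers less familiar with the old LLVM assembly syntax, the testcase
      corresponds roughly to the C function below; this is only an illustration
      and is not part of the original commit.

      int loadload(int *X, int *Y) {
        int Z = *Y;    /* first load of *Y                                   */
        int A = *X;    /* independent non-volatile load; it may now be       */
                       /* reordered freely against the accesses to *Y        */
        *Y = Z + 1;    /* the load/add/store of *Y folds into 'inc [mem]'    */
        return A;
      }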
      
      llvm-svn: 19644
    • Chris Lattner · 4108bb01
    • Implement a target-independent optimization to codegen arguments only into the basic block that uses them if possible · e3c2cf48
      Chris Lattner authored
      This is a big win on X86, as it lets us fold the argument loads into
      instructions and reduce register pressure (by not loading all of the
      arguments in the entry block).
      
      For this (contrived to show the optimization) testcase:
      
      int %argtest(int %A, int %B) {
              %X = sub int 12345, %A
              br label %L
      L:
              %Y = add int %X, %B
              ret int %Y
      }
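
      As an illustration only (not part of the original commit), the testcase is
      roughly the following C, where the explicit branch mirrors the IR's two
      basic blocks:

      int argtest(int A, int B) {
        int X = 12345 - A;   /* only A is needed before the branch to L  */
        goto L;              /* unconditional branch, as in the IR       */
      L:
        return X + B;        /* B is first used here, in the later block */
      }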
      
      we used to produce:
      
      argtest:
              mov %ECX, DWORD PTR [%ESP + 4]
              mov %EAX, 12345
              sub %EAX, %ECX
              mov %EDX, DWORD PTR [%ESP + 8]
      .LBBargtest_1:  # L
              add %EAX, %EDX
              ret
      
      
      now we produce:
      
      argtest:
              mov %EAX, 12345
              sub %EAX, DWORD PTR [%ESP + 4]
      .LBBargtest_1:  # L
              add %EAX, DWORD PTR [%ESP + 8]
              ret
      
      This also fixes the FIXME in the code.
      
      BTW, this occurs in real code.  164.gzip shrinks from 8623 to 8608 lines of
      .s file.  The stack frame in huft_build shrinks from 1644->1628 bytes,
      inflate_codes shrinks from 116->108 bytes, and inflate_block from 2620->2612,
      due to fewer spills.
      
      Take that alkis. :-)
      
      llvm-svn: 19639
    •
      Refactor code into a new method. · 16f64df9
      Chris Lattner authored
      llvm-svn: 19635