  1. Nov 14, 2007
  2. Nov 13, 2007
  3. Nov 12, 2007
  4. Nov 11, 2007
  5. Nov 10, 2007
  6. Nov 09, 2007
    • Unbreak x86-64 jumptable. · fb13fd6f
      Evan Cheng authored
      llvm-svn: 43955
    • Revert previous rewrite per Chris's comments. · dfb85c78
      Dale Johannesen authored
      llvm-svn: 43950
    • Much improved PIC jumptable codegen: · 797d56ff
      Evan Cheng authored
      Then:
              call    "L1$pb"
      "L1$pb":
              popl    %eax
      		...
      LBB1_1: # entry
              imull   $4, %ecx, %ecx
              leal    LJTI1_0-"L1$pb"(%eax), %edx
              addl    LJTI1_0-"L1$pb"(%ecx,%eax), %edx
              jmpl    *%edx
      
              .align  2
              .set L1_0_set_3,LBB1_3-LJTI1_0
              .set L1_0_set_2,LBB1_2-LJTI1_0
              .set L1_0_set_5,LBB1_5-LJTI1_0
              .set L1_0_set_4,LBB1_4-LJTI1_0
      LJTI1_0:
              .long    L1_0_set_3
              .long    L1_0_set_2
      
      Now:
              call    "L1$pb"
      "L1$pb":
              popl    %eax
      		...
      LBB1_1: # entry
              addl    LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
              jmpl    *%eax
      
              .align  2
              .set L1_0_set_3,LBB1_3-"L1$pb"
              .set L1_0_set_2,LBB1_2-"L1$pb"
              .set L1_0_set_5,LBB1_5-"L1$pb"
              .set L1_0_set_4,LBB1_4-"L1$pb"
      LJTI1_0:
              .long    L1_0_set_3
              .long    L1_0_set_2
      
      llvm-svn: 43924
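      (Illustrative sketch, not part of the commit.) The source construct that
      lowers to a jump table like LJTI1_0 is a dense switch. With the new
      scheme each table entry holds LBBn-"L1$pb", an offset from the picbase,
      rather than LBBn-LJTI1_0, so the imull and leal of the old sequence fold
      into the single scaled-index addl shown above. A minimal C example (the
      function name classify is hypothetical):

          /* Hypothetical example of a dense switch that a compiler
             typically lowers to a PIC jump table such as LJTI1_0. */
          int classify(int x) {
              switch (x) {        /* dense cases 0..3 -> jump table */
              case 0: return 10;
              case 1: return 20;
              case 2: return 30;
              case 3: return 40;
              default: return -1;
              }
          }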
    • Rewrite Dwarf number handling per review comments. · 04fd8208
      Dale Johannesen authored
      llvm-svn: 43918
  7. Nov 07, 2007
  8. Nov 06, 2007
  9. Nov 05, 2007
    • Use movups to spill / restore SSE registers on targets where stack alignment is · 9337929a
      Evan Cheng authored
      less than 16. This is a temporary solution until dynamic stack alignment is
      implemented.
      
      llvm-svn: 43703
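      (Illustrative sketch, not part of the commit.) movaps requires its
      memory operand to be 16-byte aligned and faults otherwise; movups
      accepts any alignment, which makes it the safe spill/restore choice
      while the stack may be only 4- or 8-byte aligned. In C with SSE
      intrinsics the two stores look like this (function and parameter
      names are hypothetical):

          #include <xmmintrin.h>

          /* Spill an XMM value to a possibly misaligned stack slot:
             _mm_storeu_ps emits movups and works at any alignment. */
          void spill_unaligned(__m128 v, float *slot) {
              _mm_storeu_ps(slot, v);
          }

          /* _mm_store_ps emits movaps; slot16 must be 16-byte aligned
             or the store faults at run time. */
          void spill_aligned(__m128 v, float *slot16) {
              _mm_store_ps(slot16, v);
          }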
    • Eliminate the remaining uses of getTypeSize. This · 283207a7
      Duncan Sands authored
should only affect x86 when using long double.  Now
      12/16 bytes are output for long double globals (the
      exact amount depends on the alignment).  This brings
      globals in line with the rest of LLVM: the space
      reserved for an object is now always the ABI size.
      One tricky point is that only 10 bytes should be
      output for long double if it is a field in a packed
      struct, which is the reason for the additional
      argument to EmitGlobalConstant.
      
      llvm-svn: 43688
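      (Illustrative sketch, not part of the commit.) x86 long double is an
      80-bit, 10-byte value, but the space reserved for it, its ABI size, is
      padded up to the type's alignment, which gives the 12 or 16 bytes
      mentioned above. A small C check (the global g is hypothetical):

          #include <stdio.h>
          #include <float.h>

          /* A long double global is now emitted with its full ABI size
             (typically 12 bytes on i386, 16 on x86-64), even though only
             10 bytes carry the 80-bit value. */
          long double g = 3.14L;

          int main(void) {
              printf("mantissa bits: %d\n", LDBL_MANT_DIG);   /* 64 -> 80-bit format */
              printf("sizeof(long double): %zu\n", sizeof(long double));
              return 0;
          }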
  10. Nov 04, 2007
  11. Nov 02, 2007
  12. Nov 01, 2007
  13. Oct 31, 2007
  14. Oct 30, 2007
  15. Oct 29, 2007
  16. Oct 28, 2007
  17. Oct 26, 2007