  1. Feb 29, 2008
  2. Feb 28, 2008
  3. Feb 27, 2008
    • Dale Johannesen · bf76a08e
      Handle load/store of misaligned vectors that are the same size as an int
      type by doing a bitconvert of a load/store of the int type (the same
      algorithm as for floating point). This makes them work for PPC Altivec.
      There was some code that purported to handle loads of (some) vectors by
      splitting them into two smaller vectors, but getExtLoad rejects subvector
      loads, so this could never have worked; the patch removes it.
      llvm-svn: 47696
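The trick the commit describes, loading or storing a misaligned vector through an integer of the same size and bit-converting, can be sketched in portable C++. This is an illustration of the idea only, not the legalizer code itself; the `Vec2f` type and helper names are hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical 8-byte "vector" of two floats, standing in for a vector value
// whose natural load would be misaligned.
struct Vec2f { float x, y; };

// Load a Vec2f from possibly misaligned memory by loading an integer of the
// same size and reinterpreting its bits -- the same shape as the commit's
// bitconvert of an int-typed load.
Vec2f load_vec2f_unaligned(const unsigned char* p) {
    uint64_t bits;
    std::memcpy(&bits, p, sizeof bits);   // int-typed load; alignment-safe
    Vec2f v;
    std::memcpy(&v, &bits, sizeof v);     // "bitconvert" back to the vector type
    return v;
}

// The store goes the other way: bitconvert the vector to an int, then do an
// int-typed store.
void store_vec2f_unaligned(unsigned char* p, Vec2f v) {
    uint64_t bits;
    std::memcpy(&bits, &v, sizeof bits);  // "bitconvert" vector -> int
    std::memcpy(p, &bits, sizeof bits);   // int-typed store
}
```

A round trip through a deliberately misaligned offset (e.g. `buf + 1`) recovers the original values exactly, since only the bit pattern is moved.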
    • Nick Kledzik · 91a6dcff
      Fixes from review of the first commit.
      llvm-svn: 47695
    • Nick Kledzik · 5f1db0a8
      Use PROJ_SRC_DIR so this builds with Apple-style builds.
      llvm-svn: 47694
    • Dan Gohman · 26854f24
      Don't hard-code the mask size to be 32, which is incorrect on ppc64 and
      was causing aborts with the new APInt changes. This may also fix an
      obscure ppc64 bug.
      llvm-svn: 47692
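The bug pattern this commit fixes can be shown with a small C++ sketch: a low-bits mask whose width is hard-coded to 32 silently truncates 64-bit values, while deriving the mask from the actual bit width works on both 32- and 64-bit targets. The function names are illustrative, not from the patch.

```cpp
#include <cassert>
#include <cstdint>

// Broken: hard-codes a 32-bit mask width, so for any value wider than 32
// bits it keeps only the low 32 bits -- the kind of assumption that is
// wrong on ppc64.
uint64_t low_mask_hardcoded(uint64_t v, unsigned bits) {
    (void)bits;                    // width argument ignored: that's the bug
    return v & 0xFFFFFFFFu;
}

// Fixed: derive the mask from the actual bit width, analogous to querying
// the value type instead of assuming 32. The bits >= 64 guard also avoids
// the undefined behavior of shifting a 64-bit value by 64.
uint64_t low_mask(uint64_t v, unsigned bits) {
    if (bits >= 64) return v;      // full width: nothing to mask off
    return v & ((uint64_t{1} << bits) - 1);
}
```

For a 40-bit value, `low_mask` preserves all 40 bits while the hard-coded version drops the top 8, which is exactly the class of miscompile the commit message describes.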
    • Evan Cheng · 3d17e4c4
      This is done.
      llvm-svn: 47688
    • Evan Cheng · fdc732ab
      Fix a bug in dead spill slot elimination.
      llvm-svn: 47687
    • Dan Gohman · e5e32ec8
      Remove the `else`, at Evan's insistence.
      llvm-svn: 47686
    • Dan Gohman · 61377a3d
      Add -analyze support to postdomtree.
      llvm-svn: 47680
    • Chris Lattner · 3df31ba4
      Actually run llc, thanks Dan :)
      llvm-svn: 47677
    • Duncan Sands · ef40c5b2
      Add a FIXME about the VECTOR_SHUFFLE evil hack.
      llvm-svn: 47676
    • Lauro Ramos Venancio · 14241b29
      Emit an error when a library is not found. This is the GNU ld behavior,
      and it is expected by the configure scripts.
      llvm-svn: 47674
    • Duncan Sands · e158a82f
      LegalizeTypes support for EXTRACT_VECTOR_ELT. The approach taken differs
      from LegalizeDAG's when the result type needs expanding or promoting:
      for example, if extracting an i64 from a <2 x i64> and i64 needs
      expanding, it bitcasts the vector to <4 x i32>, extracts the appropriate
      two i32's, and uses those for the Lo and Hi parts. Likewise, when
      extracting an i16 from a <4 x i16> and i16 needs promoting, it bitcasts
      the vector to <2 x i32>, extracts the appropriate i32, twiddles the bits
      if necessary, and uses that as the promoted value. This puts more
      pressure on bitcast legalization, and I've added the appropriate cases;
      they needed to be added anyway, since users can generate such bitcasts
      too if they want to.
      Also, when considering the various cases (Legal, Promote, Expand,
      Scalarize, Split), it is a pain that "expand" can correspond to Expand,
      Scalarize, or Split, so I've changed the LegalizeTypes enum so that it
      lists those different cases; now Expand only means splitting a scalar in
      two.
      The code produced is the same as LegalizeDAG's for all relevant
      testcases, except for 2007-10-31-extractelement-i64.ll, where the code
      seems to have improved (see below; can an expert please tell me whether
      it is better or not). Before < vs after >.
      
      <       subl    $92, %esp
      <       movaps  %xmm0, 64(%esp)
      <       movaps  %xmm0, (%esp)
      <       movl    4(%esp), %eax
      <       movl    %eax, 28(%esp)
      <       movl    (%esp), %eax
      <       movl    %eax, 24(%esp)
      <       movq    24(%esp), %mm0
      <       movq    %mm0, 56(%esp)
      ---
      >       subl    $44, %esp
      >       movaps  %xmm0, 16(%esp)
      >       pshufd  $1, %xmm0, %xmm1
      >       movd    %xmm1, 4(%esp)
      >       movd    %xmm0, (%esp)
      >       movq    (%esp), %mm0
      >       movq    %mm0, 8(%esp)
      
      <       subl    $92, %esp
      <       movaps  %xmm0, 64(%esp)
      <       movaps  %xmm0, (%esp)
      <       movl    12(%esp), %eax
      <       movl    %eax, 28(%esp)
      <       movl    8(%esp), %eax
      <       movl    %eax, 24(%esp)
      <       movq    24(%esp), %mm0
      <       movq    %mm0, 56(%esp)
      ---
      >       subl    $44, %esp
      >       movaps  %xmm0, 16(%esp)
      >       pshufd  $3, %xmm0, %xmm1
      >       movd    %xmm1, 4(%esp)
      >       movhlps %xmm0, %xmm0
      >       movd    %xmm0, (%esp)
      >       movq    (%esp), %mm0
      >       movq    %mm0, 8(%esp)
      
      <       subl    $92, %esp
      <       movaps  %xmm0, 64(%esp)
      ---
      >       subl    $44, %esp
      
      <       movl    16(%esp), %eax
      <       movl    %eax, 48(%esp)
      <       movl    20(%esp), %eax
      <       movl    %eax, 52(%esp)
      <       movaps  %xmm0, (%esp)
      <       movl    4(%esp), %eax
      <       movl    %eax, 60(%esp)
      <       movl    (%esp), %eax
      <       movl    %eax, 56(%esp)
      ---
      >       pshufd  $1, %xmm0, %xmm1
      >       movd    %xmm1, 4(%esp)
      >       movd    %xmm0, (%esp)
      >       movd    %xmm1, 12(%esp)
      >       movd    %xmm0, 8(%esp)
      
      <       subl    $92, %esp
      <       movaps  %xmm0, 64(%esp)
      ---
      >       subl    $44, %esp
      
      <       movl    24(%esp), %eax
      <       movl    %eax, 48(%esp)
      <       movl    28(%esp), %eax
      <       movl    %eax, 52(%esp)
      <       movaps  %xmm0, (%esp)
      <       movl    12(%esp), %eax
      <       movl    %eax, 60(%esp)
      <       movl    8(%esp), %eax
      <       movl    %eax, 56(%esp)
      ---
      >       pshufd  $3, %xmm0, %xmm1
      >       movd    %xmm1, 4(%esp)
      >       movhlps %xmm0, %xmm0
      >       movd    %xmm0, (%esp)
      >       movd    %xmm1, 12(%esp)
      >       movd    %xmm0, 8(%esp)
      
      llvm-svn: 47672
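The expansion strategy described above (bitcast <2 x i64> to <4 x i32>, extract two i32's, use them as Lo and Hi) can be mimicked in a few lines of C++. This is a conceptual sketch, not legalizer code; it assumes a little-endian layout, matching the x86 output in the diff above, and the function name is hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Extract element `idx` of a <2 x i64> when i64 itself must be expanded:
// view the same 16 bytes as <4 x i32>, pull out the two i32's that make up
// the requested element, and glue them back together as Lo and Hi halves.
// Little-endian element layout assumed.
uint64_t extract_i64_via_i32s(const uint64_t vec[2], unsigned idx) {
    assert(idx < 2 && "element index out of range for <2 x i64>");
    uint32_t parts[4];
    std::memcpy(parts, vec, sizeof parts);   // "bitcast" <2 x i64> -> <4 x i32>
    uint32_t lo = parts[2 * idx];            // low 32 bits of element idx
    uint32_t hi = parts[2 * idx + 1];        // high 32 bits of element idx
    return (uint64_t{hi} << 32) | lo;        // reassemble Lo/Hi into the i64
}
```

On a little-endian host this reproduces each original i64 element exactly, which is why the legalizer can hand the two i32's directly to whatever consumes the expanded Lo/Hi pair.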