- Mar 09, 2008

- Nate Begeman authored: instructions. llvm-svn: 48077
- Chris Lattner authored: llvm-svn: 48076
- Chris Lattner authored: MacroArgs.cpp/h llvm-svn: 48075
- Chris Lattner authored: llvm-svn: 48074
- Chris Lattner authored: llvm-svn: 48073
- Chris Lattner authored: llvm-svn: 48072
- Chris Lattner authored: token streams and macro lexing, so a more generic name is useful. llvm-svn: 48071
- Chris Lattner authored: involved. llvm-svn: 48070
- Nate Begeman authored: llvm-svn: 48069
- Chris Lattner authored: llvm-svn: 48068
- Chris Lattner authored: llvm-svn: 48067
- Chris Lattner authored: llvm-svn: 48066
- Chris Lattner authored: llvm-svn: 48065
- Chris Lattner authored: llvm-svn: 48064
- Chris Lattner authored:

      #include <xmmintrin.h>
      __m128i doload64(short x) { return _mm_set_epi16(0,0,0,0,0,0,0,1); }

  into:

      movl $1, %eax
      movd %eax, %xmm0
      ret

  instead of a constant pool load. llvm-svn: 48063
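
  As quoted, the snippet is not self-contained: on typical GCC/Clang headers,
  __m128i and _mm_set_epi16 are SSE2 intrinsics declared in <emmintrin.h>,
  which <xmmintrin.h> alone does not provide. A compile-ready variant,
  offered as an assumption about the intended test case, would be:

      /* Hedged sketch: same body as the commit's example, but with the
         SSE2 header so it builds standalone. */
      #include <emmintrin.h>

      __m128i doload64(short x) {
          (void)x;  /* the parameter is unused in the original example */
          return _mm_set_epi16(0, 0, 0, 0, 0, 0, 0, 1);
      }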
- Chris Lattner authored: llvm-svn: 48062
- Chris Lattner authored: llvm-svn: 48061
- Chris Lattner authored: of BUILD_VECTORS that only have two unique elements:

  1. The previous code was nondeterministic, because it walked a map in
     SDOperand order, which isn't deterministic.
  2. The previous code didn't handle the case when one element was undef
     very well. Now we ensure that the generated shuffle mask has the undef
     vector on the RHS (instead of potentially being on the LHS) and that
     any elements that refer to it are themselves undef.

  This allows us to compile CodeGen/X86/vec_set-9.ll into:

      _test3:
          movd %rdi, %xmm0
          punpcklqdq %xmm0, %xmm0
          ret

  instead of:

      _test3:
          movd %rdi, %xmm1
          #IMPLICIT_DEF %xmm0
          punpcklqdq %xmm1, %xmm0
          ret

  ... saving a register. llvm-svn: 48060
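
  At the source level, the pattern this lowering improves can be reproduced
  with a 64-bit splat. The function below is an illustrative assumption
  (SSE2 intrinsics), not the literal contents of vec_set-9.ll:

      /* Hedged sketch: splat one i64 into both lanes of a __m128i.
         With the improved lowering, x86-64 codegen needs only one xmm
         register (movd + punpcklqdq), with no IMPLICIT_DEF. */
      #include <emmintrin.h>

      __m128i splat64(long long x) {
          return _mm_set_epi64x(x, x);
      }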
- Chris Lattner authored:

      _test3:
          movd %rdi, %xmm1
          #IMPLICIT_DEF %xmm0
          punpcklqdq %xmm1, %xmm0
          ret

  instead of:

      _test3:
          #IMPLICIT_DEF %rax
          movd %rax, %xmm0
          movd %rdi, %xmm1
          punpcklqdq %xmm1, %xmm0
          ret

  This is still not ideal. There is no reason to use two xmm regs. llvm-svn: 48058

- Mar 08, 2008

- Chris Lattner authored: 2) Don't try to insert an i64 value into the low
  part of a vector with movq on an x86-32 target. This allows us to compile:

      __m128i doload64(short x) { return _mm_set_epi16(0,0,0,0,0,0,0,1); }

  into:

      _doload64:
          movaps LCPI1_0, %xmm0
          ret

  instead of:

      _doload64:
          subl $28, %esp
          movl $0, 4(%esp)
          movl $1, (%esp)
          movq (%esp), %xmm0
          addl $28, %esp
          ret

  llvm-svn: 48057
- Chris Lattner authored: SCALAR_TO_VECTOR on paths that end up not using it. llvm-svn: 48056
- Chris Lattner authored: llvm-svn: 48055
- Chris Lattner authored: llvm-svn: 48054
- Chris Lattner authored: llvm-svn: 48053
- Chris Lattner authored: llvm-svn: 48052
- Chris Lattner authored: which is simpler to use and provide. llvm-svn: 48051
- Chris Lattner authored: different widths. Start simplifying TargetInfo accessor methods. llvm-svn: 48050
- Chris Lattner authored: llvm-svn: 48049
- Chris Lattner authored: llvm-svn: 48048
- Nick Lewycky authored: llvm-svn: 48047
- Nick Lewycky authored: it tries to initialize them. llvm-svn: 48046
- Andrew Lenharth authored: llvm-svn: 48045
- Dan Gohman authored: llvm-svn: 48044
- Dale Johannesen authored: are looking pretty good now. llvm-svn: 48043
- Evan Cheng authored: Implement x86 support for @llvm.prefetch. It corresponds to the prefetcht{0|1|2} and prefetchnta instructions. llvm-svn: 48042
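
  At the source level this intrinsic is reachable through Clang/GCC's
  __builtin_prefetch(addr, rw, locality). The locality-to-instruction
  mapping shown below is the customary one, stated as an assumption rather
  than quoted from this commit:

      /* Hedged sketch: each call lowers through @llvm.prefetch on x86. */
      void warm_cache(const double *p) {
          __builtin_prefetch(p, 0, 3);  /* read, high locality -> typically prefetcht0  */
          __builtin_prefetch(p, 0, 2);  /* read, some locality -> typically prefetcht1  */
          __builtin_prefetch(p, 0, 1);  /* read, low locality  -> typically prefetcht2  */
          __builtin_prefetch(p, 0, 0);  /* read, no reuse      -> typically prefetchnta */
      }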
- Dan Gohman authored: llvm-svn: 48041
- Bill Wendling authored: kills the sub-register. llvm-svn: 48038

- Mar 07, 2008

- Ted Kremenek authored: llvm-svn: 48037
- Ted Kremenek authored: that are not related to error nodes.
  - Fixed a bug where we did not detect some NULL dereferences.
  - Added "ExplodedGraph::Trim" to trim all nodes that cannot transitively
    reach a set of provided nodes.
  - Fixed a subtle bug in ExplodedNodeImpl where we could create predecessor
    iterators that included the mangled "sink" bit. The better fix is to
    integrate this bit into the void* for the wrapped State, not the
    NodeGroups representing a node's predecessors and successors.

  llvm-svn: 48036
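
  The "integrate this bit into the void*" remark refers to low-bit pointer
  tagging: an aligned pointer's least significant bit is always zero, so it
  can carry a boolean flag. The sketch below illustrates the general
  technique with hypothetical names; it is not Clang's actual ExplodedNode
  code:

      /* Hedged sketch: stash a "sink" flag in the low bit of an aligned
         State pointer, so iterators never see a mangled pointer value. */
      #include <assert.h>
      #include <stdint.h>

      typedef struct { uintptr_t bits; } TaggedState;

      static TaggedState make_state(void *state, int is_sink) {
          assert(((uintptr_t)state & 1) == 0);  /* alignment frees the bit */
          TaggedState t = { (uintptr_t)state | (is_sink ? 1u : 0u) };
          return t;
      }

      static void *get_state(TaggedState t) {
          return (void *)(t.bits & ~(uintptr_t)1);  /* strip the tag */
      }

      static int is_sink(TaggedState t) {
          return (int)(t.bits & 1);
      }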
- Evan Cheng authored: llvm-svn: 48035