- Apr 24, 2009
Rafael Espindola authored
llvm-svn: 69967
Nate Begeman authored
ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes as the shuffle mask. A value of -1 represents UNDEF. In addition to eliminating the creation of illegal BUILD_VECTORS just to represent shuffle masks, we are better about canonicalizing the shuffle mask, resulting in substantially better code for some classes of shuffles. A clean up of x86 shuffle code, and some canonicalizing in DAGCombiner is next. llvm-svn: 69952
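For readers skimming the log, a minimal sketch of the new mask representation (illustrative values only; no particular builder API is implied):

    // One int per result element, indexing into the concatenation of the two
    // source vectors; -1 marks an UNDEF element.
    static const int ShuffleMask[4] = {0, 4, -1, 5};  // { LHS[0], RHS[0], undef, RHS[1] }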
- Apr 08, 2009
Rafael Espindola authored
Tested by bootstrapping llvm-gcc and using that to build llvm. llvm-svn: 68645
Bill Wendling authored
builds.
---
Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td
llvm-svn: 68560
- Apr 07, 2009
Rafael Espindola authored
This introduces a small regression in generated code quality in the case where we are just computing addresses, not loading values. Will work on it and on X86-64 support. llvm-svn: 68552
- Feb 26, 2009
Evan Cheng authored
ADDS{D|S}rr_Int and MULS{D|S}rr_Int are not commutable. The users of these intrinsics expect the high bits will not be modified. llvm-svn: 65499
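A hedged C++ illustration of why the scalar forms cannot be treated as commutable (the function name is made up):

    #include <xmmintrin.h>

    // _mm_add_ss adds only the low lanes and passes the first operand's upper
    // three lanes through to the result, so swapping the operands changes bits
    // the caller can observe.
    __m128 add_low(__m128 a, __m128 b) {
        return _mm_add_ss(a, b);  // { a0+b0, a1, a2, a3 }
                                  // _mm_add_ss(b, a) would give { a0+b0, b1, b2, b3 }
    }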
- Feb 23, 2009
Nate Begeman authored
Generate better code for v16i8 shuffles on SSE2 (avoids stack)
Generate pshufb for v8i16 and v16i8 shuffles on SSSE3 where it is fewer uops.
Document the shuffle matching logic and add some FIXMEs for later further cleanups.
New tests that test the above.

Examples:

New:
_shuf2:
        pextrw $7, %xmm0, %eax
        punpcklqdq %xmm1, %xmm0
        pshuflw $128, %xmm0, %xmm0
        pinsrw $2, %eax, %xmm0

Old:
_shuf2:
        pextrw $2, %xmm0, %eax
        pextrw $7, %xmm0, %ecx
        pinsrw $2, %ecx, %xmm0
        pinsrw $3, %eax, %xmm0
        movd %xmm1, %eax
        pinsrw $4, %eax, %xmm0
        ret
=========
New:
_shuf4:
        punpcklqdq %xmm1, %xmm0
        pshufb LCPI1_0, %xmm0

Old:
_shuf4:
        pextrw $3, %xmm0, %eax
        movsd %xmm1, %xmm0
        pextrw $3, %xmm1, %ecx
        pinsrw $4, %ecx, %xmm0
        pinsrw $5, %eax, %xmm0
========
New:
_shuf1:
        pushl %ebx
        pushl %edi
        pushl %esi
        pextrw $1, %xmm0, %eax
        rolw $8, %ax
        movd %xmm0, %ecx
        rolw $8, %cx
        pextrw $5, %xmm0, %edx
        pextrw $4, %xmm0, %esi
        pextrw $3, %xmm0, %edi
        pextrw $2, %xmm0, %ebx
        movaps %xmm0, %xmm1
        pinsrw $0, %ecx, %xmm1
        pinsrw $1, %eax, %xmm1
        rolw $8, %bx
        pinsrw $2, %ebx, %xmm1
        rolw $8, %di
        pinsrw $3, %edi, %xmm1
        rolw $8, %si
        pinsrw $4, %esi, %xmm1
        rolw $8, %dx
        pinsrw $5, %edx, %xmm1
        pextrw $7, %xmm0, %eax
        rolw $8, %ax
        movaps %xmm1, %xmm0
        pinsrw $7, %eax, %xmm0
        popl %esi
        popl %edi
        popl %ebx
        ret

Old:
_shuf1:
        subl $252, %esp
        movaps %xmm0, (%esp)
        movaps %xmm0, 16(%esp)
        movaps %xmm0, 32(%esp)
        movaps %xmm0, 48(%esp)
        movaps %xmm0, 64(%esp)
        movaps %xmm0, 80(%esp)
        movaps %xmm0, 96(%esp)
        movaps %xmm0, 224(%esp)
        movaps %xmm0, 208(%esp)
        movaps %xmm0, 192(%esp)
        movaps %xmm0, 176(%esp)
        movaps %xmm0, 160(%esp)
        movaps %xmm0, 144(%esp)
        movaps %xmm0, 128(%esp)
        movaps %xmm0, 112(%esp)
        movzbl 14(%esp), %eax
        movd %eax, %xmm1
        movzbl 22(%esp), %eax
        movd %eax, %xmm2
        punpcklbw %xmm1, %xmm2
        movzbl 42(%esp), %eax
        movd %eax, %xmm1
        movzbl 50(%esp), %eax
        movd %eax, %xmm3
        punpcklbw %xmm1, %xmm3
        punpcklbw %xmm2, %xmm3
        movzbl 77(%esp), %eax
        movd %eax, %xmm1
        movzbl 84(%esp), %eax
        movd %eax, %xmm2
        punpcklbw %xmm1, %xmm2
        movzbl 104(%esp), %eax
        movd %eax, %xmm1
        punpcklbw %xmm1, %xmm0
        punpcklbw %xmm2, %xmm0
        movaps %xmm0, %xmm1
        punpcklbw %xmm3, %xmm1
        movzbl 127(%esp), %eax
        movd %eax, %xmm0
        movzbl 135(%esp), %eax
        movd %eax, %xmm2
        punpcklbw %xmm0, %xmm2
        movzbl 155(%esp), %eax
        movd %eax, %xmm0
        movzbl 163(%esp), %eax
        movd %eax, %xmm3
        punpcklbw %xmm0, %xmm3
        punpcklbw %xmm2, %xmm3
        movzbl 188(%esp), %eax
        movd %eax, %xmm0
        movzbl 197(%esp), %eax
        movd %eax, %xmm2
        punpcklbw %xmm0, %xmm2
        movzbl 217(%esp), %eax
        movd %eax, %xmm4
        movzbl 225(%esp), %eax
        movd %eax, %xmm0
        punpcklbw %xmm4, %xmm0
        punpcklbw %xmm2, %xmm0
        punpcklbw %xmm3, %xmm0
        punpcklbw %xmm1, %xmm0
        addl $252, %esp
        ret

llvm-svn: 65311
- Feb 10, 2009
Evan Cheng authored
llvm-svn: 64240
- Feb 05, 2009
Evan Cheng authored
llvm-svn: 63852
- Jan 28, 2009
Evan Cheng authored
The memory alignment requirement on some of the mov{h|l}p{d|s} patterns is 16 bytes. That is overly strict. These instructions read / write f64 memory locations without any alignment requirement. llvm-svn: 63195
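A small sketch of the kind of access these patterns cover (the helper name is made up):

    #include <emmintrin.h>

    // movlpd-style merge: reads 8 bytes into the low half of the vector while
    // keeping the high half of v; the pointer needs no 16-byte alignment.
    __m128d load_low(__m128d v, const double *p) {
        return _mm_loadl_pd(v, p);
    }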
- Jan 09, 2009
Dan Gohman authored
the same formatting as their corresponding SSE2 instructions, for consistency. llvm-svn: 61971
- Dec 18, 2008
Mon P Wang authored
llvm-svn: 61211
- Dec 03, 2008
Dan Gohman authored
llvm-svn: 60487
Dan Gohman authored
foldMemoryOperand how to "fold" them, by converting them into constant-pool loads. When they aren't folded, they use xorps/cmpeqd, but for example when register pressure is high, they may now be folded as memory operands, which reduces register pressure. Also, mark V_SET0 isAsCheapAsAMove so that two-address-elimination will remat it instead of copying zeros around (V_SETALLONES was already marked). llvm-svn: 60461
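A hedged illustration of the pattern being tuned (the function is hypothetical):

    #include <xmmintrin.h>

    // The all-zero constant is normally materialized with xorps (V_SET0); after
    // this change it may instead be folded into its user as a constant-pool
    // memory operand when register pressure is high, or cheaply rematerialized
    // rather than copied.
    __m128 zero_mask(__m128 x) {
        return _mm_cmpeq_ps(x, _mm_setzero_ps());
    }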
- Oct 17, 2008
Evan Cheng authored
Fix lfence and mfence encoding. These look like MRM5r and MRM6r instructions except they do not have any operands. The RegModRM byte is encoded with register number 0. llvm-svn: 57692
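For reference, a sketch of the intended encodings (per the Intel manuals; the array names are just for illustration):

    // lfence: 0F AE E8  (ModRM 0xE8: mod=11, reg=101b (/5), r/m=000)
    // mfence: 0F AE F0  (ModRM 0xF0: mod=11, reg=110b (/6), r/m=000)
    static const unsigned char kLfenceBytes[] = {0x0F, 0xAE, 0xE8};
    static const unsigned char kMfenceBytes[] = {0x0F, 0xAE, 0xF0};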
- Oct 16, 2008
Dan Gohman authored
an unindexed load. llvm-svn: 57612
- Oct 15, 2008
Dan Gohman authored
the predicates by extending simple predicates to create more complex predicates instead of duplicating the logic for the simple predicates. This doesn't reduce much redundancy in DAGISelEmitter.cpp's generated source yet; that will require improvements to DAGISelEmitter.cpp's instruction sorting, to make it more effectively group nodes with similar predicates together. llvm-svn: 57565
- Oct 11, 2008
Dale Johannesen authored
the same pattern as roundpd/roundps, the Intel compiler builtins do not: rounds* has an extra operand. Fixes gcc.target/i386/sse4_1-rounds[sd]-[1234].c llvm-svn: 57370
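A short C++ sketch of the operand-count difference being described (the rounding-mode constant is chosen arbitrarily):

    #include <smmintrin.h>

    // Packed form: one source vector plus the rounding immediate.
    __m128 round_packed(__m128 a) {
        return _mm_round_ps(a, _MM_FROUND_TO_NEAREST_INT);
    }

    // Scalar form: an extra vector operand supplies the upper elements, which
    // pass through unchanged.
    __m128 round_scalar(__m128 a, __m128 b) {
        return _mm_round_ss(a, b, _MM_FROUND_TO_NEAREST_INT);
    }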
- Oct 07, 2008
Anders Carlsson authored
Certain patterns involving the "movss" instruction were marked as requiring SSE2, when in reality movss is an SSE1 instruction. llvm-svn: 57246
- Oct 02, 2008
Bill Wendling authored
a constant vector ("{0x123, 0x456}" syntax). The fix is to simplify the _mm_srli_si128 macro, and move the "* 8" from the macro into the compiler back-end. I can't change the existing __builtins because so many people are using them :-(." Patch by Stuart Hastings! llvm-svn: 56944
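A hedged sketch of the affected intrinsic (the wrapper name is made up): _mm_srli_si128 shifts by a byte count, and the "* 8" in question scaled that count to bits for the underlying builtin before this change moved the scaling into the back-end.

    #include <emmintrin.h>

    // Byte-granular shift of the whole 128-bit value; lowers to psrldq.
    __m128i shift_right_two_bytes(__m128i v) {
        return _mm_srli_si128(v, 2);
    }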
- Sep 27, 2008
Evan Cheng authored
Implement "punpckldq %xmm0, %xmm0" as "pshufd $0x50, %xmm0, %xmm0" unless optimizing for code size. llvm-svn: 56711
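The equivalence can be checked with intrinsics (a hedged sketch; both helpers produce { x0, x0, x1, x1 }):

    #include <emmintrin.h>

    // punpckldq v, v interleaves the two low dwords of v with themselves...
    __m128i dup_low_unpck(__m128i v)  { return _mm_unpacklo_epi32(v, v); }

    // ...and pshufd with immediate 0x50 (element selectors 0,0,1,1) gives the
    // same result without requiring the destination to also be a source.
    __m128i dup_low_pshufd(__m128i v) { return _mm_shuffle_epi32(v, 0x50); }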
- Sep 26, 2008
Evan Cheng authored
llvm-svn: 56697
- Sep 25, 2008
Evan Cheng authored
With SSE3 and when the source is a load or has multiple uses, favors movddup over shufp*, pshufd, etc. Without SSE3 or when the source is from a register, make use of movlhps. llvm-svn: 56620
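A hedged sketch of the splat-low-double case this heuristic targets (the helper name is made up):

    #include <pmmintrin.h>

    // With SSE3, duplicating the low f64 lane straight from memory can be a
    // single movddup (an 8-byte load-and-broadcast) instead of a full 16-byte
    // load followed by a shuffle such as shufpd or pshufd.
    __m128d splat_low_double(const double *p) {
        return _mm_loaddup_pd(p);
    }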
Evan Cheng authored
llvm-svn: 56600
Evan Cheng authored
Fix patterns for SSE4.1 move and sign extend instructions. Also add instructions which fold VZEXT_MOVL and VZEXT_LOAD. llvm-svn: 56594
- Sep 12, 2008
Dan Gohman authored
with ConstantInt. This led to fixing a bug in TargetLowering.cpp using getValue instead of getAPIntValue. llvm-svn: 56159
- Sep 06, 2008
Eli Friedman authored
i32>. This is a little messy, but it works. We should really get rid of the intrinsics, though, since they map perfectly well to standard LLVM instructions. llvm-svn: 55864
- Aug 28, 2008
Evan Cheng authored
llvm-svn: 55466
- Aug 20, 2008
Dan Gohman authored
necessary to use dyn_cast in these predicates. llvm-svn: 55055
- Aug 08, 2008
Dan Gohman authored
X86ISelLowering creates. llvm-svn: 54544
- Aug 06, 2008
Evan Cheng authored
llvm-svn: 54376
- Jul 17, 2008
Nate Begeman authored
llvm-svn: 53720
Nate Begeman authored
llvm-svn: 53719
- Jul 10, 2008
Evan Cheng authored
llvm-svn: 53386
- Jun 16, 2008
Evan Cheng authored
llvm-svn: 52363
Evan Cheng authored
llvm-svn: 52352
- Jun 13, 2008
Duncan Sands authored
wrong for volatile loads and stores. In fact this is almost all of them! There are three types of problems:

(1) it is wrong to change the width of a volatile memory access. These may be used to do memory mapped i/o, in which case a load can have an effect even if the result is not used. Consider loading an i32 but only using the lower 8 bits. It is wrong to change this into a load of an i8, because you are no longer tickling the other three bytes. It is also unwise to make a load/store wider. For example, changing an i16 load into an i32 load is wrong no matter how aligned things are, since the fact of loading an additional 2 bytes can have i/o side-effects.

(2) it is wrong to change the number of volatile load/stores: they may be counted by the hardware.

(3) it is wrong to change a volatile load/store that requires one memory access into one that requires several. For example on x86-32, you can store a double in one processor operation, but to store an i64 requires two (two i32 stores). In a multi-threaded program you may want to bitcast an i64 to a double and store as a double because that will occur atomically, and be indivisible to other threads. So it would be wrong to convert the store-of-double into a store of an i64, because this will become two i32 stores - no longer atomic.

My policy here is to say that the number of processor operations for an illegal operation is undefined. So it is alright to change a store of an i64 (requires at least two stores; but could be validly lowered to memcpy for example) into a store of double (one processor op). In short, if the new store is legal and has the same size then I say that the transform is ok. It would also be possible to say that transforms are always ok if before they were illegal, whether after they are illegal or not, but that's more awkward to do and I doubt it buys us anything much.

However this exposed an interesting thing - on x86-32 a store of i64 is considered legal! That is because operations are marked legal by default, regardless of whether the type is legal or not. In some ways this is clever: before type legalization this means that operations on illegal types are considered legal; after type legalization there are no illegal types so now operations are only legal if they really are. But I consider this to be too cunning for mere mortals. Better to do things explicitly by testing AfterLegalize. So I have changed things so that operations with illegal types are considered illegal - indeed they can never map to a machine operation. However this means that the DAG combiner is more conservative because before it was "accidentally" performing transforms where the type was illegal because the operation was nonetheless marked legal. So in a few such places I added a check on AfterLegalize, which I suppose was actually just forgotten before. This causes the DAG combiner to do slightly more than it used to, which resulted in the X86 backend blowing up because it got a slightly surprising node it wasn't expecting, so I tweaked it.

llvm-svn: 52254
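A C++ sketch of the first hazard described above (the device address is hypothetical): even though only the low byte is used, the access must stay a single 32-bit volatile load, because a memory-mapped device can react to the access width.

    #include <cstdint>

    std::uint8_t read_status_low() {
        // Hypothetical memory-mapped status register.
        volatile std::uint32_t *status =
            reinterpret_cast<volatile std::uint32_t *>(std::uintptr_t{0xFFFF0000});
        std::uint32_t v = *status;            // must remain one i32 load of all four bytes
        return static_cast<std::uint8_t>(v);  // the narrowing happens in a register
    }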
- May 29, 2008
Evan Cheng authored
llvm-svn: 51667
- May 28, 2008
Dan Gohman authored
llvm-svn: 51630
Mon P Wang authored
is a memory location llvm-svn: 51626