- Jul 22, 2011
- Jakub Staszak authored
llvm-svn: 135734
- Bruno Cardoso Lopes authored
the way to go. Doing this here will prevent several node matches later, and would force us to look all the way through several VINSERTF128/VEXTRACTF128 chains to optimize simple things. llvm-svn: 135730
- Bruno Cardoso Lopes authored
and was actually very wrong; fix it and make it simpler. Also remove the ConcatVectors function, which is unused now. - Fix an introduction of useless nodes in r126664 and r126264: the VUNPCKL* nodes should never be introduced, because we don't want duplicate nodes for 128-bit AVX and non-AVX modes; the actual instruction difference only exists during isel, not for target-specific DAG nodes. We only introduce V* target nodes when there is no 128-bit version already there. - Fix a fragile test and make it more useful. llvm-svn: 135729
- Bruno Cardoso Lopes authored
vxorps + vinsertf128 pair of instructions llvm-svn: 135727
- Bruno Cardoso Lopes authored
directly supported and should be promoted and handled by smaller shuffles. llvm-svn: 135726
- Bruno Cardoso Lopes authored
llvm-svn: 135725
- Jakub Staszak authored
llvm-svn: 135724
- Owen Anderson authored
Get rid of the extraneous GPR operand on so_reg_imm operands, which in turn necessitates a lot of changes to related bits. llvm-svn: 135722
- Dan Gohman authored
size but different element types, so that it filters out the cases that CreateShuffleVectorCast doesn't handle. This fixes rdar://9786827. llvm-svn: 135721
- Jim Grosbach authored
llvm-svn: 135719
- Jakub Staszak authored
llvm-svn: 135714
- Jim Grosbach authored
Add two-operand instruction aliases. Add parsing and encoding tests for variants of the instruction. llvm-svn: 135713
- Jim Grosbach authored
Add two-operand instruction aliases. Add parsing and encoding tests for variants of the instruction. llvm-svn: 135712
- Jul 21, 2011
- Jim Grosbach authored
llvm-svn: 135706
- Nicolas Geoffray authored
llvm-svn: 135704
- Jim Grosbach authored
Aliases for LDM/STM. The single-register versions should encode to LDR/STR with writeback, but we don't (yet) get that correct. Neither does Darwin's system assembler, though, so it's not a deal-breaking limitation. llvm-svn: 135702
- Oscar Fuentes authored
llvm-svn: 135698
- Owen Anderson authored
Split up the ARM so_reg ComplexPattern into so_reg_reg and so_reg_imm, allowing us to distinguish the encodings that use shifted registers from those that use shifted immediates. This is necessary to allow the fixed-length decoder to distinguish things like BICS vs LDRH. llvm-svn: 135693
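To make the distinction concrete, here is an illustrative sketch (plain C++, not LLVM code; the register names and values are just examples): so_reg_imm corresponds to an operand like "r2, lsl #3", where the shift amount is an immediate baked into the encoding, while so_reg_reg corresponds to "r2, lsl r3", where the shift amount comes from another register.

```cpp
// Hypothetical illustration of the two ARM shifter-operand forms; not LLVM code.
#include <cstdint>
#include <cstdio>

int main() {
  std::uint32_t r2 = 5, r3 = 2;
  std::uint32_t imm_form = r2 << 3;   // "r2, lsl #3": shift amount is an immediate (so_reg_imm)
  std::uint32_t reg_form = r2 << r3;  // "r2, lsl r3": shift amount read from a register (so_reg_reg)
  std::printf("imm form: %u, reg form: %u\n",
              static_cast<unsigned>(imm_form),   // 40
              static_cast<unsigned>(reg_form));  // 20
  return 0;
}
```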
- Andrew Trick authored
llvm-svn: 135684
- Jim Grosbach authored
llvm-svn: 135682
- Bruno Cardoso Lopes authored
Stefanovic. I removed the part that actually emits the instructions because I want that to get into better shape first, in incremental steps. This also makes it easier to review the upcoming parts. llvm-svn: 135678
- Jay Foad authored
llvm-svn: 135676
- Jay Foad authored
ConstantExpr::getInBoundsGetElementPtr to use ArrayRef. llvm-svn: 135673
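For readers unfamiliar with the idiom, the sketch below (a hand-rolled stand-in, not llvm::ArrayRef and not the real getInBoundsGetElementPtr signature) shows why ArrayRef-style parameters are convenient: a single function can accept a C array, a std::vector, or an explicit pointer/length pair without extra overloads.

```cpp
// Rough stand-in for the ArrayRef idiom; assumptions only, not LLVM code.
#include <cstddef>
#include <cstdio>
#include <vector>

struct IntArrayRef {
  const int *Data;
  std::size_t Length;
  IntArrayRef(const int *D, std::size_t L) : Data(D), Length(L) {}
  IntArrayRef(const std::vector<int> &V) : Data(V.data()), Length(V.size()) {}
  template <std::size_t N>
  IntArrayRef(const int (&A)[N]) : Data(A), Length(N) {}
};

static void printIndices(IntArrayRef Idx) {
  for (std::size_t i = 0; i != Idx.Length; ++i)
    std::printf("%d ", Idx.Data[i]);
  std::printf("\n");
}

int main() {
  int CArr[] = {0, 1, 2};
  std::vector<int> Vec = {3, 4};
  printIndices(CArr);             // C array binds via the template constructor
  printIndices(Vec);              // std::vector binds via the vector constructor
  printIndices({Vec.data(), 1});  // explicit pointer/length pair
  return 0;
}
```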
- Chris Lattner authored
for it to be in an anon namespace and be in a header. Eliminate some extraneous uses of tie. llvm-svn: 135669
- Bruno Cardoso Lopes authored
- Add more bitcasts for v16i16 - Since 135661 and 135662 already added the splat logic, just add one more splat test for v16i16 llvm-svn: 135663
- Bruno Cardoso Lopes authored
instruction introduced in AVX, which can operate on 128- and 256-bit vectors. It treats a 256-bit vector as two independent 128-bit lanes. It can permute any 32- or 64-bit elements inside a lane, and restricts the second lane to have the same permutation as the first one. With the improved splat support introduced earlier today, adding codegen for this instruction enables more efficient 256-bit code. Instead of:
  vextractf128 $0, %ymm0, %xmm0
  punpcklbw %xmm0, %xmm0
  punpckhbw %xmm0, %xmm0
  vinsertf128 $0, %xmm0, %ymm0, %ymm1
  vinsertf128 $1, %xmm0, %ymm1, %ymm0
  vextractf128 $1, %ymm0, %xmm1
  shufps $1, %xmm1, %xmm1
  movss %xmm1, 28(%rsp)
  movss %xmm1, 24(%rsp)
  movss %xmm1, 20(%rsp)
  movss %xmm1, 16(%rsp)
  vextractf128 $0, %ymm0, %xmm0
  shufps $1, %xmm0, %xmm0
  movss %xmm0, 12(%rsp)
  movss %xmm0, 8(%rsp)
  movss %xmm0, 4(%rsp)
  movss %xmm0, (%rsp)
  vmovaps (%rsp), %ymm0
we get:
  vextractf128 $0, %ymm0, %xmm0
  punpcklbw %xmm0, %xmm0
  punpckhbw %xmm0, %xmm0
  vinsertf128 $0, %xmm0, %ymm0, %ymm1
  vinsertf128 $1, %xmm0, %ymm1, %ymm0
  vpermilps $85, %ymm0, %ymm0
llvm-svn: 135662
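As an aside (not part of the commit), the lane-wise behavior described above can be illustrated with a small, self-contained AVX intrinsics sketch: _mm256_permute_ps is the intrinsic form of vpermilps with an immediate, and 0x55 is the same selector as the $85 in the listing. Compile with AVX enabled (e.g. -mavx).

```cpp
// Minimal sketch of vpermilps lane semantics; not code from the commit.
#include <immintrin.h>
#include <cstdio>

int main() {
  // Elements, low to high: lane 0 = {0,1,2,3}, lane 1 = {4,5,6,7}.
  __m256 v = _mm256_set_ps(7.f, 6.f, 5.f, 4.f, 3.f, 2.f, 1.f, 0.f);
  // 0x55 == 85 == 0b01010101: every destination slot picks element 1 of
  // its own 128-bit lane, so both lanes apply the same permutation.
  __m256 r = _mm256_permute_ps(v, 0x55);
  float out[8];
  _mm256_storeu_ps(out, r);
  for (float f : out)
    std::printf("%g ", f);  // prints: 1 1 1 1 5 5 5 5
  std::printf("\n");
  return 0;
}
```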
- Bruno Cardoso Lopes authored
refactor the code and add a bunch of comments. The final shuffle emitted by handling 256-bit types is suitable for the VPERM shuffle instruction, which is going to be introduced in a following commit (with a testcase which covers this change). llvm-svn: 135661
- Bruno Cardoso Lopes authored
llvm-svn: 135660
- Bruno Cardoso Lopes authored
llvm-svn: 135659
- Bruno Cardoso Lopes authored
llvm-svn: 135658
- Bruno Cardoso Lopes authored
llvm-svn: 135657
- Bruno Cardoso Lopes authored
llvm-svn: 135656
- Andrew Trick authored
rdar://9786536 llvm-svn: 135650
- Bill Wendling authored
llvm-svn: 135645
- Andrew Trick authored
rdar://9786536 llvm-svn: 135644
- Evan Cheng authored
X86 is the only target that uses the COFF format. This should fix test failures when running on Windows, Cygwin, or MinGW hosts. llvm-svn: 135639
- Evan Cheng authored
Sink ARMMCExpr and ARMAddressingModes into the MC layer. First step toward separating ARM MC code from the target. llvm-svn: 135636
- Bill Wendling authored
llvm-svn: 135635
- Bill Wendling authored
llvm-svn: 135634