- Dec 21, 2011
-
Craig Topper authored
Remove mode-specific disassembler classes and just call the X86GenericDisassembler constructor with the appropriate argument in the creation functions. This removes a few tables that needed to be anchored. llvm-svn: 147046
-
Craig Topper authored
llvm-svn: 147045
-
Evan Cheng authored
llvm-svn: 147032
-
Jim Grosbach authored
llvm-svn: 147028
-
Jim Grosbach authored
llvm-svn: 147025
-
Akira Hatanaka authored
The patch and test case were originally written by Mans Rullgard. llvm-svn: 147024
-
Akira Hatanaka authored
case for DCLO and DCLZ. llvm-svn: 147022
-
Akira Hatanaka authored
llvm-svn: 147021
-
Akira Hatanaka authored
llvm-svn: 147019
-
Akira Hatanaka authored
DSHD (Double Swap Halfwords within Doublewords). Add a pattern which replaces 64-bit bswap with a DSBH and DSHD pair. llvm-svn: 147017
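For illustration only (this sketch is not part of the commit, and the helper names are made up): the two instructions compose into a full 64-bit byte reversal, which is why the pair can replace bswap.

    #include <stdint.h>

    /* Illustrative sketch of why a DSBH/DSHD pair equals a 64-bit bswap. */
    static uint64_t dsbh_like(uint64_t x) {          /* swap bytes within each halfword */
        return ((x & 0x00FF00FF00FF00FFULL) << 8) |
               ((x & 0xFF00FF00FF00FF00ULL) >> 8);
    }

    static uint64_t dshd_like(uint64_t x) {          /* swap halfwords within the doubleword */
        return ((x & 0x000000000000FFFFULL) << 48) |
               ((x & 0x00000000FFFF0000ULL) << 16) |
               ((x & 0x0000FFFF00000000ULL) >> 16) |
               ((x & 0xFFFF000000000000ULL) >> 48);
    }

    uint64_t bswap64_via_pair(uint64_t x) {
        return dshd_like(dsbh_like(x));              /* same result as __builtin_bswap64(x) */
    }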
-
Akira Hatanaka authored
instruction supported by mips32r2, and add a pattern which replaces bswap with a ROTR and WSBH pair. WSBW is removed since it is not an instruction the current architectures support. llvm-svn: 147015
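Likewise for illustration only (not from the commit; the helper name is hypothetical): WSBH swaps the bytes within each halfword, and a 16-bit rotate then swaps the two halfwords, so the pair is equivalent to a 32-bit bswap.

    #include <stdint.h>

    static uint32_t wsbh_like(uint32_t x) {          /* swap bytes within each halfword */
        return ((x & 0x00FF00FFu) << 8) | ((x & 0xFF00FF00u) >> 8);
    }

    uint32_t bswap32_via_pair(uint32_t x) {
        uint32_t t = wsbh_like(x);
        return (t << 16) | (t >> 16);                /* rotate by 16; same result as __builtin_bswap32(x) */
    }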
-
Akira Hatanaka authored
llvm-svn: 147014
-
Akira Hatanaka authored
llvm-svn: 147013
-
Akira Hatanaka authored
llvm-svn: 147012
-
Jim Grosbach authored
llvm-svn: 147009
-
Akira Hatanaka authored
nodes needed for multiplication. Add code for selecting 64-bit MULHS and MULHU nodes. llvm-svn: 147008
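For context, a hypothetical example of the operation that MULHS/MULHU nodes represent (the code below is not from the patch): the upper 64 bits of a widened 128-bit product.

    #include <stdint.h>

    /* Hypothetical illustration; assumes a compiler with __int128 (e.g. GCC/Clang).
     * The >> 64 of the widened product is what the MULHU/MULHS nodes compute. */
    uint64_t mulhu64(uint64_t a, uint64_t b) {
        return (uint64_t)(((unsigned __int128)a * b) >> 64);   /* unsigned high half -> MULHU */
    }

    int64_t mulhs64(int64_t a, int64_t b) {
        return (int64_t)(((__int128)a * b) >> 64);             /* signed high half -> MULHS */
    }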
-
- Dec 20, 2011
-
Akira Hatanaka authored
llvm-svn: 147007
-
Akira Hatanaka authored
llvm-svn: 147005
-
Akira Hatanaka authored
llvm-svn: 147004
-
Akira Hatanaka authored
llvm-svn: 147003
-
Akira Hatanaka authored
only when the target ABI is N64. llvm-svn: 147001
-
Jim Grosbach authored
llvm-svn: 147000
-
Akira Hatanaka authored
MIPS64 can generate constant +0.0 with a single DMTC1 instruction. llvm-svn: 146999
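For illustration (not part of the commit): +0.0 is the all-zero bit pattern, so it can be moved straight from the integer zero register into an FPU register.

    /* Illustrative only: the constant below is all-zero bits, so on MIPS64 it
     * can be materialized as roughly "dmtc1 $zero, $f0" rather than loaded
     * from a constant pool. */
    double positive_zero(void) {
        return 0.0;
    }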
-
Jakob Stoklund Olesen authored
Use the spill slot alignment as well as the local variable alignment to determine when the stack needs to be realigned. This works now that the ARM target can always realign the stack by using a base pointer. Still respect the ARMBaseRegisterInfo::canRealignStack() function when it vetoes realignment, and don't use aligned spill code in that case. llvm-svn: 146997
-
Akira Hatanaka authored
llvm-svn: 146996
-
Akira Hatanaka authored
llvm-svn: 146995
-
Akira Hatanaka authored
only when the target ABI is N64. llvm-svn: 146992
-
Jim Grosbach authored
llvm-svn: 146990
-
Jim Grosbach authored
llvm-svn: 146983
-
Evan Cheng authored
llvm-svn: 146981
-
Jason W Kim authored
(Both used for Linux gnueabi.) No behavioral change yet (no tests needed so far). llvm-svn: 146977
-
Elena Demikhovsky authored
The failure that I see in the current version is:
    LLVM ERROR: Cannot select: 0x18b8f70: v4i64 = X86ISD::VZEXT_MOVL 0x18beee0 [ID=14]
      0x18beee0: v4i64 = insert_subvector 0x18b8c70, 0x18b9170, 0x18b9570 [ID=13]
        0x18b8c70: v4i64 = insert_subvector 0x18b9870, 0x18bf4e0, 0x18b9970 [ID=12]
          0x18b9870: v4i64 = undef [ID=4]
          0x18bf4e0: v2i64 = bitcast 0x18bf3e0 [ID=10]
            0x18bf3e0: v4i32 = BUILD_VECTOR 0x18b9770, 0x18b9770, 0x18b9770, 0x18b9770 [ID=8]
              0x18b9770: i32 = TargetConstant<0> [ID=6]
              0x18b9770: i32 = TargetConstant<0> [ID=6]
              0x18b9770: i32 = TargetConstant<0> [ID=6]
              0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9970: i32 = Constant<0> [ID=3]
        0x18b9170: v2i64 = undef [ORD=1] [ID=1]
        0x18b9570: i32 = Constant<2> [ID=5]
llvm-svn: 146975
-
Chandler Carruth authored
use the zero-undefined variants of CTTZ and CTLZ. These are just simple patterns for now; more work is needed before real-world code using these constructs is optimized and code-generated properly on X86. The existing tests are spiffed up to check that we no longer generate unnecessary cmov instructions, and that we generate the very important 'xor' that transforms the bsr result, which is the index of the most significant one bit, into the number of leading (most significant) zero bits. They also now check that when the variant with a defined zero result is used, the cmov is still produced. llvm-svn: 146974
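As a concrete, purely illustrative example of the difference (not from the commit): using the zero-undefined builtin lets the backend emit bsr plus an xor and omit the cmov.

    /* Illustrative only: __builtin_clz is undefined for x == 0, so on X86 this
     * can compile to roughly
     *     bsrl %edi, %eax
     *     xorl $31, %eax
     * (bsr yields the index of the most significant set bit; xor with 31 turns
     * that index into a leading-zero count) with no cmov guarding the zero case. */
    unsigned leading_zeros_nonzero(unsigned x) {
        return (unsigned)__builtin_clz(x);   /* caller guarantees x != 0 */
    }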
-
Chandler Carruth authored
likely to stay either way that discussion ends up resolving itself. llvm-svn: 146966
-
Bob Wilson authored
We used to rely on the *eh_sjlj_setjmp instructions to mark that a function with setjmp/longjmp exception handling clobbers all the registers. But with the recent reorganization of ARM EH, those eh_sjlj_setjmp instructions are expanded away earlier, before PEI can see them to determine what registers to save and restore. Mark the dispatchsetup instruction in the same way, since that instruction cannot be expanded early. This also more accurately reflects when the registers are clobbered. llvm-svn: 146949
-
Jim Grosbach authored
"mov r1, r2, lsl #0" should assemble as "mov r1, r2" even though it's not strictly legal UAL syntax. It's a common extension and the friendly thing to do. rdar://10604663 llvm-svn: 146937
-
Dan Gohman authored
llvm-svn: 146927
-
Jim Grosbach authored
e.g., "vmov.i32 d4, #-118" can be assembled as "vmvn.i32 d4, #117" rdar://10603913 llvm-svn: 146925
-
Jim Grosbach authored
rdar://9932658 llvm-svn: 146921
-