- Mar 21, 2013
-
Jakob Stoklund Olesen authored
It's not yet clear if these instructions need a more careful model. llvm-svn: 177599
-
Jakob Stoklund Olesen authored
This is used for all the expensive system instructions. llvm-svn: 177598
-
- Mar 20, 2013
-
Jakob Stoklund Olesen authored
llvm-svn: 177592
-
Jakob Stoklund Olesen authored
llvm-svn: 177591
-
Michael Liao authored
- After moving the logic that recognizes a vector shift by a scalar amount from DAG combining into DAG lowering, all vector shifts are now declared as custom-lowered, even though vector shifts on AVX are legal. As a result, the cost model needs special tuning to identify these legal cases. llvm-svn: 177586
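A rough illustration of the tuning this entry describes: even though the shifts are now marked as custom-lowered, a shift whose amount is a uniform (splat) scalar still maps to a single AVX instruction, so the cost model should not charge the expensive custom-lowering default for it. The sketch below is a standalone, hypothetical model of that check, not the backend's actual cost-model code; the type names and cost numbers are illustrative.

```cpp
#include <iostream>
#include <set>
#include <string>

// Hypothetical stand-in for the vector-shift cost query described above.
// On AVX, shifts of legal vector types by a uniform (splat) amount lower to
// a single shift instruction even though the operation is declared Custom,
// so they should keep a cheap cost.
int vectorShiftCost(const std::string &VecTy, bool SplatAmount) {
  static const std::set<std::string> LegalAVXShiftTypes = {
      "v8i16", "v4i32", "v2i64"};  // illustrative, not an exhaustive list
  if (SplatAmount && LegalAVXShiftTypes.count(VecTy))
    return 1;   // effectively legal: one instruction
  return 10;    // pessimistic default for genuinely custom expansions
}

int main() {
  std::cout << vectorShiftCost("v4i32", true) << "\n";   // 1
  std::cout << vectorShiftCost("v4i32", false) << "\n";  // 10
}
```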
-
Jakob Stoklund Olesen authored
llvm-svn: 177540
-
Jakob Stoklund Olesen authored
llvm-svn: 177539
-
Michael Liao authored
- Move SRA/SRL/SHL lowering support from DAG combining into DAG lowering to support extended 256-bit integer types on AVX (but not AVX2). llvm-svn: 177478
-
Michael Liao authored
- Prepare to move logic from DAG combining into DAG lowering. No functionality change. llvm-svn: 177477
-
Michael Liao authored
- No functionality change. llvm-svn: 177476
-
Chad Rosier authored
Patch by Stepan Dyatkovskiy <stpworld@narod.ru> rdar://13457826 llvm-svn: 177463
-
Jakob Stoklund Olesen authored
llvm-svn: 177461
-
Jakob Stoklund Olesen authored
llvm-svn: 177460
-
Jakob Stoklund Olesen authored
llvm-svn: 177459
-
- Mar 19, 2013
-
Chad Rosier authored
logic as a QOI cleanup. No functional change. Tests already in place. rdar://13456414 llvm-svn: 177446
-
Jakob Stoklund Olesen authored
Add a new WriteZero SchedWrite type for the common dependency-breaking instructions that clear a register. llvm-svn: 177442
-
Chad Rosier authored
an X86Operand, but also performs a Sema lookup and adds the sizing directive when appropriate. Use this when parsing a bracketed statement. This is necessary to get the instruction matching correct as well. Test case coming on clang side. rdar://13455408 llvm-svn: 177439
-
Ulrich Weigand authored
All pre-increment load patterns need to set the mayLoad flag (since they don't provide a DAG pattern). This was missing for LHAUX8 and LWAUX; this patch adds it. llvm-svn: 177431
-
Ulrich Weigand authored
As opposed to pre-increment store patterns, the pre-increment load patterns were already using standard memory operands, with the sole exception of LHAU8. As there's no real reason why LHAU8 should be different here, this patch simply rewrites the pattern to also use a memri operand, just like all the other patterns. llvm-svn: 177430
-
Ulrich Weigand authored
Currently, pre-increment store patterns are written to use two separate operands to represent address base and displacement:

stwu $rS, $ptroff($ptrreg)

This causes problems when implementing the assembler parser, so this commit changes the patterns to use standard (complex) memory operands like in all other memory access instruction patterns:

stwu $rS, $dst

To still match those instructions against the appropriate pre_store SelectionDAG nodes, the patch uses the new feature that allows a Pat to match multiple DAG operands against a single (complex) instruction operand. Approved by Hal Finkel. llvm-svn: 177429
-
Ulrich Weigand authored
The tocentry operand class refers to 64-bit values (it is only used in 64-bit, where iPTR is a 64-bit type), but its sole suboperand is designated as 32-bit type. This causes a mismatch to be detected at compile-time with the TableGen patch I'll check in shortly. To fix this, this commit changes the suboperand to a 64-bit type as well. llvm-svn: 177427
-
Ulrich Weigand authored
def : Pat<(load (i64 (X86Wrapper tglobaltlsaddr:$dst))), (MOV64rm tglobaltlsaddr:$dst)>;

This pattern is invalid because the MOV64rm instruction expects a source operand of type "i64mem", which is a subclass of X86MemOperand and thus actually consists of five MI operands, but the Pat provides only a single MI operand ("tglobaltlsaddr" matches an SDNode of type ISD::TargetGlobalTLSAddress and provides a single output). Thus, if the pattern were ever matched, subsequent uses of the MOV64rm instruction pattern would access uninitialized memory. In addition, with the TableGen patch I'm about to check in, this would actually be reported as a build-time error.

Fortunately, the pattern does in fact never match, for at least two independent reasons.

First, the code generator actually never generates a pattern of the form (load (X86Wrapper (tglobaltlsaddr))). For most combinations of TLS and code models, (tglobaltlsaddr) represents just an offset that needs to be added to some base register, so it is never directly dereferenced. The only exception is the initial-exec model, where (tglobaltlsaddr) refers to the (pc-relative) address of a GOT slot, which *is* in fact directly dereferenced: but in that case, the X86WrapperRIP node is used, not X86Wrapper, so the Pat doesn't match.

Second, even if some patterns along those lines *were* ever generated, we should not need an extra Pat pattern to match them. Instead, the original MOV64rm instruction pattern ought to match directly, since it uses an "addr" operand, which is implemented via the SelectAddr C++ routine; this routine is supposed to accept the full range of input DAGs that may be implemented by a single mov instruction, including those cases involving ISD::TargetGlobalTLSAddress (and actually does so, e.g., in the initial-exec case as above).

To avoid build breaks (due to the above-mentioned error) after the TableGen patch is checked in, I'm removing this Pat here. llvm-svn: 177426
-
Hal Finkel authored
Currently the PPC r0 register is unconditionally reserved. There are two reasons for this:

1. r0 is treated specially (as the constant 0) by certain instructions, and so cannot be used with those instructions as a regular register.
2. r0 is used as a temporary register in the CR-register spilling process (where, under some circumstances, we require two GPRs).

This change addresses the first reason by introducing a restricted register class (without r0) for use by those instructions that treat r0 specially. These register classes have a new pseudo-register, ZERO, which represents the r0-as-0 use. This has the side benefit of making the existing target code simpler (and easier to understand), and will make it clear to the register allocator that uses of r0 as 0 don't conflict with real uses of the r0 register. Once the CR spilling code is improved, we'll be able to allocate r0.

Adding these extra register classes, for some reason unclear to me, causes requests to the target to copy 32-bit registers to 64-bit registers. The resulting code seems correct (and causes no test-suite failures), and the new test case covers this new kind of asymmetric copy.

As r0 is still reserved, no functionality change intended. llvm-svn: 177423
-
Nadav Rotem authored
Patch by Ahmad, Muhammad T <muhammad.t.ahmad@intel.com> llvm-svn: 177421
-
Jakob Stoklund Olesen authored
llvm-svn: 177418
-
Jakob Stoklund Olesen authored
llvm-svn: 177417
-
Chad Rosier authored
logic as a QOI cleanup. rdar://13445327 llvm-svn: 177413
-
Hal Finkel authored
Remove an accidentally-added instruction definition and add a comment in the test case. This is in response to a post-commit review by Bill Schmidt. No functionality change intended. llvm-svn: 177404
-
Renato Golin authored
The ARM backend currently has poor codegen for long sext/zext operations, such as v8i8 -> v8i32. This patch addresses this by performing a custom expansion in ARMISelLowering. It also adds/changes the cost of such lowering in ARMTTI. This partially addresses PR14867. Patch by Pete Couperus llvm-svn: 177380
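The expansion strategy described here can be pictured as widening in stages (8-bit to 16-bit to 32-bit lanes), which is what NEON's vmovl instructions do one step at a time. The scalar C++ below is only a sketch of that staged widening under that assumption; it is not the ARMISelLowering code itself, and the function names are made up for illustration.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// Stage 1: v8i8 -> v8i16, conceptually one vmovl.s8.
std::array<int16_t, 8> sext8to16(const std::array<int8_t, 8> &V) {
  std::array<int16_t, 8> R{};
  for (size_t I = 0; I < 8; ++I) R[I] = V[I];
  return R;
}

// Stage 2: v8i16 -> v8i32, conceptually a vmovl.s16 per 64-bit half.
std::array<int32_t, 8> sext16to32(const std::array<int16_t, 8> &V) {
  std::array<int32_t, 8> R{};
  for (size_t I = 0; I < 8; ++I) R[I] = V[I];
  return R;
}

int main() {
  std::array<int8_t, 8> V{-1, 2, -3, 4, -5, 6, -7, 8};
  for (int32_t X : sext16to32(sext8to16(V))) std::cout << X << ' ';
  std::cout << '\n';
}
```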
-
Hal Finkel authored
llvm-svn: 177379
-
Hal Finkel authored
Don't sign extend the immediate value from the OR instruction in an LIS/OR pair. llvm-svn: 177361
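For context on why sign extension matters here: when a constant is built with an LIS/OR(I) pair, the low half supplied by the OR is a plain 16-bit field, and sign-extending it corrupts any constant whose low half has its top bit set. The snippet below is a minimal, self-contained illustration of that failure mode with made-up values; it is not the code from the patch.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  int64_t HighImm = 0x1234;   // the 16 bits supplied by lis (shifted left 16)
  uint16_t LowImm = 0x8000;   // the 16 bits OR'd in; note the top bit is set

  // Sign-extending the low half before combining yields the wrong constant...
  int64_t Wrong = (HighImm << 16) + (int64_t)(int16_t)LowImm;
  // ...while treating it as an unsigned field reproduces the intended value.
  int64_t Right = (HighImm << 16) | (uint64_t)LowImm;

  std::printf("wrong: 0x%llx  right: 0x%llx\n",
              (unsigned long long)Wrong, (unsigned long long)Right);
}
```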
-
Chad Rosier authored
parsed one. Test case coming shortly. rdar://13446980 llvm-svn: 177347
-
Hal Finkel authored
PPC64 supports unaligned loads and stores of 64-bit values, but in order to use the r+i forms, the offset must be a multiple of 4. Unfortunately, this cannot always be determined by examining the immediate itself because it might be available only via a TOC entry. In order to get around this issue, we additionally predicate the selection of the r+i form on the alignment of the load or store (forcing it to be at least 4 in order to select the r+i form). llvm-svn: 177338
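A loose sketch of the predicate this entry describes, assuming the usual constraint that the r+i (DS) form's 16-bit displacement must have its low two bits clear; the function and its exact interface are hypothetical, not the actual PPC selection code.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical stand-in for "can we use the r+i form for this 64-bit
// load/store?"  When the offset is known, it must fit the signed 16-bit
// field and be a multiple of 4.  When it is only available via a TOC entry,
// fall back to requiring the access itself to be at least 4-byte aligned.
bool canSelectRegImmForm(bool OffsetKnown, int64_t Offset, unsigned Align) {
  if (OffsetKnown)
    return Offset >= -32768 && Offset <= 32764 && (Offset % 4) == 0;
  return Align >= 4;
}

int main() {
  std::cout << canSelectRegImmForm(true, 8, 1) << "\n";   // 1: known, multiple of 4
  std::cout << canSelectRegImmForm(true, 6, 8) << "\n";   // 0: offset not 4-aligned
  std::cout << canSelectRegImmForm(false, 0, 2) << "\n";  // 0: TOC offset, under-aligned
  std::cout << canSelectRegImmForm(false, 0, 4) << "\n";  // 1
}
```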
-
- Mar 18, 2013
-
Arnold Schwaighofer authored
The default logic marks them as too expensive. For example, before this patch we estimated:

cost of 16 for instruction: %r = uitofp <4 x i16> %v0 to <4 x float>

while this translates to:

vmovl.u16 q8, d16
vcvt.f32.u32 q8, q8

All other costs are left to the values assigned by the fallback logic. These costs are mostly reasonable in the sense that they get progressively more expensive as the instruction sequences emitted get longer. radar://13445992 llvm-svn: 177334
-
Arnold Schwaighofer authored
Fix the cost of some "cheap" cast instructions. Before this patch we used to estimate, for example:

cost of 16 for instruction: %r = fptoui <4 x float> %v0 to <4 x i16>

while we would emit:

vcvt.s32.f32 q8, q8
vmovn.i32 d16, q8
vuzp.8 d16, d17

All other costs are left to the values assigned by the fallback logic. These costs are mostly reasonable in the sense that they get progressively more expensive as the instruction sequences emitted get longer. radar://13434072 llvm-svn: 177333
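This entry and the previous one both replace the fallback estimate with entries that track the real instruction sequence. A table-lookup sketch of that idea follows; the entry costs simply count the instructions quoted in these two messages, and everything else (types as strings, the fallback of 16, the names) is illustrative rather than the backend's actual code or data.

```cpp
#include <cstring>
#include <cstdio>

// Hypothetical (op, dst-type, src-type) -> cost table in the spirit of the
// ARM conversion-cost tuning described above.
struct CastCostEntry {
  const char *Op, *Dst, *Src;
  int Cost;
};

static const CastCostEntry ConversionTbl[] = {
    {"uitofp", "v4f32", "v4i16", 2},  // vmovl.u16 + vcvt.f32.u32
    {"fptoui", "v4i16", "v4f32", 3},  // vcvt + vmovn + vuzp
};

int castCost(const char *Op, const char *Dst, const char *Src) {
  for (const auto &E : ConversionTbl)
    if (!std::strcmp(E.Op, Op) && !std::strcmp(E.Dst, Dst) &&
        !std::strcmp(E.Src, Src))
      return E.Cost;
  return 16;  // whatever the fallback logic would have estimated (illustrative)
}

int main() {
  std::printf("%d\n", castCost("uitofp", "v4f32", "v4i16"));  // 2
  std::printf("%d\n", castCost("fptoui", "v4i16", "v4f32"));  // 3
  std::printf("%d\n", castCost("sitofp", "v8f32", "v8i32"));  // 16 (not listed)
}
```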
-
Jakob Stoklund Olesen authored
We hitch a ride with the existing OpndItins class that was used to add instruction itinerary classes in the many multiclasses in this file. Use the link provided by the X86FoldableSchedWrite.Folded to find the right SchedWrite for folded loads. llvm-svn: 177326
-
Jakob Stoklund Olesen authored
This new-style scheduling information is going to replace the instruction itineraries. This also serves as a test case for Andy's fix in r177317. llvm-svn: 177323
-
Hal Finkel authored
llvm-svn: 177296
-
Hal Finkel authored
This commit fixes an assert that would occur on loops with large constant counts (like looping for ((uint32_t) -1) iterations on PPC64). The existing code did not handle counts that it computed to be negative (asserting instead), but these can be created with valid inputs. This bug was discovered by bugpoint while I was attempting to isolate a completely different problem.

Also, in writing test cases for the negative-count problem, I discovered that the ori/lis handling was broken (there was a typo which caused the logic that was supposed to detect these pairs and extract the iteration count to always fail). This has now also been corrected (and is covered by one of the new test cases). llvm-svn: 177295
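To make the first problem concrete: a trip count of ((uint32_t) -1) is a perfectly valid unsigned value, but the same bit pattern looks negative once it is handled as a signed quantity, which is the situation the old assertion rejected. A minimal arithmetic illustration (not the pass's code) is below.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // The loop bound from the message: (uint32_t)-1 iterations.
  uint32_t TripCount = (uint32_t)-1;     // 0xFFFFFFFF, i.e. 4294967295

  // Reinterpreted as a signed value of the same width, the count "becomes"
  // negative, even though the input loop is valid.
  int32_t AsSigned = (int32_t)TripCount; // -1

  std::printf("unsigned trip count: %u\n", TripCount);
  std::printf("same bits as signed: %d\n", AsSigned);
}
```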
-
Hal Finkel authored
Because the initial-value constants had not been added to the list of instructions considered for DCE, the resulting code had redundant constant-materialization instructions. llvm-svn: 177294
-