- Mar 19, 2013
-
Chad Rosier authored
logic as a QOI cleanup. rdar://13445327 llvm-svn: 177413
-
Hal Finkel authored
Remove an accidentally-added instruction definition and add a comment in the test case. This is in response to a post-commit review by Bill Schmidt. No functionality change intended. llvm-svn: 177404
-
Renato Golin authored
The ARM backend currently has poor codegen for long sext/zext operations, such as v8i8 -> v8i32. This patch addresses this by performing a custom expansion in ARMISelLowering. It also adds/changes the cost of such lowering in ARMTTI. This partially addresses PR14867. Patch by Pete Couperus llvm-svn: 177380
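As an illustration (my sketch, not part of the commit), the kind of widening cast this targets:
  define <8 x i32> @widen(<8 x i8> %v) {
    ; v8i8 -> v8i32 previously produced poor code; the custom
    ; expansion stages the widening (v8i8 -> v8i16 -> v8i32).
    %w = zext <8 x i8> %v to <8 x i32>
    ret <8 x i32> %w
  }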
-
Hal Finkel authored
llvm-svn: 177379
-
Hal Finkel authored
Don't sign extend the immediate value from the OR instruction in an LIS/OR pair. llvm-svn: 177361
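For context, my arithmetic (not from the commit): an LIS/ORI pair materializes a 32-bit constant as (hi16 << 16) | lo16, with the ORI immediate zero-extended. For example, with hi16 = 0x1234 and lo16 = 0x8000:
  zero-extended (correct): 0x12340000 | 0x0000000000008000 = 0x0000000012348000
  sign-extended (wrong):   0x12340000 | 0xFFFFFFFFFFFF8000 = 0xFFFFFFFFFFFF8000
Sign-extending any lo16 with its high bit set would OR ones into every upper bit, which is why the immediate must not be sign extended.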
-
Chad Rosier authored
parsed one. Test case coming shortly. rdar://13446980 llvm-svn: 177347
-
Hal Finkel authored
PPC64 supports unaligned loads and stores of 64-bit values, but in order to use the r+i forms, the offset must be a multiple of 4. Unfortunately, this cannot always be determined by examining the immediate itself because it might be available only via a TOC entry. In order to get around this issue, we additionally predicate the selection of the r+i form on the alignment of the load or store (forcing it to be at least 4 in order to select the r+i form). llvm-svn: 177338
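A sketch (mine, using the load syntax of the period) of the case this predicate distinguishes:
  define i64 @f(i64* %p) {
    ; With alignment < 4 the offset cannot be assumed to be a
    ; multiple of 4, so the r+i (DS-form) load is rejected and the
    ; r+r (indexed) form is selected instead.
    %v = load i64* %p, align 2
    ret i64 %v
  }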
-
- Mar 18, 2013
-
Arnold Schwaighofer authored
The default logic marks them as too expensive. For example, before this patch we estimated:
  cost of 16 for instruction: %r = uitofp <4 x i16> %v0 to <4 x float>
While this translates to:
  vmovl.u16 q8, d16
  vcvt.f32.u32 q8, q8
All other costs are left to the values assigned by the fallback logic. These costs are mostly reasonable in the sense that they get progressively more expensive as the instruction sequences emitted get longer. radar://13445992 llvm-svn: 177334
-
Arnold Schwaighofer authored
Fix cost of some "cheap" cast instructions. Before this patch we used to estimate, for example:
  cost of 16 for instruction: %r = fptoui <4 x float> %v0 to <4 x i16>
While we would emit:
  vcvt.s32.f32 q8, q8
  vmovn.i32 d16, q8
  vuzp.8 d16, d17
All other costs are left to the values assigned by the fallback logic. These costs are mostly reasonable in the sense that they get progressively more expensive as the instruction sequences emitted get longer. radar://13434072 llvm-svn: 177333
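A minimal function (my sketch) exercising both of the cheap casts from this entry and the previous one:
  define <4 x float> @casts(<4 x float> %f, <4 x i16> %i) {
    ; fptoui <4 x float> -> <4 x i16>: vcvt + vmovn + vuzp
    %a = fptoui <4 x float> %f to <4 x i16>
    ; uitofp <4 x i16> -> <4 x float>: vmovl + vcvt
    %b = uitofp <4 x i16> %i to <4 x float>
    ret <4 x float> %b
  }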
-
Jakob Stoklund Olesen authored
We hitch a ride with the existing OpndItins class that was used to add instruction itinerary classes in the many multiclasses in this file. Use the link provided by the X86FoldableSchedWrite.Folded to find the right SchedWrite for folded loads. llvm-svn: 177326
-
Jakob Stoklund Olesen authored
This new-style scheduling information is going to replace the instruction itineraries. This also serves as a test case for Andy's fix in r177317. llvm-svn: 177323
-
Hal Finkel authored
llvm-svn: 177296
-
Hal Finkel authored
This commit fixes an assert that would occur on loops with large constant counts (like looping for ((uint32_t) -1) iterations on PPC64). The existing code did not handle counts that it computed to be negative (asserting instead), but these can be created with valid inputs. This bug was discovered by bugpoint while I was attempting to isolate a completely different problem. Also, in writing test cases for the negative-count problem, I discovered that the ori/lis handling was broken (there was a typo which caused the logic that was supposed to detect these pairs and extract the iteration count to always fail). This has now also been corrected (and is covered by one of the new test cases). llvm-svn: 177295
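A sketch (mine, in the IR syntax of the period) of a loop whose trip count is large enough to be computed as negative:
  define void @count(i32* %a) {
  entry:
    br label %loop
  loop:
    %i = phi i32 [ 0, %entry ], [ %next, %loop ]
    store i32 %i, i32* %a
    %next = add i32 %i, 1
    ; 4294967295 iterations == ((uint32_t) -1); interpreted as a
    ; signed value this count is -1, which the old code asserted on.
    %done = icmp eq i32 %next, 4294967295
    br i1 %done, label %exit, label %loop
  exit:
    ret void
  }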
-
Hal Finkel authored
Because the initial-value constants had not been added to the list of instructions considered for DCE, the resulting code had redundant constant-materialization instructions. llvm-svn: 177294
-
Christian Konig authored
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 177277
-
Christian Konig authored
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 177276
-
Christian Konig authored
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 177275
-
Christian Konig authored
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 177274
-
Christian Konig authored
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 177273
-
Christian Konig authored
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 177272
-
Christian Konig authored
Unfortunately the previous fix for inserting waits for unordered defines wasn't sufficient, because it's possible that even ordered defines are only partially used (or not used at all).
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 177271
-
Anton Korobeynikov authored
MinGW is almost completely compatible with MSVC, with the exception of the _tls_array global not being available. Patch by David Nadlinger! llvm-svn: 177257
-
Craig Topper authored
llvm-svn: 177243
-
Craig Topper authored
llvm-svn: 177242
-
- Mar 17, 2013
-
Sylvestre Ledru authored
llvm-svn: 177234
-
Hal Finkel authored
This change cleans up two issues with Altivec register spilling:
  1. The spilling code was inefficient (using two instructions, an add and a load, when just one would do).
  2. The code assumed that r0 would always be available (true for now, but this will change).
The new code handles VR spilling just like GPR spills but forced into r+r mode. As a result, when any VR spills are present, we must now always allocate the register-scavenger spill slot. llvm-svn: 177231
-
- Mar 16, 2013
-
Hal Finkel authored
As a follow-up to r158719, remove PPCRegisterInfo::avoidWriteAfterWrite. Jakob pointed out in response to r158719 that this callback is currently unused and so this has no effect (and the speedups that I thought I had observed as a result of implementing this function must have been noise). llvm-svn: 177228
-
Craig Topper authored
Previously we weren't skipping the VVVV-encoded register. Based on a patch by Michael Liao. llvm-svn: 177221
-
Jakob Stoklund Olesen authored
Since almost all X86 instructions can fold loads, use a multiclass to define register/memory pairs of SchedWrites. An X86FoldableSchedWrite represents the register version of an instruction. It holds a reference to the SchedWrite to use when the instruction folds a load. This will be used inside multiclasses that define rr and rm instruction versions together. llvm-svn: 177210
-
- Mar 15, 2013
-
Arnold Schwaighofer authored
I was too pessimistic in r177105. Vector selects that fit into a legal register type lower just fine. I was misled by the code fragment that I was using: the stores/loads that I saw in those cases came from loading the condition off an address. Changing the code fragment to:
  %T0_3 = type <8 x i18>
  %T1_3 = type <8 x i1>
  define void @func_blend3(%T0_3* %loadaddr, %T0_3* %loadaddr2, %T1_3* %blend, %T0_3* %storeaddr) {
    %v0 = load %T0_3* %loadaddr
    %v1 = load %T0_3* %loadaddr2
    ==> FROM: ;%c = load %T1_3* %blend
    ==> TO:   %c = icmp slt %T0_3 %v0, %v1
    ==> USE:  %r = select %T1_3 %c, %T0_3 %v0, %T0_3 %v1
    store %T0_3 %r, %T0_3* %storeaddr
    ret void
  }
revealed this mistake. radar://13403975 llvm-svn: 177170
-
Silviu Baranga authored
Adding an A15-specific optimization pass for interactions between S/D/Q registers. The pass handles all the required transformations pre-regalloc. llvm-svn: 177169
-
Benjamin Kramer authored
Fixes PR15520. llvm-svn: 177167
-
Hal Finkel authored
Unaligned access is supported on PPC for non-vector types, and is generally more efficient than manually expanding the loads and stores. A few of the existing test cases were using expanded unaligned loads and stores to test other features (like load/store with update), and for these test cases, unaligned access remains disabled. llvm-svn: 177160
-
Arnold Schwaighofer authored
Vector fptrunc and fpext operations simply get split into scalar instructions. radar://13192358 llvm-svn: 177159
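For example (my sketch), a cast of this shape is now costed as its scalar pieces:
  define <4 x float> @trunc(<4 x double> %v) {
    ; On a target without double-precision vector support
    ; (e.g. ARM NEON), this splits into four scalar conversions.
    %r = fptrunc <4 x double> %v to <4 x float>
    ret <4 x float> %r
  }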
-
Hal Finkel authored
In preparation for the addition of other SIMD ISA extensions (such as QPX) we need to make sure that all Altivec patterns are properly predicated on having Altivec support. No functionality change intended (one test case needed to be updated because it assumed that Altivec intrinsics would be supported without enabling Altivec support). llvm-svn: 177152
-
Hal Finkel authored
For spills into a large stack frame, the FI-elimination code uses the register scavenger to obtain a free GPR for use with an r+r-addressed load or store. When there are no available GPRs, the scavenger gets one by using its spill slot. Previously, we were not always allocating that spill slot and the RS would assert when the spill slot was needed. I don't currently have a small test that triggered the assert, but I've created a small regression test that verifies that the spill slot is now added when the stack frame is sufficiently large. llvm-svn: 177140
-
Eric Christopher authored
llvm-svn: 177135
-
Nadav Rotem authored
llvm-svn: 177130
-
David Blaikie authored
(these were added in r177089) llvm-svn: 177129
-
Akira Hatanaka authored
llvm-svn: 177128
-