- Nov 14, 2016
-
Sean Fertile authored
Add an intrinsic to expose the 'VSX Scalar Convert Half-Precision to Single-Precision' instruction. Differential Revision: https://reviews.llvm.org/D26536 llvm-svn: 286862
-
Changpeng Fang authored
Summary: Extend image intrinsics to support data types of V1F32 and V2F32. TODO: we should define a mapping table to change the opcode for the V2F32 data type when just one channel is active, even though such a case should be very rare. Reviewers: tstellarAMD Differential Revision: http://reviews.llvm.org/D26472 llvm-svn: 286860
-
Bob Wilson authored
Darwin's backtrace() function does not work with sigaltstack (which was enabled when available with r270395) — it does a sanity check to make sure that the current frame pointer is within the expected stack area (which it is not when using an alternate stack) and gives up otherwise. The alternative of _Unwind_Backtrace seems to work fine on macOS, so use that when backtrace() fails. Note that we then use backtrace_symbols_fd() with the addresses from _Unwind_Backtrace, but I’ve tested that and it also seems to work fine. rdar://problem/28646552 llvm-svn: 286851
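A rough sketch of the fallback shape described there, not the actual lib/Support patch (the function name, the frame cap, and the plain-array buffer are illustrative):

```cpp
#include <execinfo.h>  // backtrace, backtrace_symbols_fd
#include <unwind.h>    // _Unwind_Backtrace, _Unwind_GetIP

namespace {
struct TraceState {
  void *Frames[128];
  int Count = 0;
};

_Unwind_Reason_Code collectFrame(struct _Unwind_Context *Ctx, void *Arg) {
  auto *State = static_cast<TraceState *>(Arg);
  if (State->Count >= 128)
    return _URC_END_OF_STACK;
  State->Frames[State->Count++] =
      reinterpret_cast<void *>(_Unwind_GetIP(Ctx));
  return _URC_NO_REASON;
}
} // namespace

void printStackTrace(int FD) {
  void *Frames[128];
  int Depth = backtrace(Frames, 128);
  if (Depth > 0) {
    backtrace_symbols_fd(Frames, Depth, FD);
    return;
  }
  // backtrace() gave up (e.g. running on a sigaltstack on Darwin); collect
  // the addresses with the unwinder instead. backtrace_symbols_fd() can
  // still symbolize them.
  TraceState State;
  _Unwind_Backtrace(collectFrame, &State);
  backtrace_symbols_fd(State.Frames, State.Count, FD);
}
```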
-
Adrian Prantl authored
llvm-svn: 286845
-
Teresa Johnson authored
This restores the rest of r286297 (part was restored in r286475). Specifically, it restores the part requiring adding a dependency from the Analysis to Object library (downstream use changed to correctly model split BitReader vs BitWriter libraries). Original description of this part of patch follows: Module level asm may also contain defs of values. We need to prevent export of any refs to local values defined in module level asm (e.g. a ref in normal IR), since that also requires renaming/promotion of the local. To do that, the summary index builder looks at all values in the module level asm string that are not marked Weak or Global, which is exactly the set of locals that are defined. A summary is created for each of these local defs and flagged as NoRename. This required adding handling to the BitcodeWriter to look at GV declarations to see if they have a summary (rather than skipping them all). Finally, added an assert to IRObjectFile::CollectAsmUndefinedRefs to ensure that an MCAsmParser is available, otherwise the module asm parse would silently fail. Initialized the asm parser in the opt tool for use in testing this fix. Fixes PR30610. llvm-svn: 286844
-
Sumanth Gundapaneni authored
The Stack Slot Coloring pass removes a store that is followed by a load when both deal with the same stack slot. The function isLoadFromStackSlot is supposed to consider only the loads that have no side effects. This patch fixes the issue by removing the unsafe loads from this function. E.g.:
  %vreg0<def> = L2_loadruh_io <fi#15>, 0
  S2_storeri_io <fi#15>, 0, %vreg0
In this case, we load an unsigned extended halfword and store it into the same stack slot. The Stack Slot Coloring pass considers it safe to remove the store. This patch marks all the non-vector byte and halfword loads as unsafe. llvm-svn: 286843
-
Teresa Johnson authored
Summary: The change in r285513 to prevent exporting of locals used in inline asm added all locals in the llvm.used set to the reference set of functions containing inline asm. Since these locals were marked NoRename, this automatically prevented importing of the function. Unfortunately, this caused an explosion in the summary reference lists in some cases. In my particular example, it happened for a large protocol buffer generated C++ file, where many of the generated functions contained an inline asm call. It was exacerbated when doing a ThinLTO PGO instrumentation build, where the PGO instrumentation included thousands of private __profd_* values that were added to llvm.used. We really only need to include a single llvm.used local (NoRename) value in the reference list of a function containing inline asm to block it being imported. However, it seems cleaner to add a flag to the summary that explicitly describes this situation, which is what this patch does. Reviewers: mehdi_amini Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D26402 llvm-svn: 286840
-
Simon Pilgrim authored
More realistic v16i8/v32i8/v64i8 MUL costs - we have to extend to vXi16, use PMULLW and then truncate the result llvm-svn: 286838
-
Simon Pilgrim authored
Add explicit v16i16/v32i8 ADD/SUB costs, matching the costs of v4i64/v8i32 - they were missing for some reason. This has side effects on the LV max bandwidth tests (AVX1 now prefers 128-bit vectors vs AVX2 which still prefers 256-bit) llvm-svn: 286832
-
Sean Fertile authored
Differential Revision: https://reviews.llvm.org/D26272 llvm-svn: 286829
-
Simon Pilgrim authored
We were already testing if the op was not a leaf, so no need to then test if it was a leaf (added it to the assert instead). llvm-svn: 286817
-
James Molloy authored
When calculating the cost of a call instruction we were applying a heuristic penalty as well as the cost of the instruction itself. However, when calculating the benefit from inlining we weren't discounting the equivalent penalty for the call instruction that would be removed! This caused skew in the calculation and meant we wouldn't inline in the following, trivial case:
  int g() { h(); }
  int f() { g(); }
llvm-svn: 286814
-
Simon Pilgrim authored
'A || (!A && B)' is equivalent to 'A || B': (LoopCycle > DefCycle) || (LoopCycle <= DefCycle && LoopStage <= DefStage) --> (LoopCycle > DefCycle) || (LoopStage <= DefStage) llvm-svn: 286811
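The identity is easy to convince yourself of by exhausting the four boolean cases; a self-contained check:

```cpp
#include <cassert>

int main() {
  for (int A = 0; A <= 1; ++A)
    for (int B = 0; B <= 1; ++B)
      // If A is true, both sides are true; if A is false, both reduce to B.
      assert((A || (!A && B)) == (A || B));
  return 0;
}
```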
-
Diana Picus authored
llvm-svn: 286808
-
Pablo Barrio authored
Summary: Unfolding selects was previously done with the help of a vector of pointers that was then sorted to be able to remove duplicates. As this sorting depends on the memory addresses, it was non-deterministic. A SetVector is used now so that duplicates are removed without the need of sorting first. Reviewers: mgrang, efriedma Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D26450 llvm-svn: 286807
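An illustration of the data-structure swap (names here are illustrative, not from the patch): llvm::SetVector rejects duplicates on insert while preserving first-insertion order, so iteration no longer depends on pointer values the way sort-then-unique over raw pointers did.

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/IR/Instruction.h"

// Hypothetical helper: gather candidate instructions without duplicates.
void collectCandidates(llvm::ArrayRef<llvm::Instruction *> Found,
                       llvm::SetVector<llvm::Instruction *> &Worklist) {
  for (llvm::Instruction *I : Found)
    Worklist.insert(I); // returns false and does nothing for a duplicate
  // Iterating Worklist now visits instructions in deterministic
  // first-insertion order; no sort by address is needed.
}
```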
-
Eric Fiselier authored
Summary: This patch adds explicit `(void)` casts to discarded `release()` calls to suppress -Wunused-result. This patch fixes *all* warnings that are generated as a result of [applying `[[nodiscard]]` within libc++](https://reviews.llvm.org/D26596). Similar fixes were applied to Clang in r286796. Reviewers: chandlerc, dberris Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D26598 llvm-svn: 286797
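The pattern applied throughout the patch, in miniature:

```cpp
#include <memory>

int *Sink = nullptr;

void takeOwnership(int *P) { Sink = P; } // callee now owns P

void handOff(std::unique_ptr<int> Ptr) {
  takeOwnership(Ptr.get());
  // The explicit (void) cast marks the discarded return value as
  // intentional, silencing -Wunused-result / [[nodiscard]] on release().
  (void)Ptr.release();
}
```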
-
Saleem Abdulrasool authored
Only attempt to demangle symbols which have the Itanium C++ prefix of `_Z`. This ensures that we do not treat every symbol name as a mangled name. We would previously treat a C function `f` as a mangled name and decode that to `float` incorrectly. While it is easy to add tests for this, Mehdi recommended against introducing tests for the demangler as libc++abi should cover the testing. llvm-svn: 286795
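A minimal sketch of that guard using the public `__cxa_demangle` API (the in-tree code differs; this just shows the `_Z` prefix check):

```cpp
#include <cxxabi.h>
#include <cstdlib>
#include <cstring>
#include <string>

std::string demangleSymbol(const char *Name) {
  // Only Itanium-mangled names start with "_Z"; a plain C symbol such as
  // "f" must be returned untouched, not decoded as the encoding of "float".
  if (std::strncmp(Name, "_Z", 2) != 0)
    return Name;
  int Status = 0;
  char *Demangled = abi::__cxa_demangle(Name, nullptr, nullptr, &Status);
  if (Status != 0 || !Demangled)
    return Name;
  std::string Result(Demangled);
  std::free(Demangled); // __cxa_demangle returns a malloc'd buffer
  return Result;
}
```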
-
Craig Topper authored
[AVX-512] Add suffixless aliases for EVEX encoded vcvtsi2ss/vcvtsi2sd/vcvtusi2ss/vcvtusi2sd. This matches the VEX behavior. Fixes another problem from PR28850. llvm-svn: 286790
-
Craig Topper authored
[X86] Cleanup 'x' and 'y' mnemonic suffixes for vcvtpd2dq/vcvttpd2dq/vcvtpd2ps and similar instructions.
- Don't print the 'x' suffix for the 128-bit reg/mem VEX encoded instructions in Intel syntax. This is consistent with the EVEX versions.
- Don't print the 'y' suffix for the 256-bit reg/reg VEX encoded instructions in Intel or AT&T syntax. This is consistent with the EVEX versions.
- Allow the 'x' and 'y' suffixes to be used for the reg/mem forms when we're assembling using Intel syntax.
- Allow the 'x' and 'y' suffixes on the reg/reg EVEX encoded instructions in Intel or AT&T syntax. This is consistent with what VEX was already allowing.
This should fix at least some of PR28850. llvm-svn: 286787
-
Craig Topper authored
[AVX-512] Remove and autoupgrade masked dword/qword variable shift intrinsics to the new unmasked versions and selects. llvm-svn: 286786
-
- Nov 13, 2016
-
Sanjay Patel authored
Similar to: https://reviews.llvm.org/rL285499 https://reviews.llvm.org/rL286318 We can't minimally expose this in IR tests because we don't have min/max intrinsics, but the difference is visible in codegen because SelectionDAGBuilder::visitSelect() uses matchSelectPattern(). We're not canonicalizing these patterns in IR (yet), so I don't expect there to be any regressions as noted here: http://lists.llvm.org/pipermail/llvm-dev/2016-November/106868.html llvm-svn: 286776
-
Craig Topper authored
[AVX-512] Fix a disassembler failure for AVX-512 vcmpss/vcmpsd with an immediate larger than 32. Fix the same bug with VLX vcmpps/vcmppd. Fixes PR24941. llvm-svn: 286775
-
Sanjay Patel authored
llvm-svn: 286772
-
Craig Topper authored
[X86][IR] Reduce the number of full string comparisons in the code that autoupgrades masked shift intrinsics. llvm-svn: 286768
-
Matt Arsenault authored
This avoids the nasty problems caused by using memory instructions that read the exec mask while spilling / restoring registers used for control flow masking, but only for VI, where these instructions were added. Currently this always uses scalar stores when enabled, but it may be better to still try to spill to a VGPR and use this on the fallback memory path. The cache also needs to be flushed before wave termination if a scalar store is used. llvm-svn: 286766
-
Igor Breger authored
llvm-svn: 286765
-
Ayman Musa authored
Differential Revision: https://reviews.llvm.org/D26128 llvm-svn: 286761
-
Ayman Musa authored
Differential Revision: https://reviews.llvm.org/D26022 llvm-svn: 286758
-
Craig Topper authored
[InstCombine][AVX-512] Teach InstCombineCalls to handle the new unmasked AVX-512 variable shift intrinsics. llvm-svn: 286755
-
Craig Topper authored
These will be used to replace the masked intrinsics so that InstCombineCalls can optimize the AVX-512 variable shifts the same way it does for AVX2. llvm-svn: 286754
-
Konstantin Zhuravlyov authored
Differential Revision: https://reviews.llvm.org/D25975 llvm-svn: 286753
-
Peter Collingbourne authored
Differential Revision: https://reviews.llvm.org/D26562 llvm-svn: 286752
-
Peter Collingbourne authored
All existing callers were manually extracting information out of an existing GEP instruction and passing it to getGEPExpr(). Simplify the interface by changing it to take a GEPOperator instead. llvm-svn: 286751
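For context, GEPOperator matches both `getelementptr` instructions and constant expressions, so a caller can pass the operator through instead of re-deriving its pieces by hand; an illustrative (not from the patch) use:

```cpp
#include "llvm/IR/Operator.h"

void inspectGEP(llvm::Value *V) {
  // Succeeds for a GetElementPtrInst and for a GEP ConstantExpr alike.
  if (auto *GEP = llvm::dyn_cast<llvm::GEPOperator>(V)) {
    llvm::Value *Base = GEP->getPointerOperand();
    llvm::Type *SrcTy = GEP->getSourceElementType();
    bool InBounds = GEP->isInBounds();
    (void)Base; (void)SrcTy; (void)InBounds;
  }
}
```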
-
Peter Collingbourne authored
llvm-svn: 286750
-
Peter Collingbourne authored
IR: Change the Type::get{Array,Vector,Pointer}ElementType() functions to perform the correct type assertion. Previously we were only asserting that the type was a sequential type. llvm-svn: 286749
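In effect the convenience accessor now behaves like an explicit cast<> to the specific derived type (a sketch, assuming the 2016-era Type API):

```cpp
#include "llvm/IR/DerivedTypes.h"

llvm::Type *vectorElementOf(llvm::Type *Ty) {
  // Ty->getVectorElementType() now asserts isa<VectorType>(Ty), i.e. it is
  // equivalent to the explicit cast below rather than accepting any
  // sequential type.
  return llvm::cast<llvm::VectorType>(Ty)->getElementType();
}
```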
-
Craig Topper authored
[InstCombine][AVX-512] Expand vector shift handling to work on the AVX-512 shift by immediate and shift by single value. This does not include support for the AVX-512 variable shifts. That will be coming in a future patch. llvm-svn: 286739
-
- Nov 12, 2016
-
Craig Topper authored
[AVX-512] Remove the remaining masked shift by immediate or by single value. Autoupgrade them to recently introduced unmasked versions and a select. After this I'll add the unmasked intrinsics to InstCombineCalls to finish making our handling of these types of shifts consistent between AVX-512 and the legacy intrinsics. llvm-svn: 286725
-
Zachary Turner authored
Differential Revision: https://reviews.llvm.org/D25299 llvm-svn: 286724
-
Craig Topper authored
Summary: This is the first step towards being able to add the avx512 shift by immediate intrinsics to InstCombineCalls where we already support the sse2 and avx2 intrinsics. We need the unmasked versions so we can avoid having to teach InstCombineCalls that it would need to insert selects sometimes. Instead we'll just add the selects around the new intrinsics in the frontend. This change should also enable the shift by i32 intrinsics to take a non-constant shift value just like the avx2 and sse intrinsics. This will enable us to fix PR30691 once we update clang. Next I'll switch clang to use the new builtins. Then we'll come back to the backend and remove/autoupgrade the old intrinsics. Then I'll work on the same series for variable shifts. Reviewers: RKSimon, zvi, delena Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D26333 llvm-svn: 286711
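Roughly what the frontend side of that plan looks like (a sketch: the specific Intrinsic::ID below is an assumption for illustration, not copied from the patch):

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

// Emit an unmasked shift intrinsic, then apply the mask with a select, so
// InstCombine never has to reason about masking inside the intrinsic itself.
llvm::Value *emitMaskedShift(llvm::IRBuilder<> &B, llvm::Module *M,
                             llvm::Value *Vec, llvm::Value *Imm,
                             llvm::Value *Mask, llvm::Value *Passthru) {
  llvm::Function *Shift = llvm::Intrinsic::getDeclaration(
      M, llvm::Intrinsic::x86_avx512_psrai_d_512); // assumed unmasked ID
  llvm::Value *Res = B.CreateCall(Shift, {Vec, Imm});
  return B.CreateSelect(Mask, Res, Passthru); // mask handled outside
}
```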
-
Craig Topper authored
Summary: VALIGND and VALIGNQ are similar to PALIGNR but instead of working on a 128-bit lane they work on the entire vector register. This change leverages the shuffle rotate detection code used for PALIGNR to detect these cases. Reviewers: delena, RKSimon Subscribers: Farhana, llvm-commits Differential Revision: https://reviews.llvm.org/D26297 llvm-svn: 286709
-