- Jul 09, 2018
-
Roman Lebedev authored
Summary: This adds a reverse transform for the instcombine canonicalizations that were added in D47980, D47981. As discussed later, that was worse at least for the code size, and potentially for the performance, too. https://rise4fun.com/Alive/Zmpl

Reviewers: craig.topper, RKSimon, spatel
Reviewed By: spatel
Subscribers: reames, llvm-commits
Differential Revision: https://reviews.llvm.org/D48768
llvm-svn: 336585
-
Eric Liu authored
llvm-svn: 336584
-
Craig Topper authored
[Builtins][Attributes][X86] Tag all X86 builtins with their required vector width. Add a min_vector_width function attribute and tag all x86 intrinsics with it.

This is part of an ongoing attempt at making 512 bit vectors illegal in the X86 backend type legalizer due to CPU frequency penalties associated with wide vectors on Skylake Server CPUs.

We want the loop vectorizer to be able to emit IR containing wide vectors as intermediate operations in vectorized code and allow these wide vectors to be legalized to 256 bits by the X86 backend even though we are targeting a CPU that supports 512 bit vectors. This is similar to what happens with an AVX2 CPU: the vectorizer can emit wide vectors and the backend will split them. We want this splitting behavior, but we still want to be able to use new Skylake instructions that work on 256-bit vectors and support things like masking and gather/scatter.

Of course, if the user uses explicit vector code in their source code, we need to not split those operations, especially if they have used any of the 512-bit vector intrinsics from immintrin.h. And we need to make it so that merely using the intrinsics produces the expected code in order to be backwards compatible.

To support this goal, this patch adds a new IR function attribute "min-legal-vector-width" that can indicate the need for a minimum vector width to be legal in the backend. We need to ensure this attribute is set to the largest vector width needed by any intrinsics from immintrin.h that the function uses. The inliner will be responsible for merging this attribute when a function is inlined. We may also need a way to limit inlining in the future as well, but we can discuss that in the future.

To make things more complicated, there are two different ways intrinsics are implemented in immintrin.h: either as an always_inline function containing calls to builtins (which can be target specific or target independent) or vector extension code, or as a macro wrapper around a target specific builtin. I believe I've removed all cases where the macro was around a target independent builtin.

To support the always_inline function case, this patch adds __attribute__((min_vector_width(128))) that can be used to tag these functions with their vector width. All x86 intrinsic functions that operate on vectors have been tagged with this attribute.

To support the macro case, all x86 specific builtins have also been tagged with the vector width that they require. Use of any builtin with this property will implicitly increase the min_vector_width of the function that calls it. I've done this as a new property in the attribute string for the builtin rather than basing it on the type string so that we can opt into it on a per builtin basis and avoid any impact to target independent builtins.

There will be future work to support vectors passed as function arguments and supporting inline assembly. And whatever else we can find that isn't covered by this patch.

Special thanks to Chandler who suggested this direction and reviewed a preview version of this patch. And thanks to Eric Christopher who has had many conversations with me about this issue.

Differential Revision: https://reviews.llvm.org/D48617

llvm-svn: 336583
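For context, here is a minimal sketch (hypothetical function name and body, not code from the patch) of how an immintrin.h-style always_inline wrapper can carry the new attribute so callers pick up the required width:

```
#include <immintrin.h>

// Hypothetical always_inline wrapper in the style of immintrin.h.  The body
// works on 512-bit vectors, so the wrapper is tagged min_vector_width(512);
// a function it is inlined into then gets "min-legal-vector-width"="512" in
// IR, telling the X86 backend not to split these operations down to 256 bits.
static __inline__ __m512d
    __attribute__((__always_inline__, __nodebug__, __target__("avx512f"),
                   min_vector_width(512)))
my_add_pd512(__m512d __a, __m512d __b) {
  return _mm512_add_pd(__a, __b);
}
```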
-
Raphael Isemann authored
Summary: If we have an xvalue here, we will always hit the `err_typecheck_invalid_lvalue_addrof` error in 'Sema::CheckAddressOfOperand' when trying to take the address of the result. This patch uses the fallback code path where we store the result in a local variable instead when we hit this case.

Fixes rdar://problem/40613277

Reviewers: jingham, vsk
Reviewed By: vsk
Subscribers: vsk, friss, lldb-commits
Differential Revision: https://reviews.llvm.org/D48303
llvm-svn: 336582
-
Eric Liu authored
llvm-svn: 336581
-
Philip Pfaffe authored
Reiterate D23202 for container printers added after the change landed. Differential Revision: https://reviews.llvm.org/D46578 llvm-svn: 336580
-
Stefan Pintilie authored
Add a number of builtins for __float128 Round To Odd. This is the Clang portion of the builtins work. Differential Revision: https://reviews.llvm.org/D47548 llvm-svn: 336579
-
Stefan Pintilie authored
GCC has builtins for these round to odd instructions:

__float128 __builtin_sqrtf128_round_to_odd (__float128)
__float128 __builtin_{add,sub,mul,div}f128_round_to_odd (__float128, __float128)
__float128 __builtin_fmaf128_round_to_odd (__float128, __float128, __float128)

Differential Revision: https://reviews.llvm.org/D47550
llvm-svn: 336578
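For illustration, a short usage sketch of two of the builtins listed above (the function is hypothetical; it assumes a target with native __float128 support, e.g. POWER9):

```
// Illustrative only: combines two of the round-to-odd builtins named above.
__float128 fma_then_sqrt_odd(__float128 a, __float128 b, __float128 c) {
  __float128 t = __builtin_fmaf128_round_to_odd(a, b, c); // a*b + c, round to odd
  return __builtin_sqrtf128_round_to_odd(t);              // sqrt, round to odd
}
```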
-
Maksim Panchenko authored
Summary: If the encoding is not specified in the CIE augmentation string, then it should be DW_EH_PE_absptr instead of DW_EH_PE_omit.

Reviewers: ruiu, MaskRay, plotfi, rafauler
Reviewed By: MaskRay
Subscribers: rafauler, JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D49000
llvm-svn: 336577
-
Craig Topper authored
This is similar to what is done for binops. I don't know if this would have helped us catch the bug fixed in r336566 earlier or not, but I figured it couldn't hurt. llvm-svn: 336576
-
Jonathan Peyton authored
llvm-svn: 336575
-
Steven Wu authored
Summary: Add bitcode compatibility test for 6.0. On top of the normal disassemble test, also runs the verifier to make sure simple 6.0 bitcode can pass the current IR verifier.

Reviewers: vsk
Reviewed By: vsk
Subscribers: dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D49086
llvm-svn: 336574
-
Alex Shlyapnikov authored
Summary:
- use proper Error() decorator for error messages
- refactor ASan thread id and name reporting

Reviewers: eugenis
Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D49044
llvm-svn: 336573
-
Diego Caballero authored
This patch ports hasDedicatedExits, getUniqueExitBlocks and getUniqueExitBlock in Loop to LoopBase so that they can be used from other LoopBase sub-classes.

Reviewers: chandlerc, sanjoy, hfinkel, fhahn
Reviewed By: chandlerc
Differential Revision: https://reviews.llvm.org/D48817
llvm-svn: 336572
-
Jonathan Peyton authored
This patch introduces the logic implementing hierarchical scheduling.

First and foremost, hierarchical scheduling is off by default. To enable, use -DLIBOMP_USE_HIER_SCHED=On during CMake's configure stage. This work is based off of the IWOMP paper: "Workstealing and Nested Parallelism in SMP Systems"

Hierarchical scheduling is the layering of OpenMP schedules for different layers of the memory hierarchy. One can have multiple layers between the threads and the global iteration space. The threads will go up the hierarchy to grab iterations, using possibly a different schedule & chunk for each layer.

[ Global iteration space (0-999) ]          (use static)
[    L1    |    L1    |    L1    |    L1    ]   (use dynamic,1)
[ T0 T1    | T2 T3    | T4 T5    | T6 T7    ]

In the example shown above, there are 8 threads and 4 L1 caches being targeted. If the topology indicates that there are two threads per core, then two consecutive threads will share the data of one L1 cache unit. This example would have the iteration space (0-999) split statically across the four L1 caches (so the first L1 would get (0-249), the second would get (250-499), etc). Then the threads will use a dynamic,1 schedule to grab iterations from the L1 cache units.

There are currently four supported layers: L1, L2, L3, NUMA

OMP_SCHEDULE can now read a hierarchical schedule with this syntax:
OMP_SCHEDULE='EXPERIMENTAL LAYER,SCHED[,CHUNK][:LAYER,SCHED[,CHUNK]...]:SCHED,CHUNK'
And OMP_SCHEDULE can still read the normal SCHED,CHUNK syntax from before.

I've kept most of the hierarchical scheduling logic inside kmp_dispatch_hier.h to try to keep it separate from the rest of the code.

Differential Revision: https://reviews.llvm.org/D47962

llvm-svn: 336571
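As a usage sketch (not part of the patch), the loop below relies on schedule(runtime), so a hierarchical schedule can be selected purely through the environment; the OMP_SCHEDULE value in the comment follows the grammar above and is illustrative:

```
// Build the runtime with -DLIBOMP_USE_HIER_SCHED=On, then run with, e.g.:
//   OMP_SCHEDULE='EXPERIMENTAL L1,dynamic,1:static,250' ./a.out
// (one LAYER,SCHED,CHUNK entry for the L1 layer plus a final SCHED,CHUNK,
//  following the grammar given in the commit message above)
#include <cstdio>
#include <omp.h>

int main() {
  double sum = 0.0;
#pragma omp parallel for schedule(runtime) reduction(+ : sum)
  for (int i = 0; i < 1000; ++i)
    sum += i * 0.5;
  std::printf("sum = %f\n", sum);
  return 0;
}
```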
-
Sanjay Patel authored
llvm-svn: 336570
-
Alexey Bataev authored
Summary: Currently the CUDA plugin supports loading only a single image, but an executable may have several images if it has target regions inside of a dynamically loaded library. This patch allows loading multiple images.

Reviewers: grokos
Subscribers: guansong, openmp-commits, kkwli0
Differential Revision: https://reviews.llvm.org/D49036
llvm-svn: 336569
-
Jonathan Peyton authored
This patch reorganizes the loop scheduling code in order to allow hierarchical scheduling to use it more effectively. In particular, the goal of this patch is to separate the algorithmic parts of the scheduling from the thread logistics code.

Moves declarations & structures to kmp_dispatch.h for easier access in other files. Extracts the algorithmic part of __kmp_dispatch_init() and __kmp_dispatch_next() into __kmp_dispatch_init_algorithm() and __kmp_dispatch_next_algorithm(). The thread bookkeeping logic is still kept in __kmp_dispatch_init() and __kmp_dispatch_next(). This is done because the hierarchical scheduler needs to access the scheduling logic without the bookkeeping logic.

To prepare for a new pointer in dispatch_private_info_t, a new flags variable is created which stores the ordered and nomerge flags instead of them being in two separate variables. This will keep the dispatch_private_info_t structure the same size.

Differential Revision: https://reviews.llvm.org/D47961

llvm-svn: 336568
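A minimal sketch of that flags packing (illustrative layout only, not the runtime's actual definition):

```
#include <cstdint>

// Illustrative only: the former 'ordered' and 'nomerge' booleans folded into a
// single flags word, freeing space for a new pointer in dispatch_private_info_t
// without growing the structure.
struct dispatch_flags_t {
  std::uint32_t ordered : 1;
  std::uint32_t nomerge : 1;
  std::uint32_t unused : 30;
};
static_assert(sizeof(dispatch_flags_t) == sizeof(std::uint32_t),
              "flags stay a single 32-bit word");
```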
-
Alexey Bataev authored
In generic data-sharing mode we are allowed to not globalize local variables that escape their declaration context iff they are declared inside of the parallel region. We can do this because L2 parallel regions are executed sequentially and, thus, we do not need to put shared local variables in the global memory. llvm-svn: 336567
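An illustrative (hypothetical) source pattern for the case described above: a local variable declared inside the outer parallel region that escapes into a nested (L2) parallel region, and therefore no longer needs to be globalized:

```
#include <omp.h>

void nested_parallel_example() {
#pragma omp target
#pragma omp parallel                    // outer parallel region on the device
  {
    int escapee = omp_get_thread_num(); // declared inside the parallel region
#pragma omp parallel shared(escapee)    // L2 region: executed sequentially on
    {                                   // the device, so 'escapee' can stay in
      escapee += 1;                     // local memory instead of global memory
    }
  }
}
```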
-
Craig Topper authored
[X86] In combineFMA, make sure we bitcast the result of isFNEG back to the expected type before creating the new FMA node. Previously, we were creating malformed SDNodes, but nothing noticed because the type constraints prevented isel from noticing. llvm-svn: 336566
-
Simon Pilgrim authored
Let the update script merge 32/64 tests where possible llvm-svn: 336565
-
Stella Stamenova authored
Summary: This patch fixes a problem with retrieving a function symbol by an address in a nested block. In the current implementation of the ResolveSymbolContext function, it retrieves a symbol with PDB_SymType::None and then checks if the found symbol's tag equals PDB_SymType::Function. So, if a nested block's symbol was found, ResolveSymbolContext does not resolve a function.

It is very simple to reproduce this. For example, in the following program:

```
int main() {
  auto r = 0;
  for (auto i = 1; i <= 10; i++) {
    r += i & 1 + (i - 1) & 1 - 1;
  }
  return r;
}
```

if we stop inside the loop and do a backtrace, the top element will be broken. But how can we test this? I thought to add an option to lldb-test to allow searching for a function by address, but the address may change when the compiler changes.

Patch by: Aleksandr Urakov

Reviewers: asmith, labath, zturner
Reviewed By: asmith, labath
Subscribers: stella.stamenova, llvm-commits
Differential Revision: https://reviews.llvm.org/D47939
llvm-svn: 336564
-
Jonathan Peyton authored
These are preliminary changes that attempt to use C++11 atomics in the runtime. We are expecting better portability with this change across architectures/OSes. Here is the summary of the changes:

Most variables that need synchronization operations were converted to generic atomic variables (std::atomic<T>). Variables that are updated with a combined CAS are packed into a single atomic variable, and partial read/write is done through unpacking/packing.

Patch by Hansang Bae

Differential Revision: https://reviews.llvm.org/D47903

llvm-svn: 336563
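A minimal sketch of the packing idea described above (field names and widths are illustrative, not the runtime's actual layout):

```
#include <atomic>
#include <cstdint>

// Two 32-bit counters packed into one 64-bit atomic so both can be updated
// with a single compare-and-swap; callers read and write the fields by
// unpacking/packing the 64-bit value.
struct PackedPair {
  std::atomic<std::uint64_t> bits{0};

  static std::uint64_t pack(std::uint32_t lo, std::uint32_t hi) {
    return (static_cast<std::uint64_t>(hi) << 32) | lo;
  }
  static std::uint32_t lo(std::uint64_t v) { return static_cast<std::uint32_t>(v); }
  static std::uint32_t hi(std::uint64_t v) { return static_cast<std::uint32_t>(v >> 32); }

  // Atomically increment both fields (combined CAS on the packed word).
  void bump_both() {
    std::uint64_t old = bits.load(std::memory_order_relaxed);
    std::uint64_t desired;
    do {
      desired = pack(lo(old) + 1, hi(old) + 1);
    } while (!bits.compare_exchange_weak(old, desired, std::memory_order_acq_rel,
                                         std::memory_order_relaxed));
  }
};
```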
-
Sanjay Patel authored
As discussed in D49047 / D48987, shift-by-undef produces poison, so we can't use undef vector elements in that case. Note that we need to extend this for poison-generating flags, and there's a proposal to create poison from FMF in D47963. llvm-svn: 336562
-
Jonas Devlieghere authored
When implementing the DWARF accelerator tables in dsymutil I ran into an assertion in the assembler. Debugging these kinds of issues is a lot easier when looking at the assembly instead of debugging the assembler itself. Since it's only a matter of creating an AsmStreamer instead of an MCObjectStreamer, it made sense to turn this into a (hidden) dsymutil feature. Differential revision: https://reviews.llvm.org/D49079 llvm-svn: 336561
-
Steven Wu authored
Summary: To allow bitcode built by an old compiler to pass the current verifier, BitcodeReader needs to auto-infer the correct runtime preemption from linkage and visibility for GlobalValues. Since llvm-6.0 bitcode already contains the new field but can be incorrect in some cases, the attribute needs to be recomputed all the time in BitcodeReader. This will make all the GVs read from bitcode have dso_local marked correctly, and it should still allow the verifier to catch mistakes in optimization passes.

This should fix PR38009.

Reviewers: sfertile, vsk
Reviewed By: vsk
Subscribers: dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D49039
llvm-svn: 336560
-
Zaara Syeda authored
This patch adds the target callback relaxTlsLdToLe to support TLS relaxation from the local dynamic to the local exec model. Differential Revision: https://reviews.llvm.org/D48293 llvm-svn: 336559
-
Sanjay Patel authored
This is almost NFC, but there could be some case where the original code had undefs in the constants (rather than just the shuffle mask), and we'll use safe constants rather than undefs now. The FIXME noted in foldShuffledBinop() is already visible in existing tests, so correcting that is the next step. llvm-svn: 336558
-
Craig Topper authored
DAG combine should have converted the type of the load. llvm-svn: 336557
-
Craig Topper authored
These patterns mapped (v2f64 (X86vzmovl (v2f64 (scalar_to_vector FR64:$src)))) to a MOVSD and a zeroing XOR. But the complexity of the pattern for (v2f64 (X86vzmovl (v2f64))) that selects MOVQ is artificially high and hides this MOVSD pattern. Weirder still, the SSE version of the pattern was explicitly blocked on SSE41, yet we had copied it to AVX and AVX512. llvm-svn: 336556
-
Craig Topper authored
Looking at the generated tables this didn't seem to make an obvious difference in pattern priority. llvm-svn: 336555
-
Diego Caballero authored
This patch introduces a VPValue in VPBlockBase to represent the condition bit that is used as successor selector when a block has multiple successors. This information wasn't necessary until now, when we are about to introduce outer loop vectorization support in VPlan code gen.

Reviewers: fhahn, rengolin, mkuper, hfinkel, mssimpso
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D48814
llvm-svn: 336554
-
Eric Liu authored
Summary: This is not enabled in the global-symbol-builder or dynamic index yet.

Reviewers: sammccall
Reviewed By: sammccall
Subscribers: ilya-biryukov, MaskRay, jkorous, cfe-commits
Differential Revision: https://reviews.llvm.org/D49028
llvm-svn: 336553
-
Sander de Smalen authored
This patch adds support for the following instructions:

  CNTB CNTH - Determine the number of active elements implied by
  CNTW CNTD   the named predicate constant, multiplied by an
              immediate, e.g. cnth x0, vl8, #16

  CNTP      - Count active predicate elements, e.g.
              cntp x0, p0, p1.b
              counts the number of active elements in p1, predicated
              by p0, and stores the result in x0.

llvm-svn: 336552
-
Xin Tong authored
Summary:
Tests: 10
Metric: compile_time

Program                            unpatch-result  patch-result  diff
Bullet/bullet                      32.39           30.54         -5.7%
SPASS/SPASS                        18.14           17.25         -4.9%
mafft/pairlocalalign               12.10           11.64         -3.8%
ClamAV/clamscan                    19.21           19.63          2.2%
7zip/7zip-benchmark                49.55           48.85         -1.4%
kimwitu++/kc                       15.68           15.87          1.2%
lencod/lencod                      21.13           21.34          1.0%
consumer-typeset/consumer-typeset  13.65           13.62         -0.2%
tramp3d-v4/tramp3d-v4              29.88           29.92          0.1%
sqlite3/sqlite3                    18.48           18.46         -0.1%

       unpatch-result  patch-result  diff
count  10.000000       10.000000     10.000000
mean   23.022000       22.712400     -0.011671
std    11.362831       11.094183      0.027338
min    12.104000       11.640000     -0.057298
25%    16.299000       16.214000     -0.032282
50%    18.844000       19.048000     -0.001350
75%    27.689000       27.774000      0.007752
max    49.552000       48.852000      0.021861

I also tested only this pass by concatenating all the code from the llvm/lib/Analysis/ folder and doing clang -g followed by opt. I get close to 20% speedup for the pass. I expect a majority of the gain comes from skipping the dbg intrinsics.

Before patch (opt -time-passes -called-value-propagation):
===-------------------------------------------------------------------------===
                      ... Pass execution timing report ...
===-------------------------------------------------------------------------===
  Total Execution Time: 3.8303 seconds (3.8279 wall clock)

  ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
  2.0768 ( 57.3%)   0.0990 ( 48.0%)   2.1757 ( 56.8%)   2.1757 ( 56.8%)  Bitcode Writer
  0.8444 ( 23.3%)   0.0600 ( 29.1%)   0.9044 ( 23.6%)   0.9044 ( 23.6%)  Called Value Propagation
  0.7031 ( 19.4%)   0.0472 ( 22.9%)   0.7502 ( 19.6%)   0.7478 ( 19.5%)  Module Verifier
  3.6242 (100.0%)   0.2062 (100.0%)   3.8303 (100.0%)   3.8279 (100.0%)  Total

After patch (opt -time-passes -called-value-propagation):
===-------------------------------------------------------------------------===
                      ... Pass execution timing report ...
===-------------------------------------------------------------------------===
  Total Execution Time: 3.6605 seconds (3.6579 wall clock)

  ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
  2.0716 ( 59.7%)   0.0990 ( 52.5%)   2.1705 ( 59.3%)   2.1706 ( 59.3%)  Bitcode Writer
  0.7144 ( 20.6%)   0.0300 ( 15.9%)   0.7444 ( 20.3%)   0.7444 ( 20.4%)  Called Value Propagation
  0.6859 ( 19.8%)   0.0596 ( 31.6%)   0.7455 ( 20.4%)   0.7429 ( 20.3%)  Module Verifier
  3.4719 (100.0%)   0.1886 (100.0%)   3.6605 (100.0%)   3.6579 (100.0%)  Total

Reviewers: davide, mssimpso
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D49078
llvm-svn: 336551
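A minimal sketch of the kind of change this implies (illustrative only, not the patch's actual code): when walking a function's instructions, debug intrinsics carry nothing the pass needs to model, so they can be skipped up front:

```
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// Hypothetical helper: visit only the instructions an analysis such as
// CalledValuePropagation actually needs, skipping llvm.dbg.* intrinsics so
// -g builds do not pay for them.
template <typename VisitorT>
void visitNonDebugInstructions(Function &F, VisitorT Visit) {
  for (Instruction &I : instructions(F)) {
    if (isa<DbgInfoIntrinsic>(&I))
      continue; // debug info intrinsics do not affect the analysis
    Visit(I);
  }
}
```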
-
Marc-Andre Laperle authored
Summary:
Signed-off-by: Marc-Andre Laperle <marc-andre.laperle@ericsson.com>

Subscribers: ilya-biryukov, ioeric, MaskRay, jkorous, cfe-commits
Differential Revision: https://reviews.llvm.org/D48996
llvm-svn: 336550
-
Sam McCall authored
Summary: The library has graduated from clangd to llvm/Support. This is a mechanical change to move to the new API and remove the old one.

Main API changes:
- namespace clang::clangd::json --> llvm::json
- json::Expr --> json::Value
- Expr::asString() etc --> Value::getAsString() etc
- unsigned longs need a cast (due to r336541 adding lossless integer support)

Reviewers: ilya-biryukov
Subscribers: mgorny, ioeric, MaskRay, jkorous, omtcyfz, cfe-commits
Differential Revision: https://reviews.llvm.org/D49077
llvm-svn: 336549
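A small sketch of what the post-migration API looks like, following the mapping above (the object contents are made up for illustration):

```
#include "llvm/Support/JSON.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

void demo(unsigned long Line) {
  // json::Value replaces the old clangd json::Expr.
  json::Value V = json::Object{
      {"name", "clangd"},
      // unsigned long has no lossless mapping, so cast explicitly (see r336541).
      {"line", static_cast<int64_t>(Line)},
  };

  // Accessors are now getAsObject()/getString() instead of asString() etc.
  if (const json::Object *O = V.getAsObject())
    if (auto Name = O->getString("name"))
      outs() << *Name << "\n";
}
```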
-
Stefan Pintilie authored
Added handling for the select f128. Differential Revision: https://reviews.llvm.org/D48294 llvm-svn: 336548
-
Sander de Smalen authored
This patch completes support for shifts, which include:
- LSL   - Logical Shift Left
- LSLR  - Logical Shift Left, Reversed form
- LSR   - Logical Shift Right
- LSRR  - Logical Shift Right, Reversed form
- ASR   - Arithmetic Shift Right
- ASRR  - Arithmetic Shift Right, Reversed form
- ASRD  - Arithmetic Shift Right for Divide

In the following variants:

- Predicated shift by immediate - ASR, LSL, LSR, ASRD
  e.g. asr z0.h, p0/m, z0.h, #1
  (active lanes of z0 shifted by #1)

- Unpredicated shift by immediate - ASR, LSL*, LSR*
  e.g. asr z0.h, z1.h, #1
  (all lanes of z1 shifted by #1, stored in z0)

- Predicated shift by vector - ASR, LSL*, LSR*
  e.g. asr z0.h, p0/m, z0.h, z1.h
  (active lanes of z0 shifted by z1, stored in z0)

- Predicated shift by vector, reversed form - ASRR, LSLR, LSRR
  e.g. lslr z0.h, p0/m, z0.h, z1.h
  (active lanes of z1 shifted by z0, stored in z0)

- Predicated shift left/right by wide vector - ASR, LSL, LSR
  e.g. lsl z0.h, p0/m, z0.h, z1.d
  (active lanes of z0 shifted by wide elements of vector z1)

- Unpredicated shift left/right by wide vector - ASR, LSL, LSR
  e.g. lsl z0.h, z1.h, z2.d
  (all lanes of z1 shifted by wide elements of z2, stored in z0)

*Variants added in previous patches.

llvm-svn: 336547
-
Sanjay Patel authored
As noted in D48987, there are many different ways for this transform to go wrong. In particular, the poison potential for shifts means we have to be more careful with those ops. I added tests to make that behavior visible for all of the different cases that I could find.

This is a partial fix. To make this review easier, I did not make changes for the single binop pattern (handled in foldSelectShuffleWith1Binop()). I also left out some potential optimizations noted with TODO comments. I'll follow up once we're confident that things are correct here.

The goal is to correct all marked FIXME tests to either avoid the shuffle transform or do it safely.

Note that distinguishing when the shuffle mask contains undefs and using getBinOpIdentity() allows for some improvements to div/rem patterns, so there are wins along with the missed opportunities and fixes.

Differential Revision: https://reviews.llvm.org/D49047

llvm-svn: 336546
-