  1. Jul 09, 2018
    • Stefan Pintilie's avatar
      [Power9] [LLVM] Add __float128 support for trunc to double round to odd · 58e3e0a8
      Stefan Pintilie authored
      Add support for this builtin:
      double __builtin_truncf128_round_to_odd(__float128)
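
      A minimal usage sketch, assuming a Power9 target with __float128
      enabled (e.g. -mfloat128); the wrapper function is illustrative:

      ```
      // Truncate a __float128 value to double with round-to-odd rounding,
      // so the narrowing does not introduce a second rounding step.
      double narrow_to_double(__float128 q) {
        return __builtin_truncf128_round_to_odd(q);
      }
      ```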
      
      Differential Revision: https://reviews.llvm.org/D48483
      
      llvm-svn: 336595
      58e3e0a8
    • Rui Ueyama's avatar
      lld: add experimental support for SHT_RELR sections. · 11479daf
      Rui Ueyama authored
      Patch by Rahul Chaudhry!
      
      This change adds experimental support for SHT_RELR sections, proposed
      here: https://groups.google.com/forum/#!topic/generic-abi/bX460iggiKg
      
      Pass '--pack-dyn-relocs=relr' to enable generation of SHT_RELR section
      and DT_RELR, DT_RELRSZ, and DT_RELRENT dynamic tags.
      
      Definitions for the new ELF section type and dynamic array tags, as well
      as the encoding used in the new section are all under discussion and are
      subject to change. Use with caution!
      
      Pass '--use-android-relr-tags' with '--pack-dyn-relocs=relr' to use
      SHT_ANDROID_RELR section type instead of SHT_RELR, as well as
      DT_ANDROID_RELR* dynamic tags instead of DT_RELR*. The generated
      section contents are identical.
      
      '--pack-dyn-relocs=android+relr --use-android-relr-tags' enables both
      '--pack-dyn-relocs=android' and '--pack-dyn-relocs=relr': lld will
      encode the relative relocations in a SHT_ANDROID_RELR section, and pack
      the rest of the dynamic relocations in a SHT_ANDROID_REL(A) section.
      
      Differential Revision: https://reviews.llvm.org/D48247
      
      llvm-svn: 336594
      11479daf
    • Mark Searles's avatar
      RenameIndependentSubregs: Fix handling of undef tied operands · 7139dea6
      Mark Searles authored
      Ensure that, when updating a tied operand pair, only that pair is
      updated.
      
      Differential Revision: https://reviews.llvm.org/D49052
      
      llvm-svn: 336593
      7139dea6
    • Alexey Bataev's avatar
      [OPENMP] Do not mark local variables as declare target. · c1943e75
      Alexey Bataev authored
      When the parsing of functions happens inside of a declare target
      region, we may erroneously mark local variables as declare target
      even though they are not. This attribute can be applied only to global
      variables.
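
      For illustration, a minimal sketch of the situation (identifiers are
      made up):

      ```
      #pragma omp declare target
      int device_counter;      // a global: may legitimately be declare target

      void work() {
        int tmp = 1;           // a local: must not be marked declare target
        device_counter += tmp;
      }
      #pragma omp end declare target
      ```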
      
      llvm-svn: 336592
      c1943e75
    • Alex Lorenz's avatar
      [libclang] NFC, simplify clang_Cursor_Evaluate · c4cf96e3
      Alex Lorenz authored
      Take advantage of early returns as suggested by Duncan in
      https://reviews.llvm.org/D49051
      
      llvm-svn: 336591
      c4cf96e3
    • Alex Lorenz's avatar
      [libclang] evaluate compound statement cursors before trying to evaluate · 81f157b4
      Alex Lorenz authored
      the cursor like a declaration
      
      This change fixes a bug in libclang in which it tries to evaluate a statement
      cursor as a declaration cursor, because that statement still has a pointer to
      the declaration parent.
      
      rdar://38888477
      
      Differential Revision: https://reviews.llvm.org/D49051
      
      llvm-svn: 336590
      81f157b4
    • Daniel Sanders's avatar
      [globalisel][irtranslator] Add support for atomicrmw and (strong) cmpxchg · 9481399c
      Daniel Sanders authored
      Summary:
      This patch adds support for the atomicrmw instructions and the strong
      cmpxchg instruction to the IRTranslator.
      
      I've left out weak cmpxchg because LangRef.rst isn't entirely clear on what
      difference it makes to the backend. As far as I can tell from the code, it
      only matters to AtomicExpandPass which is run at the LLVM-IR level.
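
      For reference, a C++-level sketch of operations that lower to the two
      instruction kinds handled here (a weak cmpxchg would come from
      compare_exchange_weak instead):

      ```
      #include <atomic>

      // Lowers to an 'atomicrmw add' instruction.
      int bump(std::atomic<int> &counter) {
        return counter.fetch_add(1);
      }

      // Lowers to a strong 'cmpxchg' instruction (no spurious failure).
      bool claim(std::atomic<int> &slot, int expected, int desired) {
        return slot.compare_exchange_strong(expected, desired);
      }
      ```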
      
      Reviewers: ab, t.p.northover, qcolombet, rovka, aditya_nandakumar, volkan, javed.absar
      
      Reviewed By: qcolombet
      
      Subscribers: kristof.beyls, javed.absar, igorb, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D40092
      
      llvm-svn: 336589
      9481399c
    • Mark Searles's avatar
      [AMDGPU][Waitcnt] fix "comparison of integers of different signs" build error · 5bfd8d89
      Mark Searles authored
      Build error on Android; reported, and fix provided, by Mauro Rossi <issor.oruam@gmail.com> (thanks).

      Fixes the following build error:
      
      external/llvm/lib/Target/AMDGPU/SIInsertWaitcnts.cpp:1903:61:
      error: comparison of integers of different signs:
      'typename iterator_traits<__wrap_iter<MachineBasicBlock **> >::difference_type'
      (aka 'int') and 'unsigned int' [-Werror,-Wsign-compare]
                            BlockWaitcntProcessedSet.end(), &MBB) < Count)) {
                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~
      1 error generated.
      
      Differential Revision: https://reviews.llvm.org/D49089
      
      llvm-svn: 336588
      5bfd8d89
    • Matt Arsenault's avatar
      AMDGPU: Force inlining if LDS global address is used · 40cb6cab
      Matt Arsenault authored
      These won't work for the foreseeable future. These aren't allowed
      from OpenCL, but IPO optimizations can make them appear.
      
      Also, directly set the attributes on functions regardless of the
      linkage, rather than cloning functions as before.
      
      llvm-svn: 336587
      40cb6cab
    • Jonathan Peyton's avatar
      Fix const cast problem introduced in r336563 · dc73f512
      Jonathan Peyton authored
      r336563 eliminated the CCAST() macros, which caused build failures.
      
      llvm-svn: 336586
      dc73f512
    • Roman Lebedev's avatar
      [X86][TLI] DAGCombine: Unfold variable bit-clearing mask to two shifts. · 5ccae175
      Roman Lebedev authored
      Summary:
      This adds a reverse transform for the instcombine canonicalizations
      that were added in D47980, D47981.
      
      As discussed later, that was worse at least for code size,
      and potentially for performance, too.
      
      https://rise4fun.com/Alive/Zmpl
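
      A C-level sketch of the equivalence behind the transform (an
      illustration only, assuming unsigned 32-bit x and 0 <= y < 32; the
      actual patch works on DAG patterns):

      ```
      // Bit-clearing-mask form (the instcombine canonical form):
      unsigned mask_form(unsigned x, unsigned y)  { return x & (~0u >> y); }

      // Two-shift form that this combine recreates when it is profitable:
      unsigned shift_form(unsigned x, unsigned y) { return (x << y) >> y; }

      // Both clear the y highest bits of x.
      ```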
      
      Reviewers: craig.topper, RKSimon, spatel
      
      Reviewed By: spatel
      
      Subscribers: reames, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D48768
      
      llvm-svn: 336585
      5ccae175
    • Eric Liu's avatar
      [Index] Ignore noop #undef's when handling macro occurrences. · 22a0c8db
      Eric Liu authored
      llvm-svn: 336584
      22a0c8db
    • Craig Topper's avatar
      [Builtins][Attributes][X86] Tag all X86 builtins with their required vector... · 74c10e32
      Craig Topper authored
      [Builtins][Attributes][X86] Tag all X86 builtins with their required vector width. Add a min_vector_width function attribute and tag all x86 intrinsics with it
      
      This is part of an ongoing attempt at making 512 bit vectors illegal in the X86 backend type legalizer due to CPU frequency penalties associated with wide vectors on Skylake Server CPUs. We want the loop vectorizer to be able to emit IR containing wide vectors as intermediate operations in vectorized code and allow these wide vectors to be legalized to 256 bits by the X86 backend even though we are targeting a CPU that supports 512 bit vectors. This is similar to what happens with an AVX2 CPU: the vectorizer can emit wide vectors and the backend will split them. We want this splitting behavior, but still be able to use new Skylake instructions that work on 256-bit vectors and support things like masking and gather/scatter.
      
      Of course if the user uses explicit vector code in their source code we need to not split those operations. Especially if they have used any of the 512-bit vector intrinsics from immintrin.h. And we need to make it so that merely using the intrinsics produces the expected code in order to be backwards compatible.
      
      To support this goal, this patch adds a new IR function attribute "min-legal-vector-width" that can indicate the need for a minimum vector width to be legal in the backend. We need to ensure this attribute is set to the largest vector width needed by any intrinsics from immintrin.h that the function uses. The inliner will be responsible for merging this attribute when a function is inlined. We may also need a way to limit inlining in the future as well, but we can discuss that in the future.
      
      To make things more complicated, there are two different ways intrinsics are implemented in immintrin.h: either as an always_inline function containing calls to builtins (which can be target specific or target independent) or vector extension code, or as a macro wrapper around a target specific builtin. I believe I've removed all cases where the macro was around a target independent builtin.
      
      To support the always_inline function case this patch adds __attribute__((min_vector_width(128))) that can be used to tag these functions with their vector width. All x86 intrinsic functions that operate on vectors have been tagged with this attribute.
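
      A sketch of what such a wrapper can look like (illustrative, not a
      verbatim copy of an immintrin.h intrinsic):

      ```
      typedef double __v4df __attribute__((__vector_size__(32)));

      // always_inline wrapper tagged so the compiler knows this function
      // requires 256-bit vectors to be legal.
      static __inline__ __v4df
          __attribute__((__always_inline__, __min_vector_width__(256)))
      my_add_pd256(__v4df __a, __v4df __b) {
        return __a + __b;
      }
      ```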
      
      To support the macro case, all x86 specific builtins have also been tagged with the vector width that they require. Use of any builtin with this property will implicitly increase the min_vector_width of the function that calls it. I've done this as a new property in the attribute string for the builtin rather than basing it on the type string so that we can opt into it on a per builtin basis and avoid any impact to target independent builtins.
      
      There will be future work to support vectors passed as function arguments and supporting inline assembly. And whatever else we can find that isn't covered by this patch.
      
      Special thanks to Chandler who suggested this direction and reviewed a preview version of this patch. And thanks to Eric Christopher who has had many conversations with me about this issue.
      
      Differential Revision: https://reviews.llvm.org/D48617
      
      llvm-svn: 336583
      74c10e32
    • Raphael Isemann's avatar
      Don't take the address of an xvalue when printing an expr result · b69854f0
      Raphael Isemann authored
      Summary:
      If we have an xvalue here, we will always hit the `err_typecheck_invalid_lvalue_addrof` error
      in 'Sema::CheckAddressOfOperand' when trying to take the address of the result. This patch
      uses the fallback code path where we store the result in a local variable instead when we hit
      this case.
      
      Fixes rdar://problem/40613277
      
      Reviewers: jingham, vsk
      
      Reviewed By: vsk
      
      Subscribers: vsk, friss, lldb-commits
      
      Differential Revision: https://reviews.llvm.org/D48303
      
      llvm-svn: 336582
      b69854f0
    • Eric Liu's avatar
      a62c9d62
    • Philip Pfaffe's avatar
      [Utils] Fix gdb pretty printers to work with Python 3. · 0566f235
      Philip Pfaffe authored
      Reiterate D23202 for container printers added after the change landed.
      
      Differential Revision: https://reviews.llvm.org/D46578
      
      llvm-svn: 336580
      0566f235
    • Stefan Pintilie's avatar
      [Power9] Add __float128 builtins for Round To Odd · 3dbde8a7
      Stefan Pintilie authored
      Add a number of builtins for __float128 Round To Odd.
      This is the Clang portion of the builtins work.
      
      Differential Revision: https://reviews.llvm.org/D47548
      
      llvm-svn: 336579
      3dbde8a7
    • Stefan Pintilie's avatar
      [Power9] Add __float128 builtins for Round To Odd · 83a5fe14
      Stefan Pintilie authored
      GCC has builtins for these round to odd instructions:
      
      __float128 __builtin_sqrtf128_round_to_odd (__float128)
      __float128 __builtin_{add,sub,mul,div}f128_round_to_odd (__float128, __float128)
      __float128 __builtin_fmaf128_round_to_odd (__float128, __float128, __float128)
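
      A minimal usage sketch, assuming a Power9 target with __float128
      enabled; the wrapper is illustrative:

      ```
      // Fused multiply-add on __float128 with round-to-odd rounding.
      __float128 fma_ro(__float128 a, __float128 b, __float128 c) {
        return __builtin_fmaf128_round_to_odd(a, b, c);
      }
      ```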
      
      Differential Revision: https://reviews.llvm.org/D47550
      
      llvm-svn: 336578
      83a5fe14
    • Maksim Panchenko's avatar
      [DebugInfo] Change default value of FDEPointerEncoding · fa762cc1
      Maksim Panchenko authored
      Summary:
      If the encoding is not specified in the CIE augmentation string, then it
      should be DW_EH_PE_absptr instead of DW_EH_PE_omit.
      
      Reviewers: ruiu, MaskRay, plotfi, rafauler
      
      Reviewed By: MaskRay
      
      Subscribers: rafauler, JDevlieghere, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D49000
      
      llvm-svn: 336577
      fa762cc1
    • Craig Topper's avatar
      [SelectionDAG] Add VT consistency checks to the creation of ISD::FMA. · e3b0c7e5
      Craig Topper authored
      This is similar to what is done for binops. I don't know if this would have helped us catch the bug fixed in r336566 earlier or not, but I figured it couldn't hurt.
      
      llvm-svn: 336576
      e3b0c7e5
    • Jonathan Peyton's avatar
      [OpenMP] Fix a few formatting issues · 61d44f18
      Jonathan Peyton authored
      llvm-svn: 336575
      61d44f18
    • Steven Wu's avatar
      Add bitcode compatibility test for 6.0 · a1a8e66a
      Steven Wu authored
      Summary:
      Add bitcode compatibility test for 6.0. On top of the normal disassembly
      test, this also runs the verifier to make sure simple 6.0 bitcode can pass
      the current IR verifier.
      
      Reviewers: vsk
      
      Reviewed By: vsk
      
      Subscribers: dexonsmith, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D49086
      
      llvm-svn: 336574
      a1a8e66a
    • Alex Shlyapnikov's avatar
      [ASan] Minor ASan error reporting cleanup · 63af9157
      Alex Shlyapnikov authored
      Summary:
      - use proper Error() decorator for error messages
      - refactor ASan thread id and name reporting
      
      Reviewers: eugenis
      
      Subscribers: kubamracek, delcypher, #sanitizers, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D49044
      
      llvm-svn: 336573
      63af9157
    • Diego Caballero's avatar
      [LoopInfo] Port loop exit interfaces from Loop to LoopBase · 29a07b37
      Diego Caballero authored
      This patch ports hasDedicatedExits, getUniqueExitBlocks and
      getUniqueExitBlock in Loop to LoopBase so that they can be used
      from other LoopBase sub-classes.
      
      Reviewers: chandlerc, sanjoy, hfinkel, fhahn
      
      Reviewed By: chandlerc
      
      Differential Revision: https://reviews.llvm.org/D48817
      
      llvm-svn: 336572
      29a07b37
    • Jonathan Peyton's avatar
      [OpenMP] Introduce hierarchical scheduling · f6399367
      Jonathan Peyton authored
      This patch introduces the logic implementing hierarchical scheduling.
      First and foremost, hierarchical scheduling is off by default.
      To enable it, use -DLIBOMP_USE_HIER_SCHED=On during CMake's configure stage.
      This work is based on the IWOMP paper:
      "Workstealing and Nested Parallelism in SMP Systems"
      
      Hierarchical scheduling is the layering of OpenMP schedules for different layers
      of the memory hierarchy. One can have multiple layers between the threads and
      the global iteration space. The threads will go up the hierarchy to grab
      iterations, possibly using a different schedule & chunk for each layer.
      
      [ Global iteration space (0-999) ]
      
      (use static)
      [ L1 | L1 | L1 | L1 ]
      
      (use dynamic,1)
      [ T0 T1 | T2 T3 | T4 T5 | T6 T7 ]
      
      In the example shown above, there are 8 threads and 4 L1 caches being targeted.
      If the topology indicates that there are two threads per core, then two
      consecutive threads will share the data of one L1 cache unit. This example
      would have the iteration space (0-999) split statically across the four L1
      caches (so the first L1 would get (0-249), the second would get (250-499), etc).
      Then the threads will use a dynamic,1 schedule to grab iterations from the L1
      cache units. There are currently four supported layers: L1, L2, L3, NUMA
      
      OMP_SCHEDULE can now read a hierarchical schedule with this syntax:
      OMP_SCHEDULE='EXPERIMENTAL LAYER,SCHED[,CHUNK][:LAYER,SCHED[,CHUNK]...]:SCHED,CHUNK'
      And OMP_SCHEDULE can still read the normal SCHED,CHUNK syntax from before.
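
      A sketch of how this could be driven from user code (the schedule
      string below is only meant to illustrate the grammar above, and the
      runtime must be built with -DLIBOMP_USE_HIER_SCHED=On):

      ```
      // Example environment setting (illustrative):
      //   OMP_SCHEDULE='EXPERIMENTAL L1,dynamic,1:static,250'

      void saxpy(int n, float a, const float *x, float *y) {
        #pragma omp parallel for schedule(runtime)
        for (int i = 0; i < n; ++i)
          y[i] = a * x[i] + y[i];
      }
      ```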
      
      I've kept most of the hierarchical scheduling logic inside kmp_dispatch_hier.h
      to try to keep it separate from the rest of the code.
      
      Differential Revision: https://reviews.llvm.org/D47962
      
      llvm-svn: 336571
      f6399367
    • Sanjay Patel's avatar
      [InstCombine] correct test comments; NFC · 651438c2
      Sanjay Patel authored
      llvm-svn: 336570
      651438c2
    • Alexey Bataev's avatar
      [OPENMP, NVPTX] Support several images in the executable. · 2622e9e5
      Alexey Bataev authored
      Summary:
      Currently the CUDA plugin supports loading only a single image, though we
      may have an executable with several images if it has target
      regions inside of a dynamically loaded library. This patch allows
      loading of multiple images.
      
      Reviewers: grokos
      
      Subscribers: guansong, openmp-commits, kkwli0
      
      Differential Revision: https://reviews.llvm.org/D49036
      
      llvm-svn: 336569
      2622e9e5
    • Jonathan Peyton's avatar
      [OpenMP] Restructure loop code for hierarchical scheduling · 39ada854
      Jonathan Peyton authored
      This patch reorganizes the loop scheduling code in order to allow hierarchical
      scheduling to use it more effectively. In particular, the goal of this patch
      is to separate the algorithmic parts of the scheduling from the thread
      logistics code.
      
      Moves declarations & structures to kmp_dispatch.h for easier access in
      other files.  Extracts the algorithmic part of __kmp_dispatch_init() and
      __kmp_dispatch_next() into __kmp_dispatch_init_algorithm() and
      __kmp_dispatch_next_algorithm(). The thread bookkeeping logic is still kept in
      __kmp_dispatch_init() and __kmp_dispatch_next(). This is done because the
      hierarchical scheduler needs to access the scheduling logic without the
      bookkeeping logic. To prepare for a new pointer in dispatch_private_info_t, a
      new flags variable is created which stores the ordered and nomerge flags instead
      of keeping them in two separate variables. This will keep the
      dispatch_private_info_t structure the same size.
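
      A sketch of the flags packing described above (hypothetical field
      names; the real definitions live in kmp_dispatch.h):

      ```
      // One word holds what used to be two separate booleans, so adding the
      // new pointer does not grow dispatch_private_info_t.
      typedef struct dispatch_flags {
        unsigned ordered : 1;
        unsigned nomerge : 1;
        unsigned unused : 30;
      } dispatch_flags_t;
      ```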
      
      Differential Revision: https://reviews.llvm.org/D47961
      
      llvm-svn: 336568
      39ada854
    • Alexey Bataev's avatar
      [OPENMP, NVPTX] Do not globalize local variables in parallel regions. · b99dcb5f
      Alexey Bataev authored
      In generic data-sharing mode we are allowed to not globalize local
      variables that escape their declaration context iff they are declared
      inside of the parallel region. We can do this because L2 parallel
      regions are executed sequentially and, thus, we do not need to put
      shared local variables in the global memory.
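
      An illustrative sketch of the distinction (OpenMP C++, made-up names):

      ```
      void kernel() {
      #pragma omp target teams
        {
          int outer = 0;          // escapes into a parallel region but is
                                  // declared outside it: may need globalizing
      #pragma omp parallel        // L1 parallel region
          {
            int inner = 1;        // declared inside the parallel region:
                                  // can stay local, because the L2 region
                                  // below runs sequentially
      #pragma omp parallel        // L2 parallel region
            {
              inner += outer;
            }
          }
        }
      }
      ```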
      
      llvm-svn: 336567
      b99dcb5f
    • Craig Topper's avatar
      [X86] In combineFMA, make sure we bitcast the result of isFNEG back the... · 47170b31
      Craig Topper authored
      [X86] In combineFMA, make sure we bitcast the result of isFNEG back to the expected type before creating the new FMA node.
      
      Previously, we were creating malformed SDNodes, but nothing complained because the type constraints prevented isel from noticing.
      
      llvm-svn: 336566
      47170b31
    • Simon Pilgrim's avatar
      [X86][AVX] Regenerate AVX1 fast-isel tests. · d0706592
      Simon Pilgrim authored
      Let the update script merge 32/64 tests where possible
      
      llvm-svn: 336565
      d0706592
    • Stella Stamenova's avatar
      Retrieve a function PDB symbol correctly from nested blocks · 67a19dfb
      Stella Stamenova authored
      Summary:
      This patch fixes a problem with retrieving a function symbol by an address in a nested block. The current implementation of the ResolveSymbolContext function retrieves a symbol with PDB_SymType::None and then checks whether the found symbol's tag equals PDB_SymType::Function. So, if a nested block's symbol was found, ResolveSymbolContext does not resolve the function.
      
      It is very simple to reproduce this. For example, in the following program
      
      ```
      int main() {
        auto r = 0;
        for (auto i = 1; i <= 10; i++) {
          r += i & 1 + (i - 1) & 1 - 1;
        }
      
        return r;
      }
      ```
      
      if we stop inside the loop and do a backtrace, the top element will be broken. But how can we test this? I thought about adding an option to lldb-test to allow searching for a function by address, but the address may change when the compiler changes.
      
      Patch by: Aleksandr Urakov
      
      Reviewers: asmith, labath, zturner
      
      Reviewed By: asmith, labath
      
      Subscribers: stella.stamenova, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D47939
      
      llvm-svn: 336564
      67a19dfb
    • Jonathan Peyton's avatar
      [OpenMP] Use C++11 Atomics - barrier, tasking, and lock code · 37e2ef54
      Jonathan Peyton authored
      These are preliminary changes that attempt to use C++11 Atomics in the runtime.
      We are expecting better portability with this change across architectures/OSes.
      Here is the summary of the changes.
      
      Most variables that need synchronization operations were converted to generic
      atomic variables (std::atomic<T>). Variables that are updated with a combined CAS
      are packed into a single atomic variable, and partial reads/writes are done
      through unpacking/packing.
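
      A sketch of the packing idea (hypothetical names, not the runtime's
      actual types):

      ```
      #include <atomic>
      #include <cstdint>

      // Two 32-bit fields that must be updated together with one CAS are
      // packed into a single 64-bit atomic; readers unpack the half they need.
      static std::atomic<std::uint64_t> packed{0};

      static std::uint64_t pack(std::uint32_t lo, std::uint32_t hi) {
        return (static_cast<std::uint64_t>(hi) << 32) | lo;
      }

      static bool try_update(std::uint32_t new_lo, std::uint32_t new_hi) {
        std::uint64_t expected = packed.load(std::memory_order_relaxed);
        return packed.compare_exchange_strong(expected, pack(new_lo, new_hi));
      }
      ```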
      
      Patch by Hansang Bae
      
      Differential Revision: https://reviews.llvm.org/D47903
      
      llvm-svn: 336563
      37e2ef54
    • Sanjay Patel's avatar
      [InstCombine] avoid extra poison when moving shift above shuffle · 7cd32419
      Sanjay Patel authored
      As discussed in D49047 / D48987, shift-by-undef produces poison,
      so we can't use undef vector elements in that case.

      Note that we need to extend this for poison-generating flags,
      and there's a proposal to create poison from FMF in D47963.
      
      llvm-svn: 336562
      7cd32419
    • Jonas Devlieghere's avatar
      [dsymutil] Add support for outputting assembly · 82dee6ac
      Jonas Devlieghere authored
      When implementing the DWARF accelerator tables in dsymutil I ran into an
      assertion in the assembler. Debugging these kinds of issues is a lot
      easier when looking at the assembly instead of debugging the assembler
      itself. Since it's only a matter of creating an AsmStreamer instead of an
      MCObjectStreamer, it made sense to turn this into a (hidden) dsymutil
      feature.
      
      Differential revision: https://reviews.llvm.org/D49079
      
      llvm-svn: 336561
      82dee6ac
    • Steven Wu's avatar
      [BitcodeReader] Infer the correct runtime preemption for GlobalValue · e1f7c5f8
      Steven Wu authored
      Summary:
      To allow bitcode built by an old compiler to pass the current verifier,
      the BitcodeReader needs to auto-infer the correct runtime preemption from
      linkage and visibility for GlobalValues.

      Since llvm-6.0 bitcode already contains the new field but can be
      incorrect in some cases, the attribute needs to be recomputed every
      time in the BitcodeReader. This will make all GVs have dso_local marked
      correctly when read from bitcode, and it should still allow the verifier
      to catch mistakes in optimization passes.
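
      A sketch of the kind of inference described, as a hypothetical
      standalone helper (not the actual BitcodeReader code):

      ```
      enum class Linkage { Private, Internal, External, ExternalWeak };
      enum class Visibility { Default, Hidden, Protected };

      // Local linkage, or non-default visibility on anything that is not
      // extern_weak, means the symbol cannot be preempted at runtime, so it
      // can safely be marked dso_local.
      static bool inferDSOLocal(Linkage L, Visibility V) {
        if (L == Linkage::Private || L == Linkage::Internal)
          return true;
        return V != Visibility::Default && L != Linkage::ExternalWeak;
      }
      ```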
      
      This should fix PR38009.
      
      Reviewers: sfertile, vsk
      
      Reviewed By: vsk
      
      Subscribers: dexonsmith, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D49039
      
      llvm-svn: 336560
      e1f7c5f8
    • Zaara Syeda's avatar
      [PPC64] Add TLS local dynamic to local exec relaxation · 75c348a0
      Zaara Syeda authored
      This patch adds the target callback relaxTlsLdToLe to support TLS relaxation
      from local dynamic to local exec model.
      
      Differential Revision: https://reviews.llvm.org/D48293
      
      llvm-svn: 336559
      75c348a0
    • Sanjay Patel's avatar
      [InstCombine] generalize safe vector constant utility · a6272531
      Sanjay Patel authored
      This is almost NFC, but there could be some cases where the original
      code had undefs in the constants (rather than just the shuffle mask),
      and we'll use safe constants rather than undefs now.
      
      The FIXME noted in foldShuffledBinop() is already visible in existing
      tests, so correcting that is the next step.
      
      llvm-svn: 336558
      a6272531
    • Craig Topper's avatar
      [X86] Remove some patterns that include a bitcast of a floating point load to an integer type. · e9cff7d4
      Craig Topper authored
      DAG combine should have converted the type of the load.
      
      llvm-svn: 336557
      e9cff7d4
    • Craig Topper's avatar
      [X86] Remove some patterns that seems to be unreachable. · 16ee4b49
      Craig Topper authored
      These patterns mapped (v2f64 (X86vzmovl (v2f64 (scalar_to_vector FR64:$src)))) to a MOVSD and a zeroing XOR. But the complexity of a pattern for (v2f64 (X86vzmovl (v2f64))) that selects MOVQ is artificially high and hides this MOVSD pattern.

      Weirder still, the SSE version of the pattern was explicitly blocked on SSE41, yet we had copied it to AVX and AVX512.
      
      llvm-svn: 336556
      16ee4b49