  Sep 10, 2019
  Sep 09, 2019
    • [Tests] Add anyextend tests for unordered atomics · 48453bb8
      Philip Reames authored
      Motivated by work on changing our representation of unordered atomics
      in SelectionDAG. As an aside, all of our lowerings for O3 are
      terrible, even the ones which ignore the atomicity.
      
      llvm-svn: 371449
    • Introduce infrastructure for an incremental port of SelectionDAG atomic load/store handling · 20aafa31
      Philip Reames authored
      This is the first patch in a large sequence. The eventual goal is to have unordered atomic loads and stores - and possibly ordered atomics as well - handled through the normal ISEL codepaths for loads and stores. Today, they're handled with instances of AtomicSDNode, and as a result all transforms need to be duplicated to work for unordered atomics. The benefit of the current design is that it's harder to introduce a silent miscompile by adding a transform which forgets about atomicity. See the thread on llvm-dev titled "FYI: proposed changes to atomic load/store in SelectionDAG" for further context.
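
      To make the duplication concrete, here is a minimal C++ sketch of the
      guard each ported transform ends up needing once LoadSDNode can carry
      an atomic MMO (the predicate names follow the SelectionDAG API, but
      treat the exact spelling as illustrative):

          // Sketch: a DAG combine that must not fire on volatile or atomic
          // loads. Once atomics flow through LoadSDNode, checking
          // isVolatile() alone is no longer enough.
          if (LoadSDNode *LD = dyn_cast<LoadSDNode>(N)) {
            if (LD->isVolatile() || LD->isAtomic())
              return SDValue();   // conservatively skip the transform
            // ... perform the ordinary load transform ...
          }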
      
      Note that this patch is NFC unless the experimental flag is set.
      
      The basic strategy I plan on taking is:
      
          1. Introduce infrastructure and a flag for testing (this patch).
          2. Audit uses of isVolatile, and apply isAtomic conservatively*.
          3. Piecemeal, conservatively* update generic code and x86 backend
             code in individual reviews, with tests, for cases which didn't
             check volatile but can be found by inspection.
          4. Flip the flag at the end (with minimal diffs).
          5. Work through the todo list identified in (2) and (3), exposing
             performance opportunities.
      
      (*) The "conservative" bit here is aimed at minimizing the number of diffs involved in (4). Ideally, there'd be none. In practice, getting it down to something reviewable by a human is the actual goal. Note that there are (currently) no paths which produce LoadSDNode or StoreSDNode with atomic MMOs, so we don't need to worry about preserving any behaviour there.
      
      We've taken a very similar strategy twice before with success - once at the IR level, and once at the MI level (post ISEL).
      
      Differential Revision: https://reviews.llvm.org/D66309
      
      llvm-svn: 371441
    • AMDGPU/GlobalISel: Legalize G_BUILD_VECTOR v2s16 · a0933e6d
      Matt Arsenault authored
      Handle it the same way as G_BUILD_VECTOR_TRUNC. Arguably only
      G_BUILD_VECTOR_TRUNC should be legal for this, but G_BUILD_VECTOR will
      probably be more convenient in most cases.
      
      llvm-svn: 371440
    • AMDGPU: Make VReg_1 size be 1 · 8bc05d7d
      Matt Arsenault authored
      This was getting chosen as the preferred 32-bit register class based
      on how TableGen selects subregister classes.
      
      llvm-svn: 371438
    • AMDGPU/GlobalISel: Select llvm.amdgcn.class · 77e3e9ca
      Matt Arsenault authored
      Also fixes missing SubtargetPredicate on f16 class instructions.
      
      llvm-svn: 371436
    • AMDGPU/GlobalISel: Select fmed3 · d6c1f5bb
      Matt Arsenault authored
      llvm-svn: 371435
    • [IfConversion] Correctly handle cases where analyzeBranch fails. · 79f0d3a6
      Eli Friedman authored
      On some targets, when analyzeBranch fails, the out parameters may
      still point to blocks in the function. We can't use that information,
      so make sure to clear it out. (In some places in IfConversion, we
      assume that any block with a TrueBB is analyzable.)
      
      The change to the testcase makes it trigger a bug on builds without this
      fix: IfConvertDiamond tries to perform a followup "merge" operation,
      which isn't legal, and we somehow end up with a branch to a deleted MBB.
      I'm not sure how this doesn't crash the compiler.
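
      A minimal sketch of the defensive pattern described above (standard
      TargetInstrInfo::analyzeBranch usage; the surrounding variable names
      are hypothetical):

          // analyzeBranch returns true on failure, but some targets may
          // have written to the out parameters anyway; reset them so no one
          // mistakes the block for an analyzable one.
          MachineBasicBlock *TBB = nullptr, *FBB = nullptr;
          SmallVector<MachineOperand, 4> Cond;
          if (TII->analyzeBranch(*MBB, TBB, FBB, Cond)) {
            TBB = FBB = nullptr;
            Cond.clear();
          }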
      
      Differential Revision: https://reviews.llvm.org/D67306
      
      llvm-svn: 371434
    • [x86] add test for false dependency with minsize (PR43239); NFC · c195bde3
      Sanjay Patel authored
      llvm-svn: 371433
    • AMDGPU: Use PatFrags to allow selecting custom nodes or intrinsics · 6ebf6058
      Matt Arsenault authored
      This enables GlobalISel to handle various intrinsics. The custom node
      pattern will be ignored, and the intrinsic will work. This will also
      allow SelectionDAG to directly select the intrinsics, but as they are
      all custom lowered to the nodes, this ends up leaving dead code in the
      table.
      
      Eventually either GlobalISel should add an equivalent of custom
      nodes, or intrinsics should be used directly. These each have
      different tradeoffs.
      
      There are a few more to handle, but these are the easy ones; some
      others fail for other reasons.
      
      llvm-svn: 371432
    • [X86] Allow _MM_FROUND_CUR_DIRECTION and _MM_FROUND_NO_EXC to be used together on instructions that only support SAE and not embedded rounding. · ce2cb0f0
      Craig Topper authored
      Currently, for SAE instructions, we only allow _MM_FROUND_CUR_DIRECTION (bit 2) or _MM_FROUND_NO_EXC (bit 3) to be used as the immediate passed to the intrinsics. But these instructions don't perform rounding, so _MM_FROUND_CUR_DIRECTION is just a default placeholder for when you don't want to suppress exceptions. Using _MM_FROUND_NO_EXC by itself is really bit-equivalent to (_MM_FROUND_NO_EXC | _MM_FROUND_TO_NEAREST_INT), since _MM_FROUND_TO_NEAREST_INT is 0. Since we aren't rounding on these instructions, we should also accept (_MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC) as equivalent to (_MM_FROUND_NO_EXC). icc allows this, but gcc does not.
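
      The bit arithmetic, using the immediate values documented in the
      Intel intrinsics headers:

          #include <immintrin.h>
          // _MM_FROUND_TO_NEAREST_INT = 0x00   (rounding mode, bits 0-1)
          // _MM_FROUND_CUR_DIRECTION  = 0x04   (bit 2)
          // _MM_FROUND_NO_EXC         = 0x08   (bit 3)
          // Because _MM_FROUND_TO_NEAREST_INT is 0, OR-ing it in changes
          // nothing, and CUR_DIRECTION | NO_EXC is the combination this
          // patch now also accepts for SAE-only instructions:
          static_assert((_MM_FROUND_NO_EXC | _MM_FROUND_TO_NEAREST_INT) ==
                        _MM_FROUND_NO_EXC, "TO_NEAREST_INT is the zero value");
          static_assert((_MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC) == 0x0C,
                        "the newly accepted SAE immediate");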
      
      Differential Revision: https://reviews.llvm.org/D67289
      
      llvm-svn: 371430
    • AMDGPU: Move MnemonicAlias out of instruction def hierarchy · d2a9516a
      Matt Arsenault authored
      Unfortunately MnemonicAlias defines a "Predicates" field just like an
      instruction or pattern, with a somewhat different interpretation.
      
      This ends up overriding the intended Predicates set by
      PredicateControl on the pseudoinstruction definitions with an empty
      list. This allowed incorrectly selecting instructions that should have
      been rejected due to the SubtargetPredicate from patterns on the
      instruction definition.
      
      This does remove the divergent predicate from the 64-bit shift
      patterns, which were already not used for the 32-bit shift, so I'm not
      sure what the point was. This also removes a second, redundant copy of
      the 64-bit divergent patterns.
      
      llvm-svn: 371427
    • [GlobalISel][AArch64] Handle tail calls with non-void return types · bfb00e3d
      Jessica Paquette authored
      Just return once you emit the call, which is exactly what SelectionDAG does in
      this situation.
      
      Update call-translator-tail-call.ll.
      
      Also update dllimport.ll to show that we tail call here in GISel again. Add
      -verify-machineinstrs to the GISel line too, to defend against verifier
      failures.
      
      Differential revision: https://reviews.llvm.org/D67282
      
      llvm-svn: 371425
    • AMDGPU/GlobalISel: Implement LDS G_GLOBAL_VALUE · 64ecca90
      Matt Arsenault authored
      Handle the simple case that lowers to a constant.
      
      llvm-svn: 371424
    • AMDGPU/GlobalISel: Legalize G_BUILD_VECTOR_TRUNC · 182f9248
      Matt Arsenault authored
      Treat this as legal on gfx9 since it can use S_PACK_* instructions for
      this.
      
      This isn't used by anything yet. The same will probably apply to
      16-bit G_BUILD_VECTOR without the trunc.
      
      llvm-svn: 371423
    • Revert "[MachineCopyPropagation] Remove redundant copies after TailDup via machine-cp" · d9c4060b
      Dmitri Gribenko authored
      This reverts commit 371359. I suspect a miscompile; I posted a
      reproducer to https://reviews.llvm.org/D65267.
      
      llvm-svn: 371421
    • [ARM] Fix loads and stores for predicate vectors · 2b708994
      David Green authored
      These predicate vectors can usually be loaded and stored with a
      single instruction, a VSTR_P0. However, this instruction will store
      the entire P0 predicate, 16 bits, zero-extended to 32 bits, with each
      lane of the v4i1/v8i1/v16i1 represented by 4/2/1 bits.
      
      As far as I understand, when llvm says "store this v4i1", it really
      does need to store 4 bits (or 8, that being the size of a byte, with
      the bottom 4 as the interesting bits). For example, a bitcast from a
      v8i1 to an i8 is defined as a store followed by a load, which is how
      the code is expanded.
      
      So this instead lowers the v4i1/v8i1 load/store through some shuffles
      to get the bits into the correct positions. This, as you might
      imagine, is not as efficient as a single instruction, but I believe
      it is needed for correctness. A v16i1 equally should not load/store
      32 bits, only the 16 bits of data. Stack loads/stores still use
      VSTR_P0 (as can be seen by the test not changing). This is fine, as
      they are self-consistent; it is only "externally observable
      loads/stores" (from our point of view) that need to be corrected.
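
      A scalar C++ model of what the new lowering must achieve for a v4i1
      (the helper name is hypothetical; MVE's P0 register holds 16
      predicate bits, 4 per lane of a v4i1):

          #include <cstdint>
          // Pack one bit per lane (4 duplicated predicate bits each) into
          // the low 4 bits that an externally observable v4i1 store needs.
          uint8_t packV4I1(uint16_t p0) {
            uint8_t out = 0;
            for (int lane = 0; lane < 4; ++lane)
              out |= ((p0 >> (lane * 4)) & 1u) << lane;
            return out;   // bits 4-7 stay zero
          }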
      
      Differential revision: https://reviews.llvm.org/D67085
      
      llvm-svn: 371419
    • AMDGPU/GlobalISel: Select atomic loads · 63e6d8db
      Matt Arsenault authored
      A new check for an explicitly atomic MMO is needed to avoid
      incorrectly matching the pattern for non-atomic loads.
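
      Roughly, the check amounts to the following (a sketch against
      MachineMemOperand; the surrounding selection code is elided):

          // Only match the atomic-load pattern when the memory operand is
          // actually atomic; plain loads must take the ordinary path.
          const MachineMemOperand *MMO = *MI.memoperands_begin();
          if (!MMO->isAtomic())
            return false;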
      
      llvm-svn: 371418
    • d8409b17
    • AMDGPU/GlobalISel: Fix regbankselect for uniform extloads · 02eb3083
      Matt Arsenault authored
      There are no scalar extloads.
      
      llvm-svn: 371414
    • AMDGPU: Remove code address space predicates · ebbd6e49
      Matt Arsenault authored
      Fixes 8-byte, 8-byte aligned LDS loads. The 16-byte case is still
      broken due to not being reported as legal.
      
      llvm-svn: 371413
    • AMDGPU/GlobalISel: Select G_PTR_MASK · c34b4036
      Matt Arsenault authored
      llvm-svn: 371412
    • AMDGPU/GlobalISel: Fix reg bank for uniform LDS loads · fdb70301
      Matt Arsenault authored
      The pointer is always a VGPR. Also fix the pointer size being
      hardcoded to 64.
      
      llvm-svn: 371411
    • AMDGPU/GlobalISel: Use known bits for selection · 2dd088ec
      Matt Arsenault authored
      llvm-svn: 371409
    • AMDGPU/GlobalISel: Legalize wavefrontsize intrinsic · 8e3bc9b5
      Matt Arsenault authored
      llvm-svn: 371407
    • [DFAPacketizer] Reapply: Track resources for packetized instructions · b6c7fce6
      James Molloy authored
      Reapply with fix to reduce resources required by the compiler - use
      unsigned[2] instead of std::pair. This causes clang and gcc to compile
      the generated file multiple times faster, and hopefully will reduce
      the resource requirements on Visual Studio also. This fix is a little
      ugly but it's clearly the same issue the previous author of
      DFAPacketizer faced (the previous tables use unsigned[2] rather uglily
      too).
      
      This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
      resources were allocated to the packetized instructions.
      
      This is particularly important for targets that do their own bundle packing - it's not
      sufficient to know simply that instructions can share a packet; which slots are used is
      also required for encoding.
      
      This extends the emitter to emit a side-table containing resource usage diffs for each
      state transition. The packetizer maintains a set of all possible resource states in its
      current state. After packetization is complete, all remaining resource states are
      possible packetization strategies.
      
      The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
      (most uses of the packetizer like MachinePipeliner don't care and don't need the extra
      maintained state).
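
      A toy C++ model of the tracking (names and shapes are illustrative,
      not the generated API):

          #include <array>
          #include <set>
          #include <utility>
          #include <vector>
          using ResourceState = std::array<unsigned, 2>;  // per-slot usage
          using Transition = std::pair<ResourceState, ResourceState>;
          std::set<ResourceState> Possible = {{{0, 0}}};  // viable states
          // Apply the side-table diffs for one packetized instruction:
          // keep only resource states reachable from a still-viable state.
          void addInstruction(const std::vector<Transition> &Diffs) {
            std::set<ResourceState> Next;
            for (const Transition &T : Diffs)
              if (Possible.count(T.first))
                Next.insert(T.second);
            Possible = std::move(Next);  // survivors = possible packings
          }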
      
      Differential Revision: https://reviews.llvm.org/D66936
      
      llvm-svn: 371399
    • [ARM][MVE] VCTP instruction selection · 1ad508e8
      Sam Parker authored
      Add codegen support for vctp{8,16,32}.
      
      Differential Revision: https://reviews.llvm.org/D67344
      
      llvm-svn: 371395
    • Revert rL371198 from llvm/trunk: [DFAPacketizer] Track resources for packetized instructions · 462e3d80
      Simon Pilgrim authored
      This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
      resources were allocated to the packetized instructions.
      
      This is particularly important for targets that do their own bundle packing - it's not
      sufficient to know simply that instructions can share a packet; which slots are used is
      also required for encoding.
      
      This extends the emitter to emit a side-table containing resource usage diffs for each
      state transition. The packetizer maintains a set of all possible resource states in its
      current state. After packetization is complete, all remaining resource states are
      possible packetization strategies.
      
      The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
      (most uses of the packetizer like MachinePipeliner don't care and don't need the extra
      maintained state).
      
      Differential Revision: https://reviews.llvm.org/D66936
      ........
      Reverted as this is causing "compiler out of heap space" errors on MSVC 2017/19 NDEBUG builds
      
      llvm-svn: 371393
    • [AArch64][SVE] Implement abs and neg intrinsics · 55244bee
      Cullen Rhodes authored
      Summary:
      This patch implements two arithmetic intrinsics:
      
            * int_aarch64_sve_abs
            * int_aarch64_sve_neg
      
      These test the support for scalable vector types in intrinsics added
      in D65930.
      
      Reviewed By: greened
      
      Differential Revision: https://reviews.llvm.org/D65931
      
      llvm-svn: 371388
    • GlobalISel: add combiner to form indexed loads. · 36147adc
      Tim Northover authored
      Loosely based on the DAGCombiner version, but this part is slightly
      simpler in GlobalISel because all address calculation is performed by
      G_GEP. That makes the inc/dec distinction moot, so there's just
      pre/post to think about.
      
      No targets can handle it yet so testing is via a special flag that overrides
      target hooks.
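
      In C terms, the combiner folds an address update into the memory
      access itself, which is what pre/post-indexed addressing modes
      encode:

          // pre-indexed:  update the base first, then load through it
          int load_pre(int *&p)  { p += 1; return *p; }
          // post-indexed: load through the old base, then update it
          int load_post(int *&p) { int x = *p; p += 1; return x; }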
      
      llvm-svn: 371384