- Nov 07, 2017
-
-
Graham Yiu authored
Use new vector insert half-word and byte instructions when we see insertelement on '8 x i16' and '16 x i8' types. Also extended existing lit testcase to cover these cases. Differential Revision: https://reviews.llvm.org/D34630 llvm-svn: 317613
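For illustration (not code from the patch; the lane index and function name are invented): clang's vector extension turns an element store into exactly the kind of insertelement this change targets.

  // Hypothetical example: an element store on a 16-byte vector of i16
  // becomes 'insertelement <8 x i16>', which can now select the Power9
  // vector insert half-word instruction instead of a vperm-based sequence.
  typedef short v8i16 __attribute__((vector_size(16)));

  v8i16 setLane3(v8i16 V, short X) {
    V[3] = X; // -> insertelement <8 x i16> %V, i16 %X, i32 3
    return V;
  }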
-
Paul Robinson authored
Supporting this form in .debug_line.dwo will be done as a follow-up. Differential Revision: https://reviews.llvm.org/D33155 llvm-svn: 317607
-
Craig Topper authored
The Hexagon test should be fixed now. Original commit message: This pulls shifts through a select+binop with a constant where the select conditionally executes the binop. We already do this for just the binop, but not with the select. This can allow us to get the select closer to other selects to enable removing one. Differential Revision: https://reviews.llvm.org/D39222 llvm-svn: 317600
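A sketch of the equivalence the transform relies on (our own example with C1 = 5 and C2 = 2, not code from the patch): the shift distributes over the conditionally executed add modulo 2^32.

  #include <cassert>
  #include <cstdint>

  // shl (select Cond, (add X, C1), X), C2
  //   -> select Cond, (add (shl X, C2), C1 << C2), (shl X, C2)
  uint32_t before(bool Cond, uint32_t X) { return (Cond ? X + 5 : X) << 2; }
  uint32_t after(bool Cond, uint32_t X) {
    uint32_t S = X << 2;              // shift hoisted above the select
    return Cond ? S + (5u << 2) : S;  // constant folded: 5 << 2 == 20
  }

  int main() {
    for (uint32_t X : {0u, 1u, 123u, 0xFFFFFFFFu})
      for (bool Cond : {false, true})
        assert(before(Cond, X) == after(Cond, X));
  }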
-
Craig Topper authored
Datalayout is no longer optional so the comment didn't match what the code currently does. llvm-svn: 317594
-
Krzysztof Parzyszek authored
An "or" that sets the sign-bit can be replaced with a "xor", if the sign-bit was known to be clear before. With some changes to instruction combining, the simple sign-bit check was failing. Replace it with a more flexible one to catch more cases. llvm-svn: 317592
-
Florian Hahn authored
Patch [5/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions. Patch by Sander De Smalen. Reviewed by: rengolin Differential Revision: https://reviews.llvm.org/D39091 llvm-svn: 317591
-
Florian Hahn authored
Patch [3/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions. To summarise, this patch adds:
* SVE register definitions
* Methods to parse SVE register operands
* Methods to print SVE register operands
* RegKind SVEDataVector to distinguish it from other data types like scalar register or Neon vector
* k_SVEDataRegister and SVEDataRegOp to describe SVE registers (which will be extended by further patches with e.g. ElementWidth and the shift-extend type)
Patch by Sander De Smalen. Reviewed by: rengolin Differential Revision: https://reviews.llvm.org/D39089 llvm-svn: 317590
-
Craig Topper authored
llvm-svn: 317588
-
Florian Hahn authored
Patch [4/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions. We add SVE as an unsupported feature for CPUs that don't have SVE, to prevent errors from scheduler models saying they lack information for these instructions. Patch by Sander De Smalen. Reviewed by: rengolin Differential Revision: https://reviews.llvm.org/D39090 llvm-svn: 317582
-
Petar Jovanovic authored
Reland r317100 with a minor fix regarding the ComputeCommonTailLength function in BranchFolding.cpp: skipping the top block of CFI instructions needs to be executed at several more return points in ComputeCommonTailLength().

Original r317100 message: "Correct dwarf unwind information in function epilogue for X86"

This patch aims to provide correct dwarf unwind information in function epilogue for X86. It consists of two parts.

The first part inserts CFI instructions that set appropriate cfa offset and cfa register in emitEpilogue() in X86FrameLowering. This part is X86 specific.

The second part is platform independent and ensures that:
- CFI instructions do not affect code generation
- Unwind information remains correct when a function is modified by different passes
This is done in a late pass by analyzing information about cfa offset and cfa register in BBs and inserting additional CFI directives where necessary.

Changed CFI instructions so that they:
- are duplicable
- are not counted as instructions when tail duplicating or tail merging
- can be compared as equal

Added CFIInstrInserter pass:
- analyzes each basic block to determine cfa offset and register valid at its entry and exit
- verifies that outgoing cfa offset and register of predecessor blocks match incoming values of their successors
- inserts additional CFI directives at basic block beginning to correct the rule for calculating CFA

Having CFI instructions in function epilogue can cause an incorrect CFA calculation rule for some basic blocks. This can happen if, due to basic block reordering or the existence of multiple epilogue blocks, some of the blocks have wrong cfa offset and register values set by the epilogue block above them.

CFIInstrInserter is currently run only on X86, but can be used by any target that implements support for adding CFI instructions in epilogue.

Patch by Violeta Vukobrat. llvm-svn: 317579
-
Alexey Bataev authored
Summary: The cost calculation for the default case on the X86 target does not always follow the correct path because of a missing 4th argument in the `BaseT::getCastInstrCost()` call. Added this missing parameter. Reviewers: hfinkel, mkuper, RKSimon, spatel Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D39687 llvm-svn: 317576
-
Kristof Beyls authored
... to silence gcc 7's default -Wimplicit-fallthrough. llvm-svn: 317573
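For context, the usual way to silence this warning in LLVM is the LLVM_FALLTHROUGH macro from llvm/Support/Compiler.h; a minimal sketch (the switch below is invented, not the changed code):

  #include "llvm/Support/Compiler.h"

  int classify(int Kind) {
    switch (Kind) {
    case 0:
      ++Kind;           // handle case 0, then deliberately fall through
      LLVM_FALLTHROUGH; // annotates the intentional fall-through and
                        // silences gcc 7's -Wimplicit-fallthrough
    case 1:
      return Kind;
    default:
      return -1;
    }
  }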
-
Florian Hahn authored
Patch [2/5] in a series to add assembler/disassembler support for AArch64 SVE unpredicated ADD/SUB instructions. This is a non-functional change that adds RegKind as an alternative to 'isVector', to prepare for newer types (SVE data vectors and predicate vectors) that will be added in subsequent patches (the SVE data vector is added as part of this patch set). Patch by Sander De Smalen. Reviewed by: rengolin Differential Revision: https://reviews.llvm.org/D39088 llvm-svn: 317569
-
Kristof Beyls authored
The warning started triggering after r317560. This commit silences it in the same way as previously done in a similar situation, see http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20140915/236088.html llvm-svn: 317568
-
Kristof Beyls authored
This changes the interface of how targets describe how to legalize, see the below description.

1. Interface for targets to describe how to legalize.

In GlobalISel, the API in the LegalizerInfo class is the main interface for targets to specify which types are legal for which operations, and what to do to turn illegal type/operation combinations into legal ones.

For each operation, the type sizes that can be legalized without having to change the size of the type are specified with a call to setAction. This isn't different to how GlobalISel worked before. For example, for a target that supports 32 and 64 bit adds natively:

  for (auto Ty : {s32, s64})
    setAction({G_ADD, 0, Ty}, Legal);

or for a target that needs a library call for a 32 bit division:

  setAction({G_SDIV, s32}, Libcall);

The main conceptual change to the LegalizerInfo API is in specifying how to legalize the type sizes for which a change of size is needed: in the above example, how to specify that all types from i1 to i8388607 (apart from s32 and s64, which are legal) need to be legalized and expressed in terms of operations on the available legal sizes (again, i32 and i64 in this case).

Before, the implementation only allowed specifying power-of-2-sized types (e.g. setAction({G_ADD, 0, s128}, NarrowScalar)). A worse limitation was that if you wanted to specify how to legalize all the sized types as allowed by the LLVM-IR LangRef, i1 to i8388607, you'd have to call setAction 8388607-3 times and probably would need a lot of memory to store all of these specifications.

Instead, the legalization actions that need to change the size of the type are now specified using a "SizeChangeStrategy". For example:

  setLegalizeScalarToDifferentSizeStrategy(
      G_ADD, 0, widenToLargerAndNarrowToLargest);

This example indicates that for type sizes for which there is a larger size that can be legalized towards, do it by widening the size. For example, G_ADD on s17 will be legalized by first doing WidenScalar to make it s32, after which it's legal. The "NarrowToLargest" indicates what to do if there is no larger size that can be legalized towards: e.g. G_ADD on s92 will be legalized by doing NarrowScalar to s64.

Another example, taken from the ARM backend, is:

  for (unsigned Op : {G_SDIV, G_UDIV}) {
    setLegalizeScalarToDifferentSizeStrategy(Op, 0,
        widenToLargerTypesUnsupportedOtherwise);
    if (ST.hasDivideInARMMode())
      setAction({Op, s32}, Legal);
    else
      setAction({Op, s32}, Libcall);
  }

For this example, G_SDIV on s8, on a target without a divide instruction, would be legalized by first doing action (WidenScalar, s32), followed by (Libcall, s32).

The same principle is also followed when the number of vector lanes on vector data types needs to be changed, e.g.:

  setAction({G_ADD, LLT::vector(8, 8)}, LegalizerInfo::Legal);
  setAction({G_ADD, LLT::vector(16, 8)}, LegalizerInfo::Legal);
  setAction({G_ADD, LLT::vector(4, 16)}, LegalizerInfo::Legal);
  setAction({G_ADD, LLT::vector(8, 16)}, LegalizerInfo::Legal);
  setAction({G_ADD, LLT::vector(2, 32)}, LegalizerInfo::Legal);
  setAction({G_ADD, LLT::vector(4, 32)}, LegalizerInfo::Legal);
  setLegalizeVectorElementToDifferentSizeStrategy(
      G_ADD, 0, widenToLargerTypesUnsupportedOtherwise);

As currently implemented here, vector types are legalized by first making the vector element size legal, and then making the number of lanes legal. The strategy to follow in the first step is set by a call to setLegalizeVectorElementToDifferentSizeStrategy, see the example above. The strategy followed in the second step is "moreToWiderTypesAndLessToWidest" (see the code for its definition): vectors are widened to more elements so they map to natively supported vector widths, or, when there isn't a legal wider vector, the vector is split to map it to the widest vector supported.

Therefore, for the above specification, some example legalizations are:
* getAction({G_ADD, LLT::vector(3, 3)}) returns {WidenScalar, LLT::vector(3, 8)}
* getAction({G_ADD, LLT::vector(3, 8)}) then returns {MoreElements, LLT::vector(8, 8)}
* getAction({G_ADD, LLT::vector(20, 8)}) returns {FewerElements, LLT::vector(16, 8)}

2. Key implementation aspects.

How to legalize a specific (operation, type index, size) tuple is represented by mapping intervals of integers representing a range of size types to an action to take, e.g.:

  setScalarAction({G_ADD, LLT::scalar(1)},
                  {{1, WidenScalar},   // bit sizes [ 1, 31[
                   {32, Legal},        // bit sizes [32, 33[
                   {33, WidenScalar},  // bit sizes [33, 64[
                   {64, Legal},        // bit sizes [64, 65[
                   {65, NarrowScalar}  // bit sizes [65, +inf[
                  });

Please note that most of the code to do the actual lowering of non-power-of-2 sized types is currently missing; this is just trying to make it possible for targets to specify what is legal, and how non-legal types should be legalized. Probably quite a bit of further work is needed in the actual legalizing and the other passes in GlobalISel to support non-power-of-2 sized types.

I hope the documentation in LegalizerInfo.h and the examples provided in the various {Target}LegalizerInfo.cpp and LegalizerInfoTest.cpp explain well enough how this is meant to be used.

This drops the need for LLT::{half,double}...Size(). Differential Revision: https://reviews.llvm.org/D30529 llvm-svn: 317560
-
Serguei Katkov authored
This patch disables the handling of selects in the optimization extending the scope of optimizeMemoryInst. The optimization itself is disabled by default. The idea here is to raise the optimization level step by step: first the optimization will be enabled only for Phi nodes, then select instructions will be added. If someone complains about performance, it will be easier to detect which part of the optimization is responsible. Differential Revision: https://reviews.llvm.org/D36073 llvm-svn: 317555
-
Bjorn Steinbrink authored
Summary: Calls using invoke in funclet based functions are assumed to clobber all registers, which causes the stack adjustment using pops to consider all registers not defined by the call to be undefined, which can unfortunately include the base pointer, if one is needed. To prevent this (and possibly other hazards), skip reserved registers when looking for candidate registers. This fixes issue #45034 in the Rust compiler. Reviewers: mkuper Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D39636 llvm-svn: 317551
-
Craig Topper authored
llvm-svn: 317548
-
Craig Topper authored
Disable the peephole pass to prove that the pattern is working. llvm-svn: 317547
-
Craig Topper authored
Looks like there are some missed load folding opportunities for i64 loads. llvm-svn: 317544
-
Craig Topper authored
ExeDepsFix pass should take care of making the registers match. llvm-svn: 317542
-
Craig Topper authored
llvm-svn: 317541
-
Davide Italiano authored
According to the docs on opengroup.org, the function can return EINVAL if "the len argument is less than zero, or the offset argument is less than zero, or the underlying file system does not support this operation." I'd say it's a peculiar choice (when EOPNOTSUPP is right there), but let's keep POSIX happy for now. This was independently discovered by Mark Millard (on FreeBSD/ZFS). Quickly ack'ed by Rui on IRC. llvm-svn: 317535
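A minimal sketch of the resulting pattern, assuming a plain-POSIX fallback (illustrative, not the actual code in the tree):

  #include <cerrno>
  #include <fcntl.h>
  #include <unistd.h>

  // posix_fallocate returns the error number directly (it does not set errno).
  // Some filesystems (e.g. ZFS on FreeBSD) report lack of support as EINVAL
  // rather than EOPNOTSUPP, so treat both as "unsupported" and fall back.
  bool reserve(int FD, off_t Size) {
    int Err = posix_fallocate(FD, 0, Size);
    if (Err == 0)
      return true;
    if (Err == EINVAL || Err == EOPNOTSUPP)
      return ftruncate(FD, Size) == 0; // fall back to a plain truncate
    return false;
  }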
-
Adrian Prantl authored
We can't safely split arithmetic into multiple fragments because we can't express carry-over between fragments. llvm-svn: 317534
-
Davide Italiano authored
Blockaddresses refer to the function itself, therefore replacing them would cause an assertion in doRAUW. Fixes https://bugs.llvm.org/show_bug.cgi?id=35201 This was found when trying CFI on a proprietary kernel by Dmitry Mikulin. Differential Revision: https://reviews.llvm.org/D39695 llvm-svn: 317527
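For context, a blockaddress constant always names its enclosing function, which is why RAUW-ing that function trips the assertion. A small invented example that creates one, via the GNU labels-as-values extension (accepted by clang in C++ mode):

  // Taking &&Label yields an LLVM 'blockaddress(@dispatch, %Label)'
  // constant -- a use that points back at the enclosing function itself.
  extern "C" int dispatch(int I) {
    static void *Table[] = {&&Zero, &&One};
    goto *Table[I & 1];
  Zero:
    return 0;
  One:
    return 1;
  }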
-
Matt Arsenault authored
This combine was already done in two places. The generic combiner already has done this since r217610, for adds (with a single use). This one was added in r303641, and added support for handling or as well. r313251 later added support to the generic combine for or. It also turns out the isOrEquivalentToAdd check is not necessary for this combine. Additionally, we already reproduce this combine in yet another place in the backend, although in that version multiple uses of the add are still folded if it will allow a fold into the addressing mode. That version needs to be improved to understand ors though, as well as the correct legal offsets for private. llvm-svn: 317526
-
Vedant Kumar authored
This makes DILocation::getMergedLocation() do what its comment says it does when merging locations for an Instruction: set the common inlineAt scope. This simplifies Instruction::applyMergedLocation() a bit. Testing: check-llvm, check-clang Differential Revision: https://reviews.llvm.org/D39628 llvm-svn: 317524
-
Simon Dardis authored
rL316419 exposed a platform-specific issue where the type of the values passed to llvm::format could differ from the type expected by the format string. Debian unstable for mips uses long long int for std::chrono::duration, while x86_64 uses long int. For mips, this resulted in the value being corrupted when rendered to a string. Address this by explicitly casting the result of the duration_cast to the type specified in the format string. Reviewers: sammccall Differential Revision: https://reviews.llvm.org/D39597 llvm-svn: 317523
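The underlying portability trap in standalone form (our example, not the patched call site): std::chrono's rep type varies by platform, so the value must be cast to match the conversion specifier.

  #include <chrono>
  #include <cstdio>

  int main() {
    using namespace std::chrono;
    auto D = nanoseconds(1234567890);
    // milliseconds::rep is 'long' on some targets and 'long long' on
    // others; passing count() to a fixed specifier unchanged is only
    // correct on some platforms. Cast explicitly to the printed type.
    std::printf("%lld ms\n",
                static_cast<long long>(duration_cast<milliseconds>(D).count()));
    // prints: 1234 ms
  }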
-
- Nov 06, 2017
-
-
Adrian Prantl authored
rdar://problem/31209283 llvm-svn: 317522
-
Craig Topper authored
The EVEX to VEX pass is already assuming this is true under AVX512VL. We had special patterns to use zmm instructions if VLX and F16C weren't available. Instead, just make AVX512 imply F16C to make the EVEX to VEX behavior explicitly legal and remove the extra patterns. All known CPUs with AVX512 have F16C, so this should be safe for now. llvm-svn: 317521
-
Craig Topper authored
Previously our VEX patterns were checking Subtarget.hasFMA(), which checked FMA || AVX512, so we were behaving as if AVX512 implied FMA anyway. That means we'd allow VEX-encoded 128/256-bit FMA when AVX512F was enabled but AVX512VL was off, regardless of the FMA flag. The EVEX to VEX pass also transforms scalar EVEX FMA instructions to their VEX versions even without the FMA flag, and similarly for 128/256-bit under AVX512VL. So this makes AVX512 imply FeatureFMA to make our current behavior explicit. All known CPUs that support AVX512 have VEX FMA instructions. llvm-svn: 317520
-
Sanjay Patel authored
As discussed in D39204, this is effectively a revert of rL265521 which required nnan to vectorize sqrt libcalls based on the old LangRef definition of llvm.sqrt. Now that the definition has been updated so the libcall and intrinsic have the same semantics apart from potentially setting errno, we can remove the nnan requirement. We have the right check to know that errno is not set: if (!ICS.onlyReadsMemory()) ...ahead of the switch. This will solve https://bugs.llvm.org/show_bug.cgi?id=27435 assuming that's being built for a target with -fno-math-errno. Differential Revision: https://reviews.llvm.org/D39642 llvm-svn: 317519
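A sketch of the kind of loop this affects (our example): built with -fno-math-errno, the sqrt call only reads memory, so the vectorizer no longer needs an nnan guarantee.

  #include <cmath>

  // Candidate for vectorization without -ffinite-math-only now that
  // llvm.sqrt and the sqrtf libcall share semantics (apart from errno).
  void sqrtAll(float *Out, const float *In, int N) {
    for (int I = 0; I < N; ++I)
      Out[I] = std::sqrt(In[I]);
  }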
-
Hans Wennborg authored
This broke the CodeGen/Hexagon/loop-idiom/pmpy-mod.ll test on a bunch of buildbots.

> This pulls shifts through a select+binop with a constant where the select conditionally executes the binop. We already do this for just the binop, but not with the select.
>
> This can allow us to get the select closer to other selects to enable removing one.
>
> Differential Revision: https://reviews.llvm.org/D39222
>
> git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317510 91177308-0d34-0410-b5e6-96231b3b80d8

llvm-svn: 317518
-
Xinliang David Li authored
llvm-svn: 317514
-
Bjorn Pettersson authored
Summary: Print %subreg.<subregidxname> instead of just the subregister index when printing immediate operands corresponding to subreg indices in INSERT_SUBREG, EXTRACT_SUBREG, SUBREG_TO_REG and REG_SEQUENCE. Reviewers: qcolombet, MatzeB Reviewed By: MatzeB Subscribers: nhaehnle, javed.absar, llvm-commits Differential Revision: https://reviews.llvm.org/D39696 llvm-svn: 317513
-
Craig Topper authored
This pulls shifts through a select+binop with a constant where the select conditionally executes the binop. We already do this for just the binop, but not with the select. This can allow us to get the select closer to other selects to enable removing one. Differential Revision: https://reviews.llvm.org/D39222 llvm-svn: 317510
-
Graham Yiu authored
Fix buildbot breakages from r317503. Add parentheses around an assignment whose result is used as a condition. llvm-svn: 317508
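The general shape of such a fix (an invented example, not the changed line):

  static int Next = 3;
  static int getNext() { return Next > 0 ? Next-- : 0; }

  void drain() {
    int V;
    // Using an assignment's result directly as a condition draws a
    // -Wparentheses warning; the extra parentheses mark it as intentional.
    while ((V = getNext()))
      (void)V;
  }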
-
Graham Yiu authored
Adds code to PPC ISEL lowering to recognize byte inserts from vector_shuffles, and use P9 shift and vector insert byte instructions instead of vperm. Extends tests from vector insert half-word. Differential Revision: https://reviews.llvm.org/D34497 llvm-svn: 317503
-
Dehao Chen authored
Summary: When computing the SUM for indirect call promotion, if the callsite is already promoted in the profile, it will be promoted before ICP. In the current implementation, ICP only sees remaining counts in SUM. This may cause extra indirect call targets being promoted. This patch updates the SUM to include the counts already promoted earlier. This way we do not end up promoting too many indirect call targets. Reviewers: tejohnson Reviewed By: tejohnson Subscribers: llvm-commits, sanjoy Differential Revision: https://reviews.llvm.org/D38763 llvm-svn: 317502
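A toy illustration of the accounting change (names and numbers are ours): with target counts {60 already promoted, 30, 10}, the old SUM was 40, making the 30-count target look like 75% of the call site; against the true total of 100 it is only 30%, so fewer marginal targets get promoted.

  #include <cstdint>
  #include <vector>

  struct TargetCount { uint64_t Count; bool AlreadyPromoted; };

  // Old behavior: IncludePromoted = false (SUM sees only remaining counts).
  // New behavior: IncludePromoted = true (SUM reflects the call site's total).
  uint64_t computeSum(const std::vector<TargetCount> &Targets,
                      bool IncludePromoted) {
    uint64_t Sum = 0;
    for (const auto &T : Targets)
      if (IncludePromoted || !T.AlreadyPromoted)
        Sum += T.Count;
    return Sum;
  }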
-
Guozhi Wei authored
Power doesn't have bswap instructions, so llvm generates the following code sequence for bswap64:

  rotldi 5, 3, 16
  rotldi 4, 3, 8
  rotldi 9, 3, 24
  rotldi 10, 3, 32
  rotldi 11, 3, 48
  rotldi 12, 3, 56
  rldimi 4, 5, 8, 48
  rldimi 4, 9, 16, 40
  rldimi 4, 10, 24, 32
  rldimi 4, 11, 40, 16
  rldimi 4, 12, 48, 8
  rldimi 4, 3, 56, 0

But Power9 has vector bswap instructions, and they can also be used to speed up the scalar bswap intrinsic. With this patch, bswap64 can be translated to:

  mtvsrdd 34, 3, 3
  xxbrd 34, 34
  mfvsrld 3, 34

Differential Revision: https://reviews.llvm.org/D39510 llvm-svn: 317499
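To exercise the new lowering (our example): __builtin_bswap64 compiles to the llvm.bswap.i64 intrinsic, which this patch selects to the vector-based sequence on Power9.

  #include <cstdint>
  #include <cstdio>

  uint64_t bswap64(uint64_t X) {
    return __builtin_bswap64(X); // -> llvm.bswap.i64
  }

  int main() {
    std::printf("%llx\n", (unsigned long long)bswap64(0x0102030405060708ULL));
    // prints: 807060504030201
  }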
-