- Feb 22, 2021
-
-
Florian Hahn authored
Update unit tests that did not expect VPWidenPHIRecipes after 15a74b64.
-
Simon Pilgrim authored
-
David Green authored
Remove the unnecessary code from 21a4faab, left over from a different way of lowering.
-
Florian Hahn authored
This patch extends VPWidenPHIRecipe to manage pairs of incoming (VPValue, VPBasicBlock) in the VPlan native path. This is made possible because we now directly manage defined VPValues for recipes. By keeping both the incoming value and block in the recipe directly, code-generation in the VPlan native path becomes independent of the predecessor ordering when fixing up non-induction phis, which currently can cause crashes in the VPlan native path. This fixes PR45958. Reviewed By: sguggill Differential Revision: https://reviews.llvm.org/D96773
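A minimal C++ sketch of the idea, using made-up stand-in types rather than LLVM's actual VPValue/VPBasicBlock/VPWidenPHIRecipe API: each incoming value is stored together with the block it comes from, so the value can be looked up by block instead of by predecessor index.

    #include <utility>
    #include <vector>

    struct VPValueStub {};        // stand-in for VPValue
    struct VPBasicBlockStub {};   // stand-in for VPBasicBlock

    class WidenPHISketch {
      // Incoming (value, block) pairs are kept together in the recipe.
      std::vector<std::pair<VPValueStub *, VPBasicBlockStub *>> Incoming;

    public:
      void addIncoming(VPValueStub *V, VPBasicBlockStub *BB) {
        Incoming.push_back({V, BB});
      }

      // Look up the incoming value by its block, independent of the order
      // in which predecessors happen to be visited.
      VPValueStub *getIncomingValueFor(VPBasicBlockStub *BB) const {
        for (const auto &P : Incoming)
          if (P.second == BB)
            return P.first;
        return nullptr;
      }
    };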
-
David Green authored
This removes the existing patterns for inserting two lanes into an f16/i16 vector register using VINS, instead using a DAG combine to pattern match the same code sequences. The tablegen patterns were already on the large side (foreach LANE = [0, 2, 4, 6]) and were not handling all the cases they could. Moving that to a DAG combine, whilst not less code, allows us to better control and expand the selection of VINSs. Additionally this allows us to remove the AddedComplexity on VCVTT. The extra trick that this has learned in the process is to move two adjacent lanes using a single f32 vmov, allowing some extra inefficiencies to be removed. Differential Revision: https://reviews.llvm.org/D96876
-
Andy Wingo authored
If the reference-types feature is enabled, call_indirect will explicitly reference its corresponding function table via `TABLE_NUMBER` relocations against a table symbol. As before, address-taken functions can also cause the function table to be created; with reference-types they additionally cause a symbol table entry to be emitted. We abuse the used-in-reloc flag on symbols to indicate which tables should end up in the symbol table. We do this because unfortunately older wasm-ld will carp if it sees a table symbol. Differential Revision: https://reviews.llvm.org/D90948
-
Djordje Todorovic authored
Small optimization of the code -- no need to calculate any stats for NULL nodes, and no need to call collectStatsForDie() if the DIE is the CU itself. Differential Revision: https://reviews.llvm.org/D96871
-
Amara Emerson authored
We can only select this type if the source is on GPR, not FPR.
-
Kazu Hirata authored
-
Kazu Hirata authored
Identified with llvm-header-guard.
-
Kazu Hirata authored
-
Petr Hosek authored
__start_/__stop_ references retain C identifier name sections such as __llvm_prf_*. Putting these into a section group disables this logic. The ELF section group semantics ensures that group members are retained or discarded as a unit. When a function symbol is discarded, this allows the linker to discard the counters, data and values associated with that function symbol as well. Note that `noduplicates` COMDAT is lowered to zero-flag section group in ELF. We only set this for functions that aren't already in a COMDAT and for those that don't have available_externally linkage since we already use regular COMDAT groups for those. Differential Revision: https://reviews.llvm.org/D96757
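A hedged illustration of the retention mechanism the first sentence refers to; the section and symbol names below are invented for the example. A section whose name is a valid C identifier gets linker-defined __start_/__stop_ symbols, and referencing those symbols keeps the section's contents alive:

    #include <cstdio>

    // One counter slot placed in a C-identifier-named section.
    __attribute__((section("my_counters"), used)) static long Counter0 = 0;

    // The linker defines these for sections whose names are C identifiers.
    extern long __start_my_counters[];
    extern long __stop_my_counters[];

    int main() {
      long Slots = __stop_my_counters - __start_my_counters;
      std::printf("retained %ld counter slot(s)\n", Slots);
      return 0;
    }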
-
- Feb 21, 2021
-
-
Craig Topper authored
The result must be less than or equal to the LHS, so any leading zeros in the left hand side must also exist in the result. This is stronger than the previous behavior where we only considered the sign bit being 0. The affected test case used the sign bit being known 0 to change a sign extend to a zero extend pre type legalization. After type legalization the types were promoted to i64, but we no longer knew bit 31 was zero. These shifts are the equivalent of an AND with 0xffffffff or zext_inreg X, i32. This patch allows us to see that bit 31 is zero and remove the shifts. Reviewed By: RKSimon Differential Revision: https://reviews.llvm.org/D97124
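A hedged arithmetic illustration of the fact being used (this is not the DAG code itself): when the dividend is nonnegative, a remainder never exceeds the dividend, so the dividend's leading zero bits are also zero in the result, and sign-extending the result is the same as zero-extending it.

    #include <cassert>
    #include <cstdint>

    int32_t rem32(uint16_t X, int16_t Y) {
      if (Y == 0)
        return 0;                                  // keep the example well-defined
      int32_t R = (int32_t)X % (int32_t)Y;         // 32-bit srem with a nonnegative dividend
      assert((uint32_t)R <= (uint32_t)X);          // result <= LHS: leading zeros carry over
      assert((int64_t)R == (int64_t)(uint32_t)R);  // so sext(R) == zext(R); bit 31 is known 0
      return R;
    }

    int main() {
      rem32(0x1234, -77);
      return 0;
    }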
-
Simon Pilgrim authored
-
Simon Pilgrim authored
[X86] Replace explicit constant handling in sub(C1, xor(X, C2)) -> add(xor(X, ~C2), C1+1) fold. NFCI. NFC cleanup before adding vector support - rely on the SelectionDAG to handle everything for us.
-
Simon Pilgrim authored
-
Simon Pilgrim authored
This is also in sub.ll but that's for a specific i686 pattern - this adds x86_64 and vector tests
-
Simon Pilgrim authored
-
Craig Topper authored
This also removes a pattern from RISCV that is no longer needed since the sexti32 on the LHS of the srem in the pattern implies the result is sign extended so the sign_extend_inreg should be removed in DAG combine now. Reviewed By: luismarques, RKSimon Differential Revision: https://reviews.llvm.org/D97133
-
Simon Pilgrim authored
In conjunction with the 'vperm2x128(bitcast(x),bitcast(y),c) -> bitcast(vperm2x128(x,y,c))' fold in combineTargetShuffle, this should remove any unnecessary bitcasts around vperm2x128 lane shuffles.
-
madhur13490 authored
Differential Revision: https://reviews.llvm.org/D97157
-
Nikita Popov authored
FindAvailableLoadedValue() accepts an iterator by reference. If no available value is found, then the iterator will either be left at a clobbering instruction or the beginning of the basic block. This allows using FindAvailableLoadedValue() across multiple blocks. If this functionality is not needed, as is the case in InstCombine, then we can use a much more efficient implementation: First try to find an available value, and only perform clobber checks if we actually found one. As this function only looks at a very small number of instructions (6 by default) and usually doesn't find an available value, this saves many expensive alias analysis queries.
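A toy C++ sketch of the restructured scan (the types and helpers are invented; this is not the LLVM API): first look backwards, within a small limit, for an instruction that provides the loaded value, and only if one is found run the clobber checks on the instructions in between. In the common case where nothing is found, no clobber queries are issued at all.

    #include <optional>
    #include <vector>

    enum class Kind { Store, Call };
    struct Inst {
      Kind K;
      int Addr = -1;   // toy stand-in for the pointer a Store writes to
      int Val = 0;     // value the Store writes
    };

    std::optional<int> findAvailableValue(const std::vector<Inst> &Block,
                                          size_t LoadIdx, int LoadAddr,
                                          unsigned ScanLimit = 6) {
      // Phase 1: cheap backwards scan for a store that provides the value.
      size_t Found = LoadIdx;
      for (size_t I = LoadIdx; I > 0 && ScanLimit > 0; --ScanLimit) {
        --I;
        if (Block[I].K == Kind::Store && Block[I].Addr == LoadAddr) {
          Found = I;
          break;
        }
      }
      if (Found == LoadIdx)
        return std::nullopt;   // nothing available: no clobber checks were needed
      // Phase 2: only now pay for the clobber checks in between.
      for (size_t I = Found + 1; I < LoadIdx; ++I)
        if (Block[I].K == Kind::Call)   // a call may clobber memory in this toy model
          return std::nullopt;
      return Block[Found].Val;
    }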
-
Sanjay Patel authored
The arguments in all cases should be vectors of exactly one of integer or FP. All of the tests currently pass the verifier because we check for any vector type regardless of the type of reduction. This obviously can't work if we mix up integer and FP, and based on current LangRef text it was not intended to work for pointers either. The pointer case from https://llvm.org/PR49215 is what led me here. That example was avoided with 5b250a27. Differential Revision: https://reviews.llvm.org/D96904
-
Nikita Popov authored
This contains the logic for extracting an available load/store from a given instruction, to be reused in a following patch.
-
Kristina Bessonova authored
Currently, if there is a module that contains a strong definition of a global variable and a module that has both a weak definition for the same global and a reference to it, it may result in an undefined symbol error while linking with ThinLTO. It happens because:
* the strong definition becomes internal because it is read-only and can be imported;
* the weak definition gets replaced by a declaration because it is non-prevailing;
* the strong definition fails to be imported because the destination module already contains another definition of the global, yet that definition is non-prevailing.
The patch adds a check to computeImportForReferencedGlobals() that allows considering a global variable for being imported even if the module contains a definition of it, in the case this definition has an interposable linkage type. Note that currently the check is based only on the linkage type (and this seems to be enough at the moment), but it might be worth also accounting for whether the definition is prevailing or not. Reviewed By: tejohnson Differential Revision: https://reviews.llvm.org/D95943
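A hypothetical minimal sketch of the scenario with two translation units (file and symbol names are made up): the strong definition is only ever read, the weak one is non-prevailing, and the reference in the second module is what can end up unresolved under ThinLTO.

    // strong.cpp: prevailing definition of the global, only ever read.
    int GV = 42;

    // weak.cpp: non-prevailing weak definition plus a reference to GV.
    __attribute__((weak)) int GV = 0;
    int use() { return GV; }

Building both with -flto=thin and linking them gives the shape of the case described above; the weak copy is dropped as non-prevailing, so the reference must be satisfied by keeping or importing the prevailing strong copy.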
-
Simon Pilgrim authored
This patch handles usubsat patterns hidden through zext/trunc and uses the getTruncatedUSUBSAT helper to determine if the USUBSAT can be correctly performed in the truncated form:
zext(x) >= y ? x - trunc(y) : 0 --> usubsat(x,trunc(umin(y,SatLimit)))
zext(x) > y ? x - trunc(y) : 0 --> usubsat(x,trunc(umin(y,SatLimit)))
Based on original examples:
    void foo(unsigned short *p, int max, int n) {
      int i;
      unsigned m;
      for (i = 0; i < n; i++) {
        m = *--p;
        *p = (unsigned short)(m >= max ? m-max : 0);
      }
    }
Differential Revision: https://reviews.llvm.org/D25987
-
Simon Pilgrim authored
Fixes regression exposed by removing bitcasts across logic-ops in D96206. Differential Revision: https://reviews.llvm.org/D96206
-
Simon Pilgrim authored
Extend the existing combine that handles bitcasting for fp-logic ops to also help remove logic ops across bitcasts to/from the same integer types. This helps improve AVX512 predicate handling for D/Q logic ops and also allows DAGCombine's scalarizeExtractedBinop to remove some annoying gpr->simd->gpr transfers. The concat_vectors regression in pr40891.ll will be addressed in a followup commit on this patch. Differential Revision: https://reviews.llvm.org/D96206
-
Craig Topper authored
Largely copied from AArch64/arm64-xaluo.ll
-
Kazu Hirata authored
-
Kazu Hirata authored
-
Jianzhou Zhao authored
-
- Feb 20, 2021
-
-
Petr Hosek authored
This can reduce the binary size because counters will no longer occupy space in the binary; instead they will be allocated by the dynamic linker. Differential Revision: https://reviews.llvm.org/D97110
-
Craig Topper authored
[RISCV] Add another test case showing failure to use remw when the RHS has been zero extended from less than i32. NFC
-
Nikita Popov authored
When one of the inputs is a wrapping range, intersect with the union of the two inputs. The union of the two inputs corresponds to the result we would get if we treated the min/max as a simple select. This fixes PR48643.
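A hedged worked example of the reasoning, with 8-bit ranges chosen for illustration: let A = [250, 10) = {250..255, 0..9} (wrapping) and B = [20, 30) = {20..29}. Since umax(a, b) is always either a or b, every result lies in A ∪ B, and the smallest range covering that union is [250, 30). The exact result set is {20..29} ∪ {250..255}, so intersecting the generically computed range with [250, 30) yields the tight wrapping range instead of a much wider envelope.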
-
Sanjay Patel authored
Follow-up to: D96648 / b40fde06 ...for the special-case base calls. From the earlier commit: This is unusual in the general (non-reciprocal) case because we need an extra instruction, but that should be better for general FP reassociation and codegen. We conservatively check for "arcp" FMF here as we do with existing fdiv folds, but it is not strictly necessary to have that.
-
Sanjay Patel authored
-
Nikita Popov authored
We don't need any special handling for wrapping ranges (or empty ranges for that matter). The sub() call will already compute a correct and precise range. We only need to adjust the test expectation: We're now computing an optimal result, rather than an unsigned envelope.
-
Craig Topper authored
This adds the IR for this C code:
    int32_t foo(uint16_t x, int16_t y) {
      x %= y;
      return x;
    }
Note the dividend is unsigned and the divisor is signed. C type promotion rules will extend them and use a 32-bit srem, and the function returns a 32-bit result. We fail to use remw for this case. The zero-extended input has enough sign bits, but we won't consider (i64 AssertZext X, i16) in the sexti32 isel pattern. We also end up with extra shifts to zero the upper bits of the result. computeKnownBits knew the result was positive before type legalization and allowed the SIGN_EXTEND to become ZERO_EXTEND. But after promoting to i64 we no longer know that bit 31 (and all bits above it) should be 0.
-
Nikita Popov authored
When the optimality check fails, print the inputs, the computed range and the better range that was found. This makes it much simpler to identify the cause of the failure. Make sure that full ranges (which, unlike all the other cases, can be constructed in multiple ways that all result in the same range) only print one message, by handling them separately.
-