- Oct 20, 2021
-
-
Jeremy Morse authored
Here's another performance patch for InstrRefBasedLDV: rather than processing all variable values in a scope at a time, process one variable at a time. The benefits are twofold:
* It's easier to reason about one variable at a time in your mind,
* It improves performance, apparently from increased locality.
The downside is that the value-propagation code gets indented one level further, plus there's some churn in the unit tests. Differential Revision: https://reviews.llvm.org/D111799
-
Sander de Smalen authored
When inserting a scalable subvector into a scalable vector through the stack, the index to store to needs to be scaled by vscale. Before this patch that didn't yet happen, so it would generate the wrong offset, thus storing a subvector to the incorrect address and overwriting the wrong lanes. For some insert:

  nxv8f16 insert_subvector(nxv8f16 %vec, nxv2f16 %subvec, i64 2)

the offset was not scaled by vscale:

  orr x8, x8, #0x4
  st1h { z0.h }, p0, [sp]
  st1h { z1.d }, p1, [x8]
  ld1h { z0.h }, p0/z, [sp]

and is changed to:

  mov x8, sp
  st1h { z0.h }, p0, [sp]
  st1h { z1.d }, p1, [x8, #1, mul vl]
  ld1h { z0.h }, p0/z, [sp]

Differential Revision: https://reviews.llvm.org/D111633
-
Simon Pilgrim authored
These are folded to left shifts in the backend. We should be able to extend this for multiply-by-negpow2 after D111968 has landed to resolve PR51436
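The shift equivalences behind these folds can be sketched quickly (my own illustration, not the backend code): a multiply by a power of two is a left shift, and a multiply by a negated power of two is a left shift followed by a negation.

```python
# Multiply by 2**k is a left shift by k; multiply by -(2**k) is the
# same shift followed by a negation.
def mul_pow2(x, k):
    return x << k            # equals x * (1 << k)

def mul_negpow2(x, k):
    return -(x << k)         # equals x * -(1 << k)

assert mul_pow2(5, 3) == 5 * 8
assert mul_negpow2(5, 3) == 5 * -8
```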
-
Simon Pilgrim authored
Replace X86ProcFamilyEnum::IntelSLM enum with a TuningUseSLMArithCosts flag instead, matching what we already do for Goldmont. This just leaves X86ProcFamilyEnum::IntelAtom to replace with general Tuning/Feature flags and we can finally get rid of the old X86ProcFamilyEnum enum. Differential Revision: https://reviews.llvm.org/D112079
-
Daniel Kiss authored
The autiasp and autibsp instructions are the counterparts of paciasp/pacibsp, so let's emit .cfi_negate_ra_state for these too. With the Armv8.3 instruction set, retaa/retab perform the return and the authentication in one step; there we can't emit .cfi_negate_ra_state, because it would point after the ret* instruction. Reviewed By: nickdesaulniers, MaskRay Differential Revision: https://reviews.llvm.org/D111780
-
Joerg Sonnenberger authored
Reviewed By: LemonBoy Differential Revision: https://reviews.llvm.org/D96311
-
Paulo Matos authored
This change implements new DAG nodes TABLE_GET/TABLE_SET, and lowering methods for loads and stores of reference types from IR arrays. These global LLVM IR arrays represent tables at the Wasm level. Differential Revision: https://reviews.llvm.org/D111154
-
Zi Xuan Wu authored
Complete the basic integer instruction set and add the related predicates in CSKY.td. This includes the instruction definitions and asm parser support. Differential Revision: https://reviews.llvm.org/D111701
-
Evgeniy Brevnov authored
To guarantee convergence of the algorithm, each optimization step should decrease the number of instructions when the IR is modified. This property does not hold in this test case. The problem is that the SCEV expander may do "unexpected" reassociation, which results in the creation of new min/max chains and the introduction of extra instructions. As a result, on each step we indefinitely optimize back and forth. The solution is to prevent the SCEV expander from performing uncontrolled reassociations by means of "Unknown" expressions. Reviewed By: nikic Differential Revision: https://reviews.llvm.org/D112060
-
Zhi An Ng authored
Add i8x16 relaxed_swizzle instructions. These are only exposed as builtins, and require user opt-in. Differential Revision: https://reviews.llvm.org/D112022
-
Yonghong Song authored
Currently, .BTF and .BTF.ext have a default alignment of 1. For example:

  $ cat t.c
  int foo() { return 0; }
  $ clang -target bpf -O2 -c -g t.c
  $ llvm-readelf -S t.o
  ...
  Section Headers:
    [Nr] Name      Type      Address           Off    Size   ES Flg Lk Inf Al
  ...
    [ 7] .BTF      PROGBITS  0000000000000000  000167 00008b 00      0   0  1
    [ 8] .BTF.ext  PROGBITS  0000000000000000  0001f2 000050 00      0   0  1

But to avoid misaligned data access, .BTF and .BTF.ext actually require an alignment of 4. Misalignment is not an issue for architectures like x64/arm64, which handle it well, but some architectures like mips may incur a trap if .BTF/.BTF.ext is not properly aligned. This patch explicitly forces the .BTF and .BTF.ext alignment to be 4. For the above example we will have:

    [ 7] .BTF      PROGBITS  0000000000000000  000168 00008b 00      0   0  4
    [ 8] .BTF.ext  PROGBITS  0000000000000000  0001f4 000050 00      0   0  4

Differential Revision: https://reviews.llvm.org/D112106
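The new section offsets follow from a standard align-up computation; a minimal sketch (my own illustration, not the patch's code) reproduces the offset changes quoted in the message:

```python
def align_to(offset, align):
    """Round offset up to the next multiple of align (align a power of two)."""
    return (offset + align - 1) & ~(align - 1)

# With 4-byte alignment enforced, the .BTF offset in the example moves
# from 0x167 to 0x168, and the .BTF.ext offset from 0x1f2 to 0x1f4.
assert align_to(0x167, 4) == 0x168
assert align_to(0x1F2, 4) == 0x1F4
```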
-
Artem Belevich authored
Fixes performance regression https://bugs.llvm.org/show_bug.cgi?id=52037 Differential Revision: https://reviews.llvm.org/D111471
-
Yuta Saito authored
Emit __clangast in a custom section instead of a named data segment, so it can be found while iterating sections. This could be avoided if all data segments (in the wasm sense) were represented as their own sections (in the llvm sense), which can be resolved by https://github.com/WebAssembly/tool-conventions/issues/138. The on-disk hashtable in clangast also needs to be aligned to 4 bytes, so add padding in the name length field of the custom section header. The length of the clangast section name can be represented in 1 byte of LEB128, and the maximum possible padding is 3 bytes, so the section name length won't become invalid in theory. Fixes https://bugs.llvm.org/show_bug.cgi?id=35928 Differential Revision: https://reviews.llvm.org/D74531
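The padding trick relies on LEB128 permitting redundant continuation bytes, so a length can be encoded in more bytes than strictly needed without changing its value. A sketch (hypothetical helper names, my own code; "__clangast" is a 10-character name):

```python
def encode_uleb128(value, pad_to=1):
    """Encode value as ULEB128, padded with redundant continuation
    bytes up to pad_to bytes total (legal per the LEB128 encoding)."""
    out = bytearray()
    while True:
        b = value & 0x7F
        value >>= 7
        if value:
            out.append(b | 0x80)
        else:
            out.append(b)
            break
    while len(out) < pad_to:
        out[-1] |= 0x80      # turn the last byte into a continuation byte
        out.append(0x00)     # append a zero payload byte
    return bytes(out)

def decode_uleb128(data):
    result = shift = 0
    for b in data:
        result |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            break
    return result

name_len = len("__clangast")                      # 10, fits in one byte
assert encode_uleb128(name_len, 3) == bytes([0x8A, 0x80, 0x00])
assert decode_uleb128(encode_uleb128(name_len, 3)) == name_len
```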
-
- Oct 19, 2021
-
-
Sanjay Patel authored
usubsat X, SMIN --> (X ^ SMIN) & (X s>> BW-1)

This fold would become a regression with D112085, where we combine to usubsat more aggressively, so avoid that by matching the special case where we are subtracting SMIN (the signmask): https://alive2.llvm.org/ce/z/4_3gBD Differential Revision: https://reviews.llvm.org/D112095
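The fold can be checked exhaustively for a small bit width; a sketch (my own illustration, 8-bit lanes) modelling both sides:

```python
# Verify usubsat X, SMIN == (X ^ SMIN) & (X s>> BW-1) for all 8-bit X.
BW = 8
SMIN = 1 << (BW - 1)          # 0x80, the sign mask
MASK = (1 << BW) - 1

def usubsat(x, y):
    """Unsigned saturating subtract: clamps at 0."""
    return max(x - y, 0)

def ashr_signbit(x):
    """Arithmetic shift right by BW-1: all-ones if the sign bit is set, else 0."""
    return MASK if x & SMIN else 0

def folded(x):
    return (x ^ SMIN) & ashr_signbit(x)

assert all(usubsat(x, SMIN) == folded(x) for x in range(1 << BW))
```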
-
Bjorn Pettersson authored
Accidentally pushed D112080 without this clang-format cleanup.
-
Bjorn Pettersson authored
As seen in PR51869 the ScalarEvolution::isImpliedCond function might end up spending lots of time when doing the isKnownPredicate checks. Calling isKnownPredicate for example results in isKnownViaInduction being called, which might result in isLoopBackedgeGuardedByCond being called, and then we might get one or more new calls to isImpliedCond. Even if the scenario described here isn't an infinite loop, using some randomly generated C programs as input indicates that those isKnownPredicate checks quite often return true. On the other hand, the third condition that needs to be fulfilled in order to "prove implications via truncation", i.e. the isImpliedCondBalancedTypes check, is rarely fulfilled. I also made some similar experiments to look at how often we would get the same result when using isKnownViaNonRecursiveReasoning instead of isKnownPredicate. So far I haven't seen a single case where codegen is negatively impacted by using isKnownViaNonRecursiveReasoning. On the other hand, it seems like we get rid of the compile time explosion seen in PR51869 that way. Hence this patch. Reviewed By: nikic Differential Revision: https://reviews.llvm.org/D112080
-
Philip Reames authored
This is trivial. It was left out of the original review only because we had multiple copies of the same code in review at the same time, and keeping them in sync was easiest if the structure was kept in sync.
-
Philip Reames authored
This patch duplicates a bit of logic we apply to comparisons encountered during the IV users walk to conditions which feed exit conditions. Why? simplifyAndExtend has a very limited list of users it walks. In particular, in the examples it stops at the zext and never visits the icmp. (Because we can't fold the zext to an addrec yet in SCEV.) Being willing to visit when we haven't simplified regresses multiple tests (seemingly because of less optimal results when computing trip counts). Note that this can be trivially extended to multiple exiting blocks. I'm leaving that to a future patch (solely to cut down on the number of versions of the same code in review at once.) Differential Revision: https://reviews.llvm.org/D111896
-
Anna Thomas authored
Using BPI within loop predication is non-trivial because BPI is only preserved lossily in loop pass manager (one fix exposed by lossy preservation is up for review at D111448). However, since loop predication is only used in downstream pipelines, it is hard to keep BPI from breaking for incomplete state with upstream changes in BPI. Also, correctly preserving BPI for all loop passes is a non-trivial undertaking (D110438 does this lossily), while the benefit of using it in loop predication isn't clear. In this patch, we rely on profile metadata to get almost similar benefit as BPI, without actually using the complete heuristics provided by BPI. This avoids the compile time explosion we tried to fix with D110438 and also avoids fragile bugs because BPI can be lossy in loop passes (D111448). Reviewed-By: asbirlea, apilipenko Differential Revision: https://reviews.llvm.org/D111668
-
Arthur Eubanks authored
And fix a typo.
-
Jamie Schmeiser authored
Summary: Break out non-functional changes to the print-changed classes that are needed for reuse with the DotCfg change printer in https://reviews.llvm.org/D87202. Various changes to the change printers to facilitate reuse with the upcoming DotCfg change printer. This includes changing several of the classes and their support classes to templates. Also, some template parameter names were simplified to avoid confusion with planned identifiers in the DotCfg change printer to come. A virtual function in the class for comparing functions was changed to a lambda, and the virtual function 'same' was replaced with calls to operator==. The only intentional functional change was to add the exe name as the first parameter to llvm::sys::ExecuteAndWait. Author: Jamie Schmeiser <schmeise@ca.ibm.com> Reviewed By: aeubanks (Arthur Eubanks) Differential Revision: https://reviews.llvm.org/D110737
-
David Sherwood authored
Following on from an earlier patch that introduced support for -mtune for AArch64 backends, this patch splits out the tuning features from the processor features. This gives us the ability to enable architectural feature set A for a given processor with "-mcpu=A" and define the set of tuning features B with "-mtune=B". It's quite difficult to write a test that proves we select the right features according to the tuning attribute because most of these relate to scheduling. I have created a test here: CodeGen/AArch64/misched-fusion-addr-tune.ll that demonstrates the different scheduling choices based upon the tuning. Differential Revision: https://reviews.llvm.org/D111551
-
David Sherwood authored
This patch ensures that we always tune for a given CPU on AArch64 targets when the user specifies the "-mtune=xyz" flag. In the AArch64Subtarget if the tune flag is unset we use the CPU value instead. I've updated the release notes here: llvm/docs/ReleaseNotes.rst and added tests here: clang/test/Driver/aarch64-mtune.c Differential Revision: https://reviews.llvm.org/D110258
-
Simon Pilgrim authored
Inspired by D111968, provide an isNegatedPowerOf2() wrapper instead of obfuscating code with (-Value).isPowerOf2() patterns, which I'm sure are likely avenues for typos. Differential Revision: https://reviews.llvm.org/D111998
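The predicate the wrapper names can be sketched outside of APInt (my own illustration, modelling two's-complement at a fixed bit width):

```python
def is_negated_pow2(v, bw=64):
    """True if v is negative and -v (mod 2**bw) is a power of two,
    i.e. v == -(2**k) for some k. Mirrors (-Value).isPowerOf2()."""
    if v >= 0:
        return False
    nv = (-v) % (1 << bw)        # two's-complement negation
    return nv != 0 and nv & (nv - 1) == 0

assert is_negated_pow2(-8)
assert not is_negated_pow2(8)
assert not is_negated_pow2(-6)
# INT64_MIN is -(2**63), whose negation wraps to itself and is a power of two.
assert is_negated_pow2(-(1 << 63))
```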
-
Jeremy Morse authored
This is purely a performance patch: InstrRefBasedLDV used to use three DenseMaps to store variable values, two for long term storage and one as a working set. This patch eliminates the working set, and updates the long term storage in place, thus avoiding two DenseMap comparisons and two DenseMap assignments, which can be expensive. Differential Revision: https://reviews.llvm.org/D111716
-
Lasse Folger authored
When printing names in lldb on Windows these names contain the full type information, while on Linux only the name is contained. This change introduces a flag in the Microsoft demangler to control whether the type information should be included. With the flag enabled the demangled name contains only the qualified name, e.g.:

  without flag              -> with flag
  int (*array2d)[10]        -> array2d
  int (*abc::array2d)[10]   -> abc::array2d
  const int *x              -> x

For globals there is a second inconsistency which is not yet addressed by this change: on Linux, globals (in the global namespace) are prefixed with :: while on Windows they are not. Reviewed By: teemperor, rnk Differential Revision: https://reviews.llvm.org/D111715
-
Jeremy Morse authored
This field gets assigned when the relevant object starts being used; but it remains uninitialized beforehand. This risks introducing hard-to-detect bugs if something changes, so zero-initialize the field.
-
Lang Hames authored
This lifts the global offset table and procedure linkage table builders out of ELF_x86_64.h and into x86_64.h, renaming them with generic names x86_64::GOTTableBuilder and x86_64::PLTTableBuilder. MachO_x86_64.cpp is updated to use these classes instead of the older PerGraphGOTAndStubsBuilder tool.
-
Noah Shutty authored
We would like to move ThinLTO’s battle-tested file caching mechanism to the LLVM Support library so that we can use it elsewhere in LLVM. Patch By: noajshu Differential Revision: https://reviews.llvm.org/D111371
-
Hsiangkai Wang authored
GPR uses argument registers as the first group of registers to allocate. This patch uses vector argument registers, v8 to v23, as the first group to allocate. Differential Revision: https://reviews.llvm.org/D111304
-
Lang Hames authored
Moves visitEdge into the TableManager derivatives, replacing the fixEdgeKind methods in those classes. The visitEdge method takes on responsibility for updating the edge target, as well as its kind.
-
Anshil Gandhi authored
By default clang emits complete constructors as aliases of base constructors if they are the same. The backend is supposed to emit symbols for the alias, otherwise it causes undefined symbols. @yaxunl observed that this issue is related to the llvm options `-amdgpu-early-inline-all=true` and `-amdgpu-function-calls=false`. This issue is resolved by only inlining global values with internal linkage. The `getCalleeFunction()` in AMDGPUResourceUsageAnalysis also had to be extended to support aliases to functions. inline-calls.ll was corrected appropriately. Reviewed By: yaxunl, #amdgpu Differential Revision: https://reviews.llvm.org/D109707
-
- Oct 18, 2021
-
-
Simon Pilgrim authored
If we're using an ashr to sign-extend the entire upper 16 bits of the i32 element, then we can replace with a lshr. The sign bit will be correctly shifted for PMADDWD's implicit sign-extension and the upper 16 bits are zero so the upper i16 sext-multiply is guaranteed to be zero. The lshr also has a better chance of folding with shuffles etc.
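The key observation — that ashr and lshr by 16 agree on the low 16 bits, which is all PMADDWD's implicit sign-extension reads — can be checked with a small sketch (my own, modelling 32-bit shifts in Python):

```python
BW = 32
MASK = (1 << BW) - 1

def lshr16(x):
    """Logical shift right by 16 on a 32-bit value."""
    return (x & MASK) >> 16

def ashr16(x):
    """Arithmetic shift right by 16 on a 32-bit two's-complement value."""
    s = x & MASK
    if s & 0x80000000:
        s -= 1 << BW
    return (s >> 16) & MASK

# The two shifts differ only in the sign-filled upper bits; the low 16
# bits (the lanes PMADDWD sign-extends itself) are always identical.
for x in (0, 1, 0x7FFF0000, 0x80000000, 0xFFFF1234, 0xDEADBEEF):
    assert (ashr16(x) ^ lshr16(x)) & 0xFFFF == 0
```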
-
Arthur Eubanks authored
-
Craig Topper authored
RISCVISAInfo::toFeatures needs to allocate strings using ArgList::MakeArgString, but toFeatures lives in Support and MakeArgString lives in Option. toFeatures only has one caller, so the simple fix is to have that caller pass a lambda that wraps MakeArgString to break the dependency. Differential Revision: https://reviews.llvm.org/D112032
-
Matt Morehouse authored
The feature tells the backend to allow tags in the upper bits of global variable addresses. These tags will be ignored by upcoming CPUs with the Intel LAM feature but may be used in instrumentation passes (e.g., HWASan). This patch implements the feature by using @GOTPCREL relocations instead of direct references to the locally defined global. Thus the full tagged address can be loaded by a single instruction:

  movq global@GOTPCREL(%rip), %rax

Reviewed By: eugenis Differential Revision: https://reviews.llvm.org/D111343
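The upper-bit tagging this enables can be sketched as plain address arithmetic (my own illustration; the tag position is an assumption, since LAM variants ignore different upper-bit ranges):

```python
TAG_SHIFT = 57                        # assumed: LAM57-style, tag above bit 56
ADDR_MASK = (1 << TAG_SHIFT) - 1

def tag_pointer(addr, tag):
    """Pack a small tag into the upper bits the hardware will ignore."""
    return (addr & ADDR_MASK) | (tag << TAG_SHIFT)

def untag_pointer(addr):
    """Recover the canonical address actually dereferenced."""
    return addr & ADDR_MASK

p = 0x7FFF12345678
assert untag_pointer(tag_pointer(p, 0x2A)) == p
assert tag_pointer(p, 0x2A) >> TAG_SHIFT == 0x2A
```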
-
Alexandros Lamprineas authored
When compiling for the RWPI relocation model the debug information is wrong:
* the debug location is described as { DW_OP_addr Var } instead of { DW_OP_constu Var DW_OP_bregX 0 DW_OP_plus }
* the relocation type is R_ARM_ABS32 instead of R_ARM_SBREL32
Differential Revision: https://reviews.llvm.org/D111404
-
Arthur Eubanks authored
This trades off more compile time for less peak memory usage. Right now it invalidates all function analyses after a module->function or cgscc->function adaptor. https://llvm-compile-time-tracker.com/compare.php?from=1fb24fe85a19ae71b00875ff6c96ef1831dcf7e3&to=cb28ddb063c87f0d5df89812ab2de9a69dd276db&stat=instructions https://llvm-compile-time-tracker.com/compare.php?from=1fb24fe85a19ae71b00875ff6c96ef1831dcf7e3&to=cb28ddb063c87f0d5df89812ab2de9a69dd276db&stat=max-rss For now this is just experimental. See comments on why this may affect optimizations. Reviewed By: asbirlea, nikic Differential Revision: https://reviews.llvm.org/D111575
-
Alexey Bataev authored
Need to follow the order of the reused scalars from the ReuseShuffleIndices mask rather than rely on the natural order. Differential Revision: https://reviews.llvm.org/D111898
-
modimo authored
The goal is to allow grafting an inline tree from Clang or GCC into a new compilation without affecting other functions. For GCC, we're doing this by extracting the inline tree from dwarf information and generating the equivalent remarks. This allows easier side-by-side asm analysis and a trial way to see if a particular inlining setup provides benefits by itself. Testing: ninja check-all Reviewed By: wenlei, mtrofin Differential Revision: https://reviews.llvm.org/D110658
-