- Aug 23, 2016
-
Pete Cooper authored
That commit added a new version of Intrinsic::getName which should only be called when the intrinsic has no overloaded types. Several debugging paths, such as SDNode::dump, print the name of the intrinsic but don't have the overloaded types available. These paths should be OK to just print the name instead of crashing. The fix here is simply to pass 'None' as a second argument, which selects the overload-capable getName. That version is less efficient, but these are debugging paths and not performance critical. Thanks to Björn Pettersson for pointing out that there were more crashes. llvm-svn: 279528
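A minimal sketch of the two overloads and the workaround described above (signatures abbreviated from the commit text; treat them as assumptions rather than the exact API of that revision):

    // Assumed shapes, per the commit message:
    //   StringRef   Intrinsic::getName(ID id);                       // asserts if overloaded
    //   std::string Intrinsic::getName(ID id, ArrayRef<Type *> Tys); // handles overloads
    #include "llvm/ADT/None.h"
    #include "llvm/IR/Intrinsics.h"
    using namespace llvm;

    std::string debugIntrinsicName(Intrinsic::ID IID) {
      // Debugging path: the concrete overload types are unknown here, so pass
      // an empty type list to select the overload-capable (slower) version.
      return Intrinsic::getName(IID, None);
    }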
-
Matthias Braun authored
Revert "(HEAD -> master, origin/master, origin/HEAD) CodeGen: Remove MachineFunctionAnalysis => Enable (Machine)ModulePasses" Reverting while tracking down a use after free. This reverts commit r279502. llvm-svn: 279503
-
Matthias Braun authored
This patch removes the MachineFunctionAnalysis. Instead we keep a map from IR Function to MachineFunction in the MachineModuleInfo. This allows ModulePasses to be inserted into the codegen pipeline without breaking it (previously, the MachineFunctionAnalysis results were dropped before any module pass ran). Peak memory should stay unchanged without a ModulePass in the codegen pipeline: previously the MachineFunction was freed at the end of a codegen function pipeline because the MachineFunctionAnalysis was dropped; with this patch the MachineFunction is freed after the AsmPrinter has finished. Differential Revision: http://reviews.llvm.org/D23736 llvm-svn: 279502
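A hedged sketch of the ownership model this describes (member and method names are illustrative, not the exact MachineModuleInfo API): the MachineFunctions now live in a map owned by module-level state, so they survive across module passes and are destroyed only once the AsmPrinter is done.

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/IR/Function.h"
    #include <map>
    #include <memory>

    // Illustrative stand-in for the state kept in MachineModuleInfo.
    class MachineFunctionMapSketch {
      std::map<const llvm::Function *, std::unique_ptr<llvm::MachineFunction>> Map;

    public:
      llvm::MachineFunction *lookup(const llvm::Function &F) const {
        auto It = Map.find(&F);
        return It == Map.end() ? nullptr : It->second.get();
      }
      // No longer called when the per-function pipeline ends; invoked after
      // the AsmPrinter has emitted the function instead.
      void erase(const llvm::Function &F) { Map.erase(&F); }
    };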
-
Pete Cooper authored
The assert in r279466 checks that we call the correct version of Intrinsic::getName. The version which accepts only an ID should not be used for intrinsics with overloaded types. The global-isel code was calling the wrong version. The test CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll will ensure that we call the correct version from now on. llvm-svn: 279487
-
- Aug 22, 2016
-
Tim Shen authored
This should finish the GraphTraits migration. Differential Revision: http://reviews.llvm.org/D23730 llvm-svn: 279475
-
Tim Shen authored
__guard_local is defined as long on OpenBSD. If the source file contains a definition of __guard_local, it mismatches with the i8 pointer type used in LLVM. In that case, Module::getOrInsertGlobal() returns a cast operation instead of a GlobalVariable. Trying to set the visibility on the cast operation leads to random segfaults (seen when compiling the OpenBSD kernel, which also runs with stack protection). In the kernel, the hidden attribute does not matter. For userspace code, __guard_local is defined as hidden in the startup code. If a program re-defines __guard_local, the definition from the startup code will either win or the linker complains about multiple definitions (depending on whether the re-defined __guard_local is placed in the common segment or not). This also matches what gcc on OpenBSD does. Thanks to Stefan Kempf <sisnkemp@gmail.com> for the patch! Differential Revision: http://reviews.llvm.org/D23674 llvm-svn: 279449
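A hedged sketch of the failure mode and the guard against it (simplified, not the verbatim stack-protector code): Module::getOrInsertGlobal() returns a plain Constant*, and when the existing __guard_local has a mismatched type that constant is a cast expression rather than a GlobalVariable, so the visibility call must be skipped.

    #include "llvm/IR/GlobalVariable.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    void setGuardVisibility(Module &M, Type *GuardTy) {
      Constant *C = M.getOrInsertGlobal("__guard_local", GuardTy);
      // dyn_cast yields null when C is a cast of a mismatched definition;
      // calling setVisibility through the cast is what used to segfault.
      if (auto *GV = dyn_cast<GlobalVariable>(C))
        GV->setVisibility(GlobalValue::HiddenVisibility);
    }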
-
Krzysztof Parzyszek authored
llvm-svn: 279437
-
- Aug 20, 2016
-
Simon Pilgrim authored
llvm-svn: 279381
-
Matthias Braun authored
Most compilers should give you a warning anyway though. llvm-svn: 279346
-
Krzysztof Parzyszek authored
llvm-svn: 279344
-
Tim Northover authored
llvm-svn: 279341
-
Tim Northover authored
llvm-svn: 279340
-
Matthias Braun authored
- Always compile print() regardless of LLVM_ENABLE_DUMP; we usually only guard dump() functions with that (see the sketch below).
- Only show the set properties to reduce output clutter.
- Remove the unused variant that even shows the unset properties.
- Fix comments.
llvm-svn: 279338
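A sketch of the guard convention the first item refers to (macro spelling and guard condition assumed; the in-tree definition may differ slightly): print() is compiled unconditionally, while dump() sits behind the dump guard.

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/Support/Compiler.h"
    #include "llvm/Support/Debug.h"
    using namespace llvm;

    // print(raw_ostream &) is compiled unconditionally; only dump() is guarded.
    #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
    LLVM_DUMP_METHOD void MachineFunctionProperties::dump() const {
      print(dbgs()); // delegates to the always-available print()
      dbgs() << '\n';
    }
    #endif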
-
Matthias Braun authored
This avoids unnecessary cases in switch statements covering all properties. llvm-svn: 279337
-
- Aug 19, 2016
-
Tim Shen authored
Currently nodes_iterator may dereference to a NodeType* or a NodeType&. Make them all dereference to NodeType*, which will become NodeRef later. Differential Revision: https://reviews.llvm.org/D23704 Differential Revision: https://reviews.llvm.org/D23705 llvm-svn: 279326
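A hedged sketch of the invariant this establishes (generic template only; era-specific GraphTraits member names assumed): after the change, dereferencing nodes_iterator uniformly yields a pointer.

    #include "llvm/ADT/GraphTraits.h"
    using namespace llvm;

    template <typename GraphT>
    void visitAllNodes(GraphT *G) {
      typedef GraphTraits<GraphT *> GT;
      for (typename GT::nodes_iterator I = GT::nodes_begin(G),
                                       E = GT::nodes_end(G);
           I != E; ++I) {
        typename GT::NodeType *N = *I; // always a NodeType* now, never a
        (void)N;                       // NodeType& -- NodeRef later on
      }
    }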
-
Krzysztof Parzyszek authored
llvm-svn: 279325
-
Tim Northover authored
llvm-svn: 279319
-
Tim Northover authored
llvm-svn: 279311
-
Tim Northover authored
llvm-svn: 279309
-
Tim Northover authored
No tests yet unfortunately (ConstantFolding reduces all supported constants to ConstantInts before we get to translation). Soon. llvm-svn: 279308
-
Tim Northover authored
This adds a G_INSERT instruction, which technically makes G_SEQUENCE redundant (it's equivalent to a G_INSERT into an IMPLICIT_DEF). We'll leave G_SEQUENCE for now though: it's likely to be far more common as it's a fundamental part of legalization, so avoiding the mess and bloat of the extra IMPLICIT_DEFs is probably worthwhile. llvm-svn: 279306
-
Tom Stellard authored
Summary: This way they can be re-used by target-specific schedulers.
Reviewers: atrick, MatzeB, kparzysz
Subscribers: kparzysz, llvm-commits, MatzeB
Differential Revision: https://reviews.llvm.org/D23678
llvm-svn: 279305
-
Tim Northover authored
First, make sure all types involved are represented, rather than being implicit from the register width. Second, canonicalize all types to scalar: these operations just act on bits and don't care about vectors. Also standardize the spelling of Indices in the MachineIRBuilder (NFC here). llvm-svn: 279294
-
Kyle Butt authored
This reverts commit bfd62a4b4465dd21811bf615c3b04c30ddb09f7b. llvm-svn: 279289
-
Kyle Butt authored
This reverts commit 0fda93481c4231c06b838ef476c0c404c51ff875. llvm-svn: 279288
-
Tim Northover authored
llvm-svn: 279287
-
Tim Northover authored
llvm-svn: 279285
-
Tim Northover authored
Unsigned addition and subtraction can reuse the instructions created to legalize large-width operations (i.e. both produce and consume a carry flag). Signed operations and multiplies get a dedicated op-with-overflow instruction. Once this is produced, the two values are combined into a struct register (which will almost always be merged with a corresponding G_EXTRACT as part of legalization). llvm-svn: 279278
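A hedged illustration, built with IRBuilder, of the op-with-overflow form the translator consumes: the intrinsic call yields a {iN, i1} struct whose two fields are split back out with extractvalue (the helper name is hypothetical; only the IRBuilder/Intrinsic calls are real API).

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Intrinsics.h"
    #include "llvm/IR/Module.h"
    #include <utility>
    using namespace llvm;

    // Hypothetical helper: emit a signed add that also yields its overflow bit.
    std::pair<Value *, Value *> emitSAddO(IRBuilder<> &B, Value *LHS, Value *RHS) {
      Function *F = Intrinsic::getDeclaration(
          B.GetInsertBlock()->getModule(), Intrinsic::sadd_with_overflow,
          LHS->getType());
      Value *ResAndOv = B.CreateCall(F, {LHS, RHS});  // a {iN, i1} struct value
      Value *Res = B.CreateExtractValue(ResAndOv, 0); // the sum
      Value *Ov = B.CreateExtractValue(ResAndOv, 1);  // the overflow flag
      return {Res, Ov};
    }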
-
James Molloy authored
The heuristic above this code is incredibly suspect, but disregarding that, it mutates the cast opcode, so we need to check the *mutated* opcode later to see whether we need to emit an AssertSext or AssertZext node. Fixes PR29041. llvm-svn: 279223
-
Matthias Braun authored
The ppc64 multistage bot fails on this. This reverts commit r279124. Also revert "CodeGen: Add/Factor out LiveRegUnits class; NFCI", because it depends on the previous change. This reverts commit r279171. llvm-svn: 279199
-
Matthias Braun authored
This is a set of register units intended to track register liveness; it is similar in spirit to LivePhysRegs. You can also think of this as the liveness-tracking parts of the RegisterScavenger factored out into their own class. This was proposed in http://llvm.org/PR27609 Differential Revision: http://reviews.llvm.org/D21916 llvm-svn: 279171
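A sketch of the intended usage pattern (class and method names taken from the D21916 proposal; treat the header path and exact signatures as assumptions): walk a block bottom-up, updating liveness per instruction, so correctness never depends on kill flags.

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/CodeGen/LiveRegUnits.h"
    #include "llvm/CodeGen/MachineBasicBlock.h"
    using namespace llvm;

    // Is PhysReg free at the top of MBB? Scan bottom-up from the live-outs.
    bool isFreeAtTop(MachineBasicBlock &MBB, const TargetRegisterInfo &TRI,
                     unsigned PhysReg) {
      LiveRegUnits Units(TRI);
      Units.addLiveOuts(MBB);                 // seed with the live-out set
      for (MachineInstr &MI : make_range(MBB.rbegin(), MBB.rend()))
        Units.stepBackward(MI);               // remove defs, add uses of MI
      return Units.available(PhysReg);        // no unit of PhysReg is live
    }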
-
Kyle Butt authored
The following function currently relies on tail-merging for if-conversion to succeed. The common tail of cond_true and cond_false is extracted, and this then forms a diamond pattern that can be successfully if-converted. If this block does not get extracted, either because tail-merging is disabled or the threshold is higher, we should still recognize this pattern and if-convert it. Fixed a regression in the original commit: need to un-reverse branches after reversing them, or other conversions go awry. Regression on self-hosting bots with no obvious explanation. Tidied up range handling to be more obviously correct, but there was no smoking gun.

define i32 @t2(i32 %a, i32 %b) nounwind {
entry:
  %tmp1434 = icmp eq i32 %a, %b ; <i1> [#uses=1]
  br i1 %tmp1434, label %bb17, label %bb.outer

bb.outer: ; preds = %cond_false, %entry
  %b_addr.021.0.ph = phi i32 [ %b, %entry ], [ %tmp10, %cond_false ]
  %a_addr.026.0.ph = phi i32 [ %a, %entry ], [ %a_addr.026.0, %cond_false ]
  br label %bb

bb: ; preds = %cond_true, %bb.outer
  %indvar = phi i32 [ 0, %bb.outer ], [ %indvar.next, %cond_true ]
  %tmp. = sub i32 0, %b_addr.021.0.ph
  %tmp.40 = mul i32 %indvar, %tmp.
  %a_addr.026.0 = add i32 %tmp.40, %a_addr.026.0.ph
  %tmp3 = icmp sgt i32 %a_addr.026.0, %b_addr.021.0.ph
  br i1 %tmp3, label %cond_true, label %cond_false

cond_true: ; preds = %bb
  %tmp7 = sub i32 %a_addr.026.0, %b_addr.021.0.ph
  %tmp1437 = icmp eq i32 %tmp7, %b_addr.021.0.ph
  %indvar.next = add i32 %indvar, 1
  br i1 %tmp1437, label %bb17, label %bb

cond_false: ; preds = %bb
  %tmp10 = sub i32 %b_addr.021.0.ph, %a_addr.026.0
  %tmp14 = icmp eq i32 %a_addr.026.0, %tmp10
  br i1 %tmp14, label %bb17, label %bb.outer

bb17: ; preds = %cond_false, %cond_true, %entry
  %a_addr.026.1 = phi i32 [ %a, %entry ], [ %tmp7, %cond_true ], [ %a_addr.026.0, %cond_false ]
  ret i32 %a_addr.026.1
}

Without tail-merging or diamond-tail if-conversion:

LBB1_1: @ %bb
        @ =>This Inner Loop Header: Depth=1
  cmp r0, r1
  ble LBB1_3
@ BB#2: @ %cond_true
        @ in Loop: Header=BB1_1 Depth=1
  subs r0, r0, r1
  cmp r1, r0
  it ne
  cmpne r0, r1
  bgt LBB1_4
LBB1_3: @ %cond_false
        @ in Loop: Header=BB1_1 Depth=1
  subs r1, r1, r0
  cmp r1, r0
  bne LBB1_1
LBB1_4: @ %bb17
  bx lr

With diamond-tail if-conversion, but without tail-merging:

@ BB#0: @ %entry
  cmp r0, r1
  it eq
  bxeq lr
LBB1_1: @ %bb
        @ =>This Inner Loop Header: Depth=1
  cmp r0, r1
  ite le
  suble r1, r1, r0
  subgt r0, r0, r1
  cmp r1, r0
  bne LBB1_1
@ BB#2: @ %bb17
  bx lr

llvm-svn: 279168
-
Kyle Butt authored
The cost of predicating a diamond is only the instructions that are not shared between the two branches. Additionally, if a predicate-clobbering instruction occurs in the shared portion of the branches (e.g. a cond move), it may still be possible to if-convert the sub-CFG. This change handles these two facts by rescanning the non-shared portion of a diamond sub-CFG to recalculate both the predication cost and whether both blocks are pred-clobbering. llvm-svn: 279167
-
Kyle Butt authored
This may affect calculations for thresholds, but is not a significant change in behavior. The problem was that an inclusive range must have an additional flag to show that it is empty, because otherwise begin == end implies that the range has one element, and it may not be possible to move past it on either side. llvm-svn: 279166
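A short sketch of the representation issue just described (illustrative only, not the actual if-converter data structure): with inclusive bounds, Begin == End already means one element, so emptiness needs its own flag.

    // Illustrative inclusive range with an explicit empty flag.
    struct InclusiveRange {
      unsigned Begin = 0, End = 0; // both endpoints included when non-empty
      bool Empty = true;           // required: Begin == End means one element
      unsigned size() const { return Empty ? 0 : End - Begin + 1; }
    };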
-
- Aug 18, 2016
-
Matthias Braun authored
Re-apply r276044 with an off-by-one instruction fix for the reload placement. This is a variant of scavengeRegister() that works for enterBasicBlockEnd()/backward(). The benefit of the backward mode is that it is not affected by incomplete kill flags. This patch also changes PrologEpilogInserter::doScavengeFrameVirtualRegs() to use the register scavenger in backwards mode. Differential Revision: http://reviews.llvm.org/D21885 llvm-svn: 279124
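A sketch of the backward-mode driver loop (method names from D21885; the scaffolding around them is illustrative): enter at the block end and step upward, so the tracking never depends on kill flags.

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/RegisterScavenging.h"
    using namespace llvm;

    // Drive the scavenger from the bottom of the block upward.
    void scanBlockBackwards(RegScavenger &RS, MachineBasicBlock &MBB) {
      RS.enterBasicBlockEnd(MBB);          // start from the live-out state
      for (MachineBasicBlock::iterator I = MBB.end(); I != MBB.begin();) {
        --I;
        // A backward scavenge query for a spare register would go here.
        RS.backward(I);                    // move tracking to just above *I
      }
    }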
-
Kyle Butt authored
This is prep work for allowing the threshold to be different during layout, and to enforce a single threshold between merging and duplicating during layout. No observable change intended. llvm-svn: 279117
-
Alex Bradbury authored
Summary: This is pretty trivial, but I thought it was worth just checking that nobody feels it's completely the wrong thing to be doing. The motivation is that when starting a new backend, you often start with a minimal stub, pretty much just FooTargetMachine and FooTargetInfo. Once that's built, you might naturally try `llc -march=foo myinput.ll`, and it seems more developer-friendly if this ends up asserting due to the lack of MCAsmInfo with an informative message rather than just segfaulting.
Reviewers: MatzeB, chandlerc
Subscribers: bogner, llvm-commits
Differential Revision: https://reviews.llvm.org/D23443
llvm-svn: 279061
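A hedged sketch of the shape of such a check (the message wording and the free-function wrapper are hypothetical; only getMCAsmInfo() and report_fatal_error() are real API): fail with a clear diagnostic instead of dereferencing a null MCAsmInfo.

    #include "llvm/MC/MCAsmInfo.h"
    #include "llvm/Support/ErrorHandling.h"
    #include "llvm/Target/TargetMachine.h"
    using namespace llvm;

    // Hypothetical guard for a stub backend with no MC layer registered yet.
    void checkAsmInfo(const TargetMachine &TM) {
      if (!TM.getMCAsmInfo())
        report_fatal_error("MCAsmInfo not initialized; the target probably "
                           "has not registered its MC components yet");
    }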
-
Matthias Braun authored
Some inputs would fail after r278974 without this fix (see http://lab.llvm.org:8080/green/job/clang-stage2-cmake-RgSan_build/2733/console for an example). llvm-svn: 279022
-
- Aug 17, 2016
-
Kyle Butt authored
This will allow tail duplication and tail merging during layout to have a shared threshold to make sure that they don't overlap. No observable change intended. llvm-svn: 278981
-
Kyle Butt authored
This will cause minsize functions to have the same threshold as optsize functions, but otherwise should have no effects. llvm-svn: 278980
-