- Sep 03, 2016
-
-
Gor Nishanov authored
Summary: VS Code creates a .vscode folder to keep its local settings, which we really don't need in git. Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D24211 llvm-svn: 280551
-
Ivan Krasin authored
Summary: This is a follow-up to r280455, where a check for the process exit code was introduced. Some ASAN bots now hit this error, but it's impossible to understand what's wrong with them, and the issue is not reproducible. Reviewers: vitalybuka Differential Revision: https://reviews.llvm.org/D24210 llvm-svn: 280550
-
Zachary Turner authored
Before, we were imitating the behavior of a YAML sequence by outputting each record one after the other. This makes it a little cumbersome when we want to go the other direction -- from YAML to PDB. So this treats FieldList records no differently than any other list of records, printing them as a YAML sequence with the exact same format. llvm-svn: 280549
-
Xinliang David Li authored
Builtin expect lowering currently ignores select. This patch fixes the issue. Differential Revision: http://reviews.llvm.org/D24166 llvm-svn: 280547
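For illustration, here is a minimal sketch of the pattern in question (hypothetical function and value names, not taken from the patch): an llvm.expect hint whose result feeds a select instead of a branch.

```llvm
; Roughly what Clang emits for "r = __builtin_expect(c, 1) ? a : b".
; Before this change the lower-expect pass only annotated branches, so the
; hint was dropped when the condition fed a select; with the fix the select
; can be annotated with branch-weight metadata instead.
declare i64 @llvm.expect.i64(i64, i64)

define i32 @pick(i1 %c, i32 %a, i32 %b) {
entry:
  %c.ext = zext i1 %c to i64
  %hint  = call i64 @llvm.expect.i64(i64 %c.ext, i64 1)
  %cond  = icmp ne i64 %hint, 0
  %r     = select i1 %cond, i32 %a, i32 %b
  ret i32 %r
}
```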
-
- Sep 02, 2016
-
-
Hal Finkel authored
When we have an offset into a global, etc. that is accessed relative to the TOC base pointer, and the offset is larger than the minimum alignment of the global itself and the TOC base pointer (which is 8-byte aligned), we can still fold the @toc@ha into the memory access, but we must update the addis instruction's symbol reference with the offset as the symbol addend. When there is only one use of the addi to be folded and only one use of the addis that would need its symbol's offset adjusted, then we can make the adjustment and fold the @toc@l into the memory access. llvm-svn: 280545
-
James Y Knight authored
Recently, LLVM wants to emit calls to these functions, while it didn't seem to be an issue before; I'm not sure why, nor do I know why only these three are important to disable out of all of the i128 libcalls. Nevertheless, many other targets have this snippet of code, so it is copied to Sparc as well to unbreak things. llvm-svn: 280537
-
Jan Vesely authored
AMDGPU/R600: EXTRACT_VECTOR_ELT should only bypass BUILD_VECTOR if the vectors have the same number of elements. Fixes R600 piglit regressions since r280298. Differential Revision: https://reviews.llvm.org/D24174 llvm-svn: 280535
-
Sjoerd Meijer authored
r280246 and calculates compatibility of function attributes in a better way. Differential Revision: https://reviews.llvm.org/D24070 llvm-svn: 280534
-
Krzysztof Parzyszek authored
Subregister definitions are considered uses for the purpose of tracking liveness of the whole register. At the same time, when calculating live interval subranges, subregister defs should not be treated as uses. Differential Revision: https://reviews.llvm.org/D24190 llvm-svn: 280532
-
Sanjay Patel authored
llvm-svn: 280531
-
Chad Rosier authored
Differential Revision: https://reviews.llvm.org/D24199 llvm-svn: 280527
-
Jan Vesely authored
LOCAL and GLOBAL AS; only PRIVATE needs special treatment. Differential Revision: https://reviews.llvm.org/D23971 llvm-svn: 280526
-
Jan Vesely authored
Split by AS. Merge with some previously failing tests. Differential Revision: https://reviews.llvm.org/D23969 llvm-svn: 280523
-
Reid Kleckner authored
Previously we were splitting our records at 0xFFFF bytes, which the Microsoft tools don't like. Should fix failure on the new Windows self-host buildbot. This length appears in microsoft-pdb/PDB/dbi/dbiimpl.h llvm-svn: 280522
-
Kyle Butt authored
One side of a diamond may end with a predicate-clobbering instruction. That side of the diamond has to be if-converted second. Both sides can't clobber the predicate or the if-conversion is invalid. This is checked elsewhere, but add an assert as a safety check. NFC llvm-svn: 280518
-
Kyle Butt authored
We were passing the wrong values for predicate-clobbering -- simple to miss. Added an assert to make this easier to catch in the future. llvm-svn: 280517
-
Adam Nemet authored
llvm-svn: 280508
-
Wei Mi authored
For the store of a wide value merged from a pair of values, especially an int-fp pair, it is sometimes more efficient to split it into separate narrow stores, which can remove the bitwise instructions or sink them to colder places. For now the feature is only enabled on the x86 target, and only the store of an int-fp pair is split. It is possible that the application scope gets extended with perf evidence support in the future. Differential Revision: https://reviews.llvm.org/D22840 llvm-svn: 280505
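As a sketch of the kind of pattern this targets (an illustrative example with made-up names, not taken from the patch): an int and a float glued into one 64-bit value and stored with a single wide store.

```llvm
; An i32 and a float merged into one i64 and stored with a single wide
; store. Splitting the store into two 32-bit stores lets the bitcast /
; zext / shl / or glue code disappear or sink to colder blocks.
define void @store_pair(i64* %p, i32 %i, float %f) {
entry:
  %fbits = bitcast float %f to i32
  %lo    = zext i32 %i to i64
  %hi    = zext i32 %fbits to i64
  %hi.sh = shl i64 %hi, 32
  %pair  = or i64 %lo, %hi.sh
  store i64 %pair, i64* %p
  ret void
}
```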
-
Sanjay Patel authored
The motivating case occurs with SSE/AVX scalar intrinsics, so this is a first step towards shrinking that to a single shufflevector. Note that the transform is intentionally limited to shuffles that are equivalent to vector selects to avoid creating arbitrary shuffle masks that may not lower well. This should solve PR29126: https://llvm.org/bugs/show_bug.cgi?id=29126 Differential Revision: https://reviews.llvm.org/D23886 llvm-svn: 280504
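For example, a minimal sketch of a select-equivalent shuffle (hypothetical names, not one of the patch's test cases):

```llvm
; Inserting lane 0 of %b into lane 0 of %a ...
define <4 x float> @blend_lane0(<4 x float> %a, <4 x float> %b) {
  %e = extractelement <4 x float> %b, i32 0
  %r = insertelement <4 x float> %a, float %e, i32 0
  ; ... is the same as a single shuffle whose mask takes every lane from
  ; the matching lane of one operand or the other (i.e. a vector select):
  ;   shufflevector <4 x float> %b, <4 x float> %a,
  ;                 <4 x i32> <i32 0, i32 5, i32 6, i32 7>
  ret <4 x float> %r
}
```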
-
Davide Italiano authored
llvm-svn: 280503
-
Reid Kleckner authored
llvm-svn: 280502
-
Reid Kleckner authored
Do this by creating a temp directory in the normal system temp directory, and cleaning it up on exit. It is still possible for this temp directory to leak if Python exits abnormally, but this is probably good enough for now. Fixes PR18335 llvm-svn: 280501
-
Derek Schuff authored
Fixed an issue with the experimental C headers llvm-svn: 280498
-
Matthew Simpson authored
For uniform instructions, we're only required to generate a scalar value for the first vector lane of each unroll iteration. Thus, if we have a reverse interleaved group, computing the member index off the scalar GEP corresponding to the last vector lane of its pointer operand technically makes the GEP non-uniform. We should compute the member index off the first scalar GEP instead. I've added the updated member index computation to the existing reverse interleaved group test. llvm-svn: 280497
-
Andrea Di Biagio authored
We don't need to call GetCompareTy(LHS) every single time true or false is returned from function SimplifyFCmpInst, as suggested by Sanjay in review D24142. llvm-svn: 280491
-
Sanjay Patel authored
llvm-svn: 280489
-
Andrea Di Biagio authored
This patch fixes a crash caused by an incorrect folding of an ordered comparison between a packed floating point vector and a splat vector of NaN. An ordered comparison between a vector and a constant vector of NaN, should always be folded into a constant vector where each element is i1 false. Since revision 266175, SimplifyFCmpInst folds the ordered fcmp into a scalar 'false'. Later on, this would cause an assertion failure, since the value type of the folded value doesn't match the expected value type of the uses of the original instruction: "Assertion failed: New->getType() == getType() && "replaceAllUses of value with new value of different type!". This patch fixes the issue and adds a test case to the already existing test InstSimplify/floating-point-compares.ll. Differential Revision: https://reviews.llvm.org/D24143 llvm-svn: 280488
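A minimal sketch of the fold (illustrative only, not the new test case):

```llvm
; An ordered comparison is false whenever either operand is NaN, so an
; ordered fcmp against a NaN splat must fold to an all-false vector
; (<2 x i1> zeroinitializer), not to a scalar i1 false.
define <2 x i1> @ord_with_nan_splat(<2 x double> %x) {
  %c = fcmp ord <2 x double> %x, <double 0x7FF8000000000000, double 0x7FF8000000000000>
  ret <2 x i1> %c
}
```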
-
Andrea Di Biagio authored
This fixes a regression introduced by revision 268094. Revision 268094 added the following dag combine rule: // trunc (shl x, K) -> shl (trunc x), K => K < vt.size / 2 That rule converts a truncate of a shift-by-constant into a shift of a truncated value. We do this only if the shift count is less than half the size in bits of the truncated value (K < vt.size / 2). The problem is that the constraint on the shift count is incorrect, so the rule doesn't work well in some cases involving vector types. The combine rule should have been written instead like this: // trunc (shl x, K) -> shl (trunc x), K => K < vt.getScalarSizeInBits() Basically, if K is smaller than the "scalar size in bits" of the truncated value then we know that by "sinking" the truncate into the operand of the shift we would never accidentally make the shift undefined. This patch fixes the check on the shift count, and adds test cases to make sure that we don't regress the behavior. Differential Revision: https://reviews.llvm.org/D24154 llvm-svn: 280482
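A sketch of the kind of vector case the old bound got wrong (values chosen for illustration, not from the new tests):

```llvm
; Truncating <4 x i64> to <4 x i32> below a shift by 33: the old check
; (33 < 128/2) would sink the truncate and create a 32-bit shift by 33,
; which is undefined; the corrected check (33 < 32 is false) leaves the
; truncate where it is.
define <4 x i32> @trunc_shl_oversized(<4 x i64> %x) {
  %shl = shl <4 x i64> %x, <i64 33, i64 33, i64 33, i64 33>
  %tr  = trunc <4 x i64> %shl to <4 x i32>
  ret <4 x i32> %tr
}
```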
-
Andrey Bokhanko authored
llvm-svn: 280481
-
Chandler Carruth authored
constructor when trying to do copy construction by adding an explicit move constructor. Will watch the bots to discover if this is sufficient. llvm-svn: 280479
-
Alexey Bataev authored
Added a test that shows that several insertelement instructions with constant indexes/data are not folded into a single shufflevector instruction. llvm-svn: 280474
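For example, a hypothetical sequence of the kind covered by the test (not the test itself):

```llvm
; Two insertelement instructions with constant indices and constant data.
; One might expect them to fold into a single shufflevector that blends %v
; with a constant vector, but per this test they are currently left as-is.
define <4 x i32> @insert_consts(<4 x i32> %v) {
  %v0 = insertelement <4 x i32> %v,  i32 10, i32 0
  %v1 = insertelement <4 x i32> %v0, i32 20, i32 2
  ret <4 x i32> %v1
}
```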
-
George Rimar authored
A crash was possible if the match() method was called on an object that was moved from or created with the empty constructor. Testcases updated. Differential Revision: https://reviews.llvm.org/D24123 llvm-svn: 280473
-
George Rimar authored
Previously DT_AUXILIARY was unknown; this patch fixes that. Differential Revision: https://reviews.llvm.org/D24138 llvm-svn: 280471
-
James Molloy authored
We're sinking stores, which is a good thing, but in the process we create selects for the store address operand. SROA/Mem2Reg can't look through these, which caused serious regressions. The real fix is in SROA, which I'll be looking into. llvm-svn: 280470
-
Craig Topper authored
[AVX-512] Move tests for masked floating point logical operations to avx512dqvl-intrinsics-upgrade.ll since they have now been autoupgraded. llvm-svn: 280467
-
Craig Topper authored
llvm-svn: 280466
-
Craig Topper authored
[AVX-512] Add more patterns for masked and broadcasted logical operations where the select or broadcast has a floating point type. These are needed in order to remove the masked floating point logical operation intrinsics and use native IR. llvm-svn: 280465
-
Craig Topper authored
[AVX-512] Add execution domain fixing for logical operations with broadcast loads. This builds on the handling of masked ops since we need to keep element size the same. llvm-svn: 280464
-
Craig Topper authored
llvm-svn: 280463
-
Craig Topper authored
[AVX-512] Add NoVLX Predicates to some patterns so they don't rely on pattern ordering to be lower priority than their equivalent VLX pattern. llvm-svn: 280462
-