- Apr 08, 2019
-
-
Justin Bogner authored
Simplify building with particular C++ standards by replacing the specific "enable standard X" flags with a flag that allows specifying the standard you want directly. We preserve compatibility with the existing flags so that anyone with those flags in existing caches won't break mysteriously. Differential Revision: https://reviews.llvm.org/D60399 llvm-svn: 357899
-
Roman Lebedev authored
Reviewers: courbet, gchatelet Reviewed By: gchatelet Subscribers: tschuett, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D60066 llvm-svn: 357898
-
Pavel Labath authored
Summary: The ModuleList stream consists of an integer giving the number of entries in the list, followed by the list itself. Each entry in the list describes a module (a dynamically loaded object that was loaded in the process when it crashed, or when the minidump was generated).

The code for reading the list is relatively straightforward, with a single gotcha: some minidump writers emit padding after the "count" field in order to align the subsequent list on an 8-byte boundary (this depends on how their ModuleList type was defined and the native alignment of various types on their platform). Fortunately, the minidump format contains enough redundancy (in the form of the stream length field in the stream directory) to let us detect this situation and correct for it.

This patch just adds the ability to parse the stream. Code for conversion to/from yaml will come in a follow-up patch.

Reviewers: zturner, amccarth, jhenderson, clayborg Subscribers: jdoerfert, markmentovai, lldb-commits, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D60121 llvm-svn: 357897
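A minimal sketch of how the directory's stream size can disambiguate the two layouts (not the actual LLDB parser; the struct, names, and the 108-byte entry size are assumptions for illustration):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    struct ParsedModuleList {
      uint32_t NumModules;
      const uint8_t *FirstEntry; // start of the module descriptor array
    };

    // Assumed on-disk size of one module descriptor (MINIDUMP_MODULE).
    constexpr size_t ModuleEntrySize = 108;

    // Returns false if neither the packed nor the padded layout matches the
    // stream size recorded in the minidump directory.
    bool parseModuleList(const uint8_t *Stream, size_t StreamSize,
                         ParsedModuleList &Out) {
      if (StreamSize < sizeof(uint32_t))
        return false;
      uint32_t Count;
      std::memcpy(&Count, Stream, sizeof(Count)); // assumes a little-endian host

      // Layout 1: a 4-byte count immediately followed by the entries.
      // Layout 2: the count padded to 8 bytes so the entries are 8-byte aligned.
      const size_t HeaderSizes[] = {sizeof(uint32_t), 8};
      for (size_t HeaderSize : HeaderSizes) {
        if (StreamSize == HeaderSize + size_t(Count) * ModuleEntrySize) {
          Out = {Count, Stream + HeaderSize};
          return true;
        }
      }
      return false;
    }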
-
Chen Zheng authored
llvm-svn: 357894
-
Craig Topper authored
[X86] Make LowerOperationWrapper more robust. Remove the now-unnecessary ReplaceAllUsesWith from LowerMSCATTER.

Previously LowerOperationWrapper took the number of results from the original node and counted that many results from the new node. This was intended to drop chain operands from FP_TO_SINT lowering that uses X87 with memory operations to stack temporaries; the final load had an extra chain output that needs to be ignored.

Unfortunately, it didn't work with scatter, which has two result operands: the mask output, which is discarded, and a chain output. The chain output is the one that is needed, but it comes second and would be dropped by the previous logic. To work around this, we were doing a ReplaceAllUses in the lowering code so that the generic legalization code wouldn't see any uses to replace, since it had been given the wrong result/type.

After this change we take the LowerOperation result directly if the original node has one result. This allows us to directly return the chain from scatter or the load data from the FP_TO_SINT case. When the original node has multiple results we'll ensure the returned node has the same number and copy them over. For cases where the original node has multiple results and the new code for some reason has even more results, MERGE_VALUES can be used to pass only the needed results.

llvm-svn: 357887
-
Fangrui Song authored
These calls are redundant because the quotients have the same BitWidth as MinValue/MaxValue. llvm-svn: 357886
-
Chen Zheng authored
llvm-svn: 357884
-
Chen Zheng authored
llvm-svn: 357883
-
Craig Topper authored
[X86] Split floating point tests out of atomic-mi.ll into atomic-fp.ll. Add avx and avx512f command lines. NFC llvm-svn: 357882
-
Craig Topper authored
llvm-svn: 357881
-
Fangrui Song authored
llvm-svn: 357880
-
- Apr 07, 2019
-
-
Nikita Popov authored
This extends D59959 to unionWith(), allowing callers to specify that a non-wrapping unsigned/signed range is preferred. This is somewhat less useful than the intersect case, because union operations are rarer. An example use would be the phi union computed in SCEV. The implementation is mostly a straightforward use of getPreferredRange(), but I also had to adjust some <=/< checks to make sure that no ranges with lower==upper get constructed before they're passed to getPreferredRange(), as these have additional constraints. Differential Revision: https://reviews.llvm.org/D60377 llvm-svn: 357876
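A usage sketch of the extended API (illustrative 8-bit values; the expected results follow the description above rather than verified output):

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    using namespace llvm;

    void unionExample() {
      // Two disjoint 8-bit ranges: [5, 10) and [200, 210).
      ConstantRange A(APInt(8, 5), APInt(8, 10));
      ConstantRange B(APInt(8, 200), APInt(8, 210));

      // Default (Smallest): the tighter covering range, here the wrapped
      // range [200, 10).
      ConstantRange Small = A.unionWith(B);

      // Prefer a non-wrapping unsigned range: expect [5, 210) instead --
      // larger, but it stays meaningful in an unsigned context such as the
      // phi union computed in SCEV.
      ConstantRange Uns = A.unionWith(B, ConstantRange::Unsigned);
      (void)Small; (void)Uns;
    }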
-
Craig Topper authored
[X86] Use (SUBREG_TO_REG (MOV32rm)) for extloadi64i8/extloadi64i16 when the load is 4-byte aligned or better and not volatile. Summary: Previously we would use MOVZXrm8/MOVZXrm16, but those are longer encodings. This is similar to what we do in the loadi32 predicate. Reviewers: RKSimon, spatel Reviewed By: RKSimon Subscribers: hiraditya, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D60341 llvm-svn: 357875
-
Nikita Popov authored
Extract the exhaustive intersection tests into a separate function, so that it may be reused for unions as well. llvm-svn: 357874
-
Nikita Popov authored
The intersection of two ConstantRanges may consist of two disjoint ranges. As we can only return one range as the result, we need to return one of the two possible ranges that cover both. Currently the result is picked based on set size. However, this is not always optimal: if we're in an unsigned context, we'd prefer to get a large unsigned range over a small signed range -- the latter effectively becomes a full set in the unsigned domain.

This revision adds a PreferredRangeType, which can be either Smallest, Unsigned or Signed. Smallest is the current behavior, and Unsigned and Signed are new variants that prefer not to wrap the unsigned/signed domain. The new type isn't used anywhere yet (but SCEV will be a good first user, see D60035). I've also added some comments to illustrate the various cases in intersectWith(), which should hopefully make it more obvious what is going on.

Differential Revision: https://reviews.llvm.org/D59959 llvm-svn: 357873
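A sketch of the disjoint-intersection case described above (illustrative 8-bit values; the expected results follow the description, not verified output):

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    using namespace llvm;

    void intersectExample() {
      ConstantRange A(APInt(8, 240), APInt(8, 20)); // wrapped: {240..255, 0..19}
      ConstantRange B(APInt(8, 10), APInt(8, 250)); // {10..249}
      // The exact intersection is two disjoint pieces: [10, 20) and [240, 250).

      // Smallest (the existing behavior): the tighter covering range, the
      // wrapped set [240, 20) -- whose unsigned min/max span the whole domain.
      ConstantRange S = A.intersectWith(B);

      // Unsigned preference: the non-wrapping covering range [10, 250),
      // larger but still usable in an unsigned context.
      ConstantRange U = A.intersectWith(B, ConstantRange::Unsigned);
      (void)S; (void)U;
    }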
-
Robert Widmann authored
Summary: Add an accessor for the type of a binary file. Reviewers: whitequark, deadalnix Reviewed By: whitequark Subscribers: hiraditya, aheejin, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D60366 llvm-svn: 357872
-
Nikita Popov authored
Add isAllNegative() and isAllNonNegative() methods to ConstantRange, which determine whether all values in the constant range are negative/non-negative. This is useful for replacing KnownBits isNegative() and isNonNegative() calls when changing code to use constant ranges. Differential Revision: https://reviews.llvm.org/D60264 llvm-svn: 357871
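A small usage sketch (values chosen for illustration):

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    using namespace llvm;

    void signPredicateExample() {
      // Unsigned [200, 240) over 8 bits is signed [-56, -16): every element
      // has its sign bit set.
      ConstantRange Neg(APInt(8, 200), APInt(8, 240));
      bool AllNeg = Neg.isAllNegative();       // expected: true
      bool AllNonNeg = Neg.isAllNonNegative(); // expected: false

      // [0, 100) contains only non-negative values.
      ConstantRange Pos(APInt(8, 0), APInt(8, 100));
      bool NonNeg = Pos.isAllNonNegative();    // expected: true
      (void)AllNeg; (void)AllNonNeg; (void)NonNeg;
    }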
-
Nikita Popov authored
Add support for min/max flavor selects in computeConstantRange(), which allows us to fold comparisons of a min/max against a constant in InstSimplify. This fixes an infinite InstCombine loop, with the test case taken from D59378.

Relative to the previous iteration, this contains some adjustments for the AMDGPU med3 tests: the AMDGPU target runs InstSimplify prior to codegen, which ends up constant folding some existing med3 tests after this change. To preserve these tests, a hidden -amdgpu-scalar-ir-passes option is added, which allows disabling scalar IR passes (those that use InstSimplify) for testing purposes.

Differential Revision: https://reviews.llvm.org/D59506 llvm-svn: 357870
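The effect can be sketched with ConstantRange directly (a simplified model of what computeConstantRange derives for an smax-flavored select, not the actual ValueTracking code):

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    using namespace llvm;

    void minMaxSelectExample() {
      // %m = smax(%x, 10) with %x completely unknown:
      ConstantRange X = ConstantRange::getFull(32);
      ConstantRange Ten(APInt(32, 10));
      ConstantRange M = X.smax(Ten); // [10, INT32_MAX], so M.getSignedMin() == 10

      // Knowing %m >= 10, InstSimplify can fold a comparison such as
      // "icmp sgt %m, 5" to true, breaking the InstCombine loop mentioned above.
      (void)M;
    }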
-
Fangrui Song authored
The main disassembly loop is hard to read due to special handling of ARM ELF data & ELF data. Split off the logic into two functions, dumpARMELFData and dumpELFData. Hoist some checks outside of the loop. --start-address and --stop-address have redundant checks and minor off-by-one issues; fix them. llvm-svn: 357869
-
Chris Lattner authored
llvm-svn: 357868
-
Chris Lattner authored
llvm-svn: 357867
-
Fangrui Song authored
llvm-svn: 357866
-
Chris Lattner authored
llvm-svn: 357865
-
Simon Pilgrim authored
Expansion/truncation is better described by SK_PermuteTwoSrc than SK_Select llvm-svn: 357864
-
Chris Lattner authored
llvm-svn: 357863
-
Chris Lattner authored
Copy the C++ kaleidoscope tutorial into a subdirectory and clean up various things, aligning with the direction of the WiCT workshop and Meike Baumgärtner's view of how this should work. The old version of the documentation is unmodified; this is an experiment. llvm-svn: 357862
-
Simon Pilgrim authored
llvm-svn: 357861
-
Simon Pilgrim authored
In the case where we only want the sign bit (e.g. when using PACKSS truncation of comparison results for MOVMSK), we can just demand the sign bit of the source operands. This makes use of the fact that PACKSS saturates out-of-range values to the min/max int values, so the sign bit is always preserved. Differential Revision: https://reviews.llvm.org/D60333 llvm-svn: 357859
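A scalar model of why this is safe for one PACKSSWB lane (an illustration, not the vector code itself):

    #include <algorithm>
    #include <cstdint>

    // Signed saturation of a 16-bit element to 8 bits, as PACKSSWB does per lane.
    int8_t packss_lane(int16_t v) {
      return static_cast<int8_t>(std::clamp<int16_t>(v, INT8_MIN, INT8_MAX));
    }
    // Saturation clamps to INT8_MIN/INT8_MAX, so the sign of the output always
    // matches the sign of the input. Hence, when only the sign bit of the packed
    // result is demanded (e.g. feeding MOVMSK), only the sign bit of each source
    // element needs to be demanded.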
-
Fangrui Song authored
If the file does not end with a newline, the last line may be dropped; fix the splitting algorithm. Also delete an unnecessary SourceCache lookup. llvm-svn: 357858
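A generic sketch of a splitting loop that keeps a final line lacking a trailing newline (an illustration of the idea, not the actual code):

    #include <string>
    #include <vector>

    // Split a buffer into lines, keeping the last line even when the buffer
    // does not end with '\n'.
    std::vector<std::string> splitLines(const std::string &Buffer) {
      std::vector<std::string> Lines;
      size_t Pos = 0;
      while (Pos < Buffer.size()) {
        size_t End = Buffer.find('\n', Pos);
        if (End == std::string::npos) {
          Lines.push_back(Buffer.substr(Pos)); // final, newline-less line
          break;
        }
        Lines.push_back(Buffer.substr(Pos, End - Pos));
        Pos = End + 1;
      }
      return Lines;
    }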
-
Fangrui Song authored
llvm-svn: 357857
-
Fangrui Song authored
llvm-svn: 357856
-
Fangrui Song authored
llvm-svn: 357855
-
Marcello Maggioni authored
If we do an SHL of two 16-bit ranges like [0, 30000) with [1, 2), we get a full set instead of the expected [0, 60000), which is still within the 16-bit unsigned range. This patch changes the SHL algorithm to produce a usable range even in this case. Differential Revision: https://reviews.llvm.org/D57983 llvm-svn: 357854
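The example from the description, expressed with ConstantRange (the result comment reflects the patch's intent, not verified output):

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    using namespace llvm;

    void shlRangeExample() {
      ConstantRange LHS(APInt(16, 0), APInt(16, 30000)); // [0, 30000)
      ConstantRange RHS(APInt(16, 1), APInt(16, 2));     // just the value 1
      ConstantRange Res = LHS.shl(RHS);
      // The largest possible result is 29999 << 1 == 59998, which still fits
      // in 16 bits, so a bound like [0, 60000) is representable; previously
      // the computed range was a full set.
      (void)Res;
    }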
-
Fangrui Song authored
* Use std::binary_search to replace some std::lower_bound
* Use llvm::upper_bound to replace some std::upper_bound
* Use format_hex and support::endian::read{16,32}
llvm-svn: 357853
-
Fangrui Song authored
llvm-svn: 357852
-
Petr Hosek authored
This change also introduces the clang_enable_per_target_runtime_dir argument to enable the per-target runtime directory layout, the equivalent of the LLVM_ENABLE_PER_TARGET_RUNTIME_DIR CMake option. Differential Revision: https://reviews.llvm.org/D60332 llvm-svn: 357850
-
Nick Lewycky authored
llvm-svn: 357849
-
- Apr 06, 2019
-
-
Craig Topper authored
[X86] When converting (x << C1) AND C2 to (x AND (C2>>C1)) << C1 during isel, try using andl over andq by favoring 32-bit unsigned immediates. llvm-svn: 357848
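A quick check of the identity behind this combine; the constants are illustrative assumptions chosen so that the shifted mask fits in an unsigned 32-bit immediate (enabling andl) while the original mask does not:

    #include <cassert>
    #include <cstdint>

    int main() {
      // (x << C1) & C2  ==  (x & (C2 >> C1)) << C1
      // holds because the low C1 bits of (x << C1) are already zero, so
      // clearing them from the mask changes nothing.
      const unsigned C1 = 8;
      const uint64_t C2 = 0xFF00000000ULL; // does not fit a sign-extended imm32
      const uint64_t Narrow = C2 >> C1;    // 0xFF000000: fits an unsigned imm32
      const uint64_t Tests[] = {0, 0x123456789ABCDEF0ULL, ~0ULL};
      for (uint64_t x : Tests)
        assert(((x << C1) & C2) == ((x & Narrow) << C1));
      return 0;
    }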
-
Simon Pilgrim authored
Prep work to make it easier to reuse the BITCAST->MOVMSK combine in other cases. llvm-svn: 357847
-
Craig Topper authored
This function reorders AND and SHL to enable the SHL to fold into an LEA. The upper bits of the AND will be shifted out by the SHL, so it doesn't matter what mask value we use for these bits. By using sign bits from the original mask in these upper bits, we might enable a shorter immediate encoding to be used. llvm-svn: 357846
-