- Sep 08, 2014
-
Alexander Kornienko authored
clang-tidy.
Reviewers: chandlerc, djasper
Reviewed By: djasper
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5236
llvm-svn: 217365
-
Sid Manning authored
Another trivial spelling change. llvm-svn: 217364
-
Simon Atanasyan authored
defined in a shared library. Now LLD does not export a strong defined symbol if it coalesces away a weak symbol defined in a shared library. This bug affects all ELF architectures and leads to a segfault:

% cat foo.c
extern int __attribute__((weak)) flag;
int foo() { return flag; }
% cat main.c
int flag = 1;
int foo();
int main() { return foo() == 1 ? 0 : -1; }
% clang -c -fPIC foo.c main.c
% lld -flavor gnu -target x86_64 -shared -o libfoo.so ... foo.o
% lld -flavor gnu -target x86_64 -o a.out ... main.o libfoo.so
% ./a.out
Segmentation fault

The problem is caused by the fact that we lose all information about coalesced symbols after the `Resolver::resolve()` method is finished. The patch solves the problem by overriding the `LinkingContext::notifySymbolTableCoalesce()` method and saving the names of coalesced symbols. Later, in the `buildDynamicSymbolTable()` routine, we use this information to export these symbols. llvm-svn: 217363
-
Evgeniy Stepanov authored
llvm-svn: 217362
-
Evgeniy Stepanov authored
llvm-svn: 217361
-
Saleem Abdulrasool authored
Linking Release+Asserts executable lldb-gdbserver (without symbols)
liblldb.so: undefined reference to `lldb_private::MemoryHistoryASan::Initialize()'
liblldb.so: undefined reference to `lldb_private::MemoryHistoryASan::Terminate()'
liblldb.so: undefined reference to `vtable for lldb_private::TypeValidatorImpl_CXX'
liblldb.so: undefined reference to `lldb_private::TypeValidatorImpl::TypeValidatorImpl(lldb_private::TypeValidatorImpl::Flags const&)'

liblldb.so was underlinked against lldbPluginMemoryHistoryASan.a when building with the make-based build system (as opposed to CMake). llvm-svn: 217360
-
Shankar Easwaran authored
When a file is not found, produce a proper error message. The previous error message reported a file format error, which made me wonder for a while why there was a file format error when, in fact, the file simply was not found. llvm-svn: 217359
-
Shankar Easwaran authored
By default the linker does not create a separate segment to hold read-only data. This option overrides that behavior by creating a separate read-only segment for read-only data. llvm-svn: 217358
-
Shankar Easwaran authored
When dynamic libraries are built, undefined symbols should always be allowed and the linker should not exit with an error. llvm-svn: 217356
-
Saleem Abdulrasool authored
We previously simply assumed that the write would always succeed. However, write(2) may return -1 on error, or perform a partial write (in which case the returned byte count will be less than requested). Explicitly check whether an error condition is encountered. This was previously not caught because we default-initialized success to true. Add an assertion that we always perform a complete write (a retry loop could be added to ensure that we finish writing completely). This was caught by GCC's signed-comparison warning and manual inspection. llvm-svn: 217355
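To illustrate the pattern described above (a hedged sketch, not the actual lldb code; the helper name is hypothetical), a write loop that handles both errors and short writes might look like this:

    #include <errno.h>
    #include <unistd.h>

    // Write all 'size' bytes, retrying on short writes and EINTR.
    // Returns 0 on success, -1 on error.
    static int write_all(int fd, const char *buf, size_t size) {
      size_t written = 0;
      while (written < size) {
        ssize_t n = write(fd, buf + written, size - written);
        if (n < 0) {
          if (errno == EINTR)
            continue;          // interrupted before any bytes were written
          return -1;           // genuine error
        }
        written += (size_t)n;  // account for a possibly partial write
      }
      return 0;
    }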
-
Shankar Easwaran authored
Remove unused functions in the Target relocation handler. llvm-svn: 217354
-
Saleem Abdulrasool authored
Removes a non-ASCII character that was committed. llvm-svn: 217353
-
NAKAMURA Takumi authored
llvm-svn: 217352
-
Hal Finkel authored
Temporarily comment out the test for really-large powers of two. This seems to be host-sensitive for some reason... trying to fix the clang-i386-freebsd builder. llvm-svn: 217351
-
Andrew Trick authored
llvm-svn: 217350
-
Hal Finkel authored
This makes use of the recently-added @llvm.assume intrinsic to implement a __builtin_assume(bool) intrinsic (to provide additional information to the optimizer). This hooks up __assume in MS-compatibility mode to mirror __builtin_assume (the semantics have been intentionally kept compatible), and implements GCC's __builtin_assume_aligned as assume((p - o) & mask == 0). LLVM now contains special logic to deal with assumptions of this form. llvm-svn: 217349
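A short usage sketch of the builtins this commit wires up (the surrounding functions are hypothetical examples, not from the commit):

    // __builtin_assume: hand the optimizer a condition it may rely on.
    int clamp_index(int i) {
      __builtin_assume(i >= 0 && i < 16);
      return i & 15;  // the optimizer may use the assumption to simplify this
    }

    // __builtin_assume_aligned: promise a pointer's alignment.
    double sum2(double *p) {
      double *q = (double *)__builtin_assume_aligned(p, 32);
      return q[0] + q[1];  // loads may now be treated as 32-byte aligned
    }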
-
- Sep 07, 2014
-
Hal Finkel authored
This adds a basic (but important) use of @llvm.assume calls in ScalarEvolution. When SE is attempting to validate a condition guarding a loop (such as whether or not the loop count can be zero), this check should also include dominating assumptions. llvm-svn: 217348
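A hypothetical example of the kind of guard this lets ScalarEvolution validate (not code from the commit):

    void fill(int *a, int n) {
      __builtin_assume(n > 0);
      // The dominating assumption lets ScalarEvolution conclude that the
      // trip count cannot be zero.
      for (int i = 0; i < n; ++i)
        a[i] = 0;
    }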
-
Hal Finkel authored
InstCombine just got a bit smarter about checking known bits of returned values, and because this test runs the optimizer, it requires an update. We should really rewrite this test to directly check the IR output from CodeGen. llvm-svn: 217347
-
Hal Finkel authored
From a combination of @llvm.assume calls (and perhaps through other means, such as range metadata), it is possible that all bits of a return value might be known. Previously, InstCombine did not check for this (which is understandable given assumptions of constant propagation), but means that we'd miss simple cases where assumptions are involved. llvm-svn: 217346
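A hypothetical case of the kind now caught (assuming the masked-bit patterns from r217342): two assumptions together pin down every bit of the returned value:

    unsigned f(unsigned x) {
      __builtin_assume((x & 0xffffff00u) == 0u);  // high 24 bits known zero
      __builtin_assume((x & 0xffu) == 0x2au);     // low 8 bits known
      return x;  // all bits known: the return value can fold to 42
    }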
-
Hal Finkel authored
This change teaches LazyValueInfo to use the @llvm.assume intrinsic. Like with the known-bits change (r217342), this requires feeding a "context" instruction pointer through many functions. Aside from a little refactoring to reuse the logic that turns predicates into constant ranges in LVI, the only new code is that which can 'merge' the range from an assumption into that otherwise computed. There is also a small addition to JumpThreading so that it can have LVI use assumptions in the same block as the comparison feeding a conditional branch.

With this patch, we can now simplify this as expected:

int foo(int a) {
  __builtin_assume(a > 5);
  if (a > 3) {
    bar();
    return 1;
  }
  return 0;
}

llvm-svn: 217345
-
Hal Finkel authored
This adds a ScalarEvolution-powered transformation that updates load, store and memory intrinsic pointer alignments based on invariant((a+q) & b == 0) expressions. Many of the simple cases we can get with ValueTracking, but we still need something like this for the more complicated cases (such as those with an offset) that require some algebra. Note that gcc's __builtin_assume_aligned's optional third argument provides exactly for this kind of 'misalignment' offset for which this kind of logic is necessary. The primary motivation is to fixup alignments for vector loads/stores after vectorization (and unrolling). This pass is added to the optimization pipeline just after the SLP vectorizer runs (which, admittedly, does not preserve SE, although I imagine it could). Regardless, I actually don't think that the preservation matters too much in this case: SE computes lazily, and this pass won't issue any SE queries unless there are any assume intrinsics, so there should be no real additional cost in the common case (SLP does preserve DT and LoopInfo). llvm-svn: 217344
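GCC's three-argument form mentioned above supplies exactly such a misalignment offset; a hypothetical use (not from the commit):

    // Promise that ((uintptr_t)p - 8) is a multiple of 32, i.e. p is
    // 32-byte aligned except for a fixed 8-byte offset.
    double head(double *p) {
      double *q = (double *)__builtin_assume_aligned(p, 32, 8);
      return q[0];
    }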
-
Hal Finkel authored
This builds on r217342, which added the infrastructure to compute known bits using assumptions (@llvm.assume calls). That original commit added only a few patterns (to catch common cases related to determining pointer alignment); this change adds several other patterns for simple cases.

r217342 contained the rule that, for assume(v & b = a), for those bits in the mask that are known to be one, we can propagate known bits from a to v. It also had a known-bits transfer for assume(a = b).

This patch adds:

assume(~(v & b) = a): For those bits in the mask that are known to be one, we can propagate inverted known bits from a to v.
assume(v | b = a): For those bits in b that are known to be zero, we can propagate known bits from a to v.
assume(~(v | b) = a): For those bits in b that are known to be zero, we can propagate inverted known bits from a to v.
assume(v ^ b = a): For those bits in b that are known to be zero, we can propagate known bits from a to v. For those bits in b that are known to be one, we can propagate inverted known bits from a to v.
assume(~(v ^ b) = a): For those bits in b that are known to be zero, we can propagate inverted known bits from a to v. For those bits in b that are known to be one, we can propagate known bits from a to v.
assume(v << c = a): For those bits in a that are known, we can propagate them to known bits in v shifted to the right by c.
assume(~(v << c) = a): For those bits in a that are known, we can propagate them inverted to known bits in v shifted to the right by c.
assume(v >> c = a): For those bits in a that are known, we can propagate them to known bits in v shifted to the right by c.
assume(~(v >> c) = a): For those bits in a that are known, we can propagate them inverted to known bits in v shifted to the right by c.
assume(v >=_s c) where c is non-negative: The sign bit of v is zero.
assume(v >_s c) where c is at least -1: The sign bit of v is zero.
assume(v <=_s c) where c is negative: The sign bit of v is one.
assume(v <_s c) where c is non-positive: The sign bit of v is one.
assume(v <=_u c): Transfer the known high zero bits.
assume(v <_u c): Transfer the known high zero bits (if c is known to be a power of 2, transfer one more).

A small addition to InstCombine was necessary for some of the test cases. The problem is that when InstCombine was simplifying and, or, etc., it would fail to check the 'do I know all of the bits' condition before checking less specific conditions, and so would not fully constant-fold the result. I'm not sure how to trigger this aside from using assumptions, so I've just included the change here. llvm-svn: 217343
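As a concrete instance of one pattern above, assume(v | b = a) lets the zero bits of b carry known bits from a into v (a hypothetical example, not from the commit's tests):

    unsigned low_nibble(unsigned v) {
      // b = 0xfffffff0 has its low four bits zero, so the low four bits of v
      // are known to equal the low four bits of a = 0xfffffff7.
      __builtin_assume((v | 0xfffffff0u) == 0xfffffff7u);
      return v & 0xfu;  // known bits allow this to fold to 7
    }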
-
Hal Finkel authored
This change, which allows @llvm.assume to be used from within computeKnownBits (and other associated functions in ValueTracking), adds some (optional) parameters to computeKnownBits and friends. These functions now (optionally) take a "context" instruction pointer, an AssumptionTracker pointer, and also a DomTree pointer, and most of the changes are just to pass this new information when it is easily available from InstSimplify, InstCombine, etc.

As explained below, the significant conceptual change is that known properties of a value might depend on the control-flow location of the use (because we care that the @llvm.assume dominates the use, since assumptions have control-flow dependencies). This means that, when we ask if bits are known in a value, we might get different answers for different uses.

The significant changes are all in ValueTracking. Two main changes: First, as with the rest of the code, new parameters need to be passed around. To make this easier, I grouped them into a structure, and I made internal static versions of the relevant functions that take this structure as a parameter. The new code does as you might expect: it looks for @llvm.assume calls that make use of the value we're trying to learn something about (often indirectly), attempts to pattern match that expression, and uses the result if successful. By making use of the AssumptionTracker, the process of finding @llvm.assume calls is not expensive.

Part of the structure being passed around inside ValueTracking is a set of already-considered @llvm.assume calls. This is to prevent a query using, for example, the assume(a == b), from recursing on itself.

The context and DT params are used to find applicable assumptions. An assumption needs to dominate the context instruction, or come after it deterministically. In this latter case we only handle the specific case where both the assumption and the context instruction are in the same block, and we need to exclude assumptions from being used to simplify their own ephemeral values (those which contribute only to the assumption), because otherwise the assumption would prove its feeding comparison trivial and would be removed.

This commit adds the plumbing and the logic for a simple masked-bit propagation (just enough to write a regression test). Future commits add more patterns (and, correspondingly, more regression tests). llvm-svn: 217342
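A hypothetical illustration of the use-location dependence described above (all names invented): the same value can have different known bits at different uses.

    static void use1(unsigned) {}
    static void use2(unsigned) {}

    void f(unsigned x) {
      use1(x);                           // no bits of x are known here
      __builtin_assume((x & 1u) == 0u);
      use2(x);                           // here the low bit of x is known to be
                                         // zero: the assume dominates this use
    }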
-
Renato Golin authored
llvm-svn: 217341
-
Saleem Abdulrasool authored
'#import' is an Objective-C construct; avoid using it in C++. NFC. Addresses PR20867. Patch by Kevin Avila! llvm-svn: 217340
-
David Blaikie authored
llvm-svn: 217339
-
David Blaikie authored
It's probably not a huge deal to not do this - if we could, maybe the address could be reused by a subprogram low_pc and avoid an extra relocation, but it's just one per CU at best. llvm-svn: 217338
-
David Blaikie authored
llvm-svn: 217337
-
Tobias Grosser authored
This allows us to link Polly's lit.site.cfg from the build directory into the source directory without having it removed by every 'git clean':

ln -s build/tools/polly/test/lit.site.cfg src/tools/polly/test

Having this file in our source directory allows us to run llvm-lit on specific test cases in the Polly test directory just by running 'llvm-lit test/case.ll'. llvm-svn: 217336
-
Hal Finkel authored
This adds a set of utility functions for collecting 'ephemeral' values. These are LLVM IR values that are used only by @llvm.assume intrinsics (directly or indirectly), and thus will be removed prior to code generation, implying that they should be considered free for certain purposes (like inlining). The inliner's cost analysis, and a few other passes, have been updated to account for ephemeral values using the provided functionality. This functionality is important for the usability of @llvm.assume, because it limits the "non-local" side-effects of adding llvm.assume on inlining, loop unrolling, etc. (these are hints, and do not generate code, so they should not directly contribute to estimates of execution cost). llvm-svn: 217335
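A hypothetical sketch of an ephemeral value (not from the commit): it exists only to feed @llvm.assume, is removed before code generation, and so should not count toward inlining cost:

    int g(int x) {
      int c = (x > 5);      // ephemeral: its only use is the assume below
      __builtin_assume(c);  // both disappear before code generation, so the
                            // inliner should treat them as free
      return x + 1;
    }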
-
Hal Finkel authored
This adds an immutable pass, AssumptionTracker, which keeps a cache of @llvm.assume call instructions within a module. It uses callback value handles to keep stale functions and intrinsics out of the map, and it relies on any code that creates new @llvm.assume calls to notify it of the new instructions. The benefit is that code needing to find @llvm.assume intrinsics can do so directly, without scanning the function, thus allowing the cost of @llvm.assume handling to be negligible when none are present. The current design is intended to be lightweight: we don't keep track of anything until we need a list of assumptions in some function. The first time this happens, we scan the function. After that, we add/remove @llvm.assume calls from the cache in response to registration calls and ValueHandle callbacks. There are no new direct test cases for this pass, but because it calls its validation function upon module finalization, we'll pick up detectable inconsistencies from the other tests that touch @llvm.assume calls. This pass will be used by follow-up commits that make use of @llvm.assume. llvm-svn: 217334
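A minimal sketch of the lazy caching scheme described above, in generic C++ (hypothetical types and names; this is not the actual AssumptionTracker interface):

    #include <map>
    #include <set>

    struct Function {};
    struct CallInst {};

    // Hypothetical stand-in for scanning a function for @llvm.assume calls.
    std::set<CallInst *> scanForAssumes(Function &) { return {}; }

    class AssumeCache {
      std::map<Function *, std::set<CallInst *>> Cache;

    public:
      // Lazily populate the cache on the first query for a function.
      const std::set<CallInst *> &assumptions(Function *F) {
        auto It = Cache.find(F);
        if (It == Cache.end())
          It = Cache.emplace(F, scanForAssumes(*F)).first;
        return It->second;
      }
      // Called by code that creates a new assume intrinsic.
      void registerAssumption(Function *F, CallInst *CI) {
        auto It = Cache.find(F);
        if (It != Cache.end())
          It->second.insert(CI);
      }
      // Analogue of a value-handle callback for a deleted instruction.
      void forgetAssumption(Function *F, CallInst *CI) {
        auto It = Cache.find(F);
        if (It != Cache.end())
          It->second.erase(CI);
      }
    };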
-
Chandler Carruth authored
I hadn't actually run all the tests yet and these combines have somewhat surprisingly far reaching effects. llvm-svn: 217333
-
Chandler Carruth authored
support for MOVDDUP which is really important for matrix multiply style operations that do lots of non-vector-aligned load and splats. The original motivation was to add support for MOVDDUP as the lack of it regresses matmul_f64_4x4 by 5% or so. However, all of the rules here were somewhat suspicious.

First, we should always be using the floating point domain shuffles, regardless of how many copies we have to make, as a movapd is *crazy* faster than the domain switching cost on some chips. (Mostly because movapd is crazy cheap.) Because SHUFPD can't do the copy-for-free trick of the PSHUF instructions, there is no need to avoid canonicalizing on UNPCK variants, so do that canonicalizing. This also ensures we have the chance to form MOVDDUP. =]

Second, we assume SSE2 support when doing any vector lowering, and given that we should just use UNPCKLPD and UNPCKHPD as they can operate on registers or memory. If vectors get spilled or come from memory at all this is going to allow the load to be folded into the operation. If we want to optimize for encoding size (the only difference, and only a 2 byte difference) it should be done *much* later, likely after RA. llvm-svn: 217332
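For reference, a hypothetical C snippet whose lowering benefits from the MOVDDUP support described above (MOVDDUP itself requires SSE3):

    #include <immintrin.h>

    // Splat the low double across both lanes. With SSE3 this shuffle can
    // lower to a single MOVDDUP instead of a copy plus a shuffle.
    __m128d splat_lo(__m128d v) {
      return _mm_shuffle_pd(v, v, 0);
    }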
-
Hans Wennborg authored
llvm-svn: 217331
-
Hans Wennborg authored
Instead of aligning and moving the CurPtr forward, and then comparing with End, simply calculate how much space is needed, and compare that to how much is available. Hopefully this avoids any doubts about comparing addresses possibly derived from past the end of the slab array, overflowing, etc. Also add a test where aligning CurPtr would move it past End. llvm-svn: 217330
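A hedged sketch of the overflow-safe check described above (simplified, hypothetical helper; not the exact BumpPtrAllocator code). It assumes Alignment is a power of two:

    #include <cstddef>
    #include <cstdint>

    // Return the aligned allocation pointer if the current slab has room
    // for Size bytes at Alignment, or nullptr if a new slab is needed.
    inline char *tryAllocate(char *CurPtr, char *End, size_t Size,
                             size_t Alignment) {
      uintptr_t Misalign = (uintptr_t)CurPtr & (Alignment - 1);
      size_t Adjustment = Misalign ? Alignment - Misalign : 0;
      // Compare the space needed against the space available, so CurPtr is
      // never moved past End and no out-of-bounds pointer is formed.
      if (Adjustment + Size <= (size_t)(End - CurPtr))
        return CurPtr + Adjustment;
      return nullptr;
    }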
-
Lang Hames authored
r217328. llvm-svn: 217329
-
Lang Hames authored
field of RelocationValueRef, rather than the 'Addend' field. This is consistent with RuntimeDyldELF's use of RelocationValueRef, and more consistent with the semantics of the data being stored (the offset from the start of a section or symbol). llvm-svn: 217328
-
Hans Wennborg authored
llvm-svn: 217327
-
Hans Wennborg authored
llvm-svn: 217326
-
Lang Hames authored
The previous implementation was writing to the high bytes of integers on BE targets (when run on LE hosts). http://llvm.org/PR20640 llvm-svn: 217325
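A hedged sketch of endian-correct patching (a hypothetical helper, not the actual RuntimeDyldMachO code): write the bytes explicitly in the target's byte order rather than storing a host-order integer:

    #include <cstdint>

    // Write Value into target memory at Dst using the target's endianness,
    // independent of the host's byte order.
    inline void writeTargetInt32(uint8_t *Dst, uint32_t Value,
                                 bool TargetIsLittleEndian) {
      for (int i = 0; i < 4; ++i) {
        unsigned Shift = TargetIsLittleEndian ? 8 * i : 8 * (3 - i);
        Dst[i] = uint8_t((Value >> Shift) & 0xff);
      }
    }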
-