- Apr 06, 2013
-
-
Tom Stellard authored
This is an R600 GPU with double support. Reviewed-by: Christian König <christian.koenig@amd.com> llvm-svn: 178929
-
Tom Stellard authored
Reviewed-by: Christian König <christian.koenig@amd.com> llvm-svn: 178928
-
Tom Stellard authored
SITargetLowering::analyzeImmediate() was converting 64-bit values to 32-bit values and then checking whether they were an inline immediate. Some of these conversions caused this check to succeed and produced S_MOV instructions with 64-bit immediates, which are illegal. v2: - Clean up logic. Reviewed-by: Christian König <christian.koenig@amd.com> llvm-svn: 178927
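A standalone sketch of the failure mode described in this commit, under the assumption (not taken from the commit) that SI inline immediates are roughly a small signed range the instruction encoding can hold: a 64-bit value only qualifies if truncating it to 32 bits round-trips, whereas testing only the truncated value lets non-encodable 64-bit immediates slip through.

```cpp
#include <cstdint>
#include <iostream>

// Assumption for illustration: treat the signed range [-16, 64] as the set
// of values an instruction can encode as an inline immediate.
static bool isInlineImmediate32(int32_t v) {
  return v >= -16 && v <= 64;
}

// Buggy pattern: truncate to 32 bits, then test. 0x100000040 truncates to
// 64, which looks inline even though the full 64-bit value is not encodable.
static bool looksInlineAfterTruncation(int64_t v) {
  return isInlineImmediate32(static_cast<int32_t>(v));
}

// Safer pattern: accept the value only if the truncation round-trips.
static bool isInlineImmediate64(int64_t v) {
  int32_t lo = static_cast<int32_t>(v);
  return static_cast<int64_t>(lo) == v && isInlineImmediate32(lo);
}

int main() {
  int64_t bad = 0x100000040; // not encodable, but its low 32 bits are 64
  std::cout << looksInlineAfterTruncation(bad) << ' ' // prints 1 (wrong)
            << isInlineImmediate64(bad) << '\n';      // prints 0 (right)
}
```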
-
Hal Finkel authored
On cores for which we know the misprediction penalty, and we have the isel instruction, we can profitably perform early if conversion. This enables us to replace some small branch sequences with selects and avoid the potential stalls from mispredicting the branches. Enabling this feature required implementing canInsertSelect and insertSelect in PPCInstrInfo; isel code in PPCISelLowering was refactored to use these functions as well. llvm-svn: 178926
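A minimal sketch of the profitability reasoning behind early if-conversion; the numbers and the helper below are illustrative assumptions, not PPCInstrInfo's actual canInsertSelect/insertSelect logic.

```cpp
#include <iostream>

// Illustrative cost model only: converting a small branch sequence into a
// select (isel) trades a possible misprediction stall for unconditionally
// computing both operands.
struct IfConversionCost {
  unsigned MispredictPenalty; // cycles lost on a mispredicted branch
  double MispredictRate;      // estimated fraction of mispredicted branches
  unsigned SelectOverhead;    // extra cycles to evaluate both sides + isel
};

static bool shouldConvertToSelect(const IfConversionCost &C) {
  double ExpectedBranchCost = C.MispredictRate * C.MispredictPenalty;
  return ExpectedBranchCost > C.SelectOverhead;
}

int main() {
  IfConversionCost C{13, 0.25, 2}; // hypothetical numbers
  std::cout << (shouldConvertToSelect(C) ? "use isel\n" : "keep branch\n");
}
```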
-
Hal Finkel authored
The manual states that there is a minimum of 13 cycles from when the mispredicted branch is issued to when the correct branch target is issued. llvm-svn: 178925
-
Greg Clayton authored
Now we can: 1 - see the return value for functions that return types that use "ext_vector_size"; 2 - dump values that use the vector attributes ("expr $ymm0"); 3 - the DWARF parser was modified to correctly parse GNU vector types from the DWARF by turning them into clang::Type::ExtVector types instead of just standard arrays. llvm-svn: 178924
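For readers unfamiliar with the vector types involved, here is a small example using the GNU vector_size attribute (closely related to the ext_vector_size spelling mentioned above); it is an illustration of the language feature, not one of the commit's test programs.

```cpp
#include <cstdio>

typedef float float4 __attribute__((vector_size(16))); // 4 floats, 16 bytes

static float4 make_vec(void) {
  float4 v = {1.0f, 2.0f, 3.0f, 4.0f};
  return v; // on x86-64 this value typically comes back in a vector register
}

int main() {
  float4 v = make_vec();
  std::printf("%f %f %f %f\n", v[0], v[1], v[2], v[3]);
}
```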
-
Richard Trieu authored
more information to the notes. This information is already present on other diagnostic messages that involve overloads. llvm-svn: 178923
-
Michael Gottesman authored
This is the counterpart to commit r160637, except it performs the action in the bottomup portion of the data flow analysis. llvm-svn: 178922
-
Michael Gottesman authored
The normal dataflow sequence in the ARC optimizer consists of the following states: Retain -> CanRelease -> Use -> Release. Before this patch, the optimizer stored the uses that determine the lifetime of the retainable object pointer when it hit a retain (bottom-up) or a release (top-down). This is correct for the imprecise-lifetime scenario, since what we are trying to do is remove retains/releases while making sure that no ``CanRelease'' (which is usually a call) deallocates the given pointer before we get to the ``Use'' (since that would cause a segfault). In the precise-lifetime scenario, though, this is not correct. In such a situation, we *DO* care about the previous sequence, but additionally we wish to track the uses resulting from the following incomplete sequences: Retain -> CanRelease -> Release (TopDown); Retain <- Use <- Release (BottomUp). *NOTE* This patch looks large, but most of it consists of updating test cases. This fix also exposed an additional bug; I removed the test case that expressed said bug and will recommit it with the fix in a little bit. llvm-svn: 178921
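A toy model of the state sequence described above, assuming only the transitions spelled out in the message; the real optimizer tracks far more per-pointer state.

```cpp
#include <iostream>

// Allowed top-down orders, per the message above:
//   Retain -> CanRelease -> Use -> Release   (the full sequence)
//   Retain -> CanRelease -> Release          (incomplete, top-down)
//   Retain -> Use -> Release                 (incomplete, bottom-up reversed)
enum class Seq { None, Retain, CanRelease, Use, Release };

static bool isValidNext(Seq Cur, Seq Next) {
  switch (Cur) {
  case Seq::None:       return Next == Seq::Retain;
  case Seq::Retain:     return Next == Seq::CanRelease || Next == Seq::Use;
  case Seq::CanRelease: return Next == Seq::Use || Next == Seq::Release;
  case Seq::Use:        return Next == Seq::Release;
  case Seq::Release:    return false;
  }
  return false;
}

int main() {
  std::cout << isValidNext(Seq::CanRelease, Seq::Release) << '\n'; // 1
  std::cout << isValidNext(Seq::Release, Seq::Use) << '\n';        // 0
}
```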
-
Jason Molenda authored
platform.plugin.darwin-kernel.kext-directories platform.plugin.darwin-kernel.search-locally-for-kexts and fix a few FileSpec handling issues for the kext-directories setting. llvm-svn: 178920
-
Hal Finkel authored
This fixes PEI as previously described, but correctly handles the case where the instruction defining the virtual register to be scavenged is the first in the block. Arnold provided me with a bugpoint-reduced test case, but even that seems too large to use as a regression test. If I'm successful in cleaning it up, then I'll commit that as well. Original commit message: This change fixes a bug that I introduced in r178058. After a register is scavenged using one of the available spill slots, the instruction defining the virtual register needs to be moved to after the spill code. The scavenger has already processed the defining instruction so that registers killed by that instruction are available for definition in that same instruction. Unfortunately, after this, the scavenger needs to iterate through the spill code and then visit, again, the instruction that defines the now-scavenged register. In order to avoid confusion, the register scavenger needs the ability to 'back up' through the spill code so that it can again process the instructions in the appropriate order. Prior to this fix, once the scavenger reached the just-moved instruction, it would assert if it killed any registers because, having already processed the instruction, it believed they were undefined. Unfortunately, I don't yet have a small test case. Thanks to Pranav Bhandarkar for diagnosing the problem and testing this fix. llvm-svn: 178919
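A toy model, not the real RegScavenger API, of why "backing up" is needed: once spill code is inserted in front of an already-processed defining instruction, the scavenger's position must be rewound so everything is seen again in program order.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Toy model only: instruction names and the two-member struct are made up
// to show the ordering problem, not LLVM's data structures.
struct ToyScavenger {
  std::vector<std::string> Block; // instructions, in program order
  std::size_t Pos;                // index of the next instruction to process

  void processOne() { std::cout << "process: " << Block[Pos] << '\n'; ++Pos; }
  void backUpTo(std::size_t At) { Pos = At; } // re-process from index At
};

int main() {
  ToyScavenger S{{"def vreg", "use vreg"}, 0};
  S.processOne();                              // "def vreg" has been processed
  S.Block.insert(S.Block.begin(), "spill r0"); // spill code now precedes it
  S.backUpTo(0);                               // rewind to stay in order
  while (S.Pos < S.Block.size())
    S.processOne();
}
```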
-
Michael J. Spencer authored
llvm-svn: 178918
-
- Apr 05, 2013
-
-
Bill Wendling authored
During LTO, the target options on functions within the same Module may change. This would necessitate resetting some of the back-end. Do this for X86, because it's a Friday afternoon. llvm-svn: 178917
-
Hal Finkel authored
Reverting because this breaks one of the LTO builders. Original commit message: This change fixes a bug that I introduced in r178058. After a register is scavenged using one of the available spill slots, the instruction defining the virtual register needs to be moved to after the spill code. The scavenger has already processed the defining instruction so that registers killed by that instruction are available for definition in that same instruction. Unfortunately, after this, the scavenger needs to iterate through the spill code and then visit, again, the instruction that defines the now-scavenged register. In order to avoid confusion, the register scavenger needs the ability to 'back up' through the spill code so that it can again process the instructions in the appropriate order. Prior to this fix, once the scavenger reached the just-moved instruction, it would assert if it killed any registers because, having already processed the instruction, it believed they were undefined. Unfortunately, I don't yet have a small test case. Thanks to Pranav Bhandarkar for diagnosing the problem and testing this fix. llvm-svn: 178916
-
Jim Grosbach authored
llvm-svn: 178915
-
Michael J. Spencer authored
llvm-svn: 178914
-
Michael J. Spencer authored
llvm-svn: 178913
-
Shuxin Yang authored
This optimization is unstable at this moment; it 1) blocks us on a very important application; 2) PR15200; 3) test6 and test7 in test/Transforms/ScalarRepl/dynamic-vector-gep.ll (the CHECK commands compare the output against a wrong result). I personally believe this optimization should not have any impact on autovectorized code, as the auto-vectorizer is supposed to put gather/scatter in the "right" way. Although in theory downstream optimizers might reveal some gather/scatter optimization opportunities, the chance is quite slim. For hand-crafted vectorized code, in terms of redundancy elimination, load-CSE, copy-propagation and DSE can collectively achieve the same result, but in a much simpler way. On the other hand, these optimizers are able to improve the code in an incremental way; in contrast, SROA is sort of an all-or-nothing approach. However, SROA might slightly win in stack size, as it tries to figure out a stretch of memory that tightly covers the area accessed by the dynamic index. rdar://13174884 PR15200 llvm-svn: 178912
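For context, this is the kind of hand-written code that yields a GEP with a dynamic index into a stack slot, the access shape the reverted change tried to scalarize; whether SROA handles a given access depends on its analysis, so treat this purely as an illustration.

```cpp
#include <cstdio>

// Illustration only: a GNU vector indexed with a runtime value. Compilers
// commonly lower this through a stack slot plus a GEP with a dynamic index.
typedef float v4f __attribute__((vector_size(16))); // 4 x float

static float pick(v4f v, int i) {
  return v[i]; // variable index into the vector
}

int main() {
  v4f v = {1.0f, 2.0f, 3.0f, 4.0f};
  std::printf("%f\n", pick(v, 2));
}
```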
-
Argyrios Kyrtzidis authored
rdar://13535645 llvm-svn: 178911
-
Akira Hatanaka authored
llvm-mips-linux green. llvm-mips-linux runs on a big endian machine. This test passes if I change 'e' to 'E' in the target data layout string. llvm-svn: 178910
-
Douglas Gregor authored
rdar://problem/13551789 It's possible for the lock file to disappear and the owning process to return before we're able to see the generated file. Spin for a little while to see if it shows up before failing. llvm-svn: 178909
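A minimal sketch of the spin-and-wait behavior described above; the polling interval, timeout, file name, and function name are all illustrative assumptions rather than Clang's actual LockFileManager code.

```cpp
#include <chrono>
#include <filesystem>
#include <iostream>
#include <thread>

// Poll briefly for the file the lock owner was supposed to generate.
static bool waitForGeneratedFile(const std::filesystem::path &File) {
  using namespace std::chrono_literals;
  const auto Deadline = std::chrono::steady_clock::now() + 5s;
  while (std::chrono::steady_clock::now() < Deadline) {
    if (std::filesystem::exists(File))
      return true;                      // the lock owner produced the file
    std::this_thread::sleep_for(50ms);  // spin for a little while
  }
  return false;                         // give up and regenerate it ourselves
}

int main() {
  // Hypothetical output file; this polls for up to 5 seconds if it's absent.
  std::cout << waitForGeneratedFile("generated-module.tmp") << '\n';
}
```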
-
Douglas Gregor authored
rdar://problem/13551789 If the directory that will contain the unique file doesn't exist when we try to create the file, but another process creates it before we get a chance to, we would bail out rather than try to create the unique file. llvm-svn: 178908
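A minimal POSIX sketch of handling that race, under the assumption that a concurrent mkdir() should simply be tolerated; the function name and paths are hypothetical, not the LLVM implementation.

```cpp
#include <cerrno>
#include <fcntl.h>
#include <string>
#include <sys/stat.h>

// If creating the unique file fails because the parent directory is missing,
// create the directory (another process winning the race is fine) and retry.
static int createUniqueFile(const std::string &Dir, const std::string &Name) {
  std::string Path = Dir + "/" + Name;
  for (int Attempt = 0; Attempt < 2; ++Attempt) {
    int FD = ::open(Path.c_str(), O_CREAT | O_EXCL | O_RDWR, 0600);
    if (FD >= 0)
      return FD;
    if (errno != ENOENT)
      return -1;                       // some other failure; give up
    // Parent missing: create it, but another process may beat us to it.
    if (::mkdir(Dir.c_str(), 0700) != 0 && errno != EEXIST)
      return -1;
  }
  return -1;
}

int main() {
  int FD = createUniqueFile("/tmp/example-cache", "unique-12345"); // hypothetical
  return FD >= 0 ? 0 : 1;
}
```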
-
Ariel J. Bernal authored
cast UseNullptr previously matched the implicit cast to const pointer as well as the explicit cast within that has an implicit cast to nullptr as a descendant. - Refactored UseNullptr to avoid special-casing certain kinds of cast sequences. - Added test cases. llvm-svn: 178907
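An illustration of the cast nesting described above (not one of the commit's test cases): the explicit (int *)0 contains an implicit null-to-pointer cast and is itself implicitly converted to a const pointer at the call site, yet the whole argument should still become nullptr.

```cpp
static void takesConstPtr(const int *) {}

int main() {
  takesConstPtr((int *)0); // the UseNullptr transform should rewrite 0 to nullptr
  return 0;
}
```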
-
Tanya Lattner authored
llvm-svn: 178906
-
Michael J. Spencer authored
llvm-svn: 178905
-
Rafael Espindola authored
llvm-svn: 178904
-
Fariborz Jahanian authored
// rdar://12379114 llvm-svn: 178903
-
Edwin Vane authored
llvm-svn: 178902
-
Edwin Vane authored
With cpp11-migrate core functionality moved to a separate library (for enabling unit tests), this library contained code that referenced symbols still in the main binary. On some platforms, the shared-library build broke as a result. This revision fixes the dependency problem and is safe for the eventual lib-ification of the transforms as well. llvm-svn: 178901
-
Edwin Vane authored
With the lib-ification of cpp11-migrate, real unit tests can be written. Replacing dummy tests with some simple tests for the Transform public interface. llvm-svn: 178900
-
Anton Yartsev authored
Now treat AF_None family as impossible in isTrackedFamily() llvm-svn: 178899
-
Manman Ren authored
llvm-svn: 178898
-
Enrico Granata authored
rdar://problem/13563628 Introducing a negative cache for ObjCLanguageRuntime::LookupInCompleteClassCache(). This helps speed up the (common) case of us looking for classes that are hidden deep within Cocoa internals and repeatedly failing to find type information for them. In order for this to work, we need to clear this cache whenever debug information is added. A new "symbols loaded" event is added that is triggered by add-dsym (previously, "modules loaded" would be triggered both for adding modules and for adding symbols). Interested parties can register for this event. Internally, we make sure to clear the negative cache whenever symbols are added. Lastly, ClassDescriptor::IsTagged() has been refactored into GetTaggedPointerInfo(), which also (optionally) returns info and value bits. In this way, data formatters can share tagged-pointer code instead of duplicating the required arithmetic. llvm-svn: 178897
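A sketch of the negative-cache idea, assuming a simple name-to-type map; this is not LLDB's ObjCLanguageRuntime code, but it shows why the negative side must be cleared on a symbols-loaded event while positive entries can stay.

```cpp
#include <optional>
#include <string>
#include <unordered_map>
#include <unordered_set>

// A lookup cache with a negative side that must be cleared whenever new
// debug info (e.g. from add-dsym) could turn a previous miss into a hit.
class ClassLookupCache {
  std::unordered_map<std::string, std::string> Hits; // name -> type info
  std::unordered_set<std::string> Misses;            // known lookup failures
public:
  std::optional<std::string> lookup(const std::string &Name) const {
    auto It = Hits.find(Name);
    if (It != Hits.end())
      return It->second;
    return std::nullopt;
  }
  bool isKnownMiss(const std::string &Name) const { return Misses.count(Name); }
  void recordHit(const std::string &Name, std::string Info) {
    Hits.emplace(Name, std::move(Info));
  }
  void recordMiss(const std::string &Name) { Misses.insert(Name); }
  // Called from a "symbols loaded" event: only the negative entries go stale.
  void onSymbolsLoaded() { Misses.clear(); }
};
```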
-
Rafael Espindola authored
These should really be templated like ELF, but this is a start. llvm-svn: 178896
-
Michael Gottesman authored
llvm-svn: 178895
-
Rafael Espindola authored
llvm-svn: 178894
-
Michael Gottesman authored
llvm-svn: 178893
-
Howard Hinnant authored
llvm-svn: 178892
-
Jordan Rose authored
As mentioned in the previous commit message, the use-after-free and double-free warnings for 'delete' are worth enabling even while the leak warnings still have false positives. llvm-svn: 178891
-
Jordan Rose authored
This splits the leak-checking part of alpha.cplusplus.NewDelete into a separate user-level checker, alpha.cplusplus.NewDeleteLeaks. All the difficult false positives we've seen with the new/delete checker have been spurious leak warnings; the use-after-free warnings and mismatched deallocator warnings, while rare, have always been valid. <rdar://problem/6194569> llvm-svn: 178890
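Illustrative snippets of the bug classes the two checkers now separate; these are not taken from the analyzer's test suite.

```cpp
void useAfterFree() {
  int *p = new int(1);
  delete p;
  *p = 2;          // use-after-free: flagged by alpha.cplusplus.NewDelete
}

void doubleFree() {
  int *p = new int(1);
  delete p;
  delete p;        // double free: also NewDelete
}

void leak() {
  int *p = new int(1);
  (void)p;         // never deleted: alpha.cplusplus.NewDeleteLeaks territory
}

int main() {} // the buggy functions above are for the analyzer, not for running
```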
-