- Mar 14, 2013
-
-
Shankar Easwaran authored
llvm-svn: 177079
-
Vincent Lejeune authored
llvm-svn: 177078
-
Alexey Samsonov authored
[Sanitizer] Fix compiler warnings and style issues in sanitizer_common tests. Use -Werror=sign-compare when building them. llvm-svn: 177077
-
Matt Kopec authored
llvm-svn: 177076
-
Daniel Jasper authored
The stronger binding of a string ending in :/= does not really make sense if it is the only character. Before: llvm::outs() << aaaaaaaaaaaaaaaaaaaaaaaa << "=" << bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb; After: llvm::outs() << aaaaaaaaaaaaaaaaaaaaaaaa << "=" << bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb; llvm-svn: 177075
-
Alexey Samsonov authored
[Sanitizer] Add generic ThreadRegistry class for sanitizer runtimes. This class holds basic thread bookkeeping logic and allows specific sanitizer runtimes to create thread contexts and mark threads as created/running/joined etc. The class is based on the way we currently store thread contexts in TSan. llvm-svn: 177074
-
Daniel Jasper authored
llvm-svn: 177073
-
Alexey Samsonov authored
llvm-svn: 177072
-
Evgeniy Stepanov authored
llvm-svn: 177071
-
Alexey Samsonov authored
llvm-svn: 177070
-
Kostya Serebryany authored
[asan] remove one redundant malloc stress test, unify the usage of ASAN_LOW_MEMORY macro in tests, slightly reduce test memory usage (all to make 32-bit runs consume less RAM) llvm-svn: 177069
-
Alexander Potapenko authored
Patch by Sergey Matveev (earthdok@google.com) llvm-svn: 177068
-
Evgeniy Stepanov authored
llvm-svn: 177067
-
Dmitri Gribenko authored
llvm-svn: 177066
-
Evgeniy Stepanov authored
llvm-svn: 177065
-
Alexey Samsonov authored
llvm-svn: 177064
-
Alexey Samsonov authored
llvm-svn: 177063
-
Alexey Samsonov authored
llvm-svn: 177062
-
Alexey Samsonov authored
[ASan] Make -fsanitize=address imply -fsanitize=init-order (if the latter is not explicitly disabled). llvm-svn: 177061
-
Alexander Potapenko authored
Also, extended the test to check that ThreadLister::Reset() works as intended. Patch by Sergey Matveev (earthdok@google.com) llvm-svn: 177060
-
Evgeniy Stepanov authored
llvm-svn: 177059
-
Alexey Samsonov authored
[ASan] Turn off checking initialization order in the ASan runtime by default. Instead, it should be turned on by default in the compiler. llvm-svn: 177058
-
Evgeniy Stepanov authored
Does not change default behavior. llvm-svn: 177057
-
Evgeniy Stepanov authored
llvm-svn: 177056
-
Chandler Carruth authored
The fundamental problem is that SROA didn't allow for overly wide loads where the bits past the end of the alloca were masked away and the load was sufficiently aligned to ensure there is no risk of page fault, or other trapping behavior. With such widened loads, SROA would delete the load entirely rather than clamping it to the size of the alloca in order to allow mem2reg to fire.

This was exposed by a test case that neatly arranged for GVN to run first, widening certain loads, followed by an inline step, and then SROA which miscompiles the code. However, I see no reason why this hasn't been plaguing us in other contexts. It seems deeply broken.

Diagnosing all of the above took all of 10 minutes of debugging. The really annoying aspect is that fixing this completely breaks the pass. ;] There was an implicit reliance on the fact that no loads or stores extended past the alloca once we decided to rewrite them in the final stage of SROA. This was used to encode information about whether the loads and stores had been split across multiple partitions of the original alloca. That required threading explicit tracking of whether a *use* of a partition is split across multiple partitions.

Once that was done, another problem arose: we allowed splitting of integer loads and stores iff they were loads and stores to the entire alloca. This is a really arbitrary limitation, and splitting at least some integer loads and stores is crucial to maximize promotion opportunities. My first attempt was to start removing the restriction entirely, but currently that does Very Bad Things by causing *many* common alloca patterns to be fully decomposed into i8 operations and lots of or-ing together to produce larger integers on demand. The code bloat is terrifying. That is still the right end-goal, but substantial work must be done to either merge partitions or ensure that small i8 values are eagerly merged in some other pass.

Sadly, figuring all this out took essentially all the time and effort here. So the end result is that we allow splitting only when the load or store at least covers the alloca. That ensures widened loads and stores don't hurt SROA, and that we don't rampantly decompose operations more than we have previously. All of this was already fairly well tested, and so I've just updated the tests to cover the wide load behavior. I can add a test that crafts the pass ordering magic which caused the original PR, but that seems really brittle and to provide little benefit. The fundamental problem is that widened loads should Just Work. llvm-svn: 177055
-
Alexey Samsonov authored
llvm-svn: 177054
-
Chandler Carruth authored
isa and a cast inside the assert. The efficiency concern isn't really important here. The code should likely be cleaned up a bit more, especially getting a message into the assert. Please review, Rafael. llvm-svn: 177053
-
Evgeniy Stepanov authored
llvm-svn: 177052
-
Alexey Samsonov authored
llvm-svn: 177051
-
Daniel Jasper authored
Before:
  for (char **a = b; * a; ++a) {}
After:
  for (char **a = b; *a; ++a) {}
llvm-svn: 177037
-
Alexey Samsonov authored
llvm-svn: 177036
-
Joey Gouly authored
llvm-svn: 177035
-
Daniel Jasper authored
Before:
  bool aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa __attribute__(( unused));
After:
  bool aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa __attribute__((unused));
llvm-svn: 177034
-
Joerg Sonnenberger authored
linkers to interact with GNU ld. llvm-svn: 177016
-
Craig Topper authored
llvm-svn: 177015
-
Craig Topper authored
Fix a bug in the calculation of the VEX.B bit for FMA4 rr with the VEX.W bit set. The VEX.B was being calculated from the wrong operand. Fixes at least some portion of PR14185. llvm-svn: 177014
-
Alexey Samsonov authored
llvm-svn: 177013
-
Alexey Samsonov authored
llvm-svn: 177012
-
Craig Topper authored
Teach X86 MC instruction lowering that VMOVAPSrr and other VEX-encoded register to register moves should be switched from using the MRMSrcReg form to the MRMDestReg form if the source register is a 64-bit extended register and the destination register is not. This allows the instruction to be encoded using the 2-byte VEX form instead of the 3-byte VEX form. The GNU assembler has similar behavior. llvm-svn: 177011
-
Michael Liao authored
- Fix a typo in type checking. llvm-svn: 177010
-