- Jun 08, 2015
-
Rui Ueyama authored
This change seems to make the linker about 10% faster. Reading a symbol name is not cheap because it needs a strlen() on the string table. We were wasting time reading non-external symbol names that would never be used by the linker. llvm-svn: 239332
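A minimal sketch of the idea, with hypothetical types and names (the actual lld COFF reader differs): defer the strlen()-backed name lookup so only external symbols, which other objects can reference, pay for it.
  #include <cstring>
  #include <string>
  #include <vector>

  // Hypothetical, simplified symbol record: Name points into the string table.
  struct RawSymbol {
    const char *NameInStringTable;
    bool External;
  };

  // Only pay the strlen() for names the linker can actually be asked to resolve.
  std::vector<std::string> readExternalNames(const std::vector<RawSymbol> &Syms) {
    std::vector<std::string> Names;
    for (const RawSymbol &S : Syms) {
      if (!S.External)
        continue; // non-external names are never looked up, so skip the work
      Names.emplace_back(S.NameInStringTable,
                         std::strlen(S.NameInStringTable)); // the expensive part
    }
    return Names;
  }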
-
Jonathan Peyton authored
As an ongoing effort to sanitize the openmp code, these changes move variables under already existing macro guards. Patch by Jack Howarth llvm-svn: 239331
-
Jonathan Peyton authored
As an ongoing effort to sanitize the openmp code, these changes remove unused variables by adding proper macros around both variables and functions. Patch by Jack Howarth llvm-svn: 239330
-
Akira Hatanaka authored
This is a follow-up to r239325. llvm-svn: 239329
-
Benjamin Kramer authored
llvm-svn: 239327
-
Jonathan Peyton authored
Some variables are convenient to keep around even if they aren't really used in a release build. This is often seen in DEBUG guarded code where the variable is only used in a DEBUG build. Patch by Jack Howarth llvm-svn: 239326
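The pattern being cleaned up looks roughly like this hypothetical sketch (the real runtime's KMP_DEBUG/KA_TRACE macros have different signatures): since the trace macro compiles away in release builds, a variable that exists only to feed it moves under the same guard to avoid unused-variable warnings.
  #include <cstdio>

  #ifdef KMP_DEBUG
  #define KA_TRACE(...) std::fprintf(stderr, __VA_ARGS__)
  #else
  #define KA_TRACE(...) /* expands to nothing in release builds */
  #endif

  void enter_barrier(int thread_id) {
  #ifdef KMP_DEBUG
    int gtid = thread_id; // only read by the trace below, so it lives under the guard
    KA_TRACE("thread %d entering barrier\n", gtid);
  #endif
    // ... real barrier work ...
  }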
-
Akira Hatanaka authored
on a per-function basis. Previously some of the passes were conditionally added to ARM's pass pipeline based on the target machine's subtarget. This patch makes changes to add those passes unconditionally and execute them conditionally based on the predicate functor passed to the pass constructors. This enables running different sets of passes for different functions in the module. rdar://problem/20542263 Differential Revision: http://reviews.llvm.org/D8717 llvm-svn: 239325
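A simplified sketch of the mechanism (hypothetical class; the real patch threads a predicate into the existing ARM pass constructors): the pass is always present in the pipeline and decides per function whether to do anything.
  #include <functional>
  #include <string>

  // Stand-in for llvm::Function, just enough for the sketch.
  struct Function { std::string Name; bool Transformed = false; };

  class ConditionalPass {
    std::function<bool(const Function &)> Predicate;
  public:
    explicit ConditionalPass(std::function<bool(const Function &)> P =
                                 [](const Function &) { return true; })
        : Predicate(std::move(P)) {}

    bool runOnFunction(Function &F) {
      if (!Predicate(F))
        return false;        // skip this function; the pass stays in the pipeline
      F.Transformed = true;  // the real transformation would happen here
      return true;
    }
  };

  // Usage: only run where the (per-function) subtarget asks for it, e.g.
  //   ConditionalPass P([](const Function &F) { return F.Name != "cold_path"; });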
-
Pete Cooper authored
The Fragment and Section, and a bool for HasFragment, were all used to create a PointerUnion. Just use a PointerUnion directly instead. llvm-svn: 239324
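In effect the change collapses three members into one tagged pointer, roughly like this sketch (simplified stand-in types, not the real MC classes):
  #include "llvm/ADT/PointerUnion.h"

  // Stand-ins for the real MC classes; the member just gives them enough
  // alignment to leave a low tag bit free for PointerUnion.
  struct Fragment { void *Data; };
  struct Section  { void *Data; };

  // Before (sketch): Fragment *F; Section *S; bool HasFragment; kept in sync by hand.
  // After: one PointerUnion stores the pointer and the discriminator together.
  struct SymbolLocation {
    llvm::PointerUnion<Fragment *, Section *> FragmentOrSection;

    bool hasFragment() const { return FragmentOrSection.is<Fragment *>(); }
    Fragment *getFragment() const {
      return FragmentOrSection.dyn_cast<Fragment *>(); // null when a Section* is stored
    }
  };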
-
Jonathan Peyton authored
As an ongoing effort to sanitize the openmp code, these changes remove unused functions. The unused functions are: __kmp_fini_allocator_thread(), __kmp_env_isDefined(), __kmp_strip_quotes(), __kmp_convert_to_seconds(), and __kmp_convert_to_nanoseconds(). Patch by Jack Howarth llvm-svn: 239323
-
Evgeniy Stepanov authored
llvm-svn: 239322
-
Evgeniy Stepanov authored
/code/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_linux.cc:971:8: error: address of function 'dl_iterate_phdr' will always evaluate to 'true' [-Werror,-Wpointer-bool-conversion]
  if (!dl_iterate_phdr)
      ~^~~~~~~~~~~~~~~
/code/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_linux.cc:971:8: note: prefix with the address-of operator to silence this warning
  if (!dl_iterate_phdr)
       ^
       &
llvm-svn: 239321
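One way to silence the warning, following the compiler's own note (whether r239322 did exactly this is not shown here), is to take the address explicitly:
  if (!&dl_iterate_phdr)  // explicit address-of; only meaningful when dl_iterate_phdr is a weak declaration
    return false;         // hypothetical fallback, not the actual sanitizer_common code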
-
Evgeniy Stepanov authored
Some of the asan-ubsan build changes were not replicated in the android branch in CMakeLists. llvm-svn: 239320
-
Pete Cooper authored
llvm-svn: 239318
-
Pete Cooper authored
All of ELF, COFF and MachO now manipulate the flags in helpers so we don't need anyone to read the flags directly, but instead via those helpers. Reviewed by Rafael Espíndola. llvm-svn: 239317
-
Pete Cooper authored
Also delete the now unused MCMachOSymbolFlags.h header as the only enum in there was moved to MCSymbolMachO. Similarly to ELF and COFF, manipulating the flags is now done via helpers instead of spread throughout the codebase. Reviewed by Rafael Espíndola. llvm-svn: 239316
-
Pete Cooper authored
Reviewed by Rafael Espíndola. llvm-svn: 239315
-
Pete Cooper authored
All flags setting/getting is now done in the class with helper methods instead of users having to get the bits in the correct order. Reviewed by Rafael Espíndola. llvm-svn: 239314
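The shape of such helpers, as an illustrative sketch (hypothetical field layout, not the actual MCSymbol bit assignments): the class owns the shift-and-mask arithmetic so callers never do.
  #include <cstdint>

  class SymbolFlags {
    uint16_t Flags = 0;

    static constexpr unsigned TypeShift = 8;   // hypothetical layout
    static constexpr uint16_t TypeMask  = 0xFF;

  public:
    uint8_t getType() const { return (Flags >> TypeShift) & TypeMask; }
    void setType(uint8_t Ty) {
      Flags = (Flags & ~(TypeMask << TypeShift)) | (uint16_t(Ty) << TypeShift);
    }
  };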
-
Pete Cooper authored
The flags field in MCSymbol only needs to be 16-bits on ELF and MachO. This moves the 16-bit Type out of there so that it can be reduced in size in a future commit. Reviewed by Rafael Espíndola. llvm-svn: 239313
-
Pete Cooper authored
Reviewed by Rafael Espíndola. llvm-svn: 239312
-
Pete Cooper authored
Reviewed by Rafael Espíndola. llvm-svn: 239311
-
Oleksiy Vyalov authored
llvm-svn: 239310
-
Matthias Braun authored
While we have some code to transform a specification like {ax} into {eax}/{rax} if the operand type isn't 16-bit, we should reject cases where there is no sane way to do this, like the i128 type in the example. Related to rdar://21042280 Differential Revision: http://reviews.llvm.org/D10260 llvm-svn: 239309
-
Oliver Stannard authored
The global-merge pass was crashing because it assumes that all ConstantExprs (reached via the global variables that they use) have at least one user. I haven't worked out a way to test this, as an unused ConstantExpr cannot be represented by serialised IR, and global-merge can only be run in llc, which does not run any passes which can make a ConstantExpr dead. This (reduced to the point of silliness) C code triggers this bug when compiled for arm-none-eabi at -O1:
  static a = 7;
  static volatile b[10] = {&a};
  c;
  main() {
    c = 0;
    for (; c < 10;)
      printf(b[c]);
  }
Differential Revision: http://reviews.llvm.org/D10314
llvm-svn: 239308
-
Colin LeMahieu authored
[Hexagon] Adding functionality for searching for compound instruction pairs. Compound instructions reduce slot resource requirements, freeing those packet slots up for more instructions. llvm-svn: 239307
-
Tobias Grosser authored
This reverts commit 239219 which requires some LLVM changes I forgot to commit. Reported-by: Marshall Clow llvm-svn: 239306
-
Simon Pilgrim authored
llvm-svn: 239305
-
Sanjay Patel authored
llvm-svn: 239303
-
Javed Absar authored
This patch adds support for system register MMFR4_EL1 (memory model feature register) in the assembler. This register provides information about the implemented memory model and memory management support. llvm-svn: 239302
-
Petar Jovanovic authored
This patch adds R_MIPS_PC32 relocation for Mips64. Patch by Vladimir Radosavljevic. Differential Revision: http://reviews.llvm.org/D10235 llvm-svn: 239301
-
Igor Breger authored
Implemented DAG lowering for all these forms. Added tests for DAG lowering and encoding. Differential Revision: http://reviews.llvm.org/D10310 llvm-svn: 239300
-
Artur Pilipenko authored
For GEP instructions isDereferenceablePointer checks that all indices are constant and within bounds. Replace this index calculation logic with a call to accumulateConstantOffset. Separated from http://reviews.llvm.org/D9791 Reviewed By: sanjoy Differential Revision: http://reviews.llvm.org/D9874 llvm-svn: 239299
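The replacement boils down to something like this sketch (hypothetical wrapper function; the real change lives inside isDereferenceablePointer):
  #include "llvm/ADT/APInt.h"
  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Operator.h"

  using namespace llvm;

  // Let accumulateConstantOffset fold all GEP indices into a byte offset;
  // it returns false as soon as it meets a non-constant index.
  static bool getConstantByteOffset(const Value *V, const DataLayout &DL,
                                    APInt &Offset) {
    auto *GEP = dyn_cast<GEPOperator>(V);
    if (!GEP)
      return false;
    Offset = APInt(DL.getPointerTypeSizeInBits(GEP->getType()), 0);
    return GEP->accumulateConstantOffset(DL, Offset);
  }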
-
Leny Kholodov authored
llvm-svn: 239298
-
Bruce Mitchener authored
Summary: Previously if an MI command had **X** mandatory and **Y** optional arguments you could provide **X** or more optional arguments without providing any of the mandatory arguments, and the argument validation code wouldn't complain. For example this would pass argument validation even though the mandatory **address** and **count** arguments are missing: -data-read-memory-bytes --thread 1 --frame 0 Part of the problem was that an empty string was considered a valid value for a mandatory argument, which didn't make much sense. Patch by Vadim Macagon. Thanks! Test Plan: ./dotest.py -A x86_64 -C clang --executable $BUILDDIR/bin/lldb tools/lldb-mi/ No unexpected failures on my Ubuntu 14.10 64bit Virtualbox VM. Reviewers: domipheus, ki.stfu, abidh Reviewed By: ki.stfu, abidh Subscribers: brucem, lldb-commits Differential Revision: http://reviews.llvm.org/D10299 llvm-svn: 239297
-
Leny Kholodov authored
llvm-svn: 239296
-
Silviu Baranga authored
Summary: We need to add a runtime memcheck for each pair of accesses (x,y) where at least one of x and y is a write. Assuming we have w writes and r reads, currently this number is estimated as w * (w + r - 1). This estimation counts (write,write) pairs twice and therefore overestimates the number of checks required. This change adds a getNumberOfChecks method to RuntimePointerCheck, which counts the number of runtime checks needed (similar in implementation to needsAnyChecking) and uses it to produce the correct number of runtime checks. Test Plan: llvm test suite spec2k spec2k6 Performance results: no changes observed (not surprising since the formula for 1 writer is basically the same, which covers most cases - at least with the current check limit). Reviewers: anemet Reviewed By: anemet Subscribers: mzolotukhin, llvm-commits Differential Revision: http://reviews.llvm.org/D10217 llvm-svn: 239295
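In the spirit of needsAnyChecking, the count only includes pairs where at least one side writes; a simplified sketch (not the actual RuntimePointerCheck code):
  #include <vector>

  struct PointerInfo { bool IsWrite; };

  // A (read, read) pair never needs a runtime memcheck, so don't count it.
  unsigned numberOfChecks(const std::vector<PointerInfo> &Ptrs) {
    unsigned N = 0;
    for (unsigned I = 0, E = Ptrs.size(); I < E; ++I)
      for (unsigned J = I + 1; J < E; ++J)
        if (Ptrs[I].IsWrite || Ptrs[J].IsWrite)
          ++N;
    return N;
  }
This yields w*r + w*(w-1)/2 checks; for example w = 2, r = 3 gives 7, whereas the old estimate w*(w+r-1) gives 8 because each (write,write) pair is counted from both sides.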
-
Leny Kholodov authored
[CodeGen] Reuse stack space from unused function results (with more accurate unused result detection)
This patch fixes issues with unused result detection which were found in patch http://reviews.llvm.org/D9743. Differential Revision: http://reviews.llvm.org/D10042 llvm-svn: 239294
-
Simon Pilgrim authored
llvm-svn: 239293
-
Rui Ueyama authored
MSVC profiler reported that this stable_sort takes 7% of the time when self-linking. As a result, createSection was taking 10% of the time. Now createSection takes 3%. This small change makes the linker slightly but perceptibly faster. llvm-svn: 239292
-
Hao Liu authored
Interleaved memory accesses are grouped and vectorized into vector load/store and shufflevector. E.g.
  for (i = 0; i < N; i+=2) {
    a = A[i];     // load of even element
    b = A[i+1];   // load of odd element
    ...           // operations on a, b, c, d
    A[i] = c;     // store of even element
    A[i+1] = d;   // store of odd element
  }
The loads of even and odd elements are identified as an interleave load group, which will be transformed into vectorized IRs like:
  %wide.vec = load <8 x i32>, <8 x i32>* %ptr
  %vec.even = shufflevector <8 x i32> %wide.vec, <8 x i32> undef, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
  %vec.odd = shufflevector <8 x i32> %wide.vec, <8 x i32> undef, <4 x i32> <i32 1, i32 3, i32 5, i32 7>
The stores of even and odd elements are identified as an interleave store group, which will be transformed into vectorized IRs like:
  %interleaved.vec = shufflevector <4 x i32> %vec.even, %vec.odd, <8 x i32> <i32 0, i32 4, i32 1, i32 5, i32 2, i32 6, i32 3, i32 7>
  store <8 x i32> %interleaved.vec, <8 x i32>* %ptr
This optimization is currently disabled by default. To try it, add '-enable-interleaved-mem-accesses=true'. llvm-svn: 239291
-
Rui Ueyama authored
This is NFC but makes the log message a bit nicer because it no longer prepends .\ (or ./ on Unix) to files in the current directory. llvm-svn: 239290
-