- Sep 19, 2013
-
David Blaikie authored
llvm-svn: 191020
-
David Blaikie authored
llvm-svn: 191018
-
Shuxin Yang authored
This is how it ignores the dead code:
1) When a dead branch target, say block B, is identified, all the blocks dominated by B are dead as well.
2) The PHIs of the blocks in dominance-frontier(B) are updated so that the operands corresponding to dead predecessors are replaced by "UndefVal". In lattice jargon, "UndefVal" is essentially the "Top" element. A PHI node like "phi(v1 bb1, undef xx)" will be optimized into "v1" if v1 is a constant, or if v1 is an instruction that dominates the PHI node.
3) When analyzing the availability of a load L, each dead mem-op that L depends on is treated as a load that evaluates to exactly the same value as L.
4) Dead mem-ops are materialized as "UndefVal" during code motion.
llvm-svn: 191017
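The PHI rule in step 2 can be sketched in isolation. This is a hypothetical miniature, not GVN's actual code: "UndefVal" behaves as lattice Top, so it imposes no constraint, and a PHI whose live operands all agree folds to that single value.

```cpp
#include <cassert>
#include <optional>
#include <vector>

// One PHI operand: either a concrete value, or "UndefVal" coming from a
// predecessor that was proven dead.
struct PhiOperand {
    bool is_undef;  // true if the predecessor edge is dead (lattice Top)
    int value;      // meaningful only when !is_undef
};

// Fold phi(v, undef, ...) to v when all live operands agree; otherwise
// keep the PHI (modeled here as returning nullopt).
std::optional<int> fold_phi(const std::vector<PhiOperand>& ops) {
    std::optional<int> unique;
    for (const auto& op : ops) {
        if (op.is_undef) continue;            // Top: compatible with anything
        if (unique && *unique != op.value)
            return std::nullopt;              // two distinct live values
        unique = op.value;
    }
    return unique;
}
```

A PHI with all-undef operands folds to nothing here, mirroring the idea that a fully dead PHI simply materializes as "UndefVal".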
-
Fariborz Jahanian authored
objc_returns_inner_pointer on properties. // rdar://14990439 llvm-svn: 191016
-
Reid Kleckner authored
Various Windows SDK headers use _MSC_VER values to figure out what version of the VC++ headers they're using, in particular for SAL macros. Patch by Paul Hampson! llvm-svn: 191015
-
Shuxin Yang authored
As its name suggests, this function will return all basic blocks dominated by a given block. llvm-svn: 191014
-
Alexander Potapenko authored
[ASan] Fix init-order-dlopen.cc test to not depend on the -Wl,-undefined,dynamic_lookup being passed to the linker. llvm-svn: 191012
-
Reid Kleckner authored
llvm-svn: 191011
-
Reid Kleckner authored
Patch by Paul Hampson! llvm-svn: 191010
-
Fariborz Jahanian authored
of ObjectiveC properties to mean annotation of NS_RETURNS_INNER_POINTER on its synthesized getter. This also facilitates more migration to properties when methods are annotated with NS_RETURNS_INNER_POINTER. // rdar://14990439 llvm-svn: 191009
-
Evgeniy Stepanov authored
Adds a flag to the MemorySanitizer pass that enables runtime rewriting of indirect calls. This is part of the MSanDR implementation and is needed to return control to the DynamoRIO-based helper tool on transition between instrumented and non-instrumented modules. Disabled by default. llvm-svn: 191006
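The shape of the rewriting can be sketched as follows. All names here are invented for illustration, not MSan's real hooks: each indirect call site routes the callee pointer through a runtime-installed wrapper that may substitute the target (e.g. to hand control back to the helper tool).

```cpp
// A plain function-pointer type standing in for any indirect callee.
using Fn = int (*)(int);

// Hook the runtime may install; when null, calls go straight through.
Fn (*indirect_call_wrapper)(Fn) = nullptr;

// What an instrumented indirect call site conceptually lowers to:
// consult the wrapper (if present) before jumping to the target.
int call_indirect(Fn f, int arg) {
    if (indirect_call_wrapper)
        f = indirect_call_wrapper(f);  // runtime may redirect the target
    return f(arg);
}
```

With the wrapper left null the behavior is unchanged, which matches the flag being disabled by default.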
-
Ed Maste authored
Targets and hosts today are little-endian (arm, x86), so this change should be a no-op, as they will not encounter the byte-swapping cases. Byte swapping will happen when cross-debugging big-endian targets (e.g. MIPS, PPC) on a little-endian host (x86). Register- or word-sized data copies need to be swapped, but calls to ExtractBytes or CopyByteOrderedData that would invoke the swapping case are presumably in error. llvm-svn: 191005
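The register-sized case can be sketched like this (an illustrative function, not LLDB's actual API): when target and host endianness agree the copy is a straight pass-through, and only a mismatch triggers the swap.

```cpp
#include <cassert>
#include <cstdint>

// Copy a 32-bit register value from the target's byte order into the
// host's, swapping only when the two disagree.
uint32_t copy_register(uint32_t target_value, bool target_big_endian,
                       bool host_big_endian) {
    if (target_big_endian == host_big_endian)
        return target_value;  // same byte order: no-op, the common case today
    // Mismatch (e.g. big-endian MIPS target, little-endian x86 host):
    // reverse the four bytes of the register-sized value.
    return ((target_value & 0x000000FFu) << 24) |
           ((target_value & 0x0000FF00u) << 8)  |
           ((target_value & 0x00FF0000u) >> 8)  |
           ((target_value & 0xFF000000u) >> 24);
}
```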
-
Kostya Serebryany authored
llvm-svn: 191004
-
Ben Langmuir authored
llvm-svn: 191003
-
Ben Langmuir authored
This is consistent with ICC and Intel's SHA-enabled GCC version. llvm-svn: 191002
-
Amara Emerson authored
llvm-svn: 191001
-
Benjamin Kramer authored
DAGCombiner: Don't fold vector muls with constants that look like a splat of a power of 2 but differ in bit width. PR17283. llvm-svn: 191000
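Why the bit width matters can be shown with a small example (invented for illustration): multiplying each i8 lane of a vector by a splat of 2 is a per-lane shift, but shifting the whole 16-bit value lets bits carry across the lane boundary, so the mul-to-shift fold is only legal when the splat constant's width matches the element width.

```cpp
#include <cassert>
#include <cstdint>

// Treat a uint16_t as a <2 x i8> vector and multiply each lane by 2,
// with the usual 8-bit wraparound per lane.
uint16_t mul_lanes_by_2(uint16_t v) {
    uint8_t lo = static_cast<uint8_t>(v) * 2;       // low lane wraps in 8 bits
    uint8_t hi = static_cast<uint8_t>(v >> 8) * 2;  // high lane wraps in 8 bits
    return static_cast<uint16_t>(lo) | (static_cast<uint16_t>(hi) << 8);
}

// The illegal fold: shift the whole 16-bit value, letting the low lane's
// top bit carry into the high lane.
uint16_t shift_whole_value(uint16_t v) {
    return static_cast<uint16_t>(v << 1);
}
```

For lanes {0x80, 0x01} (i.e. 0x0180) the per-lane multiply wraps the low lane to 0x00 and gives 0x0200, while the whole-value shift gives 0x0300.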
-
Ben Langmuir authored
The intrinsics are added in shaintrin.h, which is included from x86intrin.h if __SHA__ is enabled. SHA implies SSE2, which is needed for the __m128i type. Also adds the -msha/-mno-sha options. llvm-svn: 190999
-
Justin Holewinski authored
llvm-svn: 190998
-
Justin Holewinski authored
llvm-svn: 190997
-
Amara Emerson authored
llvm-svn: 190996
-
Tim Northover authored
When selecting the DAG (add (WrapperRIP ...), (FrameIndex ...)), X86 code had spotted the FrameIndex possibility and was working out whether it could fold the WrapperRIP into this. The test for forming a %rip version is notionally whether we already have a base or index register (%rip precludes both), but we were forgetting to account for the register that would be inserted later to access the frame. rdar://problem/15024520 llvm-svn: 190995
-
Alexey Samsonov authored
llvm-svn: 190994
-
Alexey Samsonov authored
llvm-svn: 190993
-
Alexey Samsonov authored
llvm-svn: 190992
-
Dmitry Vyukov authored
This should fix episodic crashes on ARM/PPC. x86_32 is still broken. llvm-svn: 190991
-
Andrew Trick authored
Working on a better solution to this. This reverts commit 7d4e9934e7ca83094c5cf41346966c8350179ff2. llvm-svn: 190990
-
Dmitry Vyukov authored
WARNING: ThreadSanitizer: data race (pid=29103)
  Write of size 8 at 0x7d64003bbf00 by main thread:
    #0 free tsan_interceptors.cc:477
    #1 __run_exit_handlers <null>:0 (libc.so.6+0x000000050cb7)
  Previous write of size 8 at 0x7d64003bbf00 by thread T78 (mutexes: write M9896):
    #0 calloc tsan_interceptors.cc:449
    #1 ...
llvm-svn: 190989
-
Dmitry Vyukov authored
llvm-svn: 190988
-
Dmitry Vyukov authored
llvm-svn: 190987
-
Rui Ueyama authored
llvm-svn: 190986
-
Reid Kleckner authored
llvm-svn: 190985
-
Eli Friedman authored
llvm-svn: 190984
-
Rui Ueyama authored
Test is coming after submitting http://llvm-reviews.chandlerc.com/D1719. llvm-svn: 190983
-
Craig Topper authored
llvm-svn: 190982
-
Eli Friedman authored
We don't really need to perform semantic analysis on the dependent expression anyway, so just call the cast dependent. <rdar://problem/15012610> llvm-svn: 190981
-
Eli Friedman authored
Before this patch, Lex() would recurse whenever the current lexer changed (e.g. upon entry into a macro). This patch turns the recursion into a loop: the various lex routines now don't return a token when the current lexer changes, and at the top level Preprocessor::Lex() now loops until it finds a token. Normally, the recursion wouldn't end up being very deep, but the recursion depth can explode in edge cases like a bunch of consecutive macros which expand to nothing (like in the testcase test/Preprocessor/macro_expand_empty.c in this patch). <rdar://problem/14569770> llvm-svn: 190980
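The recursion-to-loop change can be sketched like this. The names are invented, not Clang's real lexer interfaces: each token source may run dry (e.g. a macro that expanded to nothing), and instead of recursing into the next source, the top-level lex routine loops until some source yields a token or all sources are exhausted.

```cpp
#include <optional>
#include <stack>
#include <string>
#include <vector>

// A nested token source: a file, or a macro expansion (possibly empty).
struct TokenSource {
    std::vector<std::string> tokens;
    size_t pos = 0;
    std::optional<std::string> next() {
        if (pos < tokens.size())
            return tokens[pos++];
        return std::nullopt;  // exhausted: caller moves on rather than recursing
    }
};

// Top-level Lex(): a loop over the source stack. A run of empty macro
// expansions just pops sources one by one; stack depth stays constant
// no matter how many consecutive empty expansions there are.
std::optional<std::string> lex(std::stack<TokenSource>& sources) {
    while (!sources.empty()) {
        if (auto tok = sources.top().next())
            return tok;
        sources.pop();  // e.g. a macro that expanded to nothing
    }
    return std::nullopt;  // end of input
}
```

In the pre-patch recursive formulation, each of those pops would instead have been another recursive call, which is exactly what blows up on a long run of empty expansions.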
-
Reid Kleckner authored
llvm-svn: 190979
-
Reid Kleckner authored
Test that intrin.h at least parses in C++ TUs. llvm-svn: 190978
-
Craig Topper authored
llvm-svn: 190977
-