- Jan 11, 2012
-
-
Chandler Carruth authored
extracts and scaled addressing modes into its own helper function. No functionality changed here, just hoisting and layout fixes falling out of that hoisting. llvm-svn: 147937
-
Chandler Carruth authored
detect a pattern which can be implemented with a small 'shl' embedded in the addressing mode scale. This happens in real code as follows:

    unsigned x = my_accelerator_table[input >> 11];

Here we have some lookup table that we look into using the high bits of 'input'. Each entry in the table is 4 bytes, which means this implicitly gets turned into (once lowered out of a GEP):

    *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));

The shift right followed by a shift left is canonicalized to a smaller shift right and masking off the low bits. That hides the shift right, which x86 has an addressing mode designed to support. We now detect masks of this form and produce the longer shift right followed by the proper addressing mode. In addition to saving a (rather large) instruction, this also reduces stalls in Intel chips on benchmarks I've measured.

In order for all of this to work, one part of the DAG needs to be canonicalized *still further* than it currently is. This involves removing pointless 'trunc' nodes between a zextload and a zext. Without that, we end up generating spurious masks and hiding the pattern. llvm-svn: 147936
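The equivalence being exploited (and then undone) here can be checked with plain C++. This is a minimal sketch of the pattern from the message above, not LLVM code; the table name and sizes are illustrative assumptions:

    // Minimal sketch of the pattern described above (illustrative, not LLVM code).
    #include <cassert>
    #include <cstdint>

    // A 4-byte-per-entry table indexed by the high bits of 'input'.
    static const uint32_t my_accelerator_table[1u << 21] = {};

    uint32_t lookup(uint32_t input) {
      // Lowered out of the GEP, this is:
      //   *(uint32_t*)((char*)my_accelerator_table + ((input >> 11) << 2))
      // The DAG canonicalizes the shr+shl pair into a smaller shr plus a mask,
      // which hides the shift that x86's [base + index*4] addressing could absorb.
      return my_accelerator_table[input >> 11];
    }

    int main() {
      const uint32_t tests[] = {0u, 0xFFFFu, 0xDEADBEEFu, ~0u};
      for (uint32_t x : tests) {
        // Both forms compute the same byte offset into the table.
        assert(((x >> 11) << 2) == ((x >> 9) & ~3u));
      }
      return (int)lookup(0xDEADBEEFu);
    }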
-
Stepan Dyatkovskiy authored
1. Size heuristics changed: the number of unswitching branches is now calculated only once per loop. 2. Some checks were moved from UnswitchIfProfitable to processCurrentLoop, since they do not change during the processCurrentLoop iteration; this allows us to decide to skip some loops at an early stage. Extended statistics: added the total number of instructions analyzed. llvm-svn: 147935
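For context, a hypothetical sketch of the kind of branch LoopUnswitch counts and may hoist; the function and names below are made up for illustration and are not taken from the patch:

    // 'flag' is loop-invariant, so LoopUnswitch can clone the loop into a
    // flag==true version and a flag==false version, hoisting the branch out of
    // the loop body. The size heuristic decides whether that duplication pays off.
    void scale(float *out, const float *in, int n, bool flag) {
      for (int i = 0; i < n; ++i) {
        if (flag)                   // loop-invariant ("unswitching") branch
          out[i] = in[i] * 2.0f;
        else
          out[i] = in[i];
      }
    }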
-
NAKAMURA Takumi authored
llvm-svn: 147934
-
Abramo Bagnara authored
llvm-svn: 147933
-
Evgeniy Stepanov authored
Protected by an #ifdef, disabled by default. llvm-svn: 147932
-
Ted Kremenek authored
the common *alloc functions as well as a few tiny wibbles (adds a note to CWE/CERT advisory numbers in the bug output, and fixes a couple 80-column-wide violations.)" Patch by Austin Seipp! llvm-svn: 147931
-
Alexey Samsonov authored
llvm-svn: 147930
-
NAKAMURA Takumi authored
Also, Cygwin does not support integrated-as yet. llvm-svn: 147929
-
NAKAMURA Takumi authored
llvm-svn: 147928
-
NAKAMURA Takumi authored
llvm-svn: 147927
-
Andrew Trick authored
This interface is misleading and dangerous, but it is actually what we need for unrolling. llvm-svn: 147926
-
Douglas Gregor authored
downgrade the default-error warning to an ExtWarn in C90/99. <rdar://problem/10668057> llvm-svn: 147925
-
Rafael Espindola authored
llvm-svn: 147924
-
Rafael Espindola authored
llvm-svn: 147923
-
Andrew Trick authored
Allow LDRD to be formed from pairs with different LDR encodings. This was the original intention of the pass. Somewhere along the way, the LDR opcodes were refined, which broke the optimization. We really don't care what the original opcodes are as long as they both map to the same LDRD and the immediate still fits. Fixes rdar://10435045: ARMLoadStoreOptimization cannot handle mixed LDRi8/LDRi12. llvm-svn: 147922
-
Jakob Stoklund Olesen authored
llvm-svn: 147921
-
Eli Friedman authored
llvm-svn: 147920
-
Kostya Serebryany authored
llvm-svn: 147919
-
Zhongxing Xu authored
Add elidable CXXConstructExpr as block-level expr. It converts an lvalue to an rvalue, which is a useful step during AST evaluation. llvm-svn: 147918
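A hypothetical example of where such an elidable CXXConstructExpr appears in the AST (the names are made up for illustration):

    // Illustrative only. The copy of 's' into the return value is modeled as an
    // elidable CXXConstructExpr: it reads the lvalue 's' and produces the
    // returned rvalue, and may be elided entirely (NRVO).
    struct S {
      S() {}
      S(const S &) {}
    };

    S f() {
      S s;
      return s;   // elidable CXXConstructExpr wrapping the lvalue 's'
    }

    int main() {
      S tmp = f();
      (void)tmp;
      return 0;
    }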
-
Eli Friedman authored
Start refactoring code for capturing variables and 'this' so that it is shared between lambda expressions and block literals. llvm-svn: 147917
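For reference, a tiny example of the two constructs whose capture logic is being unified (the Blocks variant needs -fblocks; this is just an illustration, not code from the patch):

    // Both the lambda and the block capture the local 'x' and the enclosing 'this'.
    struct Counter {
      int step = 1;

      int bumpWithLambda(int x) {
        auto fn = [this, x] { return x + step; };   // captures 'x' by value and 'this'
        return fn();
      }

    #ifdef __BLOCKS__
      int bumpWithBlock(int x) {
        int (^blk)(void) = ^{ return x + step; };   // block captures 'x' and 'this'
        return blk();
      }
    #endif
    };

    int main() {
      Counter c;
      return c.bumpWithLambda(2);   // 3
    }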
-
Kostya Serebryany authored
llvm-svn: 147916
-
Sean Callanan authored
to make assumptions if the type is unsized. We just give up (and let the JIT handle it) instead. llvm-svn: 147915
-
Jim Ingham authored
Don't assert; instead report and return a NULL type if we end up parsing a type we are already in the middle of parsing. llvm-svn: 147914
-
Kostya Serebryany authored
llvm-svn: 147913
-
Jakob Stoklund Olesen authored
Consider this code:

    int h() {
      int x;
      try {
        x = f();
        g();
      } catch (...) {
        return x + 1;
      }
      return x;
    }

The variable x is undefined on the first edge to the landing pad, but it has the f() return value on the second edge to the landing pad. SplitAnalysis::getLastSplitPoint() would assume that the return value from f() was live into the landing pad when f() throws, which is of course impossible. Detect these cases, and treat them as if the landing pad wasn't there. This allows spill code to be inserted after the function call to f(). <rdar://problem/10664933> llvm-svn: 147912
-
Jakob Stoklund Olesen authored
Delete the alternative implementation in LiveIntervalAnalysis. These functions computed the same thing, but SplitAnalysis caches the result. llvm-svn: 147911
-
Kostya Serebryany authored
[asan] get rid of the scary TSD destructor code. Now, we store the leaky AsanThreadSummary in TSD and never remove it from there. llvm-svn: 147910
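Not the asan code itself, but a minimal sketch of the "leaky TSD" pattern described; ThreadSummary and the function names are placeholders:

    #include <pthread.h>

    struct ThreadSummary {   // stand-in for AsanThreadSummary
      int tid;
    };

    static pthread_key_t g_summary_key;

    static void InitKey() {
      // No destructor callback: the per-thread summary is intentionally leaked,
      // so there is no TSD-destructor ordering problem at thread exit.
      pthread_key_create(&g_summary_key, nullptr);
    }

    static void SetCurrentSummary(ThreadSummary *s) {
      pthread_setspecific(g_summary_key, s);
    }

    static ThreadSummary *GetCurrentSummary() {
      return static_cast<ThreadSummary *>(pthread_getspecific(g_summary_key));
    }

    int main() {
      InitKey();
      SetCurrentSummary(new ThreadSummary{0});
      return GetCurrentSummary()->tid;
    }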
-
Johnny Chen authored
llvm-svn: 147909
-
Greg Clayton authored
and also print out the full path and architecture. llvm-svn: 147908
-
Johnny Chen authored
llvm-svn: 147907
-
Sean Callanan authored
to assume it's of pointer size. llvm-svn: 147906
-
John McCall authored
llvm-svn: 147905
-
Ted Kremenek authored
Remove '#if 0' from ExprEngine::InlineCall(), and start fresh by wiring up inlining for straight C calls. My hope is to reimplement this from first principles based on the simplifications of removing unneeded node builders and re-evaluating how C++ calls are handled in the CFG. The hope is to turn inlining "on-by-default" as soon as possible with a core set of things working well, and then expand over time. llvm-svn: 147904
-
Nick Kledzik authored
A couple of big refactorings:
1) Move most attributes of Atom down to DefinedAtom, so only atoms representing definitions need to implement them.
2) Remove definitionTentative, definitionWeak, mergeDuplicates, and autoHide; replace them with merge and interposable attributes.
3) Make all methods on Atom virtual so that future object file readers can lazily generate attributes.
llvm-svn: 147903
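A rough sketch of the shape this gives the hierarchy (hedged: apart from Atom, DefinedAtom, merge, and interposable, the names and enumerators below are guesses, not the actual lld headers):

    #include <cstdint>
    #include <string>

    class Atom {
    public:
      virtual ~Atom() {}
      // Everything is virtual so a future object-file reader can compute
      // attributes lazily instead of storing them up front.
      virtual std::string name() const = 0;
      virtual bool isDefined() const = 0;
    };

    class DefinedAtom : public Atom {
    public:
      // Stands in for definitionTentative/definitionWeak/mergeDuplicates/autoHide.
      enum class Merge { no, asTentative, asWeak };
      enum class Interposable { no, yes };

      bool isDefined() const override { return true; }
      virtual Merge merge() const = 0;
      virtual Interposable interposable() const = 0;
      virtual uint64_t size() const = 0;
    };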
-
Evan Cheng authored
the physical registers are not allocatable. llvm-svn: 147902
-
Johnny Chen authored
It is incomplete and untested; it only passes compilation. llvm-svn: 147901
-
John McCall authored
new-expressions. llvm-svn: 147900
-
Bill Wendling authored
with other symbols. An object in the __cfstring section is supposed to be filled with CFString objects, which have a pointer to ___CFConstantStringClassReference followed by a pointer to a __cstring. If we allow the object in the __cstring section to be merged with another global, then it could end up in any section. Because the linker is going to remove these symbols in the final executable, we shouldn't bother to merge them. <rdar://problem/10564621> llvm-svn: 147899
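For reference, a rough sketch of what one object in the __cfstring section holds; the field names and exact layout are approximate, and the essential parts are the class-reference pointer and the pointer into __cstring:

    // Approximate layout of a constant CFString literal (sketch, not the exact
    // CoreFoundation/compiler definition).
    struct ConstantCFString {
      const void *isa;      // points at ___CFConstantStringClassReference
      unsigned    flags;    // encoding flags emitted by the compiler
      const char *bytes;    // points at the literal in the __cstring section
      long        length;   // length of the literal, not counting the NUL
    };
    // Merging the __cstring global that 'bytes' refers to with some other global
    // could move it out of __cstring, so such symbols are now excluded from
    // global merging.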
-
Howard Hinnant authored
This is a transitory commit for __dynamic_cast. It contains debugging statements that are not intended to be in the finished product. However, some of the debugging statements themselves contain important documentation, such as how to navigate a __class_type_info hierarchy, documenting object offsets and inheritance access. The intention is that this debugging code will migrate into both actual code and comments. It is captured here so that there is no chance this stuff will be lost. llvm-svn: 147898
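As background, the kind of cast __dynamic_cast resolves by walking the __class_type_info hierarchy looks like this (purely illustrative):

    // Illustrative only: a cross-cast that the ABI's __dynamic_cast resolves by
    // walking the __class_type_info graph, adjusting the object pointer by the
    // recorded base-class offsets and respecting inheritance access.
    struct A { virtual ~A() {} };
    struct B { virtual ~B() {} };
    struct D : A, B {};

    int main() {
      D d;
      A *a = &d;
      // Cross-cast from the A subobject to the B subobject: the runtime walks
      // D's type info, finds that both A and B are public bases of D, and
      // adjusts the pointer by B's offset within D.
      B *b = dynamic_cast<B *>(a);
      return (b != nullptr) ? 0 : 1;
    }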
-