- Jan 11, 2012
-
Jakob Stoklund Olesen authored
No functional change. llvm-svn: 147972
-
Eli Friedman authored
Re-fix the issue Bill fixed in r147899 in a slightly different way, which doesn't abuse the semantics of linker_private. We don't really want to merge any string constant with a weak_odr global. llvm-svn: 147971
-
Jim Grosbach authored
llvm-svn: 147970
-
Jim Grosbach authored
llvm-svn: 147969
-
Jim Grosbach authored
Previously let the JITEmitter do it. That's rather odd, and doesn't play nice with the MCJIT, so move the (trivial) logic up. llvm-svn: 147967
-
Eric Christopher authored
llvm-svn: 147966
-
Argyrios Kyrtzidis authored
llvm-svn: 147965
-
Nadav Rotem authored
When we load the v12i32 type, the GenWidenVectorLoads method generates two loads, v8i32 and v4i32, and attempts to use CONCAT_VECTORS to join them. In this fix I concatenate undef values to widen the smaller value. The test "widen_load-2.ll" also exposes this bug on AVX. llvm-svn: 147964
-
Bill Wendling authored
llvm-svn: 147961
-
Rafael Espindola authored
This uses TLS slot 90, which actually belongs to JavaScriptCore. We only support frames with static size. Patch by Brian Anderson. llvm-svn: 147960
-
Rafael Espindola authored
Patch by Brian Anderson. llvm-svn: 147959
-
Rafael Espindola authored
Patch by Brian Anderson. llvm-svn: 147958
-
Chandler Carruth authored
hoped this would revive one of the llvm-gcc selfhost build bots, but it didn't, so it doesn't appear that my transform is the culprit. If anyone else is seeing failures, please let me know! llvm-svn: 147957
-
Rafael Espindola authored
This is a comparison of two addresses, and GCC does the comparison unsigned. Patch by Brian Anderson. llvm-svn: 147954
-
Kostya Serebryany authored
[asan] extend the workaround for http://llvm.org/bugs/show_bug.cgi?id=11395: don't instrument the function at all on x86_32 if it has a large asm blob llvm-svn: 147953
-
Rafael Espindola authored
Patch by Brian Anderson. llvm-svn: 147952
-
Kevin Enderby authored
directives was in the wrong place and getting triggered incorrectly with a cpp .file directive. This change fixes that and adds a test case. llvm-svn: 147951
-
Jan Sjödin authored
llvm-svn: 147949
-
Nadav Rotem authored
Fix a bug in the lowering of BUILD_VECTOR for AVX. SCALAR_TO_VECTOR does not zero untouched elements. Use INSERT_VECTOR_ELT instead. llvm-svn: 147948
-
Duncan Sands authored
are invalid). Fixes a crash on array1.C from the GCC testsuite when compiled with dragonegg. llvm-svn: 147946
-
Chandler Carruth authored
strange build bot failures that look like a miscompile into an infloop. I'll investigate this tomorrow, but I'd like both to know whether my patch is the culprit and to get the bots back to green. llvm-svn: 147945
-
Chandler Carruth authored
lots of lines of code. No functionality changed. llvm-svn: 147942
-
Chandler Carruth authored
SRL-rooted code. llvm-svn: 147941
-
Chandler Carruth authored
factor the differences that were hiding in one of them into its other caller, the SRL handling code. No change in behavior. llvm-svn: 147940
-
Chandler Carruth authored
mask+shift pairs at the beginning of the ISD::AND case block, and then hoist the final pattern into a helper function, simplifying and reflowing it appropriately. This should have no observable behavior change, but several simplifications fell out of this such as directly computing the new mask constant, etc. llvm-svn: 147939
-
Jakob Stoklund Olesen authored
I don't think the compact encoding code is right, but at least it has defined behavior now. llvm-svn: 147938
-
Chandler Carruth authored
extracts and scaled addressing modes into its own helper function. No functionality changed here, just hoisting and layout fixes falling out of that hoisting. llvm-svn: 147937
-
Chandler Carruth authored
detect a pattern which can be implemented with a small 'shl' embedded in the addressing mode scale. This happens in real code as follows: unsigned x = my_accelerator_table[input >> 11]; Here we have some lookup table that we look into using the high bits of 'input'. Each entity in the table is 4-bytes, which means this implicitly gets turned into (once lowered out of a GEP): *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2)); The shift right followed by a shift left is canonicalized to a smaller shift right and masking off the low bits. That hides the shift right which x86 has an addressing mode designed to support. We now detect masks of this form, and produce the longer shift right followed by the proper addressing mode. In addition to saving a (rather large) instruction, this also reduces stalls in Intel chips on benchmarks I've measured. In order for all of this to work, one part of the DAG needs to be canonicalized *still further* than it currently is. This involves removing pointless 'trunc' nodes between a zextload and a zext. Without that, we end up generating spurious masks and hiding the pattern. llvm-svn: 147936
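A small C++ restatement of the pattern described above may help; the table name, element type, and shift amounts come from the example in the message, and the second load shows the mask+shift form produced by canonicalization as an illustration only, not code from the LLVM sources.

```cpp
// Sketch of the access pattern the commit describes (illustrative only).
#include <cstdint>

static unsigned my_accelerator_table[1u << 21];  // hypothetical table of 4-byte entries

unsigned lookup(uint32_t input) {
  // Source form: index the table with the high bits of 'input'.
  unsigned a = my_accelerator_table[input >> 11];

  // After lowering the GEP, the address math is base + ((input >> 11) << 2).
  // The DAG canonicalizes the shr+shl pair into a single smaller shift right
  // plus a mask of the low bits, which is what the new matching code recognizes:
  unsigned b = *(const unsigned *)((const char *)my_accelerator_table +
                                   ((input >> 9) & ~3u));

  return a + b;  // both expressions load the same table element
}
```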
-
Stepan Dyatkovskiy authored
1. Size heuristics changed: we now calculate the number of unswitching branches only once per loop. 2. Some checks were moved from UnswitchIfProfitable to processCurrentLoop, since they do not change during the processCurrentLoop iteration; this allows skipping some loops at an early stage. Extended statistics: added the total number of instructions analyzed. llvm-svn: 147935
-
NAKAMURA Takumi authored
llvm-svn: 147928
-
NAKAMURA Takumi authored
llvm-svn: 147927
-
Andrew Trick authored
This interface is misleading and dangerous, but it is actually what we need for unrolling. llvm-svn: 147926
-
Rafael Espindola authored
llvm-svn: 147924
-
Rafael Espindola authored
llvm-svn: 147923
-
Andrew Trick authored
Allow LDRD to be formed from pairs with different LDR encodings. This was the original intention of the pass. Somewhere along the way, the LDR opcodes were refined, which broke the optimization. We really don't care what the original opcodes are as long as they both map to the same LDRD and the immediate still fits. Fixes rdar://10435045: ARMLoadStoreOptimization cannot handle mixed LDRi8/LDRi12. llvm-svn: 147922
-
Jakob Stoklund Olesen authored
llvm-svn: 147921
-
Jakob Stoklund Olesen authored
Consider this code: int h() { int x; try { x = f(); g(); } catch (...) { return x+1; } return x; } The variable x is undefined on the first edge to the landing pad, but it has the f() return value on the second edge to the landing pad. SplitAnalysis::getLastSplitPoint() would assume that the return value from f() was live into the landing pad when f() throws, which is of course impossible. Detect these cases, and treat them as if the landing pad wasn't there. This allows spill code to be inserted after the function call to f(). <rdar://problem/10664933> llvm-svn: 147912
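For readability, here is the same example reformatted, with the two edges into the landing pad annotated (f and g are the hypothetical callees from the message):

```cpp
int f();    // may throw
void g();   // may throw

int h() {
  int x;           // undefined on entry
  try {
    x = f();       // if f() throws, x is still undefined in the landing pad
    g();           // if g() throws, x holds f()'s return value in the landing pad
  } catch (...) {
    return x + 1;  // x is only meaningful when reached via the g() edge
  }
  return x;
}
```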
-
Jakob Stoklund Olesen authored
Delete the alternative implementation in LiveIntervalAnalysis. These functions computed the same thing, but SplitAnalysis caches the result. llvm-svn: 147911
-
Evan Cheng authored
the physical registers are not allocatable. llvm-svn: 147902
-
Bill Wendling authored
with other symbols. An object in the __cfstring section is supposed to be filled with CFString objects, which have a pointer to ___CFConstantStringClassReference followed by a pointer to a __cstring. If we allow the object in the __cstring section to be merged with another global, then it could end up in any section. Because the linker is going to remove these symbols in the final executable, we shouldn't bother to merge them. <rdar://problem/10564621> llvm-svn: 147899
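As a rough sketch of the layout the message describes (real constant CFString objects carry additional fields such as flags and a length, so treat this as illustrative only):

```cpp
// Illustrative layout only, based on the description above; not the actual
// CoreFoundation definition.
struct ConstantCFStringSketch {
  const void *isa;    // points at ___CFConstantStringClassReference
  const char *bytes;  // points at the string data placed in __cstring
};
```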
-