- Jan 11, 2012
-
Jakob Stoklund Olesen authored
This helper method is too simplistic for RAGreedy. llvm-svn: 147976
-
Jakob Stoklund Olesen authored
llvm-svn: 147975
-
Douglas Gregor authored
llvm-svn: 147974
-
Douglas Gregor authored
variably-modified type. llvm-svn: 147973
-
Jakob Stoklund Olesen authored
No functional change. llvm-svn: 147972
-
Eli Friedman authored
Re-fix the issue Bill fixed in r147899 in a slightly different way, which doesn't abuse the semantics of linker_private. We don't really want to merge any string constant with a weak_odr global. llvm-svn: 147971
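For background, a hedged sketch of why such merging would be unsound: a weak_odr global (for example, an explicitly instantiated template's static data member) must present one stable address to every translation unit, while mergeable string constants have no address identity. All names below are invented for illustration.

```cpp
// Sketch: Tag<int>::name gets weak_odr-style linkage from the explicit
// instantiation; "plain" is an ordinary constant a linker could pool.
template <typename T> struct Tag { static const char name[4]; };
template <typename T> const char Tag<T>::name[4] = "tag";
template struct Tag<int>; // weak_odr definition of Tag<int>::name

const char plain[4] = "tag"; // unrelated mergeable string data

int main() {
  // Folding the two "tag" arrays would tie the address of the weak_odr
  // global to unrelated constant data; they must stay distinct objects.
  return Tag<int>::name == plain ? 1 : 0; // expected: 0
}
```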
-
Jim Grosbach authored
llvm-svn: 147970
-
Jim Grosbach authored
llvm-svn: 147969
-
Kaelyn Uhrain authored
are still added if the cached correction fails validation. Also fix a copy-and-paste error in a comment from my previous commit. Finally, add an example of the benefit the typo correction callback adds to TryNamespaceTypoCorrection--which happens to also tickle the above caching problem, as the only way a non-namespace Decl would be added to the possible corrections is if it was cached as the correction for a previous instance of the same typo where the typo was corrected to a non-namespace via a different code path. llvm-svn: 147968
-
Jim Grosbach authored
Previously let the JITEmitter do it. That's rather odd, and doesn't play nice with the MCJIT, so move the (trivial) logic up. llvm-svn: 147967
-
Eric Christopher authored
llvm-svn: 147966
-
Argyrios Kyrtzidis authored
llvm-svn: 147965
-
Nadav Rotem authored
When we load the v12i32 type, the GenWidenVectorLoads method generates two loads: v8i32 and v4i32 and attempts to use CONCAT_VECTORS to join them. In this fix I concat undef values to widen the smaller value. The test "widen_load-2.ll" also exposes this bug on AVX. llvm-svn: 147964
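A hedged sketch of the fix's shape in the SelectionDAG C++ API (simplified, current-style signatures, not the commit's actual code): CONCAT_VECTORS wants operands of a single vector type, so the narrow piece is first padded with undef to match the wider one.

```cpp
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Sketch: join a v8i32 half and a v4i32 half into v16i32. The v4i32
// piece is first widened to v8i32 by concatenating an undef v4i32, so
// both operands of the final CONCAT_VECTORS have the same type.
static SDValue concatWidened(SelectionDAG &DAG, const SDLoc &dl,
                             SDValue Wide /*v8i32*/, SDValue Narrow /*v4i32*/) {
  EVT HalfVT = Wide.getValueType();
  SDValue NarrowPadded =
      DAG.getNode(ISD::CONCAT_VECTORS, dl, HalfVT, Narrow,
                  DAG.getUNDEF(Narrow.getValueType()));
  EVT FullVT = EVT::getVectorVT(*DAG.getContext(), MVT::i32, 16);
  return DAG.getNode(ISD::CONCAT_VECTORS, dl, FullVT, Wide, NarrowPadded);
}
```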
-
Fariborz Jahanian authored
llvm-svn: 147963
-
Kaelyn Uhrain authored
Also includes two examples of the callback: a wrapper/replacement for the CorrectTypoContext enum, and a conversion of the two calls to CorrectTypo in SemaDeclCXX.cpp (one of which provides verifiable improvement to the typo correction, as demonstrated in the added test). llvm-svn: 147962
-
Bill Wendling authored
llvm-svn: 147961
-
Rafael Espindola authored
This uses TLS slot 90, which actually belongs to JavaScriptCore. We only support frames with static size. Patch by Brian Anderson. llvm-svn: 147960
-
Rafael Espindola authored
Patch by Brian Anderson. llvm-svn: 147959
-
Rafael Espindola authored
Patch by Brian Anderson. llvm-svn: 147958
-
Chandler Carruth authored
hoped this would revive one of the llvm-gcc selfhost build bots, but it didn't, so it doesn't appear that my transform is the culprit. If anyone else is seeing failures, please let me know! llvm-svn: 147957
-
Fariborz Jahanian authored
life-time to that of its backing 'ivar's lifetime. // rdar://10558871 llvm-svn: 147956
-
Richard Smith authored
implicitly marked constexpr when they should be. llvm-svn: 147955
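The subject line is truncated above; as background on the C++11 rule involved, a defaulted special member becomes constexpr implicitly when every initialization it performs is a constant expression. A minimal, invented illustration:

```cpp
// Sketch: the defaulted default constructor is implicitly constexpr
// because both member initializers are constant expressions of
// literal type.
struct Point {
  int x = 0, y = 0;
  Point() = default;
};

constexpr Point origin{}; // only valid if the ctor is constexpr
static_assert(origin.x == 0, "implicit constexpr default construction");
```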
-
Rafael Espindola authored
This is a comparison of two addresses, and GCC does the comparison unsigned. Patch by Brian Anderson. llvm-svn: 147954
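A minimal sketch of why the signedness of the comparison matters, modeling 32-bit pointers with uint32_t (the addresses are invented):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // A stack pointer with the top bit set, and a limit just below the
  // sign boundary: a plausible layout on a 32-bit target.
  uint32_t sp = 0x80001000u;
  uint32_t limit = 0x7fffff00u;

  bool unsigned_cmp = sp < limit;                 // false: sp is above limit
  bool signed_cmp = (int32_t)sp < (int32_t)limit; // true: sp looks negative
                                                  // (wraps on mainstream targets)
  std::printf("unsigned says grow: %d, signed says grow: %d\n",
              unsigned_cmp, signed_cmp);
  return 0;
}
```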
-
Kostya Serebryany authored
[asan] extend the workaround for http://llvm.org/bugs/show_bug.cgi?id=11395: don't instrument the function at all on x86_32 if it has a large asm blob llvm-svn: 147953
-
Rafael Espindola authored
Patch by Brian Anderson. llvm-svn: 147952
-
Kevin Enderby authored
directives was in the wrong place and getting triggered incorrectly with a cpp .file directive. This change fixes that and adds a test case. llvm-svn: 147951
-
Jan Sjödin authored
llvm-svn: 147949
-
Nadav Rotem authored
Fix a bug in the lowering of BUILD_VECTOR for AVX. SCALAR_TO_VECTOR does not zero untouched elements. Use INSERT_VECTOR_ELT instead. llvm-svn: 147948
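A hedged sketch of the distinction in the SelectionDAG C++ API (simplified, current-style signatures): SCALAR_TO_VECTOR leaves every lane but lane 0 undefined, so a lowering that relies on zeroed lanes should insert into an explicit zero vector instead.

```cpp
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Sketch: put Scalar in lane 0 with all other lanes guaranteed zero.
// ISD::SCALAR_TO_VECTOR would leave lanes 1..N-1 undefined, which is
// exactly the hazard described above.
static SDValue scalarWithZeroTail(SelectionDAG &DAG, const SDLoc &dl, EVT VT,
                                  SDValue Scalar) {
  SDValue Zeros = DAG.getConstant(0, dl, VT); // zero splat for vector VT
  SDValue Lane0 = DAG.getIntPtrConstant(0, dl);
  return DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, VT, Zeros, Scalar, Lane0);
}
```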
-
Evgeniy Stepanov authored
Also remove the svn:eol-style property from the test file. llvm-svn: 147947
-
Duncan Sands authored
are invalid). Fixes a crash on array1.C from the GCC testsuite when compiled with dragonegg. llvm-svn: 147946
-
Chandler Carruth authored
strange build bot failures that look like a miscompile into an infloop. I'll investigate this tomorrow, but I'd both like to know whether my patch is the culprit, and get the bots back to green. llvm-svn: 147945
-
Evgeniy Stepanov authored
- Support gcc-compatible vfpv3 name in addition to vfp3. - Support vfpv3-d16. - Disable neon feature for -mfpu=vfp* (yes, we were emitting Neon instructions for those!). llvm-svn: 147943
-
Chandler Carruth authored
lots of lines of code. No functionality changed. llvm-svn: 147942
-
Chandler Carruth authored
SRL-rooted code. llvm-svn: 147941
-
Chandler Carruth authored
factor the differences that were hiding in one of them into its other caller, the SRL handling code. No change in behavior. llvm-svn: 147940
-
Chandler Carruth authored
mask+shift pairs at the beginning of the ISD::AND case block, and then hoist the final pattern into a helper function, simplifying and reflowing it appropriately. This should have no observable behavior change, but several simplifications fell out of this such as directly computing the new mask constant, etc. llvm-svn: 147939
-
Jakob Stoklund Olesen authored
I don't think the compact encoding code is right, but at least it has defined behavior now. llvm-svn: 147938
-
Chandler Carruth authored
extracts and scaled addressing modes into its own helper function. No functionality changed here, just hoisting and layout fixes falling out of that hoisting. llvm-svn: 147937
-
Chandler Carruth authored
detect a pattern which can be implemented with a small 'shl' embedded in the addressing mode scale. This happens in real code as follows: unsigned x = my_accelerator_table[input >> 11]; Here we have some lookup table that we look into using the high bits of 'input'. Each entity in the table is 4-bytes, which means this implicitly gets turned into (once lowered out of a GEP): *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2)); The shift right followed by a shift left is canonicalized to a smaller shift right and masking off the low bits. That hides the shift right which x86 has an addressing mode designed to support. We now detect masks of this form, and produce the longer shift right followed by the proper addressing mode. In addition to saving a (rather large) instruction, this also reduces stalls in Intel chips on benchmarks I've measured. In order for all of this to work, one part of the DAG needs to be canonicalized *still further* than it currently is. This involves removing pointless 'trunc' nodes between a zextload and a zext. Without that, we end up generating spurious masks and hiding the pattern. llvm-svn: 147936
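The equivalence the patch has to look through, as a small standalone check (function names invented; the constants mirror the example above):

```cpp
#include <cassert>
#include <cstdint>
#include <initializer_list>

// What the source expresses: shift right, then scale by 4 with a shl.
static uint32_t offset_shifts(uint32_t input) { return (input >> 11) << 2; }
// What the DAG canonicalizes it into: a smaller shift plus a mask,
// which hides the shl that x86's scaled addressing mode can absorb.
static uint32_t offset_masked(uint32_t input) { return (input >> 9) & ~3u; }

int main() {
  for (uint32_t x : {0u, 0x7ffu, 0x800u, 0xdeadbeefu, 0xffffffffu})
    assert(offset_shifts(x) == offset_masked(x));
  return 0;
}
```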
-
Stepan Dyatkovskiy authored
1. Size heuristics changed. Now we calculate the number of unswitching branches only once per loop. 2. Some checks were moved from UnswitchIfProfitable to processCurrentLoop, since they do not change during the processCurrentLoop iteration. This allows us to skip some loops at an early stage. Extended statistics: - Added total number of instructions analyzed. llvm-svn: 147935
-