- Mar 28, 2012
-
Akira Hatanaka authored
any side effects. llvm-svn: 153551
-
Chandler Carruth authored
flag as GCC uses: -fstrict-enums). There is a *lot* of code making unwarranted assumptions about the underlying type of enums, and it doesn't seem entirely reasonable to eagerly break all of it. Much more importantly, the current state of affairs is *very* good at optimizing based upon this information, which causes failures that are very distant from the actual enum. Before we push for enabling this by default, I think we need to implement -fcatch-undefined-behavior support for instrumenting and trapping whenever we store or load a value outside of the range. That way we can track down the misbehaving code very quickly. I discussed this with Rafael, and currently the only important cases he is aware of are the bool range-based optimizations which are staying hard enabled. We've not seen any issue with those either, and they are much more important for performance. llvm-svn: 153550
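For context, a minimal illustration of the kind of "unwarranted assumption" this flag exposes; the example is mine, not from the commit, and the enum name and values are hypothetical.
```cpp
// Hypothetical example of code that breaks under -fstrict-enums: the
// enumerators need 3 bits, so the valid value range is 0..7, and the
// optimizer is allowed to assume 'f' never holds 16.
#include <cstdio>

enum Flags { Read = 1, Write = 2, Exec = 4 };

int main() {
  Flags f = static_cast<Flags>(16);   // outside the enum's value range
  if (static_cast<int>(f) == 16)      // may be folded to 'false' with -fstrict-enums
    std::printf("comparison kept\n");
  else
    std::printf("comparison optimized away\n");
  return 0;
}
```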
-
Francois Pichet authored
llvm-svn: 153549
-
Douglas Gregor authored
completion item. For example, if the code completion itself represents a declaration in a namespace (say, std::vector), then this API retrieves the cursor kind and name of the namespace (std). Implements <rdar://problem/11121951>. llvm-svn: 153545
-
Richard Smith authored
necessarily mean we've found a function declarator. If the next token is not a ')', this is actually a parenthesized pack expansion. llvm-svn: 153544
-
Benjamin Kramer authored
llvm-svn: 153543
-
Benjamin Kramer authored
llvm-svn: 153542
-
- Mar 27, 2012
-
Enrico Granata authored
llvm-svn: 153541
-
Johnny Chen authored
llvm-svn: 153540
-
Argyrios Kyrtzidis authored
disables all compiler warnings. rdar://11059556 llvm-svn: 153539
-
Bill Wendling authored
executable has been moved to another machine). If that's not available (read-only or something), then exit gracefully. <rdar://problem/11111686> llvm-svn: 153538
-
Greg Clayton authored
indicates that the section is thread specific. Any functions that load a module given a slide will currently ignore any sections that are thread specific. lldb_private::Section now has: bool Section::IsThreadSpecific () const { return m_thread_specific; } void Section::SetIsThreadSpecific (bool b) { m_thread_specific = b; } The ELF plug-in has been modified to set this for the ".tdata" and ".tbss" sections. Eventually we need to have each lldb_private::Thread subclass be able to resolve a thread-specific section, but for now they will just not resolve. The code for that should be trivial to add, but the address-resolving functions will need to be changed to take an "ExecutionContext" object instead of just a target so that thread-specific sections can be resolved. llvm-svn: 153537
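A rough sketch of the pattern this commit describes (not the actual LLDB sources; the helper below and its name are invented for illustration), showing a plug-in marking the TLS sections and a loader consulting the flag.
```cpp
// Minimal, self-contained sketch; the real lldb_private::Section has many
// more members, and MarkThreadSpecificSections is a hypothetical helper.
#include <string>
#include <vector>

class Section {
public:
  bool IsThreadSpecific() const { return m_thread_specific; }
  void SetIsThreadSpecific(bool b) { m_thread_specific = b; }

  std::string name;

private:
  bool m_thread_specific = false;
};

// Mirrors what the ELF plug-in now does for ".tdata" and ".tbss".
void MarkThreadSpecificSections(std::vector<Section> &sections) {
  for (Section &sect : sections)
    if (sect.name == ".tdata" || sect.name == ".tbss")
      sect.SetIsThreadSpecific(true);
}

// A loader applying a slide would then skip any section for which
// IsThreadSpecific() returns true, until per-thread resolution exists.
```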
-
Akira Hatanaka authored
llvm-svn: 153536
-
Fariborz Jahanian authored
// rdar://11124775 llvm-svn: 153535
-
Anna Zaks authored
The analyzer gives up path exploration under certain conditions, for example when the same basic block has been visited more than 4 times. With inlining turned on, this could lead to a decrease in code coverage: specifically, if we give up inside an inlined function, the rest of the parent's basic blocks will not get analyzed. This commit introduces an option to enable a re-run along the failed path in which we do not inline the last inlined call site. This is done by enqueueing the node taken before processing the inlined call site with a special policy encoded in the state; the policy tells us not to inline that call site along the path. This led to a ~10% increase in the number of paths analyzed, though we had expected a much greater coverage improvement. The option is turned off by default for now. llvm-svn: 153534
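A hypothetical sketch of the replay policy described above; the real analyzer works on ExplodedNodes and ProgramStates, and every name below is invented for illustration.
```cpp
#include <queue>
#include <set>

struct CallSite { unsigned id; };

// Per-path state; the "policy" is the set of call sites that must not be
// inlined along this path.
struct PathState {
  std::set<unsigned> noInlineSites;
};

struct WorkItem {
  CallSite site;
  PathState state;
};

std::queue<WorkItem> worklist;

bool ShouldInline(const CallSite &cs, const PathState &st) {
  return st.noInlineSites.count(cs.id) == 0;
}

// When exploration is exhausted inside an inlined call, re-enqueue the node
// taken before the call, tagged so that call site is not inlined again.
void ReplayWithoutInlining(const CallSite &cs, PathState stateBeforeCall) {
  stateBeforeCall.noInlineSites.insert(cs.id);
  worklist.push({cs, stateBeforeCall});
}
```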
-
Anna Zaks authored
llvm-svn: 153533
-
Anna Zaks authored
Report root function name with exhausted block diagnostic. Also, use stack frames, not just any location context when checking if the basic block is in the same context. llvm-svn: 153532
-
Anna Zaks authored
analyzes. (This method can be called twice on the same function.) llvm-svn: 153531
-
Eric Christopher authored
Patch by Jack Carter. llvm-svn: 153530
-
Lang Hames authored
will always be tiny sets, so DenseSet is overkill (SmallSet won't work as we need iteration support). llvm-svn: 153529
-
Akira Hatanaka authored
If EmitNOAT is true, directives ".set noat" and ".set at" are emitted at the beginning and end of a function. llvm-svn: 153528
-
Argyrios Kyrtzidis authored
"#include MACRO(STUFF)". -As an inclusion position for the included file, use the file location of the file where it was included but *after* the macro expansions. We want the macro expansions to be considered as before-in-translation-unit for everything in the included file. -In the preprocessing record take into account that only inclusion directives can be encountered as "out-of-order" (by comparing the start of the range which for inclusions is the hash location) and use binary search if there is an extreme number of macro expansions in the include directive. Fixes rdar://11111779 llvm-svn: 153527
-
Fariborz Jahanian authored
// rdar://11124354 llvm-svn: 153526
-
Eric Christopher authored
testing a) the wrong behavior or b) something that I'm already testing in the new test. llvm-svn: 153525
-
Eric Christopher authored
Fixes PR10105 llvm-svn: 153524
-
Sebastian Redl authored
llvm-svn: 153523
-
Douglas Gregor authored
list of identifiers that are 'public' names at the end of the translation unit, e.g., defined macros or identifiers with top-level names, in sorted order. Meant to support <rdar://problem/10921596>. llvm-svn: 153522
-
Chad Rosier authored
undefined behavior, which Rafael was kind enough to fix. Original commit message for r153423: Use the new range metadata in computeMaskedBits and add a new optimization to instruction simplify that lets us remove an and when loading a boolean value. llvm-svn: 153521
-
Jakob Stoklund Olesen authored
This pass tries to update kill flags, but there are still many bugs. Passes after the load/store optimizer don't need accurate liveness, so don't even try. <rdar://problem/11101911> llvm-svn: 153519
-
Jakob Stoklund Olesen authored
llvm-svn: 153518
-
Jakob Stoklund Olesen authored
Branch folding can use a register scavenger to update liveness information when required. Don't do that if liveness information is already invalid. llvm-svn: 153517
-
Jakob Stoklund Olesen authored
llvm-svn: 153516
-
Fariborz Jahanian authored
same. pr12357. llvm-svn: 153515
-
Alexander Potapenko authored
llvm-svn: 153514
-
Chris Lattner authored
llvm-svn: 153513
-
Fariborz Jahanian authored
case that I forgot to check in. llvm-svn: 153512
-
Jakob Stoklund Olesen authored
Late optimization passes like branch folding and tail duplication can transform the machine code in a way that makes it expensive to keep the register liveness information up to date. There is a fuzzy line between register allocation and late scheduling where the liveness information degrades. The MRI::tracksLiveness() flag makes the line clear: While true, liveness information is accurate, and can be used for register scavenging. Once the flag is false, liveness information is not accurate, and can only be used as a hint. Late passes generally don't need the liveness information, but they will sometimes use the register scavenger to help update it. The scavenger enforces strict correctness, and we have to spend a lot of code to update register liveness that may never be used. llvm-svn: 153511
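A minimal sketch of how a late pass would consult this flag; the guard below is illustrative, not copied from an in-tree pass, and the scavenging itself is elided.
```cpp
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"

using namespace llvm;

// Returns true only while liveness (kill flags, live-ins) is still accurate
// and a register scavenger may rely on it; afterwards liveness is a hint only.
static bool canUseScavenger(const MachineFunction &MF) {
  return MF.getRegInfo().tracksLiveness();
}
```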
-
NAKAMURA Takumi authored
llvm-svn: 153508
-
Chandler Carruth authored
size bloat. Unfortunately, I expect this to disable the majority of the benefit from r152737. I'm hopeful at least that it will fix PR12345. To explain this requires... quite a bit of backstory I'm afraid.

TL;DR: The change in r152737 actually did The Wrong Thing for linkonce-odr functions. This change makes it do the right thing. The benefits we saw were simply luck, not any actual strategy. Benchmark numbers after a mini-blog-post so that I've written down my thoughts on why all of this works and doesn't work...

To understand what's going on here, you have to understand how the "bottom-up" inliner actually works. There are two fundamental modes to the inliner:

1) Standard fixed-cost bottom-up inlining. This is the mode we usually think about. It walks from the bottom of the CFG up to the top, looking at callsites, taking information about the callsite and the called function and computing the expected cost of inlining into that callsite. If the cost is under a fixed threshold, it inlines. It's a touch more complicated than that due to all the bonuses, weights, etc. Inlining the last callsite to an internal function gets higher weight, etc. But essentially, this is the mode of operation.

2) Deferred bottom-up inlining (a term I just made up). This is the interesting mode for this patch and r152737. Initially, this works just like mode #1, but once we have the cost of inlining into the callsite, we don't just compare it with a fixed threshold. First, we check something else. Let's give some names to the entities at this point, or we'll end up hopelessly confused. We're considering inlining a function 'A' into its callsite within a function 'B'. We want to check whether 'B' has any callers, and whether it might be inlined into those callers. If so, we also check whether inlining 'A' into 'B' would block any of the opportunities for inlining 'B' into its callers. We take the sum of the costs of inlining 'B' into its callers where that inlining would be blocked by inlining 'A' into 'B', and if that cost is less than the cost of inlining 'A' into 'B', then we skip inlining 'A' into 'B'.

Now, in order for #2 to make sense, we have to have some confidence that we will actually have the opportunity to inline 'B' into its callers when cheaper, *and* that we'll be able to revisit the decision and inline 'A' into 'B' if that ever becomes the correct tradeoff. This often isn't true for external functions -- we can see very few of their callers, and we won't be able to re-consider inlining 'A' into 'B' if 'B' is external when we finally see more callers of 'B'. There are two cases where we believe this to be true for C/C++ code: functions local to a translation unit, and functions with an inline definition in every translation unit which uses them. These are represented as internal linkage and linkonce-odr (resp.) in LLVM.

I enabled this logic for linkonce-odr in r152737. Unfortunately, when I did that, I also introduced a subtle bug. There was an implicit assumption that the last caller of the function within the TU was the last caller of the function in the program. We want to bonus the last caller of the function in the program by a huge amount for inlining because inlining that callsite has very little cost. Unfortunately, the last caller in the TU of a linkonce-odr function is *not* the last caller in the program, and so we don't want to apply this bonus. If we do, we can apply it to one callsite *per-TU*. Because of the way deferred inlining works, when it sees this bonus applied to one callsite in the TU for 'B', it decides that inlining 'B' is of the *utmost* importance just so we can get that final bonus. It then proceeds to essentially force deferred inlining regardless of the actual cost tradeoff. The result? PR12345: code bloat, code bloat, code bloat. Another result is getting *damn* lucky on a few benchmarks, and the over-inlining exposing critically important optimizations. I would very much like a list of benchmarks that regress after this change goes in, with bitcode before and after. This will help me greatly understand what opportunities the current cost analysis is missing.

Initial benchmark numbers look very good. WebKit files that exhibited the worst of PR12345 went from growing to shrinking compared to Clang with r152737 reverted.
- Bootstrapped Clang is 3% smaller with this change.
- Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4% faster with this change.
Please let me know about any other performance impact you see. Thanks to Nico for reporting and urging me to actually fix, Richard Smith, Duncan Sands, Manuel Klimek, and Benjamin Kramer for talking through the issues today. llvm-svn: 153506
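To make the deferred-inlining tradeoff concrete, here is a hypothetical sketch of the decision it describes; the real inliner works on call sites and InlineCost objects, and these names are invented.
```cpp
#include <vector>

struct CallerOfB {
  int costOfInliningBHere;  // cost of inlining 'B' into this caller
  bool blockedIfAInlined;   // would inlining 'A' into 'B' block this?
};

// Defer inlining 'A' into 'B' when the inlining of 'B' into its own callers
// that it would block is cheaper, in aggregate, than inlining 'A' into 'B'.
bool shouldDeferInliningAIntoB(int costOfAIntoB,
                               const std::vector<CallerOfB> &callersOfB) {
  int blockedCost = 0;
  for (const CallerOfB &caller : callersOfB)
    if (caller.blockedIfAInlined)
      blockedCost += caller.costOfInliningBHere;
  return blockedCost < costOfAIntoB;
}
```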
-
Hongbin Zheng authored
instead of loading the "LLVMConfig.cmake" which is only installed when llvm is configured by cmake. llvm-svn: 153503
-