- Dec 05, 2011
-
Hal Finkel authored
llvm-svn: 145819
-
Hal Finkel authored
llvm-svn: 145818
-
Hal Finkel authored
llvm-svn: 145817
-
Hal Finkel authored
llvm-svn: 145816
-
Greg Clayton authored
llvm-svn: 145814
-
Douglas Gregor authored
(i.e., 'export *'), to better match the semantics of headers. llvm-svn: 145813
-
Douglas Gregor authored
llvm-svn: 145812
-
Douglas Gregor authored
to re-export anything that it imports. This opt-in feature makes a module behave more like a header, because it can be used to re-export the transitive closure of a (sub)module's dependencies. llvm-svn: 145811
-
Benjamin Kramer authored
- Calling getUser in a loop is much more expensive than iterating over a few instructions.
- Use it instead of the open-coded loop in AddrModeMatcher.
- 5% speedup on ARMDisassembler.cpp Release builds.
llvm-svn: 145810
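As a hedged illustration of the trade-off this message describes, the sketch below contrasts the two strategies using present-day LLVM APIs rather than the 2011 code: scanning a short block's instructions versus walking the value's use list and calling getUser() on every use. The helper names are invented for this example and are not the helper the commit refers to.

```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Value.h"
#include "llvm/Support/Casting.h"

// Scan the (usually short) block: touches only hot, contiguous instructions.
static bool usedInBlockByScanning(const llvm::Value &V, const llvm::BasicBlock &BB) {
  for (const llvm::Instruction &I : BB)
    for (const llvm::Value *Op : I.operand_values())
      if (Op == &V)
        return true;
  return false;
}

// Walk the value's use list: every getUser() call may chase a cold pointer.
static bool usedInBlockByUseList(const llvm::Value &V, const llvm::BasicBlock &BB) {
  for (const llvm::Use &U : V.uses())
    if (const auto *I = llvm::dyn_cast<llvm::Instruction>(U.getUser()))
      if (I->getParent() == &BB)
        return true;
  return false;
}
```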
-
Douglas Gregor authored
llvm-svn: 145809
-
Douglas Gregor authored
it imports, establishing dependencies at the (sub)module granularity. This is not a user-visible change (yet). llvm-svn: 145808
-
NAKAMURA Takumi authored
llvm-svn: 145805
-
Craig Topper authored
llvm-svn: 145804
-
Craig Topper authored
llvm-svn: 145803
-
Nadav Rotem authored
Add support for vectors of pointers. llvm-svn: 145801
-
NAKAMURA Takumi authored
llvm-svn: 145800
-
Greg Clayton authored
and fixes we did. Now that Objective-C classes are represented by symbols with their own type, there were a few more places in the Objective-C code that needed to be fixed when searching for dynamic types. Cleaned up the Objective-C runtime plug-in a bit so that it doesn't keep creating constant strings, and it now makes one less memory access when we find an "isa" in the Objective-C cache. llvm-svn: 145799
-
Howard Hinnant authored
Started using murmur2 when combining multiple size_t's into a single hash, and also for basic_string. Also made hash<thread::id> ever so slightly more portable. I had to tweak one test which is questionable (definitely not portable) anyway. llvm-svn: 145795
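For reference, the sketch below is the published 32-bit MurmurHash2 mixing routine, shown only as an example of the kind of byte-stream combining the message refers to; it is not copied from libc++, whose internal hashing helpers differ in naming and detail.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Classic 32-bit MurmurHash2 (Austin Appleby): multiply-and-xor mixing of
// 4-byte chunks, followed by a final avalanche of the accumulated hash.
std::size_t murmur2_sketch(const void *key, std::size_t len, std::size_t seed = 0) {
  const std::uint32_t m = 0x5bd1e995;
  const int r = 24;
  std::uint32_t h = static_cast<std::uint32_t>(seed ^ len);
  const unsigned char *data = static_cast<const unsigned char *>(key);

  while (len >= 4) {
    std::uint32_t k;
    std::memcpy(&k, data, 4);        // read a 4-byte chunk without alignment issues
    k *= m; k ^= k >> r; k *= m;
    h *= m; h ^= k;
    data += 4; len -= 4;
  }
  switch (len) {                     // mix in the trailing 0-3 bytes
  case 3: h ^= data[2] << 16; [[fallthrough]];
  case 2: h ^= data[1] << 8;  [[fallthrough]];
  case 1: h ^= data[0]; h *= m;
  }
  h ^= h >> 13; h *= m; h ^= h >> 15;
  return h;
}
```

Combining several size_t's into a single hash then amounts to running this routine over the bytes of the array of values.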
-
- Dec 04, 2011
-
Jakub Staszak authored
llvm-svn: 145793
-
Jakub Staszak authored
llvm-svn: 145792
-
Tobias Grosser authored
We forgot to include the unistd.h header file that defines the functions mentioned above. This was not a problem with the GNU C++ library; however, it did not work with libc++. llvm-svn: 145790
-
Eric Christopher authored
not get there any other way. llvm-svn: 145789
-
David Blaikie authored
llvm-svn: 145785
-
Bob Wilson authored
llvm-svn: 145783
-
Fariborz Jahanian authored
Function or array lvalue conversions happen. llvm-svn: 145782
-
Anton Korobeynikov authored
Maybe some targets should use this as well. Patch by Evgeniy Stepanov! llvm-svn: 145781
-
- Dec 03, 2011
-
Venkatraman Govindaraju authored
AnalyzeBranch doesn't change the successor, just the order. llvm-svn: 145779
-
Howard Hinnant authored
Version #next on the hash functions for scalars. This builds on Dave's work, extends it to T*, and changes the way double and long double are handled (no longer convert to float on 32 bit). I also picked up a minor bug with uninitialized bits on the upper end of size_t when sizeof(size_t) > sizeof(T), e.g. in hash<float>. Most of the functionality has been put in one place: __scalar_hash in <memory>. Unfortunately I could not reuse __scalar_hash for hash<long double> on x86 because of the padding bits which need to be zeroed. I didn't want to add this zeroing step to the more general __scalar_hash when it isn't needed (in the absence of padding bits). I'm not ignoring the hash<string> issue (possibly changing that to a better hash). I just haven't gotten there yet. llvm-svn: 145778
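A minimal sketch of the idea behind __scalar_hash as described here, assuming nothing about the actual libc++ source: copy the scalar's object representation into zero-initialized size_t-sized words and mix those, so that when sizeof(size_t) > sizeof(T) no uninitialized upper bits leak into the hash. The FNV-1a combiner below is only a stand-in for the real mixing function.

```cpp
#include <cstddef>
#include <cstring>

// Hash a scalar via its bit pattern. The word buffer is zero-initialized, so
// bytes past sizeof(T) contribute zeros rather than garbage -- the bug the
// message mentions for sizeof(size_t) > sizeof(T), e.g. hash<float> on 64-bit.
template <class T>
std::size_t scalar_hash_sketch(T value) {
  constexpr std::size_t NumWords =
      (sizeof(T) + sizeof(std::size_t) - 1) / sizeof(std::size_t);
  std::size_t words[NumWords] = {};
  std::memcpy(words, &value, sizeof(T));
  std::size_t h = 14695981039346656037ULL;   // FNV-1a offset basis (stand-in mixer)
  for (std::size_t w : words) {
    h ^= w;
    h *= 1099511628211ULL;                   // FNV-1a prime
  }
  return h;
}
```

Types with internal padding bits, such as the x86 80-bit long double, would still need those bits zeroed before hashing, which is why the message notes that __scalar_hash could not be reused there.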
-
Greg Clayton authored
add them to a fast lookup map. lldb_private::Symtab now exports the following public typedefs:

    namespace lldb_private {
        class Symtab {
            typedef std::vector<uint32_t> IndexCollection;
            typedef UniqueCStringMap<uint32_t> NameToIndexMap;
        };
    }

Clients can then find symbols by name and/or type and end up with a Symtab::IndexCollection that is filled with indexes. These indexes can then be put into a name-to-index lookup map, controlling whether the mangled and demangled names get added to the map:

    bool add_demangled = true;
    bool add_mangled = true;
    Symtab::NameToIndexMap name_to_index;
    symtab->AppendSymbolNamesToMap (indexes, add_demangled, add_mangled, name_to_index);

This can be repeated as many times as needed to build a lookup table that you are happy with, and then the table can be sorted:

    name_to_index.Sort();

Now name lookups can be done using a subset of the symbols you extracted from the symbol table. This is currently being used to extract Objective-C types from object files when there is no debug info in SymbolFileSymtab. Cleaned up how the Objective-C types were being vended to be more efficient, and fixed some errors in the regular expression that was being used. llvm-svn: 145777
-
Douglas Gregor authored
types. Patch from Dmitri Rubinstein! llvm-svn: 145776
-
Douglas Gregor authored
a class is marked 'final', from Alberto Ganesh Barbati! Fixes PR11462. llvm-svn: 145775
-
Fariborz Jahanian authored
inferred from return types. All the return statements have to agree about the type. // rdar://10466373 llvm-svn: 145774
-
Benjamin Kramer authored
-3% on ARMDisassembler.cpp. llvm-svn: 145773
-
Francois Pichet authored
In Microsoft mode, don't perform typo correction in a template member function dependent context because it interferes with the "lookup into dependent bases of class templates" feature. Basically typo correction will try to offer a correction instead of looking into type dependent base classes. I found this problem while parsing Microsoft ATL code with clang. llvm-svn: 145772
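A hypothetical example (the class names are invented here) of the interaction being described: under Microsoft compatibility, name lookup is allowed to find members of dependent base classes at instantiation time, so a typo "correction" offered at parse time would be spurious.

```cpp
template <typename T> struct Container {   // hypothetical dependent base
  int size() const { return 0; }
};

template <typename T> struct Wrapper : Container<T> {
  int count() const {
    // Standard C++ requires this->size() here; in Microsoft mode Clang defers
    // the lookup and finds size() in Container<T>, so offering a typo
    // correction for the bare call would get in the way.
    return size();
  }
};
```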
-
Benjamin Kramer authored
llvm-svn: 145771
-
Benjamin Kramer authored
Add a "seen blocks" cache to LVI to avoid a linear scan over the whole cache just to remove no blocks from the maps. -15% on ARMDisassembler.cpp (Release build). It's not that great to add another layer of caching to the caching-heavy LVI but I don't see a better way. llvm-svn: 145770
-
Sebastian Redl authored
llvm-svn: 145769
-
Sanjoy Das authored
libgcc sets the stack limit field in the TCB to 256 bytes above the actual allocated stack limit. This means that if the function's stack frame needs less than 256 bytes, we can just compare the stack pointer with the stack limit. This should result in fewer calls to __morestack. llvm-svn: 145766
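A pseudocode sketch, written in C++ for readability, of the prologue decision this describes; the real check is emitted as target assembly, and the 256-byte cushion is the value libgcc reserves above the true limit.

```cpp
// StackLimit is the TCB field, which already sits 256 bytes above the real
// end of the stack; the stack grows downward.
static bool NeedsMoreStack(const char *SP, const char *StackLimit,
                           unsigned FrameSize) {
  if (FrameSize < 256)
    return SP < StackLimit;             // the cushion already covers this frame
  return SP - FrameSize < StackLimit;   // larger frames account for their size
}
```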
-
Sanjoy Das authored
Currently LLVM pads the call to __morestack with an add and sub of 8 bytes to esp. This isn't correct, since __morestack expects the call to be followed directly by a ret. This commit also adjusts the relevant test case. llvm-svn: 145765
-
Greg Clayton authored
class. The thing with Objective-C classes is that the debug info might have a definition that isn't just a forward declaration, yet is still incomplete. So we need to look and see whether we can find the complete definition, and avoid recursing a lot due to the fact that our accelerator tables will have many versions of the type but only one complete one. We also might not have the complete type at all, and we need to deal with that correctly. llvm-svn: 145759
-