- Sep 05, 2010
-
-
Chris Lattner authored
llvm-svn: 113117
-
Chris Lattner authored
llvm-svn: 113116
-
Chris Lattner authored
llvm-svn: 113115
-
Chris Lattner authored
llvm-svn: 113114
-
Chris Lattner authored
llvm-svn: 113113
-
Howard Hinnant authored
llvm-svn: 113110
-
Chris Lattner authored
llvm-svn: 113109
-
Lang Hames authored
llvm-svn: 113108
-
Nick Lewycky authored
llvm-svn: 113106
-
Nick Lewycky authored
This reduces malloc traffic (yay!) and removes MergeFunctionsEqualityInfo. llvm-svn: 113105
-
Nick Lewycky authored
strong functions first to make sure they're the canonical definitions and then do a second pass looking only for weak functions. llvm-svn: 113104
-
Nick Lewycky authored
David Vandevoorde's name correctly. llvm-svn: 113103
-
Chris Lattner authored
rdar://6653118
Since mem2reg isn't run at -O0, we get a ton of reloads from the stack. For example, before, this code:

    int foo(int x, int y, int z) { return x+y+z; }

used to compile into:

    _foo:                       ## @foo
        subq    $12, %rsp
        movl    %edi, 8(%rsp)
        movl    %esi, 4(%rsp)
        movl    %edx, (%rsp)
        movl    8(%rsp), %edx
        movl    4(%rsp), %esi
        addl    %edx, %esi
        movl    (%rsp), %edx
        addl    %esi, %edx
        movl    %edx, %eax
        addq    $12, %rsp
        ret

Now we produce:

    _foo:                       ## @foo
        subq    $12, %rsp
        movl    %edi, 8(%rsp)
        movl    %esi, 4(%rsp)
        movl    %edx, (%rsp)
        movl    8(%rsp), %edx
        addl    4(%rsp), %edx   ## Folded load
        addl    (%rsp), %edx    ## Folded load
        movl    %edx, %eax
        addq    $12, %rsp
        ret

Fewer instructions and less register use = faster compiles. llvm-svn: 113102
-
Howard Hinnant authored
llvm-svn: 113101
-
Howard Hinnant authored
llvm-svn: 113100
-
Howard Hinnant authored
llvm-svn: 113099
-
Howard Hinnant authored
llvm-svn: 113098
-
Howard Hinnant authored
llvm-svn: 113097
-
Chris Lattner authored
I think this wraps up all the legal cases. llvm-svn: 113096
-
Chris Lattner authored
llvm-svn: 113095
-
Chris Lattner authored
llvm-svn: 113094
-
Chris Lattner authored
llvm-svn: 113093
-
Chris Lattner authored
llvm-svn: 113092
-
Chris Lattner authored
llvm-svn: 113091
-
Chris Lattner authored
which it should have done from the beginning. As usual, the most fun with this sort of change is updating all the testcases. llvm-svn: 113090
-
Howard Hinnant authored
llvm-svn: 113089
-
Chris Lattner authored
llvm-svn: 113088
-
Chris Lattner authored
llvm-svn: 113087
-
Howard Hinnant authored
Changed __config to react to all of clang's currently documented __has_feature flags, and renamed _LIBCPP_MOVE to _LIBCPP_HAS_NO_RVALUE_REFERENCES to be more consistent with the rest of libc++'s flags and with clang's nomenclature. llvm-svn: 113086
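The `__has_feature` mechanism referred to above works roughly like this; `cxx_rvalue_references` is clang's documented feature name and `_LIBCPP_HAS_NO_RVALUE_REFERENCES` is the flag named in the message, but the guard below is a generic sketch rather than libc++'s exact `__config` contents.

```cpp
// Compilers without __has_feature (e.g. older GCC) treat every feature
// as absent, so the macro degrades gracefully to 0.
#ifndef __has_feature
#define __has_feature(x) 0
#endif

// Define the negative-sense flag when the compiler does NOT report
// rvalue-reference support, matching the renamed libc++ convention.
#if !__has_feature(cxx_rvalue_references)
#define _LIBCPP_HAS_NO_RVALUE_REFERENCES
#endif
```

The negative-sense naming (`_LIBCPP_HAS_NO_...`) means an unknown or old compiler conservatively gets the C++03 code paths by default.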
-
Chris Lattner authored
check in the "typedef for anonymous type" check should have been a getAs. llvm-svn: 113085
-
- Sep 04, 2010
-
-
Jakob Stoklund Olesen authored
Clobber ranges are no longer used when joining physical registers. Instead, all aliases are checked for interference. llvm-svn: 113084
-
Fariborz Jahanian authored
generate the necessary code. This patch fixes it. // rdar://8389655 llvm-svn: 113079
-
Chris Lattner authored
that diagnose invalid references to references. llvm-svn: 113078
-
Chris Lattner authored
llvm-svn: 113077
-
Chris Lattner authored
llvm-svn: 113076
-
Chris Lattner authored
llvm-svn: 113075
-
Chris Lattner authored
llvm-svn: 113074
-
Chris Lattner authored
llvm-svn: 113073
-
Chris Lattner authored
not SelectAddr llvm-svn: 113072
-
Chris Lattner authored
llvm-svn: 113071
-