- Sep 05, 2010

- Chris Lattner authored
  llvm-svn: 113117
- Chris Lattner authored
  llvm-svn: 113116
- Chris Lattner authored
  llvm-svn: 113115
- Chris Lattner authored
  llvm-svn: 113114
- Chris Lattner authored
  llvm-svn: 113113
- Chris Lattner authored
  llvm-svn: 113109
- Lang Hames authored
  llvm-svn: 113108
- Nick Lewycky authored
  llvm-svn: 113106
- Nick Lewycky authored
  This reduces malloc traffic (yay!) and removes MergeFunctionsEqualityInfo.
  llvm-svn: 113105
- Nick Lewycky authored
  strong functions first to make sure they're the canonical definitions and then do a second pass looking only for weak functions.
  llvm-svn: 113104
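  A minimal sketch of the two-pass order described above, in C++ with
  hypothetical types rather than the real MergeFunctions code
  (insertOrMerge is an assumed helper, not an LLVM API):

    #include <vector>

    // Hypothetical stand-in for a function considered for merging.
    struct Fn { bool isWeak = false; };

    // Assumed helper: fold F into an equivalent canonical definition
    // if one was already seen, otherwise record F as canonical.
    void insertOrMerge(Fn *F) { /* ... */ }

    // Strong functions go first so they become the canonical
    // definitions; weak functions are folded in a second pass.
    void mergeFunctions(const std::vector<Fn *> &Worklist) {
      for (Fn *F : Worklist)
        if (!F->isWeak)
          insertOrMerge(F);
      for (Fn *F : Worklist)
        if (F->isWeak)
          insertOrMerge(F);
    }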
- Chris Lattner authored (rdar://6653118)
  Since mem2reg isn't run at -O0, we get a ton of reloads from the stack. For example, before, this code:

    int foo(int x, int y, int z) {
      return x+y+z;
    }

  used to compile into:

    _foo:                          ## @foo
            subq    $12, %rsp
            movl    %edi, 8(%rsp)
            movl    %esi, 4(%rsp)
            movl    %edx, (%rsp)
            movl    8(%rsp), %edx
            movl    4(%rsp), %esi
            addl    %edx, %esi
            movl    (%rsp), %edx
            addl    %esi, %edx
            movl    %edx, %eax
            addq    $12, %rsp
            ret

  Now we produce:

    _foo:                          ## @foo
            subq    $12, %rsp
            movl    %edi, 8(%rsp)
            movl    %esi, 4(%rsp)
            movl    %edx, (%rsp)
            movl    8(%rsp), %edx
            addl    4(%rsp), %edx  ## Folded load
            addl    (%rsp), %edx   ## Folded load
            movl    %edx, %eax
            addq    $12, %rsp
            ret

  Fewer instructions and less register use = faster compiles.
  llvm-svn: 113102

- Sep 04, 2010

- Jakob Stoklund Olesen authored
  Clobber ranges are no longer used when joining physical registers. Instead, all aliases are checked for interference.
  llvm-svn: 113084
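  A hedged sketch of that scheme, with simplified stand-ins for the
  coalescer's data structures (not the real LLVM API): the candidate
  is checked against every alias of the physical register, so no
  clobber-range bookkeeping is needed.

    #include <map>
    #include <vector>

    // Simplified live interval: a set of [start, end) slot ranges.
    struct LiveInterval {
      std::vector<std::pair<unsigned, unsigned>> Ranges;
      bool overlaps(const LiveInterval &Other) const {
        for (const auto &A : Ranges)
          for (const auto &B : Other.Ranges)
            if (A.first < B.second && B.first < A.second)
              return true;
        return false;
      }
    };

    // Refuse the join if the virtual register's interval interferes
    // with the live interval of *any* alias of PhysReg.
    bool canJoinPhysReg(const LiveInterval &VirtLI, unsigned PhysReg,
                        const std::map<unsigned, std::vector<unsigned>> &Aliases,
                        const std::map<unsigned, LiveInterval> &Intervals) {
      for (unsigned Alias : Aliases.at(PhysReg)) {
        auto It = Intervals.find(Alias);
        if (It != Intervals.end() && VirtLI.overlaps(It->second))
          return false;  // interference with an alias: don't join
      }
      return true;
    }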
- Chris Lattner authored
  that diagnose invalid references to references.
  llvm-svn: 113078
- Chris Lattner authored
  llvm-svn: 113077
- Chris Lattner authored
  llvm-svn: 113075
- Chris Lattner authored
  llvm-svn: 113073
- Chris Lattner authored
  not SelectAddr
  llvm-svn: 113072
- Chris Lattner authored
  llvm-svn: 113071
- Bruno Cardoso Lopes authored
  llvm-svn: 113059
- Bruno Cardoso Lopes authored
  llvm-svn: 113058
- Dan Gohman authored
  into an inner loop, as the new loop iteration may differ substantially. This fixes PR8078.
  llvm-svn: 113057
- Bruno Cardoso Lopes authored
  llvm-svn: 113056
- Bruno Cardoso Lopes authored
  llvm-svn: 113055
- Bruno Cardoso Lopes authored
  llvm-svn: 113050
- Bruno Cardoso Lopes authored
  llvm-svn: 113048
- Bruno Cardoso Lopes authored
  llvm-svn: 113047
- Bruno Cardoso Lopes authored
  llvm-svn: 113045
- Bruno Cardoso Lopes authored
  llvm-svn: 113044
- Bruno Cardoso Lopes authored
  llvm-svn: 113043
- Chris Lattner authored
  location is being re-stored to the memory location. We would get a dangling pointer from the SSAUpdate data structure and miss a use. This fixes PR8068.
  llvm-svn: 113042
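  A hypothetical reduction of the shape being described (not the
  actual PR8068 test case):

    int g;

    int f() {
      int v = g;   // load being promoted to a register
      g = v;       // re-store of the value to the same location
      return v;    // a use the SSA updater must not lose
    }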
- Bruno Cardoso Lopes authored
  llvm-svn: 113035
- Bruno Cardoso Lopes authored
  llvm-svn: 113034
- Bruno Cardoso Lopes authored
  checking each standalone condition and deciding whether to emit target-specific nodes or to remove the condition if it has already been matched.
  llvm-svn: 113031
- Owen Anderson authored
  llvm-svn: 113025
- Eric Christopher authored
  various breakages appear to be dealt with. Patch by Pekka Jääskeläinen.
  llvm-svn: 113024
- Dan Gohman authored
  invertible. ScalarEvolution's folding routines don't always succeed in canonicalizing equal expressions to a single canonical form, and this can cause these asserts to fail even though there's no actual correctness problem. This fixes PR8066.
  llvm-svn: 113021
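  An illustrative example (not ScalarEvolution's API) of why such an
  assert can fire spuriously: a folder may leave mathematically equal
  expressions in structurally different forms.

    #include <cassert>

    // Equal values, different expression trees: a checker that
    // insists both sides fold to the *same* structure can fail even
    // though no miscompile is possible.
    int factored(int a, int b)    { return 2 * (a + b); }
    int distributed(int a, int b) { return 2 * a + 2 * b; }

    int main() {
      assert(factored(3, 4) == distributed(3, 4));
      return 0;
    }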
- Bruno Cardoso Lopes authored
  Use target specific nodes instead of relying on unpckl and unpckh pattern fragments during isel time. Also place a depth limit in getShuffleScalarElt.
  llvm-svn: 113020

- Sep 03, 2010

- Jim Grosbach authored
  overload UserInInstr. Explicitly check Allocatable. The early exit in the condition will mean the performance impact of the extra test should be minimal.
  llvm-svn: 113016
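  A sketch of the condition this describes, with hypothetical names
  standing in for the fast allocator's bit vectors: allocatability is
  tested explicitly rather than folded into UserInInstr, and the
  short-circuit early exit keeps the extra test cheap.

    #include <bitset>

    constexpr unsigned NumRegs = 256;   // assumed register count
    std::bitset<NumRegs> UserInInstr;   // regs used in this instruction
    std::bitset<NumRegs> Allocatable;   // regs the target lets us allocate

    bool isUnavailable(unsigned PhysReg) {
      // Early exit: most queries stop at the first cheap test.
      return UserInInstr.test(PhysReg) || !Allocatable.test(PhysReg);
    }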
- Dale Johannesen authored
  Bruno, please review.
  llvm-svn: 113014
- David Greene authored
  Generalize getFieldType to work on all TypedInits. Add a couple of testcases from Amaury Pouly.
  llvm-svn: 113010