"git@repo.hca.bsc.es:rferrer/llvm-epi-0.8.git" did not exist on "9141b611ad75f49db72ac5b1c2b703262d4e294f"
- Aug 31, 2010
-
Owen Anderson authored
llvm-svn: 112543
-
Owen Anderson authored
Fixes and cleanups pointed out by Chris. In general, be careful to handle 0 results from ComputeValueKnownInPredecessors (indicating undef), and re-use existing constant folding APIs. llvm-svn: 112539
-
- Aug 29, 2010
-
Chris Lattner authored
instead of PromoteMemToReg. This allows it to stop using DF and DT, eliminating a computation of DT and DF from clang -O3. Clang is now down to 2 runs of DomFrontier. llvm-svn: 112457
-
Chris Lattner authored
assertingvh so we get a violent explosion if the pointer dangles. 2) Fix AliasSetTracker::deleteValue to remove call sites with by-pointer comparisons instead of by-alias queries. Using findAliasSetForCallSite can cause alias sets to get merged when they shouldn't, and can also miss alias sets when the call is readonly. #2 fixes PR6889, which only repros with a .c file :( llvm-svn: 112452
-
Chris Lattner authored
out of loops, just delete them. llvm-svn: 112451
-
Chris Lattner authored
symtab manipulation, so it's faster (in addition to being more elegant) llvm-svn: 112450
-
Chris Lattner authored
llvm-svn: 112449
-
Chris Lattner authored
of AST to remove the hoisted instruction from the AST, since it is no longer in the loop. llvm-svn: 112448
-
Chris Lattner authored
LICM correctly. When sinking an instruction, it should not add entries for the sunk instruction to the AST; it should remove the entry for the sunk instruction. The blocks being sunk to are not in the loop, so their instructions shouldn't be in the AST (yet)! llvm-svn: 112447
-
Chris Lattner authored
keeping them around until the pass is destroyed, keep them around a) just when useful (not for outer loops) and b) destroy them right after we use them. This should reduce memory use and fixes potential bugs where a loop is deleted and another loop gets allocated to the same address. llvm-svn: 112446
-
Chris Lattner authored
claims that it preserves domfrontier when it doesn't really. llvm-svn: 112445
-
Chris Lattner authored
for the unroller to pretend it supports updating it. It still has a horrible hack for DomTree. llvm-svn: 112444
-
Dan Gohman authored
other filtering techniques, as those may allow it to filter out more obviously unprofitable candidates. llvm-svn: 112441
-
Dan Gohman authored
LSRInstance data structures up to date. This fixes some pessimizations caused by stale data which will be exposed in an upcoming change. llvm-svn: 112440
-
Dan Gohman authored
NarrowSearchSpaceUsingHeuristics into separate functions. llvm-svn: 112439
-
Dan Gohman authored
llvm-svn: 112438
-
Dan Gohman authored
llvm-svn: 112437
-
Dan Gohman authored
the high-level logic. llvm-svn: 112436
-
Dan Gohman authored
llvm-svn: 112435
-
Dan Gohman authored
llvm-svn: 112434
-
Chris Lattner authored
preserves domfrontier. It does preserve AA though. llvm-svn: 112419
-
Chris Lattner authored
require DomFrontier. Dropping this doesn't actually save any runs of the pass though. llvm-svn: 112418
-
Chris Lattner authored
Among other things, this uses SSAUpdater instead of PromoteMemToReg. llvm-svn: 112417
-
Chris Lattner authored
llvm-svn: 112412
-
Chris Lattner authored
This leads to much simpler code. llvm-svn: 112410
-
Chris Lattner authored
llvm-svn: 112409
-
Chris Lattner authored
llvm-svn: 112408
-
Chris Lattner authored
getUniqueExitBlocks instead of getExitBlocks and a manual set to eliminate dupes. llvm-svn: 112405
-
Chris Lattner authored
llvm-svn: 112404
-
- Aug 28, 2010
-
Chris Lattner authored
being actively maintained, improved, or extended. llvm-svn: 112356
-
Chris Lattner authored
I'm aware of, aren't maintained, and LVI will be replacing their value. nlewycky approved this on irc. llvm-svn: 112355
-
Chris Lattner authored
llvm-svn: 112351
-
Chris Lattner authored
llvm-svn: 112350
-
Chris Lattner authored
like this:

    struct S { float A, B, C, D; };
    struct S g;
    struct S bar() { struct S A = g; ++A.B; A.A = 42; return A; }

we now generate:

    _bar:                                   ## @bar
    ## BB#0:                                ## %entry
        movq    _g@GOTPCREL(%rip), %rax
        movss   12(%rax), %xmm0
        pshufd  $16, %xmm0, %xmm0
        movss   4(%rax), %xmm2
        movss   8(%rax), %xmm1
        pshufd  $16, %xmm1, %xmm1
        unpcklps        %xmm0, %xmm1
        addss   LCPI1_0(%rip), %xmm2
        pshufd  $16, %xmm2, %xmm2
        movss   LCPI1_1(%rip), %xmm0
        pshufd  $16, %xmm0, %xmm0
        unpcklps        %xmm2, %xmm0
        ret

instead of:

    _bar:                                   ## @bar
    ## BB#0:                                ## %entry
        movq    _g@GOTPCREL(%rip), %rax
        movss   12(%rax), %xmm0
        pshufd  $16, %xmm0, %xmm0
        movss   4(%rax), %xmm2
        movss   8(%rax), %xmm1
        pshufd  $16, %xmm1, %xmm1
        unpcklps        %xmm0, %xmm1
        addss   LCPI1_0(%rip), %xmm2
        movd    %xmm2, %eax
        shlq    $32, %rax
        addq    $1109917696, %rax           ## imm = 0x42280000
        movd    %rax, %xmm0
        ret

llvm-svn: 112345
-
Chris Lattner authored
element insertion from the pieces that feed into the vector. This handles a pattern that occurs frequently due to code generated for the x86-64 ABI. We now compile something like this:

    struct S { float A, B, C, D; };
    struct S g;
    struct S bar() { struct S A = g; ++A.A; ++A.C; return A; }

into all nice vector operations:

    _bar:                                   ## @bar
    ## BB#0:                                ## %entry
        movq    _g@GOTPCREL(%rip), %rax
        movss   LCPI1_0(%rip), %xmm1
        movss   (%rax), %xmm0
        addss   %xmm1, %xmm0
        pshufd  $16, %xmm0, %xmm0
        movss   4(%rax), %xmm2
        movss   12(%rax), %xmm3
        pshufd  $16, %xmm2, %xmm2
        unpcklps        %xmm2, %xmm0
        addss   8(%rax), %xmm1
        pshufd  $16, %xmm1, %xmm1
        pshufd  $16, %xmm3, %xmm2
        unpcklps        %xmm2, %xmm1
        ret

instead of icky integer operations:

    _bar:                                   ## @bar
        movq    _g@GOTPCREL(%rip), %rax
        movss   LCPI1_0(%rip), %xmm1
        movss   (%rax), %xmm0
        addss   %xmm1, %xmm0
        movd    %xmm0, %ecx
        movl    4(%rax), %edx
        movl    12(%rax), %esi
        shlq    $32, %rdx
        addq    %rcx, %rdx
        movd    %rdx, %xmm0
        addss   8(%rax), %xmm1
        movd    %xmm1, %eax
        shlq    $32, %rsi
        addq    %rax, %rsi
        movd    %rsi, %xmm1
        ret

This resolves rdar://8360454

llvm-svn: 112343
-
Benjamin Kramer authored
llvm-svn: 112332
-
Owen Anderson authored
Add a prototype of a new peephole optimizing pass that uses LazyValue info to simplify PHIs and selects. This pass addresses the missed optimizations from PR2581 and PR4420. llvm-svn: 112325
-
Chris Lattner authored
    A = shl x, 42
    ...
    B = lshr ..., 38

which can be transformed into:

    A = shl x, 4
    ...

iff we can prove that the would-be-shifted-in bits are already zero. This eliminates two shifts in the testcase and allows elimination of the whole i128 chain in the real example. llvm-svn: 112314
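A rough C sketch of the arithmetic behind this combine (not from the commit: the function names are illustrative, the 42/38 shift amounts are taken from the message above, and the 22-bit bound stands in for the proof that the high bits are already zero):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* The original pair of shifts: shl by 42, then lshr by 38. */
    static uint64_t two_shifts(uint64_t x) { return (x << 42) >> 38; }

    /* The single net shift the combine would emit instead. */
    static uint64_t one_shift(uint64_t x)  { return x << 4; }

    int main(void) {
        for (uint64_t y = 0; y < (1u << 22); ++y) {
            /* Precondition: all bits above bit 21 of x are already zero. */
            uint64_t x = y & 0x3FFFFF;
            assert(two_shifts(x) == one_shift(x));
        }
        printf("shl 42 + lshr 38 matches a single shl 4 when the high bits are zero\n");
        return 0;
    }

In IR the precondition is established by known-bits analysis rather than by an explicit mask; the arithmetic being exploited is the same.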
-
Chris Lattner authored
framework, which is good at ripping through bitfield operations. This generalizes a bunch of the existing xforms that instcombine does, such as (x << c) >> c -> and, to handle intermediate logical nodes. This is useful for ripping up the "promote to large integer" code produced by SRoA. llvm-svn: 112304
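A minimal C sketch of the basic (x << c) >> c -> and xform mentioned above (the function names, the fixed shift amount, and the sample values are illustrative, not from the commit): for an unsigned value, shifting the top bits out and back in is the same as clearing them with a single and.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { C = 11 };                            /* any shift amount from 1 to 31 */

    /* (x << c) >> c : shift the top c bits out and then back in as zeros. */
    static uint32_t shifted(uint32_t x) { return (x << C) >> C; }

    /* The single 'and' the xform produces: clear the top c bits directly. */
    static uint32_t masked(uint32_t x)  { return x & (UINT32_MAX >> C); }

    int main(void) {
        uint32_t samples[] = { 0, 1, 0x7FF, 0x800, 0xDEADBEEF, UINT32_MAX };
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; ++i)
            assert(shifted(samples[i]) == masked(samples[i]));
        printf("(x << %d) >> %d == x & 0x%08X for all samples\n",
               C, C, (unsigned)(UINT32_MAX >> C));
        return 0;
    }

The generalization described in the commit is about seeing through logical operations sitting between the two shifts, which the simple back-to-back case here does not exercise.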
-
- Aug 27, 2010
-
Chris Lattner authored
more general simplify demanded bits logic. llvm-svn: 112291
-