- Jan 15, 2011
-
Chris Lattner authored
to use it. llvm-svn: 123501
-
- Jan 14, 2011
-
Chris Lattner authored
llvm-svn: 123457
-
Chris Lattner authored
and one that uses SSAUpdater (-scalarrepl-ssa). llvm-svn: 123436
-
Chris Lattner authored
instead of DomTree/DomFrontier. This may be interesting for reducing compile time. This is currently disabled, but seems to work just fine. When this is enabled, we eliminate two runs of dominator frontier, one in the "early per-function" optimizations and one in the "interlaced with inliner" function passes. llvm-svn: 123434
-
- Jan 13, 2011
-
Bob Wilson authored
llvm-svn: 123396
-
Bob Wilson authored
llvm-svn: 123383
-
Bob Wilson authored
This is a minor extension of SROA to handle a special case that is important for some ARM NEON operations. Some of the NEON intrinsics return multiple values, which are handled as struct types containing multiple elements of the same vector type. The corresponding return types declared in the arm_neon.h header have equivalent arrays. We need SROA to recognize that it can split up those arrays and structs into separate vectors, even though they are not always accessed with the same type. SROA already handles loads and stores of an entire alloca by using insertvalue/extractvalue to access the individual pieces, and that code works the same regardless of whether the type is a struct or an array. So, all that needs to be done is to check for compatible arrays and homogeneous structs. llvm-svn: 123381
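For illustration, here is a minimal C sketch (mine, not from the commit) of the kind of arm_neon.h code this enables SROA to scalarize; vld2q_f32 returns a float32x4x2_t, a struct wrapping an array of two float32x4_t values:

    #include <arm_neon.h>

    /* vld2q_f32 is a multi-result NEON intrinsic: it returns a
     * float32x4x2_t, i.e. a struct holding an array of two
     * float32x4_t vectors. With this change SROA can split that
     * aggregate into two separate vector values even though the
     * struct and the array inside it are accessed with different
     * types. */
    float32x4_t sum_deinterleaved(const float *p) {
        float32x4x2_t pair = vld2q_f32(p);          /* struct-of-array return */
        return vaddq_f32(pair.val[0], pair.val[1]); /* per-element accesses   */
    }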
-
Bob Wilson authored
SROA only split up structs and arrays one level at a time, so padding can only cause trouble if it is located in between the struct or array elements. llvm-svn: 123380
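A hypothetical C illustration of that rule (not from the commit): in the first struct below the only padding follows the last element, which is harmless; in the second, the padding sits between the elements, which is the case the check has to reject:

    #include <arm_neon.h>

    /* Tail padding only: the char is followed by padding that rounds
     * the struct up to the vector's 16-byte alignment. Harmless,
     * since it is not between the elements being split. */
    struct tail_pad { float32x4_t v; char c; };

    /* Padding between elements: the char is padded out to the 16-byte
     * alignment of the vector that follows it. This in-between
     * padding is what can cause trouble for SROA. */
    struct mid_pad { char c; float32x4_t v; };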
-
- Jan 12, 2011
-
Devang Patel authored
llvm-svn: 123318
-
Chris Lattner authored
llvm-svn: 123302
-
Chris Lattner authored
of the bootstrap miscompare issue. llvm-svn: 123299
-
Chris Lattner authored
the source of the bootstrap problem. llvm-svn: 123298
-
- Jan 11, 2011
-
Jakob Stoklund Olesen authored
llvm-svn: 123288
-
Cameron Zwarich authored
once at the beginning of GVN instead of once per iteration. llvm-svn: 123278
-
Cameron Zwarich authored
llvm-svn: 123270
-
Chris Lattner authored
actually reached in the testcase in PR8954, but it's safe and good practice. llvm-svn: 123224
-
Chris Lattner authored
phi nodes. It is called from MergeBlockIntoPredecessor which is called from GVN, which claims to preserve these. I'm skeptical that this is the actual problem behind PR8954, but this is a stab in the right direction. llvm-svn: 123222
-
Chris Lattner authored
llvm-svn: 123221
-
Chris Lattner authored
necessarily an uncond branch to the header. This fixes PR8955 (the assertion tripping). llvm-svn: 123219
-
- Jan 10, 2011
-
Chris Lattner authored
llvm-svn: 123149
-
Chris Lattner authored
back to life. llvm-svn: 123146
-
Chris Lattner authored
llvm-svn: 123144
-
- Jan 09, 2011
-
Chris Lattner authored
without informing memdep. This could cause nondeterministic weirdness based on where instructions happen to get allocated, and will hopefully breathe some life into some broken testers. llvm-svn: 123124
-
Cameron Zwarich authored
llvm-svn: 123117
-
Chris Lattner authored
that have the bit set. llvm-svn: 123104
-
- Jan 08, 2011
-
Chris Lattner authored
updating memdep when fusing stores together. This fixes the crash optimizing the bullet benchmark. llvm-svn: 123091
-
Chris Lattner authored
llvm-svn: 123090
-
Chris Lattner authored
larger memsets. Among other things, this fixes rdar://8760394 and allows us to handle "Example 2" from http://blog.regehr.org/archives/320, compiling it into a single 4096-byte memset:

    _mad_synth_mute:                ## @mad_synth_mute
    ## BB#0:                        ## %entry
        pushq   %rax
        movl    $4096, %esi         ## imm = 0x1000
        callq   ___bzero
        popq    %rax
        ret

llvm-svn: 123089
-
Chris Lattner authored
P and P+1 are relative to the same base pointer. llvm-svn: 123087
-
Chris Lattner authored
memset into a single larger memset. llvm-svn: 123086
-
Chris Lattner authored
Split memset formation logic out into its own "tryMergingIntoMemset" helper function. llvm-svn: 123081
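As a rough C-level sketch of the pattern this series of commits targets (a hypothetical example of mine, not from the commits): a run of neighboring zero stores like the one below can now be fused into a single memset:

    /* Hypothetical input: four neighboring byte stores at p, p+1,
     * p+2, and p+3. Because each offset is relative to the same base
     * pointer, the memset-forming logic can fuse them into a single
     * memset(p, 0, 4). */
    void clear4(char *p) {
        p[0] = 0;
        p[1] = 0;
        p[2] = 0;
        p[3] = 0;
    }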
-
Chris Lattner authored
to be foldable into an uncond branch. When this happens, we can make a much simpler CFG for the loop, which is important for nested loop cases where we want the outer loop to be aggressively optimized. Handle this case more aggressively. For example, previously on phi-duplicate.ll we would get this:

    define void @test(i32 %N, double* %G) nounwind ssp {
    entry:
      %cmp1 = icmp slt i64 1, 1000
      br i1 %cmp1, label %bb.nph, label %for.end

    bb.nph:                                 ; preds = %entry
      br label %for.body

    for.body:                               ; preds = %bb.nph, %for.cond
      %j.02 = phi i64 [ 1, %bb.nph ], [ %inc, %for.cond ]
      %arrayidx = getelementptr inbounds double* %G, i64 %j.02
      %tmp3 = load double* %arrayidx
      %sub = sub i64 %j.02, 1
      %arrayidx6 = getelementptr inbounds double* %G, i64 %sub
      %tmp7 = load double* %arrayidx6
      %add = fadd double %tmp3, %tmp7
      %arrayidx10 = getelementptr inbounds double* %G, i64 %j.02
      store double %add, double* %arrayidx10
      %inc = add nsw i64 %j.02, 1
      br label %for.cond

    for.cond:                               ; preds = %for.body
      %cmp = icmp slt i64 %inc, 1000
      br i1 %cmp, label %for.body, label %for.cond.for.end_crit_edge

    for.cond.for.end_crit_edge:             ; preds = %for.cond
      br label %for.end

    for.end:                                ; preds = %for.cond.for.end_crit_edge, %entry
      ret void
    }

Now we get the much nicer:

    define void @test(i32 %N, double* %G) nounwind ssp {
    entry:
      br label %for.body

    for.body:                               ; preds = %entry, %for.body
      %j.01 = phi i64 [ 1, %entry ], [ %inc, %for.body ]
      %arrayidx = getelementptr inbounds double* %G, i64 %j.01
      %tmp3 = load double* %arrayidx
      %sub = sub i64 %j.01, 1
      %arrayidx6 = getelementptr inbounds double* %G, i64 %sub
      %tmp7 = load double* %arrayidx6
      %add = fadd double %tmp3, %tmp7
      %arrayidx10 = getelementptr inbounds double* %G, i64 %j.01
      store double %add, double* %arrayidx10
      %inc = add nsw i64 %j.01, 1
      %cmp = icmp slt i64 %inc, 1000
      br i1 %cmp, label %for.body, label %for.end

    for.end:                                ; preds = %for.body
      ret void
    }

With all of these recent changes, we are now able to compile:

    void foo(char *X) {
      for (int i = 0; i != 100; ++i)
        for (int j = 0; j != 100; ++j)
          X[j+i*100] = 0;
    }

into a single memset of 10000 bytes. This series of changes should also be helpful for other nested loop scenarios as well. llvm-svn: 123079
-
Chris Lattner authored
moving the OrigHeader block anymore: we just merge it away anyway so its code layout doesn't matter. llvm-svn: 123077
-
Chris Lattner authored
that it was leaving in loops after rotation (between the original latch block and the original header). With this change, it is possible for rotated loops to have just a single basic block, which is useful. llvm-svn: 123075
-
Chris Lattner authored
llvm-svn: 123073
-
Chris Lattner authored
1. Rip out LoopRotate's domfrontier updating code. It isn't needed now that LICM doesn't use DF and it is super complex and gross.
2. Make DomTree updating code a lot simpler and faster. The old loop over all the blocks was just to find a block??
3. Change the code that inserts the new preheader to just use SplitCriticalEdge instead of doing an overcomplex reimplementation of it.

No behavior change, except for the name of the inserted preheader. llvm-svn: 123072
-
Chris Lattner authored
and latch blocks. Reorder entry conditions to make the pass faster and more logical. llvm-svn: 123069
-
Chris Lattner authored
llvm-svn: 123068
-
Chris Lattner authored
that are just passed to one function. llvm-svn: 123067
-
Chris Lattner authored
to violate LCSSA form. llvm-svn: 123066
-