- Feb 01, 2009
-
Owen Anderson authored
llvm-svn: 63492
-
Eli Friedman authored
constants. llvm-svn: 63491
-
Owen Anderson authored
Fix an issue in PHI construction that was exposed by GCC 4.2 producing a different set iteration order for the reg_iterator. llvm-svn: 63490
-
Evan Cheng authored
llvm-svn: 63489
-
- Jan 31, 2009
-
Dale Johannesen authored
llvm-svn: 63488
-
Nick Lewycky authored
turn icmp eq a+x, b+x into icmp eq a, b if a+x or b+x has other uses. This may have been increasing register pressure leading to the bzip2 slowdown. llvm-svn: 63487
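The soundness side of this fold can be checked mechanically: adding a fixed x is a bijection in n-bit modular arithmetic, so equality is preserved in both directions. A minimal Python sketch (the function name is illustrative, not LLVM code):

```python
def icmp_eq_fold_holds(a, b, x, bits=32):
    """(a+x == b+x) <=> (a == b) in `bits`-wide modular arithmetic.

    Adding a fixed x is a bijection mod 2**bits, so the fold is always
    sound; the commit is about when it is *profitable* (whether a+x or
    b+x has other uses), not whether it is correct.
    """
    mask = (1 << bits) - 1
    lhs = ((a + x) & mask) == ((b + x) & mask)
    rhs = (a & mask) == (b & mask)
    return lhs == rhs

# Exhaustive check at a small width:
assert all(icmp_eq_fold_holds(a, b, x, bits=4)
           for a in range(16) for b in range(16) for x in range(16))
```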
-
Dale Johannesen authored
llvm-svn: 63486
-
Dale Johannesen authored
llvm-svn: 63485
-
Anders Carlsson authored
llvm-svn: 63484
-
Chris Lattner authored
improvements to the EvaluateInDifferentType code. This code works by just inserting a bunch of new code and then seeing if it is useful. Instcombine is not allowed to do this: it can only insert new code if it is useful, and only when it is converging to a more canonical fixed point. Now that we iterate when DCE makes progress, this causes an infinite loop when the code ends up not being used. llvm-svn: 63483
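For reference, the transformation this machinery performs is evaluating an expression directly in a narrower type when only the truncated result is needed. A toy Python model of the underlying identity for wrapping addition (illustrative only; LLVM operates on IR, not integers):

```python
def narrow_eval_matches(a, b, wide_bits=32, narrow_bits=8):
    """trunc(a + b) computed in the wide type equals
    (trunc a) + (trunc b) computed in the narrow type,
    for wrapping (two's-complement style) addition."""
    wide_mask = (1 << wide_bits) - 1
    narrow_mask = (1 << narrow_bits) - 1
    # Compute in the wide type, then truncate.
    wide_then_trunc = ((a + b) & wide_mask) & narrow_mask
    # Truncate the operands first, then compute in the narrow type.
    narrow_eval = ((a & narrow_mask) + (b & narrow_mask)) & narrow_mask
    return wide_then_trunc == narrow_eval

assert all(narrow_eval_matches(a, b)
           for a in range(0, 1 << 12, 7) for b in range(0, 1 << 12, 11))
```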
-
Duncan Sands authored
returned by getShiftAmountTy may be too small to hold shift values (it is an i8 on x86-32). Before and during type legalization, use a large but legal type for shift amounts: getPointerTy; afterwards use getShiftAmountTy, fixing up any shift amounts with a big type during operation legalization. Thanks to Dan for writing the original patch (which I shamelessly pillaged). llvm-svn: 63482
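The failure mode here is mechanical: if the shift-amount type is i8, any amount of 256 or more wraps when truncated into it, silently changing the shift. A hedged Python model (not LLVM code):

```python
def shift_with_amount_type(value, amount, amount_bits):
    """Model a left shift whose amount is first truncated into a
    fixed-width shift-amount type (as a too-small getShiftAmountTy
    would force); the truncation can silently wrap the amount."""
    wrapped = amount & ((1 << amount_bits) - 1)
    return value << wrapped

# A 300-bit value legitimately needs shift amounts up to 299, which do
# not fit in an i8: 256 truncates to 0 and the shift is lost.
assert shift_with_amount_type(1, 256, 8) == 1            # wrong result
assert shift_with_amount_type(1, 256, 32) == 1 << 256    # wide type is safe
```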
-
Chris Lattner authored
simplifydemandedbits to simplify instructions with *multiple uses* in contexts where it can get away with it. This allows it to simplify the code in multi-use-or.ll into a single 'add double'. This change is particularly interesting because it will cover up for some common codegen bugs with large integers created due to the recent SROA patch. When working on fixing those bugs, this should be disabled. llvm-svn: 63481
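The demanded-bits idea behind this can be shown with a toy model: when one operand of an OR contributes no bits inside the demanded mask, the OR collapses to the other operand. A Python sketch (illustrative; LLVM's real analysis tracks known bits per value over IR):

```python
def simplify_or_demanded(x, y, demanded):
    """Toy demanded-bits fold: in ((x & 0xF0) | (y & 0x0F)), the x
    operand has no possibly-set bits inside demanded == 0x0F, so the
    whole OR simplifies to y's contribution under that mask. This just
    checks the identity, returning (full result, simplified result)."""
    full = ((x & 0xF0) | (y & 0x0F)) & demanded
    simplified = (y & 0x0F) & demanded
    return full, simplified

# The simplification is exact for every input when only the low
# nibble is demanded:
for x in range(256):
    for y in range(256):
        full, simplified = simplify_or_demanded(x, y, 0x0F)
        assert full == simplified
```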
-
Chris Lattner authored
llvm-svn: 63480
-
Chris Lattner authored
Now, if it detects that "V" is the same as some other value, SimplifyDemandedBits returns the new value instead of RAUW'ing it immediately. This has two benefits: 1) simpler code in the recursive SimplifyDemandedBits routine; 2) it allows future fun stuff in instcombine where an operation has multiple uses and can be simplified in one context, but not all. #2 isn't implemented yet; this patch should have no functionality change. llvm-svn: 63479
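The API shape being described, returning the replacement rather than eagerly replacing all uses (LLVM's RAUW), can be sketched in Python; the tuple encoding and the single fold below are hypothetical stand-ins for real IR:

```python
def simplify_demanded_bits(value, demanded):
    """Return a replacement for `value` under `demanded`, or None.

    Returning the replacement (instead of eagerly rewriting every use)
    lets each caller decide what to rewrite: one use of a multi-use
    value can be simplified while other uses keep the original, which
    is the "future fun stuff" the commit mentions."""
    op, operand, mask = value
    if op == "and" and mask & demanded == demanded:
        return operand    # every demanded bit passes through the mask
    return None

expr = ("and", "x", 0xFF)
assert simplify_demanded_bits(expr, 0x0F) == "x"      # low bits: fold
assert simplify_demanded_bits(expr, 0x100) is None    # bit 8: cannot fold
```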
-
Chris Lattner authored
llvm-svn: 63478
-
Chris Lattner authored
llvm-svn: 63477
-
Chris Lattner authored
not doing so prevents it from properly iterating and prevents it from deleting the entire body of dce-iterate.ll llvm-svn: 63476
-
Mon P Wang authored
llvm-svn: 63475
-
Mon P Wang authored
when A==B, -0.0 != +0.0. llvm-svn: 63474
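The subtlety is that IEEE-754 comparison treats the two zeros as equal even though they are distinct values, so substituting one for the other can change an observable sign. A quick Python demonstration:

```python
import math

a, b = 0.0, -0.0
assert a == b                              # the zeros compare equal...
assert math.copysign(1.0, a) == 1.0        # ...but +0.0 carries sign +
assert math.copysign(1.0, b) == -1.0       # ...and -0.0 carries sign -
assert str(a) == "0.0" and str(b) == "-0.0"
# So a fold like "A == B ? A : B  ==>  B" is not sound for floating
# point: it can swap the sign of a zero result.
```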
-
Bill Wendling authored
llvm-svn: 63473
-
Daniel Dunbar authored
llvm-svn: 63472
-
Daniel Dunbar authored
llvm-svn: 63471
-
Fariborz Jahanian authored
alignment. llvm-svn: 63470
-
Chris Lattner authored
be able to handle *ANY* alloca that is poked by loads and stores of bitcasts and GEPs with constant offsets. Before, the code had a number of annoying limitations that caused it to miss cases such as storing into holes in structs and complex casts (as in bitfield-sroa) where we had unions of bitfields, etc. This also handles a number of important cases that are exposed due to the ABI lowering stuff we do to pass stuff by value.

One case that is pretty great is that we compile 2006-11-07-InvalidArrayPromote.ll into:

    define i32 @func(<4 x float> %v0, <4 x float> %v1) nounwind {
      %tmp10 = call <4 x i32> @llvm.x86.sse2.cvttps2dq(<4 x float> %v1)
      %tmp105 = bitcast <4 x i32> %tmp10 to i128
      %tmp1056 = zext i128 %tmp105 to i256
      %tmp.upgrd.43 = lshr i256 %tmp1056, 96
      %tmp.upgrd.44 = trunc i256 %tmp.upgrd.43 to i32
      ret i32 %tmp.upgrd.44
    }

which turns into:

    _func:
      subl $28, %esp
      cvttps2dq %xmm1, %xmm0
      movaps %xmm0, (%esp)
      movl 12(%esp), %eax
      addl $28, %esp
      ret

which is pretty good code, all things considered :).

One effect of this is that SROA will start generating arbitrary-bitwidth integers that are a multiple of 8 bits. In the case above, we got a 256-bit integer, but the codegen guys assure me that it can handle the simple and/or/shift/zext stuff that we're doing on these operations.

This addresses rdar://6532315

llvm-svn: 63469
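The lshr-by-96/trunc pair in the IR above is just a lane extract from the packed 256-bit integer; a small Python model (function name illustrative) confirms it pulls out the fourth i32 element:

```python
def extract_lane(lanes, index, lane_bits=32):
    """Model the IR above: pack 4 x i32 into an i128 (lanes[0] in the
    low bits), zext to i256 (a no-op on the value), lshr by
    index*lane_bits, then trunc back to i32."""
    mask = (1 << lane_bits) - 1
    packed = 0
    for i, lane in enumerate(lanes):
        packed |= (lane & mask) << (i * lane_bits)
    return (packed >> (index * lane_bits)) & mask

# lshr i256 %tmp1056, 96 followed by trunc to i32 pulls out lane 3:
assert extract_lane([10, 20, 30, 40], 3) == 40
```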
-
Dale Johannesen authored
llvm-svn: 63468
-
Daniel Dunbar authored
llvm-svn: 63467
-
Daniel Dunbar authored
function/call info. llvm-svn: 63466
-
Ted Kremenek authored
llvm-svn: 63464
-
Gabor Greif authored
llvm-svn: 63463
-
Anders Carlsson authored
llvm-svn: 63462
-
Fariborz Jahanian authored
nonfragile abi). llvm-svn: 63461
-
Fariborz Jahanian authored
to private extern (in objc2 nonfragile abi). llvm-svn: 63460
-
Gabor Greif authored
llvm-svn: 63459
-
Dale Johannesen authored
Complete (modulo bugs). llvm-svn: 63458
-
Dale Johannesen authored
(modulo bugs) llvm-svn: 63457
-
Dale Johannesen authored
couple of things that use it. llvm-svn: 63456
-
Daniel Dunbar authored
in terms of where the type resides in the containing object. This is a clearer embodiment of the spec and fixes a merging issue with unions. Down to 3/1000 failures. llvm-svn: 63455
-
Bill Wendling authored
llvm-svn: 63454
-
Fariborz Jahanian authored
llvm-svn: 63453
-
Bill Wendling authored
llvm-svn: 63452
-