- Oct 31, 2007

Duncan Sands authored
llvm-svn: 43550

Owen Anderson authored
llvm-svn: 43542

Owen Anderson authored
llvm-svn: 43541

Dale Johannesen authored
llvm-svn: 43535

Evan Cheng authored
At the end of LSR, replace uses of a PHI node that has become constant (as a result of SplitCriticalEdge) with the constant value. llvm-svn: 43533

- Oct 30, 2007

Evan Cheng authored
It's not safe to tell SplitCriticalEdge to merge identical edges. It may delete the phi instruction that's being processed. llvm-svn: 43524

Dale Johannesen authored
CVTTPD2PI, CVTTPS2PI, CVTPI2PD, CVTPI2PS. llvm-svn: 43523

Evan Cheng authored
llvm-svn: 43511

Dan Gohman authored
llvm-svn: 43510

Duncan Sands authored
llvm-svn: 43500

Duncan Sands authored
storing an i170 on a 32-bit machine. This is first promoted to a trunc-i170 store of an i256. On a little-endian machine this expands to a store of an i128 and a trunc-i42 store of an i128. The trunc-i42 store is further expanded to a trunc-i42 store of an i64, then to a store of an i32 and a trunc-i10 store of an i32. At this point the operand type is legal (i32) and expansion stops (legalization of the trunc-i10 needs to be handled in LegalizeDAG.cpp). On big-endian machines the high bits are stored first, and some bit-fiddling is needed in order to generate aligned stores. llvm-svn: 43499
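That recursive splitting can be illustrated with a small stand-alone sketch (this is not LLVM's legalizer; the 32-bit legal-width cutoff and the splitStore helper are assumptions made for the example): the low half of the container is stored whole, and the remaining high bits become a narrower truncating store, until the container type is legal.

```cpp
#include <cstdio>

// Illustrative sketch only: print the little-endian expansion of a truncating
// store of `bits` interesting bits held in a `width`-bit container, assuming
// any integer wider than 32 bits is illegal.
static void splitStore(unsigned width, unsigned bits, unsigned byteOffset) {
  if (bits == width) {
    // No longer truncating: an ordinary store, split further by normal
    // store expansion rather than by this routine.
    std::printf("store i%u at byte %u\n", width, byteOffset);
    return;
  }
  if (width <= 32) {
    // Operand type is legal; expansion stops here.
    std::printf("trunc-i%u store of i%u at byte %u\n", bits, width, byteOffset);
    return;
  }
  unsigned half = width / 2;
  if (bits <= half) {
    splitStore(half, bits, byteOffset);  // all interesting bits fit in the low half
    return;
  }
  splitStore(half, half, byteOffset);                    // low half, stored in full
  splitStore(half, bits - half, byteOffset + half / 8);  // high bits, truncating store
}

int main() {
  // The i170 case above: promoted to a trunc-i170 store of an i256 first.
  splitStore(256, 170, 0);
}
```

For the i170 case this prints the store-i128, store-i32, trunc-i10 sequence traced in the commit message.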
Duncan Sands authored
offload to getStore rather than trying to handle both cases at once (the assertions for example assume the store really is truncating). llvm-svn: 43498

Dale Johannesen authored
llvm-svn: 43488

- Oct 29, 2007

Evan Cheng authored
- Allow icmp rewrite using an iv / stride of a smaller integer type. llvm-svn: 43480

Dan Gohman authored
llvm-svn: 43470

Dan Gohman authored
lowering load and store instructions. llvm-svn: 43468

Dan Gohman authored
llvm-svn: 43467

Dan Gohman authored
of just printing to cerr. llvm-svn: 43466

Evan Cheng authored
transformation. Previously it was restricted by requiring that the load have exactly one use. Now the restriction is loosened by allowing setcc uses to be "extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq). llvm-svn: 43465
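The "extended" setcc part of that rewrite relies on a property that is easy to check directly; here is a stand-alone illustration (plain C++, not the LLVM code): an equality compare gives the same answer at the narrow width as it does on sign-extended copies of both operands.

```cpp
#include <cassert>
#include <cstdint>

// Exhaustively check that (x == c) at 16 bits agrees with
// (sext(x) == sext(c)) at 32 bits, which is what makes rewriting
// "setcc x, c, eq" as "setcc sext(x), sext(c), eq" safe.
int main() {
  const int16_t constants[] = {-32768, -1, 0, 1, 42, 32767};
  for (int32_t i = INT16_MIN; i <= INT16_MAX; ++i) {
    int16_t x = static_cast<int16_t>(i);
    for (int16_t c : constants) {
      bool narrow = (x == c);
      bool wide = (static_cast<int32_t>(x) == static_cast<int32_t>(c));  // sext both sides
      assert(narrow == wide);
    }
  }
  return 0;
}
```

The same does not hold if one operand were sign-extended and the other zero-extended, which is why both sides are extended the same way.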
Dan Gohman authored
llvm-svn: 43464

Dan Gohman authored
llvm-svn: 43463

Dan Gohman authored
llvm-svn: 43462

Dan Gohman authored
llvm-svn: 43461

Dan Gohman authored
llvm-svn: 43460

Ted Kremenek authored
constant to an unsigned int. We now just directly assign the literal 0. llvm-svn: 43459

Evan Cheng authored
llvm-svn: 43446

Chris Lattner authored
llvm-svn: 43444

Chris Lattner authored
now. It conflicts with clang's -pedantic flag. llvm-svn: 43431

Chris Lattner authored
b/h/w/k/q inline asm memory modifiers, which are just ignored. This fixes PR1748 and CodeGen/X86/2007-10-28-inlineasm-q-modifier.ll llvm-svn: 43430

Chris Lattner authored
zero-length fields better. llvm-svn: 43427

Chris Lattner authored
can have uses too. Wouldn't it be nice if invoke didn't exist? :) llvm-svn: 43426

Ted Kremenek authored
pointers that were not backpatched (previously checked the wrong invariant). llvm-svn: 43425

- Oct 28, 2007

Anton Korobeynikov authored
llvm-svn: 43424

Ted Kremenek authored
eager backpatching instead of waiting until all objects have been deserialized. This allows us to reduce the memory footprint needed for backpatching. llvm-svn: 43422

Duncan Sands authored
of offset and the alignment of ptr if these are both powers of 2. While the ptr alignment is guaranteed to be a power of 2, there is no reason to think that offset is. For example, if offset is 12 (the size of a long double on x86-32 linux) and the alignment of ptr is 8, then the alignment of ptr+offset will in general be 4, not 8. Introduce a function MinAlign, lifted from gcc, for computing the minimum guaranteed alignment. I've tried to fix up everywhere under lib/CodeGen/SelectionDAG/. I also changed some places that weren't wrong (because both values were a power of 2), as a defensive change against people copying and pasting the code. Hopefully someone who cares about alignment will review the rest of LLVM and fix up the remaining places. Since I'm on x86 I'm not very motivated to do this myself... llvm-svn: 43421
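A sketch of the computation being described (the definition below is illustrative rather than a quote of the patch): the guaranteed alignment of ptr + offset is the largest power of two dividing both the pointer's alignment and the offset, i.e. the lowest set bit of the two values OR'ed together; when both inputs are powers of two this reduces to the plain minimum.

```cpp
#include <cassert>

// Illustrative MinAlign: the lowest set bit of (A | B) is the largest power
// of two dividing both A and B, hence the guaranteed alignment of a pointer
// with alignment A after adding an offset B.
static unsigned MinAlign(unsigned A, unsigned B) {
  return (A | B) & -(A | B);
}

int main() {
  assert(MinAlign(8, 4) == 4);   // both powers of two: the plain minimum
  assert(MinAlign(8, 12) == 4);  // the x86-32 long double example: 8-byte ptr, 12-byte offset
  assert(MinAlign(16, 8) == 8);
  return 0;
}
```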
Evan Cheng authored
llvm-svn: 43420

- Oct 27, 2007

Evan Cheng authored
- ChangeCompareStride only reuses a stride that is larger than the current stride; this leaves the general reuse mechanism free to try a smaller stride.
- Watch out for multiplication overflow in ChangeCompareStride.
- Replace std::set with SmallPtrSet.
llvm-svn: 43408
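On the multiplication-overflow point, a hedged sketch of the kind of guard involved (the helper name, its signature, and the exact quantity being scaled are assumptions for illustration, not taken from the patch):

```cpp
#include <cstdint>
#include <optional>

// Hypothetical helper, not the LSR code: scale a loop-exit compare constant
// by a stride ratio, refusing the rewrite when the multiplication would
// overflow. __builtin_mul_overflow is a GCC/Clang builtin.
static std::optional<int64_t> scaleCompareConstant(int64_t C, int64_t Factor) {
  int64_t Scaled;
  if (__builtin_mul_overflow(C, Factor, &Scaled))
    return std::nullopt;  // overflow: give up on reusing the larger stride
  return Scaled;
}

int main() {
  auto Ok = scaleCompareConstant(1000, 4);        // 4000: safe to rewrite
  auto Bad = scaleCompareConstant(INT64_MAX, 4);  // overflows: rewrite refused
  return (Ok && !Bad) ? 0 : 1;
}
```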
- Oct 26, 2007

Ted Kremenek authored
llvm-svn: 43405

Bill Wendling authored
FE.
- Explicitly pass in the alignment of the load & store.
- XFAIL 2007-10-23-UnalignedMemcpy.ll because llc has a bug that crashes on unaligned pointers.
llvm-svn: 43398

Evan Cheng authored
llvm-svn: 43384