- Jan 06, 2008
-
-
Duncan Sands authored
direct calls bails out unless caller and callee have essentially equivalent parameter attributes. This is illogical - the callee's attributes should be of no relevance here. Rework the logic, which incidentally fixes a crash when removed arguments have attributes. llvm-svn: 45658
-
Duncan Sands authored
a direct call with cast parameters and cast return value (if any), instcombine was prepared to cast any non-void return value into any other, whether castable or not. Add a new predicate for testing whether casting is valid, and check it both for the return value and (as a cleanup) for the parameters. llvm-svn: 45657
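A hedged C-level sketch of the pattern involved may help; everything below (add_one, the constants) is illustrative and not taken from the commit. instcombine looks for a call made through a cast function pointer and tries to rewrite it as a direct call, casting the argument and the return value instead; the new predicate mentioned above is what decides whether those casts are valid at all.

  /* Illustrative only; names are hypothetical, not from the commit. */
  #include <stdio.h>

  static int add_one(int x) { return x + 1; }

  int main(void) {
      /* Before: a call through a cast ("bitcast") function pointer.  At the
         IR level this is the pattern instcombine tries to rewrite.          */
      long (*fp)(long) = (long (*)(long))add_one;   /* cast of the callee    */

      /* After the transform: drop the pointer cast, call add_one directly,
         and cast the argument and the result instead.  That is only legal
         because long <-> int casts exist; if add_one returned a struct, no
         such cast would exist and the transform must bail out - which is
         what the new "is this cast valid" predicate checks.                 */
      long n = 41;
      long r = (long)add_one((int)n);               /* the rewritten form    */

      (void)fp;  /* the cast pointer itself is never called here (UB in C)   */
      printf("%ld\n", r);                           /* prints 42             */
      return 0;
  }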
-
Chris Lattner authored
llvm-svn: 45656
-
Duncan Sands authored
llvm-svn: 45655
-
Chris Lattner authored
instead of "ISD::STORE". This allows us to mark target-specific dag nodes as storing (such as ppc byteswap stores). This allows us to remove more explicit isStore flags from the .td files. Finally, add a warning for when a .td file contains an explicit isStore and tblgen is able to infer it. llvm-svn: 45654
-
Chris Lattner authored
llvm-svn: 45653
-
Chris Lattner authored
llvm-svn: 45652
-
Bill Wendling authored
llvm-svn: 45638
-
Chris Lattner authored
llvm-svn: 45637
-
- Jan 05, 2008
-
-
Nate Begeman authored
the target independent legalizer. llvm-svn: 45631
-
Nate Begeman authored
Don't overwrite a variable used by the fallthrough code path in this case. llvm-svn: 45630
-
Chris Lattner authored
llvm-svn: 45629
-
Gordon Henriksen authored
unifying the copied algorithms and saving over 500 LOC. There should be no functionality change, but please test on your favorite x86 target. llvm-svn: 45627
-
Bill Wendling authored
checking that there was a load from a global instead of a load from the stub for a global, which is the one that's safe to hoist. Consider this program:

  volatile char G[100];
  int B(char *F, int N) {
    for (; N > 0; --N)
      F[N] = G[N];
  }

In static mode, we shouldn't be hoisting the load from G:

  $ llc -relocation-model=static -o - a.bc -march=x86 -machine-licm
  LBB1_1: # bb.preheader
          leal -1(%eax), %edx
          testl %edx, %edx
          movl $1, %edx
          cmovns %eax, %edx
          xorl %esi, %esi
  LBB1_2: # bb
          movb _G(%eax), %bl
          movb %bl, (%ecx,%eax)

llvm-svn: 45626
-
Chris Lattner authored
llvm-svn: 45625
-
Chris Lattner authored
llvm-svn: 45624
-
Chris Lattner authored
for remat, but can't due to an RA bug. llvm-svn: 45623
-
Chris Lattner authored
llvm-svn: 45622
-
Chris Lattner authored
llvm-svn: 45621
-
Chris Lattner authored
isReallySideEffectFree and isReallyTriviallyReMaterializable. Why is a load from a global considered side-effect-free but not rematable? llvm-svn: 45620
-
Chris Lattner authored
llvm-svn: 45618
-
Chris Lattner authored
llvm-svn: 45617
-
Evan Cheng authored
llvm-svn: 45616
-
Chris Lattner authored
llvm-svn: 45614
-
Chris Lattner authored
llvm-svn: 45613
-
Chris Lattner authored
things that are not equality comparisons, for example: (2147479553+4096)-2147479553 < 0 != (2147479553+4096) < 2147479553 llvm-svn: 45612
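Worked through in 32-bit two's-complement terms (a sketch using the constants from the message; the program itself is not from the commit): 2147479553 + 4096 wraps past INT_MAX to -2147483647, so the subtracted form compares 4096 < 0 (false) while the direct form compares -2147483647 < 2147479553 (true), and the two disagree - which is why the fold is only safe for equality comparisons.

  #include <stdint.h>
  #include <stdio.h>

  /* Model the 32-bit wraparound explicitly with unsigned arithmetic so the
     demonstration itself has no undefined behaviour.  The constants are the
     ones from the commit message above. */
  int main(void) {
      int32_t x = 2147479553;
      int32_t y = 4096;

      /* x + y wraps to -2147483647 in 32-bit two's complement. */
      int32_t sum  = (int32_t)((uint32_t)x + (uint32_t)y);
      /* (x + y) - x wraps back around to 4096. */
      int32_t diff = (int32_t)((uint32_t)sum - (uint32_t)x);

      printf("(x+y)-x < 0 : %d\n", diff < 0);  /* 0: 4096 < 0 is false        */
      printf("(x+y) < x   : %d\n", sum  < x);  /* 1: -2147483647 < x is true  */
      return 0;
  }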
-
Owen Anderson authored
llvm-svn: 45608
-
Evan Cheng authored
llvm-svn: 45605
-
Owen Anderson authored
llvm-svn: 45603
-
- Jan 04, 2008
-
-
Bill Wendling authored
preserved. llvm-svn: 45596
-
Wojciech Matyjewicz authored
llvm-svn: 45594
-
Evan Cheng authored
llvm-svn: 45576
-
Evan Cheng authored
llvm-svn: 45575
-
Bill Wendling authored
llvm-svn: 45574
-
Bill Wendling authored
llvm-svn: 45573
-
Bill Wendling authored
llvm-svn: 45572
-
Bill Wendling authored
llvm-svn: 45571
-
Chris Lattner authored
It is missing validity checks, so it is known broken. However, it is powerful enough to compile this contrived code:

  void test1(int C, double A, double B, double *P) {
    double Tmp = A*A+B*B;
    *P = C ? Tmp : A;
  }

into:

  _test1:
          movsd 8(%esp), %xmm0
          cmpl $0, 4(%esp)
          je LBB1_2       # entry
  LBB1_1: # entry
          movsd 16(%esp), %xmm1
          mulsd %xmm1, %xmm1
          mulsd %xmm0, %xmm0
          addsd %xmm1, %xmm0
  LBB1_2: # entry
          movl 24(%esp), %eax
          movsd %xmm0, (%eax)
          ret

instead of:

  _test1:
          movsd 16(%esp), %xmm0
          mulsd %xmm0, %xmm0
          movsd 8(%esp), %xmm1
          movapd %xmm1, %xmm2
          mulsd %xmm2, %xmm2
          addsd %xmm0, %xmm2
          cmpl $0, 4(%esp)
          je LBB1_2       # entry
  LBB1_1: # entry
          movapd %xmm2, %xmm1
  LBB1_2: # entry
          movl 24(%esp), %eax
          movsd %xmm1, (%eax)
          ret

woo. llvm-svn: 45570
-
Chris Lattner authored
llvm-svn: 45569
-
Chris Lattner authored
llvm-svn: 45568
-