- Jan 23, 2005
  - Chris Lattner authored (llvm-svn: 19792)
  - Chris Lattner authored (llvm-svn: 19791)
  - Chris Lattner authored (llvm-svn: 19789)
  - Chris Lattner authored (llvm-svn: 19787)
  - Chris Lattner authored: The first half of correct chain insertion for libcalls. This is not enough to fix Fhourstones yet though. (llvm-svn: 19781)
  - Chris Lattner authored: the new TLI that is available. Implement support for handling out of range shifts.

    This allows us to compile this code (a 64-bit rotate):

        unsigned long long f3(unsigned long long x) {
          return (x << 32) | (x >> (64-32));
        }

    into this:

        f3:
            mov %EDX, DWORD PTR [%ESP + 4]
            mov %EAX, DWORD PTR [%ESP + 8]
            ret

    GCC produces this:

        $ gcc t.c -masm=intel -O3 -S -o - -fomit-frame-pointer
        ..
        f3:
            push %ebx
            mov %ebx, DWORD PTR [%esp+12]
            mov %ecx, DWORD PTR [%esp+8]
            mov %eax, %ebx
            mov %edx, %ecx
            pop %ebx
            ret

    The Simple ISEL produces (eww gross):

        f3:
            sub %ESP, 4
            mov DWORD PTR [%ESP], %ESI
            mov %EDX, DWORD PTR [%ESP + 8]
            mov %ECX, DWORD PTR [%ESP + 12]
            mov %EAX, 0
            mov %ESI, 0
            or %EAX, %ECX
            or %EDX, %ESI
            mov %ESI, DWORD PTR [%ESP]
            add %ESP, 4
            ret

    (llvm-svn: 19780)
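
    For context on why the new isel's output above is just two moves: once the 64-bit value is split into 32-bit halves, a shift by exactly 32 moves one half into the other and zeroes the vacated half, and the OR simply recombines the two swapped halves. A rough C model of that view (my own illustration for this log, not code from the commit; the names u64_parts and rot32 are invented here):

        #include <stdint.h>
        #include <stdio.h>

        /* Model a 64-bit value as the two 32-bit halves a 32-bit target sees. */
        typedef struct { uint32_t lo, hi; } u64_parts;

        /* (x << 32) | (x >> 32), viewed half by half: each shift by exactly 32
         * contributes one half and a zero, so the OR amounts to a plain swap. */
        u64_parts rot32(u64_parts x) {
            u64_parts r;
            r.hi = x.lo;   /* from x << 32 */
            r.lo = x.hi;   /* from x >> 32 */
            return r;
        }

        int main(void) {
            u64_parts x = { 0x89abcdefu, 0x01234567u };  /* 0x0123456789abcdef */
            u64_parts r = rot32(x);
            printf("%08x%08x\n", r.hi, r.lo);            /* 89abcdef01234567 */
            return 0;
        }

    Since the swapped halves land directly in %EDX:%EAX, the 64-bit return registers, the whole function reduces to the two mov instructions shown above.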
  - Chris Lattner authored (llvm-svn: 19779)
  - Chris Lattner authored (llvm-svn: 19763)

- Jan 22, 2005
  - Chris Lattner authored: This fixes the return-address-not-being-saved problem in the Alpha backend. (llvm-svn: 19741)
  - Chris Lattner authored (llvm-svn: 19739)
  - Chris Lattner authored (llvm-svn: 19738)
  - Chris Lattner authored (llvm-svn: 19737)
  - Chris Lattner authored (llvm-svn: 19736)
  - Chris Lattner authored (llvm-svn: 19735)

- Jan 21, 2005
  - Chris Lattner authored (llvm-svn: 19727)
  - Chris Lattner authored: operations for 64-bit integers. (llvm-svn: 19724)

- Jan 20, 2005
  - Chris Lattner authored (llvm-svn: 19721)
  - Chris Lattner authored (llvm-svn: 19715)
  - Chris Lattner authored (llvm-svn: 19714)
  - Chris Lattner authored (llvm-svn: 19712)

- Jan 19, 2005
  - Chris Lattner authored (llvm-svn: 19707)
  - Chris Lattner authored (llvm-svn: 19704)
  - Chris Lattner authored (llvm-svn: 19703)
  - Chris Lattner authored (llvm-svn: 19701)
  - Chris Lattner authored: independent of each other. (llvm-svn: 19700)
  - Chris Lattner authored (llvm-svn: 19699)
  - Chris Lattner authored (llvm-svn: 19698)
  - Chris Lattner authored (llvm-svn: 19696)
  - Chris Lattner authored: well as all of the other stuff in livevar. This fixes the compiler crash on fourinarow last night. (llvm-svn: 19695)
  - Chris Lattner authored: instead of doing it manually. (llvm-svn: 19685)
  - Chris Lattner authored: select operations or to shifts that are by a constant. This automatically implements (with no special code) all of the special cases for shift by 32, shift by < 32 and shift by > 32. (llvm-svn: 19679)
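
    A minimal C sketch of the expansion being described (my own illustration, not code from the commit; the names u64_parts and shl64 are invented here, and the if/else stands in for the select operations the commit mentions, since evaluating both arms as plain C expressions would be undefined for out-of-range 32-bit shifts):

        #include <stdint.h>
        #include <stdio.h>

        /* A 64-bit value as the two 32-bit halves a 32-bit target works with. */
        typedef struct { uint32_t lo, hi; } u64_parts;

        /* Variable 64-bit left shift built only from 32-bit shifts plus a
         * "select" (the if/else) on whether the amount reaches 32. */
        u64_parts shl64(u64_parts x, unsigned amt) {
            u64_parts r;
            if (amt == 0)
                return x;                      /* a 32-bit shift by (32 - 0) below would be undefined in C */
            if (amt < 32) {
                r.lo = x.lo << amt;
                r.hi = (x.hi << amt) | (x.lo >> (32 - amt));
            } else {                           /* amt >= 32: only the high half survives */
                r.lo = 0;
                r.hi = x.lo << (amt - 32);
            }
            return r;
        }

        int main(void) {
            u64_parts x = { 0x89abcdefu, 0x01234567u };  /* 0x0123456789abcdef */
            u64_parts r = shl64(x, 36);
            printf("%08x%08x\n", r.hi, r.lo);            /* 9abcdef000000000 */
            return 0;
        }

    When the shift amount is a constant, the select folds to one arm and the remaining 32-bit shifts simplify, which is how the "by 32", "< 32", and "> 32" cases all come out right without any dedicated code.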

- Jan 18, 2005
  - Chris Lattner authored (llvm-svn: 19675)
  - Chris Lattner authored: of zero and sign extends. (llvm-svn: 19671)
  - Chris Lattner authored (llvm-svn: 19670)
  - Chris Lattner authored: do it. This results in better code on X86 for floats (because if strict precision is not required, we can elide some more expensive double -> float conversions like the old isel did), and allows other targets to emit CopyFromRegs that are not legal for arguments. (llvm-svn: 19668)
  - Chris Lattner authored (llvm-svn: 19657)
  - Chris Lattner authored (llvm-svn: 19656)
  - Chris Lattner authored (llvm-svn: 19651)

- Jan 17, 2005
  - Chris Lattner authored: X86/reg-pressure.ll again, and allows us to do nice things in other cases. For example, we now codegen this sort of thing:

        int %loadload(int *%X, int* %Y) {
          %Z = load int* %Y
          %Y = load int* %X       ;; load between %Z and store
          %Q = add int %Z, 1
          store int %Q, int* %Y
          ret int %Y
        }

    Into this:

        loadload:
            mov %EAX, DWORD PTR [%ESP + 4]
            mov %EAX, DWORD PTR [%EAX]
            mov %ECX, DWORD PTR [%ESP + 8]
            inc DWORD PTR [%ECX]
            ret

    where we weren't able to form the 'inc [mem]' before. This also lets the instruction selector emit loads in any order it wants to, which can be good for register pressure as well. (llvm-svn: 19644)
  - Chris Lattner authored (llvm-svn: 19642)