- Jan 02, 2005

  Chris Lattner authored
  1. Add new instructions for checking parity flags: JP, JNP, SETP, SETNP.
  2. Set the isCommutable and isPromotableTo3Address bits on several instructions.
  llvm-svn: 19246
- Dec 17, 2004

  Chris Lattner authored
  llvm-svn: 19024

  Chris Lattner authored
  llvm-svn: 19007

  Chris Lattner authored
  save small amounts of time for functions that don't call llvm.returnaddress
  or llvm.frameaddress (which is almost all functions).
  llvm-svn: 19006
- Dec 16, 2004

  Chris Lattner authored
  llvm-svn: 18987
- Dec 13, 2004

  Chris Lattner authored
  don't support long double anyway, and this gives us FP results closer to
  other targets. This also speeds up 179.art from 41.4s to 18.32s, by
  eliminating a problem with extra precision that causes an FP == comparison
  to fail (leading to extra loop iterations).
  llvm-svn: 18895
- Dec 12, 2004

  Chris Lattner authored
  llvm-svn: 18830
- Dec 03, 2004

  Chris Lattner authored
  llvm-svn: 18449
- Dec 02, 2004

  Chris Lattner authored
  instead of 80-bits of precision. This fixes PR467. This change speeds up
  fldry on X86 with LLC from 7.32s on apoc to 4.68s.
  llvm-svn: 18433

  Chris Lattner authored
  llvm-svn: 18432
- Dec 01, 2004

  Tanya Lattner authored
  http://mail.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20041122/021428.html
  It broke MultiSource/Applications/obsequi
  llvm-svn: 18407
- Nov 29, 2004

  Chris Lattner authored
  to Brian and the Sun compiler for pointing out that the obvious works :)
  This also enables folding all long comparisons into setcc and branch
  instructions: before we could only do == and !=

  For example, for:

      void test(unsigned long long A, unsigned long long B) {
        if (A < B) foo();
      }

  We now generate:

      test:
              subl $4, %esp
              movl %esi, (%esp)
              movl 8(%esp), %eax
              movl 12(%esp), %ecx
              movl 16(%esp), %edx
              movl 20(%esp), %esi
              subl %edx, %eax
              sbbl %esi, %ecx
              jae .LBBtest_2          # UnifiedReturnBlock
      .LBBtest_1:                     # then
              call foo
              movl (%esp), %esi
              addl $4, %esp
              ret
      .LBBtest_2:                     # UnifiedReturnBlock
              movl (%esp), %esi
              addl $4, %esp
              ret

  Instead of:

      test:
              subl $12, %esp
              movl %esi, 8(%esp)
              movl %ebx, 4(%esp)
              movl 16(%esp), %eax
              movl 20(%esp), %ecx
              movl 24(%esp), %edx
              movl 28(%esp), %esi
              cmpl %edx, %eax
              setb %al
              cmpl %esi, %ecx
              setb %bl
              cmove %ax, %bx
              testb %bl, %bl
              je .LBBtest_2           # UnifiedReturnBlock
      .LBBtest_1:                     # then
              call foo
              movl 4(%esp), %ebx
              movl 8(%esp), %esi
              addl $12, %esp
              ret
      .LBBtest_2:                     # UnifiedReturnBlock
              movl 4(%esp), %ebx
              movl 8(%esp), %esi
              addl $12, %esp
              ret

  llvm-svn: 18330
- Nov 22, 2004

  Chris Lattner authored
  Do not push two return addresses on the stack when we call external
  functions that have their addresses taken. This fixes test-call.ll
  llvm-svn: 18134
- Nov 21, 2004

  Chris Lattner authored
  llvm-svn: 18082

  Chris Lattner authored
  llvm-svn: 18073

  Chris Lattner authored
  relocations for global references.
  llvm-svn: 18068

  Chris Lattner authored
  llvm-svn: 18067

  Chris Lattner authored
  llvm-svn: 18066

  Chris Lattner authored
  llvm-svn: 18065
- Nov 19, 2004

  Chris Lattner authored
  llvm-svn: 18010
- Nov 16, 2004

  Chris Lattner authored
  llvm-svn: 17902

  Chris Lattner authored
  hold your nose!)
  llvm-svn: 17869

  Chris Lattner authored
  already been emitted, we don't have to remember it and deal with it later,
  just emit it directly.
  llvm-svn: 17868

  Chris Lattner authored
  * Get rid of "emitMaybePCRelativeValue", either we want to emit a PC
    relative value or not: drop the maybe BS. As it turns out, the only
    places where the bool was a variable coming in, the bool was a dynamic
    constant.
  llvm-svn: 17867

  Chris Lattner authored
  set up.
  llvm-svn: 17862

  Chris Lattner authored
  llvm-svn: 17861
- Nov 14, 2004

  Misha Brukman authored
  llvm-svn: 17750

  Chris Lattner authored
  llvm-svn: 17714
- Nov 13, 2004

  Chris Lattner authored
  shld is a very high latency operation. Instead of emitting it for shifts of
  two or three, open code the equivalent operation which is faster on athlon
  and P4 (by a substantial margin).

  For example, instead of compiling this:

      long long X2(long long Y) { return Y << 2; }

  to:

      X3_2:
              movl 4(%esp), %eax
              movl 8(%esp), %edx
              shldl $2, %eax, %edx
              shll $2, %eax
              ret

  Compile it to:

      X2:
              movl 4(%esp), %eax
              movl 8(%esp), %ecx
              movl %eax, %edx
              shrl $30, %edx
              leal (%edx,%ecx,4), %edx
              shll $2, %eax
              ret

  Likewise, for << 3, compile to:

      X3:
              movl 4(%esp), %eax
              movl 8(%esp), %ecx
              movl %eax, %edx
              shrl $29, %edx
              leal (%edx,%ecx,8), %edx
              shll $3, %eax
              ret

  This matches icc, except that icc open codes the shifts as adds on the P4.

  llvm-svn: 17707
  Chris Lattner authored
  llvm-svn: 17706
  Chris Lattner authored

      long long X3_2(long long Y) { return Y+Y; }
      int X(int Y) { return Y+Y; }

  into:

      X3_2:
              movl 4(%esp), %eax
              movl 8(%esp), %edx
              addl %eax, %eax
              adcl %edx, %edx
              ret
      X:
              movl 4(%esp), %eax
              addl %eax, %eax
              ret

  instead of:

      X3_2:
              movl 4(%esp), %eax
              movl 8(%esp), %edx
              shldl $1, %eax, %edx
              shll $1, %eax
              ret
      X:
              movl 4(%esp), %eax
              shll $1, %eax
              ret

  llvm-svn: 17705
- Nov 10, 2004

  John Criswell authored
  It's stosl (l for long == 32 bit).
  llvm-svn: 17658
- Nov 05, 2004

  John Criswell authored
  llvm-svn: 17488

  Chris Lattner authored
  llvm-svn: 17484
- Nov 02, 2004

  Chris Lattner authored
  llvm-svn: 17431
- Nov 01, 2004

  Chris Lattner authored
  llvm-svn: 17406
- Oct 28, 2004

  Reid Spencer authored
  llvm-svn: 17286
- Oct 22, 2004

  Reid Spencer authored
  llvm-svn: 17167

  Reid Spencer authored
  llvm-svn: 17155
- Oct 19, 2004

  Reid Spencer authored
  llvm-svn: 17136