- Aug 19, 2005
Chris Lattner authored
llvm-svn: 22888
Chris Lattner authored
only take one operand. The other comes implicitly in through CL.
llvm-svn: 22887
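(For context: x86 variable-count shifts name only the value being shifted; the count travels implicitly in CL. The C fragment below is an illustrative sketch, not part of the commit, showing the kind of code that hits this:)

        /* For a non-constant count, the compiler must move `count` into
         * %cl and then emit a one-operand shift such as `shll %cl, %eax`. */
        unsigned shift_left(unsigned x, unsigned count) {
            return x << count;
        }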
Nate Begeman authored
passed.
llvm-svn: 22886
- Aug 16, 2005
Chris Lattner authored
llvm-svn: 22807
Nate Begeman authored
fixme from the PowerPC backend. Emit slightly better code for legalizing select_cc.
llvm-svn: 22805
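(For context: a select_cc node fuses a comparison with a select. The C pattern below is an illustrative sketch, not taken from the commit, of the sort of source the legalizer sees as select_cc:)

        /* Compare-and-select in one expression; in the SelectionDAG this
         * becomes roughly select_cc(a, b, c, d, setlt). */
        int select_cc_example(int a, int b, int c, int d) {
            return a < b ? c : d;
        }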
- Aug 14, 2005
Nate Begeman authored
block.
llvm-svn: 22788
Nate Begeman authored
now generate the relatively good code sequences:

unsigned short foo(float a) { return a; }

_foo:
        movss 4(%esp), %xmm0
        cvttss2si %xmm0, %eax
        movzwl %ax, %eax
        ret

and

unsigned bar(float a) { return a; }

_bar:
        movss .CPI_bar_0, %xmm0
        movss 4(%esp), %xmm1
        movapd %xmm1, %xmm2
        subss %xmm0, %xmm2
        cvttss2si %xmm2, %eax
        xorl $-2147483648, %eax
        cvttss2si %xmm1, %ecx
        ucomiss %xmm0, %xmm1
        cmovb %ecx, %eax
        ret

llvm-svn: 22786
- Aug 09, 2005
Chris Lattner authored
llvm-svn: 22729
- Aug 05, 2005
Chris Lattner authored
llvm-svn: 22687
- Aug 04, 2005
Nate Begeman authored
llvm-svn: 22644
Nate Begeman authored
Scalar SSE: a < b ? c : 0.0 -> cmpss, andps
Scalar SSE: float -> i16 needs to be promoted
llvm-svn: 22637
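(For reference, the first pattern written out in C; an illustrative sketch, not from the commit. cmpss yields an all-ones or all-zeros mask from the comparison, and andps applies that mask to c, so no branch is needed:)

        /* a < b ? c : 0.0f  ->  cmpss builds the mask, andps masks c. */
        float select_or_zero(float a, float b, float c) {
            return a < b ? c : 0.0f;
        }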
- Aug 02, 2005
Chris Lattner authored
Patch contributed by Jim Laskey!
llvm-svn: 22594
- Jul 30, 2005
Jeff Cohen authored
llvm-svn: 22565
Chris Lattner authored
llvm-svn: 22561
Chris Lattner authored
1 byte loads and other operations. This is bad for store-forwarding on common CPUs. We now do this:

        fnstcw WORD PTR [%ESP]
        mov %AX, WORD PTR [%ESP]

instead of:

        fnstcw WORD PTR [%ESP]
        mov %AL, BYTE PTR [%ESP + 1]

llvm-svn: 22559
Chris Lattner authored
FP-to-int-in-memory: this exposes the load from the stored slot to the selection dag, allowing it to be folded into other operations.
llvm-svn: 22556
Andrew Lenharth authored
llvm-svn: 22553
- Jul 29, 2005
Chris Lattner authored
that the X86 does not support to the legalizer. This allows it to be better optimized, etc., and will help with SSE support.
llvm-svn: 22551
Chris Lattner authored
llvm-svn: 22550
Chris Lattner authored
long %test4(double %X) {
        %tmp.1 = cast double %X to long    ; <long> [#uses=1]
        ret long %tmp.1
}

to this:

_test4:
        sub %ESP, 12
        fld QWORD PTR [%ESP + 16]
        fistp QWORD PTR [%ESP]
        mov %EDX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP]
        add %ESP, 12
        ret

instead of this:

_test4:
        sub %ESP, 28
        fld QWORD PTR [%ESP + 32]
        fstp QWORD PTR [%ESP]
        call ___fixdfdi
        add %ESP, 28
        ret

llvm-svn: 22549
- Jul 27, 2005
Jeff Cohen authored
llvm-svn: 22523
Jeff Cohen authored
llvm-svn: 22520
- Jul 22, 2005
Andrew Lenharth authored
llvm-svn: 22498
- Jul 19, 2005
Reid Spencer authored
This is the first incremental patch to implement this feature. It adds no functionality to LLVM but sets up the information needed from targets in order to implement the optimization correctly. Each target needs to specify the maximum number of store operations for conversion of the llvm.memset, llvm.memcpy, and llvm.memmove intrinsics into a sequence of store operations. The limit needs to be chosen at the threshold of performance for such an optimization (generally smallish). The target also needs to specify whether the target can support unaligned stores for multi-byte store operations. This helps ensure the optimization doesn't generate code that will trap on alignment errors. More patches to follow.
llvm-svn: 22468
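(A rough sketch of the idea in C; the struct and names here are hypothetical, not the actual LLVM target interface:)

        /* Hypothetical per-target limits for expanding llvm.memset,
         * llvm.memcpy, and llvm.memmove into inline store sequences. */
        struct target_store_limits {
            unsigned max_stores_per_memset;
            unsigned max_stores_per_memcpy;
            unsigned max_stores_per_memmove;
            int allows_unaligned_stores;  /* 0 if unaligned stores may trap */
        };

        /* Expand only if the store count stays under the target's limit and
         * the stores would not trap on a strict-alignment target. */
        static int should_expand_to_stores(const struct target_store_limits *t,
                                           unsigned size_in_bytes,
                                           unsigned store_width,
                                           unsigned alignment,
                                           unsigned max_stores) {
            if (store_width > alignment && !t->allows_unaligned_stores)
                return 0;
            return (size_in_bytes + store_width - 1) / store_width <= max_stores;
        }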
- Jul 16, 2005
Nate Begeman authored
the target natively supports. This eliminates some special-case code from the x86 backend and generates better code as well. For an i8 to f64 conversion, before & after:

_x87 before:
        subl $2, %esp
        movb 6(%esp), %al
        movsbw %al, %ax
        movw %ax, (%esp)
        filds (%esp)
        addl $2, %esp
        ret

_x87 after:
        subl $2, %esp
        movsbw 6(%esp), %ax
        movw %ax, (%esp)
        filds (%esp)
        addl $2, %esp
        ret

_sse before:
        subl $12, %esp
        movb 16(%esp), %al
        movsbl %al, %eax
        cvtsi2sd %eax, %xmm0
        addl $12, %esp
        ret

_sse after:
        subl $12, %esp
        movsbl 16(%esp), %eax
        cvtsi2sd %eax, %xmm0
        addl $12, %esp
        ret

llvm-svn: 22452
Nate Begeman authored
llvm-svn: 22451
Nate Begeman authored
llvm-svn: 22450
Chris Lattner authored
legalizer to eliminate them. With this comes the expected code quality improvements, such as, for this:

double foo(unsigned short X) { return X; }

we now generate this:

_foo:
        subl $4, %esp
        movzwl 8(%esp), %eax
        movl %eax, (%esp)
        fildl (%esp)
        addl $4, %esp
        ret

instead of this:

_foo:
        subl $4, %esp
        movw 8(%esp), %ax
        movzwl %ax, %eax        ;; Load not folded into this.
        movl %eax, (%esp)
        fildl (%esp)
        addl $4, %esp
        ret

-Chris

llvm-svn: 22449
- Jul 15, 2005
Nate Begeman authored
working, and Olden/power.
llvm-svn: 22441
Nate Begeman authored
llvm-svn: 22440
- Jul 12, 2005
Nate Begeman authored
working before modifying the asm printer to use the subtarget info.
llvm-svn: 22408
Nate Begeman authored
to the constructor.
llvm-svn: 22392
Chris Lattner authored
llvm-svn: 22391
Chris Lattner authored
llvm-svn: 22390
Nate Begeman authored
Implement the X86 Subtarget. This consolidates the checks for target triple, and the setting of options based on target triple, into one place. This allows us to convert the asm printer and isel over from being littered with "forDarwin", "forCygwin", etc. into just having the appropriate flags for each subtarget feature controlling the code for that feature. This patch also implements indirect external and weak references in the X86 pattern isel, for darwin. Next up is to convert over the asm printers to use this new interface.
llvm-svn: 22389
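(A loose sketch of the consolidation in C; the struct and field names are illustrative, not the actual X86Subtarget interface:)

        #include <string.h>

        /* Feature flags computed once from the target triple, replacing
         * scattered "forDarwin"/"forCygwin" checks in the asm printer
         * and instruction selector. */
        struct x86_subtarget_sketch {
            int is_darwin;
            int is_cygwin;
            int indirect_external_refs;  /* e.g. enabled for darwin */
        };

        static struct x86_subtarget_sketch from_triple(const char *triple) {
            struct x86_subtarget_sketch st = {0, 0, 0};
            if (strstr(triple, "darwin")) {
                st.is_darwin = 1;
                st.indirect_external_refs = 1;
            } else if (strstr(triple, "cygwin")) {
                st.is_cygwin = 1;
            }
            return st;
        }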
Nate Begeman authored
llvm-svn: 22388
- Jul 11, 2005
Chris Lattner authored
llvm-svn: 22381
Chris Lattner authored
llvm-svn: 22380
Chris Lattner authored
after itself.
llvm-svn: 22376
Chris Lattner authored
llvm-svn: 22372