- Sep 14, 2005
  Chris Lattner authored
  llvm-svn: 23347
- Sep 13, 2005
  Chris Lattner authored
  llvm-svn: 23332
- Sep 02, 2005
  Chris Lattner authored
  llvm-svn: 23202
- Sep 01, 2005
  Jim Laskey authored
  1. Use SubtargetFeatures in llc/lli.
  2. Propagate feature "string" to all targets.
  3. Implement use of SubtargetFeatures in PowerPCTargetSubtarget.
  llvm-svn: 23192
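  The feature "string" mentioned above is conventionally a comma-separated list of a CPU name followed by +/- feature toggles (e.g. "g5,+altivec"). A minimal sketch of that parsing idea only; the names below are hypothetical and are not the actual SubtargetFeatures interface:

      #include <map>
      #include <sstream>
      #include <string>

      // Hypothetical sketch: split "cpu,+feat,-feat" into a CPU name plus
      // per-feature on/off flags, the shape of data a target subtarget
      // implementation would consume.
      struct ParsedFeatures {
        std::string CPU;                      // leading bare token, if any
        std::map<std::string, bool> Features; // "+x" -> true, "-x" -> false
      };

      ParsedFeatures parseFeatureString(const std::string &FS) {
        ParsedFeatures P;
        std::stringstream SS(FS);
        std::string Tok;
        bool First = true;
        while (std::getline(SS, Tok, ',')) {
          if (Tok.empty()) {
            First = false;
            continue;
          }
          if (Tok[0] == '+' || Tok[0] == '-')
            P.Features[Tok.substr(1)] = (Tok[0] == '+');
          else if (First)
            P.CPU = Tok; // first bare token names the CPU
          First = false;
        }
        return P;
      }

      // e.g. parseFeatureString("g5,+altivec,-64bit") yields CPU == "g5"
      // and Features == { {"64bit", false}, {"altivec", true} }.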
- Aug 27, 2005
  Reid Spencer authored
  llvm-svn: 23119
- Aug 26, 2005
  Chris Lattner authored
  llvm-svn: 23082

  Chris Lattner authored
  putting it into the constant pool. This allows the isel machinery to create constants that it will end up deciding are not needed, without them ending up in the resultant function constant pool.
  llvm-svn: 23081
- Aug 25, 2005
  Chris Lattner authored
  llvm-svn: 23031
- Aug 24, 2005
  Chris Lattner authored
  llvm-svn: 22991

  Chris Lattner authored
  llvm-svn: 22988
- Aug 19, 2005
  Chris Lattner authored
  llvm-svn: 22929

  Chris Lattner authored
  llvm-svn: 22925

  Chris Lattner authored
  llvm-svn: 22914

  Chris Lattner authored
  llvm-svn: 22891

  Chris Lattner authored
  Give a whole bunch of other stuff variable operands, particularly FP. The FP stackifier is playing fast and loose with operands here, so we have to mark them all as variable. This will have to be fixed before we can dag->dag the X86 backend. The solution is for the pre-stackifier and post-stackifier instructions to all be disjoint.
  llvm-svn: 22890

  Chris Lattner authored
  llvm-svn: 22888

  Chris Lattner authored
  only take one operand. The other comes implicitly in through CL.
  llvm-svn: 22887

  Nate Begeman authored
  passed.
  llvm-svn: 22886
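  The r22887 entry above refers to the x86 variable-shift instructions, whose count is not a normal register operand: it is read implicitly from CL. A small C++ function makes the constraint visible; the assembly in the comment is typical compiler output, shown for illustration rather than taken from the commit:

      // x86 has no general register-to-register variable shift: the count
      // must be placed in CL, so SHL names only one explicit operand.
      unsigned shiftLeft(unsigned Value, unsigned Count) {
        return Value << Count;
        // Typical x86 lowering (illustrative):
        //   movl <count>, %ecx   ; shift count must end up in ECX/CL
        //   shll %cl, %eax       ; one explicit operand; CL is implicit
      }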
- Aug 16, 2005
  Chris Lattner authored
  llvm-svn: 22807

  Nate Begeman authored
  fixme from the PowerPC backend. Emit slightly better code for legalizing select_cc.
  llvm-svn: 22805
- Aug 14, 2005
  Nate Begeman authored
  block. nur.
  llvm-svn: 22788

  Nate Begeman authored
  now generate the relatively good code sequences:

      unsigned short foo(float a) { return a; }

      _foo:
          movss 4(%esp), %xmm0
          cvttss2si %xmm0, %eax
          movzwl %ax, %eax
          ret

  and

      unsigned bar(float a) { return a; }

      _bar:
          movss .CPI_bar_0, %xmm0
          movss 4(%esp), %xmm1
          movapd %xmm1, %xmm2
          subss %xmm0, %xmm2
          cvttss2si %xmm2, %eax
          xorl $-2147483648, %eax
          cvttss2si %xmm1, %ecx
          ucomiss %xmm0, %xmm1
          cmovb %ecx, %eax
          ret

  llvm-svn: 22786
- Aug 09, 2005
  Chris Lattner authored
  llvm-svn: 22729
- Aug 05, 2005
  Chris Lattner authored
  llvm-svn: 22687
- Aug 04, 2005
  Nate Begeman authored
  llvm-svn: 22644

  Nate Begeman authored
  Scalar SSE: a < b ? c : 0.0 -> cmpss, andps
  Scalar SSE: float -> i16 needs to be promoted
  llvm-svn: 22637
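  The select_cc lowering in r22637 works because cmpss writes an all-ones 32-bit lane when the comparison is true and all zeros otherwise, and andps of that mask with c then yields either c or +0.0. A bit-level C++ sketch of the same trick, purely for illustration (the function name is made up):

      #include <cstdint>
      #include <cstring>

      // Emulates the cmpss/andps lowering of: a < b ? c : 0.0f
      float selectOrZero(float A, float B, float C) {
        std::uint32_t Mask = (A < B) ? 0xFFFFFFFFu : 0u; // cmpss result lane
        std::uint32_t CBits;
        std::memcpy(&CBits, &C, sizeof CBits); // reinterpret c's bits
        CBits &= Mask;                         // andps
        float Result;
        std::memcpy(&Result, &CBits, sizeof Result);
        return Result;                         // C, or +0.0f when masked out
      }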
- Aug 02, 2005
  Chris Lattner authored
  Patch contributed by Jim Laskey!
  llvm-svn: 22594
- Jul 30, 2005
  Jeff Cohen authored
  llvm-svn: 22565

  Chris Lattner authored
  llvm-svn: 22561

  Chris Lattner authored
  1 byte loads and other operations. This is bad for store-forwarding on common CPUs. We now do this:

      fnstcw WORD PTR [%ESP]
      mov %AX, WORD PTR [%ESP]

  instead of:

      fnstcw WORD PTR [%ESP]
      mov %AL, BYTE PTR [%ESP + 1]

  llvm-svn: 22559
  Chris Lattner authored
  FP-to-int-in-memory: this exposes the load from the stored slot to the selection dag, allowing it to be folded into other operations.
  llvm-svn: 22556
  Andrew Lenharth authored
  llvm-svn: 22553
- Jul 29, 2005
  Chris Lattner authored
  that the X86 does not support to the legalizer. This allows it to be better optimized, etc, and will help with SSE support.
  llvm-svn: 22551

  Chris Lattner authored
  llvm-svn: 22550
  Chris Lattner authored

      long %test4(double %X) {
          %tmp.1 = cast double %X to long  ; <long> [#uses=1]
          ret long %tmp.1
      }

  to this:

      _test4:
          sub %ESP, 12
          fld QWORD PTR [%ESP + 16]
          fistp QWORD PTR [%ESP]
          mov %EDX, DWORD PTR [%ESP + 4]
          mov %EAX, DWORD PTR [%ESP]
          add %ESP, 12
          ret

  instead of this:

      _test4:
          sub %ESP, 28
          fld QWORD PTR [%ESP + 32]
          fstp QWORD PTR [%ESP]
          call ___fixdfdi
          add %ESP, 28
          ret

  llvm-svn: 22549
- Jul 27, 2005
  Jeff Cohen authored
  llvm-svn: 22523

  Jeff Cohen authored
  llvm-svn: 22520
- Jul 22, 2005
  Andrew Lenharth authored
  llvm-svn: 22498
- Jul 19, 2005
  Reid Spencer authored
  This is the first incremental patch to implement this feature. It adds no functionality to LLVM but sets up the information needed from targets in order to implement the optimization correctly. Each target needs to specify the maximum number of store operations for conversion of the llvm.memset, llvm.memcpy, and llvm.memmove intrinsics into a sequence of store operations. The limit needs to be chosen at the threshold of performance for such an optimization (generally smallish). The target also needs to specify whether the target can support unaligned stores for multi-byte store operations. This helps ensure the optimization doesn't generate code that will trap on alignment errors. More patches to follow.
  llvm-svn: 22468
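  To make the mechanism concrete, here is a hedged sketch of the transformation the patch prepares for: a small, constant-length llvm.memset becomes a short run of wide stores, but only when the store count stays under the target's limit and the stores would not trap on alignment. The struct, field names, and helper below are invented for illustration; they are not the interface the patch added.

      #include <cstddef>
      #include <cstdint>
      #include <cstring>

      // Hypothetical mirror of the per-target information described above.
      struct TargetStoreInfo {
        unsigned MaxStoresPerMemset; // give up above this many stores
        bool AllowsUnalignedStores;  // may we emit misaligned wide stores?
      };

      // Expand memset(Dst, Val, Len) into word-sized stores when the target
      // limits permit; otherwise keep the library call.
      void expandMemset(std::uint8_t *Dst, std::uint8_t Val, std::size_t Len,
                        const TargetStoreInfo &TI) {
        const std::size_t WordSize = sizeof(std::uint32_t);
        bool Aligned =
            reinterpret_cast<std::uintptr_t>(Dst) % WordSize == 0;
        std::size_t NumStores = (Len + WordSize - 1) / WordSize;

        if (NumStores > TI.MaxStoresPerMemset ||
            (!Aligned && !TI.AllowsUnalignedStores)) {
          std::memset(Dst, Val, Len); // too big or unsafe: keep the call
          return;
        }

        std::uint32_t Word = 0x01010101u * Val; // splat the byte into a word
        std::size_t I = 0;
        for (; I + WordSize <= Len; I += WordSize)
          std::memcpy(Dst + I, &Word, WordSize); // models one wide store
        for (; I < Len; ++I)
          Dst[I] = Val; // byte-sized tail
      }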
- Jul 16, 2005
  Nate Begeman authored
  the target natively supports. This eliminates some special-case code from the x86 backend and generates better code as well. For an i8 to f64 conversion, before & after:

      _x87 before:
          subl $2, %esp
          movb 6(%esp), %al
          movsbw %al, %ax
          movw %ax, (%esp)
          filds (%esp)
          addl $2, %esp
          ret

      _x87 after:
          subl $2, %esp
          movsbw 6(%esp), %ax
          movw %ax, (%esp)
          filds (%esp)
          addl $2, %esp
          ret

      _sse before:
          subl $12, %esp
          movb 16(%esp), %al
          movsbl %al, %eax
          cvtsi2sd %eax, %xmm0
          addl $12, %esp
          ret

      _sse after:
          subl $12, %esp
          movsbl 16(%esp), %eax
          cvtsi2sd %eax, %xmm0
          addl $12, %esp
          ret

  llvm-svn: 22452