- Apr 08, 2009
Rafael Espindola authored
Tested by bootstrapping llvm-gcc and using that to build llvm. llvm-svn: 68645
Dan Gohman authored
with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG instructions), and teach the DAGCombiner to take advantage of this on targets which support it. This eliminates many redundant zero-extension operations on x86-64.

This adds a new TargetLowering hook, isZExtFree. It's similar to isTruncateFree, except it only applies to actual definitions, and not no-op truncates which may not zero the high bits.

Also, this adds a new optimization to SimplifyDemandedBits: transform operations like x+y into (zext (add (trunc x), (trunc y))) on targets where all the casts are no-ops. In contexts where the high part of the add is explicitly masked off, this allows the mask operation to be eliminated. Fix the DAGCombiner to avoid undoing these transformations to eliminate casts on targets where the casts are no-ops.

Also, this adds a new two-address lowering heuristic. Since two-address lowering runs before coalescing, it helps to be able to look through copies when deciding whether commuting and/or three-address conversion are profitable.

Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle the case that a clobber range extended both before and beyond an existing live range. In that case, multiple live ranges need to be added. This was exposed by the new subreg coalescing code.

Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the spiller behavior it was looking for no longer occurs with the new instruction selection.

llvm-svn: 68576
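Illustrative aside (editor's example, not part of the commit): the narrowing transform described above pays off because on x86-64 any write to a 32-bit register implicitly zeroes the upper 32 bits, so the zext and trunc casts cost nothing. A minimal C++ sketch of the kind of source that benefits:

    #include <cstdint>

    // (x + y) & 0xffffffff can be rewritten as zext(trunc(x) + trunc(y));
    // since a 32-bit addl already zero-extends its result on x86-64, the
    // explicit mask disappears and a single 32-bit add remains.
    uint64_t masked_add(uint64_t x, uint64_t y) {
      return (x + y) & 0xffffffffu;
    }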
Bill Wendling authored
builds.
---
Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td
llvm-svn: 68560
- Apr 07, 2009
Rafael Espindola authored
This introduces a small regression in generated code quality in the case where we are just computing addresses, not loading values. Will work on it and on X86-64 support. llvm-svn: 68552
- Mar 30, 2009
Evan Cheng authored
When optimizing a mul by immediate into two, the resulting muls should get an x86-specific node to keep the DAG combiner from hacking on them further. llvm-svn: 68066
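Illustrative aside (editor's example, not from the patch): splitting a multiply by an immediate such as 40 into two multiplies lets the backend use LEA and shift instructions instead of a single imul, e.g.:

    #include <cstdint>

    // x * 40 decomposed as (x * 5) * 8: the *5 can be selected as one LEA
    // (x + x*4) and the *8 as a shift, which is often cheaper than imul.
    uint32_t times40(uint32_t x) {
      uint32_t t = x * 5;
      return t * 8;
    }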
- Mar 26, 2009
Bill Wendling authored
llvm-svn: 67727
- Mar 23, 2009
Dan Gohman authored
llvm-svn: 67518
- Mar 12, 2009
Chris Lattner authored
llvm-svn: 66778
- Mar 07, 2009
Dan Gohman authored
the same way the "test" instruction does in overflow cases, so eliminating the test is only safe when those bits aren't needed, as is the case for COND_E and COND_NE, or if it can be proven that no overflow will occur. For now, just restrict the optimization to COND_E and COND_NE and don't do any overflow analysis. llvm-svn: 66318
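Illustrative aside (editor's example; the code and flag analysis are mine, not from the commit): a subtraction already sets ZF for its result, so an equality check against zero can reuse the flags, while signed conditions also read OF/SF:

    #include <cstdint>

    // The subtraction sets ZF for its result, so the equality branch can use
    // the flags directly (COND_E / COND_NE) and the extra "test" goes away.
    // A signed condition such as (a - b) > 0 also reads OF and SF, which
    // differ from a plain test of the result in overflow cases, so it is not
    // covered without further analysis.
    bool equal_after_sub(int32_t a, int32_t b) {
      return (a - b) == 0;
    }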
- Mar 04, 2009
Dan Gohman authored
llvm-svn: 66058
Dan Gohman authored
llvm-svn: 66008
Dan Gohman authored
result from add, sub, inc, and dec instructions in simple cases. llvm-svn: 66004
- Feb 23, 2009
Nate Begeman authored
Generate better code for v16i8 shuffles on SSE2 (avoids stack)
Generate pshufb for v8i16 and v16i8 shuffles on SSSE3 where it is fewer uops.
Document the shuffle matching logic and add some FIXMEs for later further cleanups.
New tests that test the above.

Examples:

New:
_shuf2:
        pextrw  $7, %xmm0, %eax
        punpcklqdq      %xmm1, %xmm0
        pshuflw $128, %xmm0, %xmm0
        pinsrw  $2, %eax, %xmm0

Old:
_shuf2:
        pextrw  $2, %xmm0, %eax
        pextrw  $7, %xmm0, %ecx
        pinsrw  $2, %ecx, %xmm0
        pinsrw  $3, %eax, %xmm0
        movd    %xmm1, %eax
        pinsrw  $4, %eax, %xmm0
        ret

=========

New:
_shuf4:
        punpcklqdq      %xmm1, %xmm0
        pshufb  LCPI1_0, %xmm0

Old:
_shuf4:
        pextrw  $3, %xmm0, %eax
        movsd   %xmm1, %xmm0
        pextrw  $3, %xmm1, %ecx
        pinsrw  $4, %ecx, %xmm0
        pinsrw  $5, %eax, %xmm0

========

New:
_shuf1:
        pushl   %ebx
        pushl   %edi
        pushl   %esi
        pextrw  $1, %xmm0, %eax
        rolw    $8, %ax
        movd    %xmm0, %ecx
        rolw    $8, %cx
        pextrw  $5, %xmm0, %edx
        pextrw  $4, %xmm0, %esi
        pextrw  $3, %xmm0, %edi
        pextrw  $2, %xmm0, %ebx
        movaps  %xmm0, %xmm1
        pinsrw  $0, %ecx, %xmm1
        pinsrw  $1, %eax, %xmm1
        rolw    $8, %bx
        pinsrw  $2, %ebx, %xmm1
        rolw    $8, %di
        pinsrw  $3, %edi, %xmm1
        rolw    $8, %si
        pinsrw  $4, %esi, %xmm1
        rolw    $8, %dx
        pinsrw  $5, %edx, %xmm1
        pextrw  $7, %xmm0, %eax
        rolw    $8, %ax
        movaps  %xmm1, %xmm0
        pinsrw  $7, %eax, %xmm0
        popl    %esi
        popl    %edi
        popl    %ebx
        ret

Old:
_shuf1:
        subl    $252, %esp
        movaps  %xmm0, (%esp)
        movaps  %xmm0, 16(%esp)
        movaps  %xmm0, 32(%esp)
        movaps  %xmm0, 48(%esp)
        movaps  %xmm0, 64(%esp)
        movaps  %xmm0, 80(%esp)
        movaps  %xmm0, 96(%esp)
        movaps  %xmm0, 224(%esp)
        movaps  %xmm0, 208(%esp)
        movaps  %xmm0, 192(%esp)
        movaps  %xmm0, 176(%esp)
        movaps  %xmm0, 160(%esp)
        movaps  %xmm0, 144(%esp)
        movaps  %xmm0, 128(%esp)
        movaps  %xmm0, 112(%esp)
        movzbl  14(%esp), %eax
        movd    %eax, %xmm1
        movzbl  22(%esp), %eax
        movd    %eax, %xmm2
        punpcklbw       %xmm1, %xmm2
        movzbl  42(%esp), %eax
        movd    %eax, %xmm1
        movzbl  50(%esp), %eax
        movd    %eax, %xmm3
        punpcklbw       %xmm1, %xmm3
        punpcklbw       %xmm2, %xmm3
        movzbl  77(%esp), %eax
        movd    %eax, %xmm1
        movzbl  84(%esp), %eax
        movd    %eax, %xmm2
        punpcklbw       %xmm1, %xmm2
        movzbl  104(%esp), %eax
        movd    %eax, %xmm1
        punpcklbw       %xmm1, %xmm0
        punpcklbw       %xmm2, %xmm0
        movaps  %xmm0, %xmm1
        punpcklbw       %xmm3, %xmm1
        movzbl  127(%esp), %eax
        movd    %eax, %xmm0
        movzbl  135(%esp), %eax
        movd    %eax, %xmm2
        punpcklbw       %xmm0, %xmm2
        movzbl  155(%esp), %eax
        movd    %eax, %xmm0
        movzbl  163(%esp), %eax
        movd    %eax, %xmm3
        punpcklbw       %xmm0, %xmm3
        punpcklbw       %xmm2, %xmm3
        movzbl  188(%esp), %eax
        movd    %eax, %xmm0
        movzbl  197(%esp), %eax
        movd    %eax, %xmm2
        punpcklbw       %xmm0, %xmm2
        movzbl  217(%esp), %eax
        movd    %eax, %xmm4
        movzbl  225(%esp), %eax
        movd    %eax, %xmm0
        punpcklbw       %xmm4, %xmm0
        punpcklbw       %xmm2, %xmm0
        punpcklbw       %xmm3, %xmm0
        punpcklbw       %xmm1, %xmm0
        addl    $252, %esp
        ret

llvm-svn: 65311
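Illustrative aside (editor's example, not from the patch): on SSSE3, pshufb performs an arbitrary byte shuffle in one instruction, which is what replaces the long pextrw/pinsrw chains above. In intrinsics form:

    #include <tmmintrin.h>  // SSSE3

    // Each byte of the index vector selects a source byte; one pshufb does
    // what previously took a chain of extracts and inserts (or a stack trip).
    __m128i reverse_bytes(__m128i v) {
      const __m128i idx = _mm_setr_epi8(15, 14, 13, 12, 11, 10, 9, 8,
                                        7, 6, 5, 4, 3, 2, 1, 0);
      return _mm_shuffle_epi8(v, idx);
    }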
- Feb 07, 2009
Dan Gohman authored
ScheduleDAG's TLI member to use const. llvm-svn: 64018
- Feb 04, 2009
Dale Johannesen authored
Adjust the many callers of those versions. llvm-svn: 63767
- Feb 03, 2009
Dale Johannesen authored
llvm-svn: 63674
Dale Johannesen authored
llvm-svn: 63650
- Jan 24, 2009
Nate Begeman authored
llvm-svn: 62940
- Jan 17, 2009
Bill Wendling authored
X86. This code:

    void f() {
      uint32_t x;
      float y = (float)x;
    }

used to be:

        movl     %eax, -8(%ebp)
        movl     [2^52 double], -4(%ebp)
        movsd    -8(%ebp), %xmm0
        subsd    [2^52 double], %xmm0
        cvtsd2ss %xmm0, %xmm0

Is now:

        movsd    [2^52 double], %xmm0
        movsd    %xmm0, %xmm1
        movd     %ecx, %xmm2
        orps     %xmm2, %xmm1
        subsd    %xmm0, %xmm1
        cvtsd2ss %xmm1, %xmm0

This is faster on X86. Note that there's an extra load of %xmm0 into %xmm1. That will be fixed in a later coalescer fix.

llvm-svn: 62404
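Illustrative aside (editor's sketch of the 2^52 bit-trick the new sequence relies on; the names and code are mine, not from the commit):

    #include <cstdint>
    #include <cstring>

    // 0x4330000000000000 is the IEEE-754 encoding of 2^52. OR-ing a 32-bit
    // integer into the low mantissa bits produces the double 2^52 + x, so
    // subtracting 2^52 recovers x exactly; a final cvtsd2ss narrows to float.
    float uint32_to_float(uint32_t x) {
      const double two52 = 4503599627370496.0;  // 2^52
      uint64_t bits = UINT64_C(0x4330000000000000) | x;
      double d;
      std::memcpy(&d, &bits, sizeof d);
      return static_cast<float>(d - two52);
    }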
- Jan 15, 2009
Dan Gohman authored
llvm-svn: 62265
- Jan 13, 2009
Devang Patel authored
Use DebugInfo interface to lower dbg_* intrinsics. llvm-svn: 62127
- Jan 01, 2009
Duncan Sands authored
promote from i1 all the way up to the canonical SetCC type. In order to discover an appropriate type to use, pass MVT::Other to getSetCCResultType. In order to be able to do this, change getSetCCResultType to take a type as an argument, not a value (this is also more logical). llvm-svn: 61542
- Dec 23, 2008
Dan Gohman authored
llvm-svn: 61400
- Dec 18, 2008
Mon P Wang authored
llvm-svn: 61211
- Dec 12, 2008
Bill Wendling authored
which are identical to the original patterns.
- Change the multiply with overflow so that we distinguish between signed and unsigned multiplication. Currently, unsigned multiplication with overflow isn't working!
llvm-svn: 60963
Bill Wendling authored
ISD::ADD to emit an implicit EFLAGS. This was horribly broken. Instead, replace the intrinsic with an ISD::SADDO node. Then custom lower that into an X86ISD::ADD node with an associated SETCC that checks the correct condition code (overflow or carry). Then that gets lowered into the correct X86::ADDOvf instruction. Similar for SUB and MUL instructions. llvm-svn: 60915
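Illustrative aside (editor's example, not from the commit): this lowering is what makes overflow-checked arithmetic cheap at the source level. With Clang or GCC, __builtin_add_overflow produces an llvm.sadd.with.overflow-style node, so the add and the overflow test share one instruction: addl sets OF, and a seto or jo consumes it.

    #include <cstdint>

    // Returns true if a + b overflows; sum receives the wrapped result.
    bool add_overflows(int32_t a, int32_t b, int32_t &sum) {
      return __builtin_add_overflow(a, b, &sum);
    }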
- Dec 09, 2008
Bill Wendling authored
target-independent way of determining overflow on multiplication. It's very tricky. Patch by Zoltan Varga! llvm-svn: 60800
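Illustrative aside (editor's example of the general problem, not necessarily the approach taken in the patch): without a hardware overflow flag to lean on, unsigned multiply overflow is often checked with a division-based identity:

    #include <cstdint>

    // a * b wraps on overflow (well defined for unsigned types); the product
    // is exact iff dividing it back by a nonzero a returns b.
    bool umul_overflows(uint32_t a, uint32_t b) {
      uint32_t product = a * b;
      return a != 0 && product / a != b;
    }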
- Dec 02, 2008
Bill Wendling authored
- LowerXADDO lowers [SU]ADDO into an ADD with an implicit EFLAGS define. The EFLAGS are fed into a SETCC node which has the conditional COND_O or COND_C, depending on the type of ADDO requested.
- LowerBRCOND now recognizes if it's coming from a SETCC node with COND_O or COND_C set.
llvm-svn: 60388
- Dec 01, 2008
Duncan Sands authored
ReplaceNodeResults: rather than returning a node which must have the same number of results as the original node (which means mucking around with MERGE_VALUES, and which is also easy to get wrong since SelectionDAG folding may mean you don't get the node you expect), return the results in a vector. llvm-svn: 60348
- Nov 24, 2008
Bill Wendling authored
- Mark "add with overflow" as having a custom lowering for X86. Give it a null lowering representation for now. llvm-svn: 59971
- Oct 30, 2008
Mon P Wang authored
One will only see an effect if legalizetype is not active. Will move support to LegalizeType soon. llvm-svn: 58426
- Oct 21, 2008
Dale Johannesen authored
The same one Apple gcc uses, faster. Also gets the extreme case in gcc.c-torture/execute/ieee/rbug.c correct, which we weren't getting before; this is not sufficient to get the test to pass, though; there is another bug. llvm-svn: 57926
- Oct 18, 2008
Dan Gohman authored
and add a TargetLowering hook for it to use to determine when this is legal (i.e. not in PIC mode, etc.) This allows instruction selection to emit folded constant offsets in more cases, such as the included testcase, eliminating the need for explicit arithmetic instructions. This eliminates the need for the C++ code in X86ISelDAGToDAG.cpp that attempted to achieve the same effect, but wasn't as effective.

Also, fix handling of offsets in GlobalAddressSDNodes in several places, including changing GlobalAddressSDNode's offset from int to int64_t.

The Mips, Alpha, Sparc, and CellSPU targets appear to be unaware of GlobalAddress offsets currently, so set the hook to false on those targets.

llvm-svn: 57748
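Illustrative aside (editor's example; the C code and assembly are assumptions, not the included testcase): the addresses that benefit are constant offsets from a global, which can be folded into the symbol reference itself in non-PIC code:

    int table[64];

    // With offset folding this load can be selected as a single
    //   movl table+40(%rip), %eax
    // style access, rather than materializing &table and adding 40 at run time.
    int tenth_element() {
      return table[10];  // address is table + 10 * sizeof(int)
    }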
- Oct 15, 2008
Dan Gohman authored
- Move the EH landing-pad code and adjust it so that it works with FastISel as well as with SDISel.
- Add FastISel support for @llvm.eh.exception and @llvm.eh.selector.
llvm-svn: 57539
- Oct 04, 2008
Dale Johannesen authored
Make it all work in non-pic mode. llvm-svn: 57034
- Oct 02, 2008
Dale Johannesen authored
llvm-svn: 56963
- Oct 01, 2008
Bill Wendling authored
llvm-svn: 56900
- Sep 30, 2008
Bill Wendling authored
`-fno-builtin' flag. Currently, it's used to replace "memset" with "_bzero" instead of "__bzero" on Darwin10+. This arguably violates the meaning of this flag, but is currently sufficient. The meaning of this flag should become more specific over time. llvm-svn: 56885
Dale Johannesen authored
valid types. No functional change. llvm-svn: 56808
- Sep 25, 2008
Evan Cheng authored
With SSE3, and when the source is a load or has multiple uses, favor movddup over shufp*, pshufd, etc. Without SSE3, or when the source is from a register, make use of movlhps. llvm-svn: 56620
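Illustrative aside (editor's example, not from the patch): in intrinsics form, the two choices for splatting the low double are:

    #include <pmmintrin.h>  // SSE3 (also pulls in SSE2's emmintrin.h)

    // SSE3: a single movddup duplicates the low double into both lanes.
    __m128d splat_low_sse3(__m128d v) {
      return _mm_movedup_pd(v);
    }

    // Without SSE3: unpacking the register with itself gives the same splat
    // (unpcklpd; movlhps plays the analogous role for single precision).
    __m128d splat_low_sse2(__m128d v) {
      return _mm_unpacklo_pd(v, v);
    }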