- Jan 31, 2011
Devang Patel authored
llvm-svn: 124611
- Jan 27, 2011
David Greene authored
[AVX] Clean up the code to configure target lowering for AVX. Specify how to lower more/new operations. This is a prerequisite for adding additional AVX lowering. llvm-svn: 124447
- Jan 26, 2011
David Greene authored
[AVX] Add INSERT_SUBVECTOR and support it on x86. This provides a default implementation for x86, going through the stack in a similar fashion to how the codegen implements BUILD_VECTOR. Eventually this will get matched to VINSERTF128 if AVX is available. llvm-svn: 124307
David Greene authored
[AVX] Support EXTRACT_SUBVECTOR on x86. This provides a default implementation of EXTRACT_SUBVECTOR for x86, going through the stack in a similar fashion to how the codegen implements BUILD_VECTOR. Eventually this will get matched to VEXTRACTF128 if AVX is available. llvm-svn: 124292
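As a source-level illustration (not part of either commit), the AVX intrinsics below are one way these 128-bit subvector operations surface in C; _mm256_extractf128_ps and _mm256_insertf128_ps map to VEXTRACTF128 and VINSERTF128 when AVX code generation is enabled.

    /* Illustration only: extract/insert the 128-bit halves of a 256-bit vector. */
    #include <immintrin.h>

    __m128 lower_half(__m256 v) {
      return _mm256_extractf128_ps(v, 0);     /* EXTRACT_SUBVECTOR at index 0 */
    }

    __m256 replace_upper_half(__m256 v, __m128 hi) {
      return _mm256_insertf128_ps(v, hi, 1);  /* INSERT_SUBVECTOR at index 1 */
    }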
NAKAMURA Takumi authored
llvm-svn: 124272
NAKAMURA Takumi authored
llvm-svn: 124270
- Jan 16, 2011
Chris Lattner authored
llvm-svn: 123560
- Jan 10, 2011
Anton Korobeynikov authored
Rename TargetFrameInfo to TargetFrameLowering. Also, put a couple of FIXMEs and fixes here and there. llvm-svn: 123170
Jakob Stoklund Olesen authored
These functions no longer assert when passed 0; they simply return false instead. No functional change intended. llvm-svn: 123155
- Jan 08, 2011
Evan Cheng authored
llvm-svn: 123048
- Jan 07, 2011
Evan Cheng authored
Revert r122955. It seems using movups to lower memcpy can cause massive regression (even on Nehalem) in edge cases. I also didn't see any real performance benefit. llvm-svn: 123015
- Jan 06, 2011
Evan Cheng authored
The theory is it's still faster than a pair of movq / a quad of movl. This will probably hurt older chips like P4 but should run faster on current and future Intel processors. rdar://8817010 llvm-svn: 122955
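A minimal sketch (assumed, not from the commit) of the kind of small fixed-size copy whose inline expansion this change affects: with unaligned 16-byte vector moves (movups), a 32-byte copy takes two load/store pairs instead of a quad of 64-bit moves.

    #include <string.h>

    struct packet { char payload[32]; };

    /* Small constant-size memcpy; copies like this are expanded inline by the backend. */
    void copy_packet(struct packet *dst, const struct packet *src) {
      memcpy(dst, src, sizeof *dst);
    }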
Evan Cheng authored
etc. takes an option OptSize. If OptSize is true, it would return the inline limit for functions with attribute OptSize. llvm-svn: 122952
- Dec 23, 2010
Benjamin Kramer authored
int test(unsigned long a, unsigned long b) { return -(a < b); }

compiles to

_test:                          ## @test
    cmpq    %rsi, %rdi          ## encoding: [0x48,0x39,0xf7]
    sbbl    %eax, %eax          ## encoding: [0x19,0xc0]
    ret                         ## encoding: [0xc3]

instead of

_test:                          ## @test
    xorl    %ecx, %ecx          ## encoding: [0x31,0xc9]
    cmpq    %rsi, %rdi          ## encoding: [0x48,0x39,0xf7]
    movl    $-1, %eax           ## encoding: [0xb8,0xff,0xff,0xff,0xff]
    cmovael %ecx, %eax          ## encoding: [0x0f,0x43,0xc1]
    ret                         ## encoding: [0xc3]

llvm-svn: 122451
- Dec 21, 2010
Benjamin Kramer authored
(add Y, (sete  X, 0)) -> cmp X, 1; adc  0, Y
(add Y, (setne X, 0)) -> cmp X, 1; sbb -1, Y
(sub (sete  X, 0), Y) -> cmp X, 1; sbb  0, Y
(sub (setne X, 0), Y) -> cmp X, 1; adc -1, Y

for

unsigned foo(unsigned a, unsigned b) {
  if (a == 0) b++;
  return b;
}

we now get:

foo:
    cmpl    $1, %edi
    movl    %esi, %eax
    adcl    $0, %eax
    ret

instead of:

foo:
    testl   %edi, %edi
    sete    %al
    movzbl  %al, %eax
    addl    %esi, %eax
    ret

llvm-svn: 122364
Chris Lattner authored
something that just glues two nodes together, even if it is sometimes used for flags. llvm-svn: 122310
- Dec 20, 2010
Nate Begeman authored
Implement feedback from Bruno on making pblendvb an x86-specific ISD node in addition to being an intrinsic, and convert lowering to use it. Hopefully the pattern fragment is doing the right thing with XMM0; it looks correct in testing. llvm-svn: 122277
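For reference, a hypothetical source-level counterpart (not from the commit): the SSE4.1 intrinsic below is the user-visible form of pblendvb.

    #include <smmintrin.h>

    /* Per byte: take the byte from b where the mask byte's high bit is set, else from a. */
    __m128i select_bytes(__m128i a, __m128i b, __m128i mask) {
      return _mm_blendv_epi8(a, b, mask);
    }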
Chris Lattner authored
the same as setcc. Optimize ADDC(0,0,FLAGS) -> SET_CARRY(FLAGS). This is a step towards finishing off PR5443. In the testcase in that bug we now get:

    movq    %rdi, %rax
    addq    %rsi, %rax
    sbbq    %rcx, %rcx
    testb   $1, %cl
    setne   %dl
    ret

instead of:

    movq    %rdi, %rax
    addq    %rsi, %rax
    movl    $0, %ecx
    adcq    $0, %rcx
    testq   %rcx, %rcx
    setne   %dl
    ret

llvm-svn: 122219
Chris Lattner authored
llvm-svn: 122214
Chris Lattner authored
their carry dependencies with MVT::Flag operands) and use clean and beautiful EFLAGS dependences instead. We do this by changing the modelling of SBB/ADC to have EFLAGS inputs and outputs (which is what requires the previous scheduler change) and by changing X86 ISelLowering to custom lower ADDC and friends down to X86ISD::ADD/ADC/SUB/SBB nodes. With the previous series of changes, this causes no changes in the testsuite, woo. llvm-svn: 122213
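A minimal C illustration (assumed, not taken from the commit): adding two 128-bit integers on x86-64 legalizes into the add/adc carry chain that this change remodels as an explicit EFLAGS dependence.

    typedef unsigned __int128 u128;   /* GCC/Clang extension */

    u128 add128(u128 a, u128 b) {
      return a + b;                   /* expected to lower to addq + adcq on x86-64 */
    }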
Mon P Wang authored
has run, e.g., prevent creating an i64 node from a v2i64 when i64 is not a legal type. llvm-svn: 122206
- Dec 19, 2010
Chris Lattner authored
consistently by moving it out of lowering into dag combine. Add some missing patterns for matching away extended versions of setcc_c. llvm-svn: 122201
Chris Lattner authored
going through the CSE maps to get it. llvm-svn: 122196
Chris Lattner authored
we don't need -disable-mmx anymore. llvm-svn: 122189
Chris Lattner authored
llvm-svn: 122187
Chris Lattner authored
generate them. Now we compile:

define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
  %0 = tail call %0 @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
  %cmp = extractvalue %0 %0, 1
  br i1 %cmp, label %if.then, label %if.end

into:

_X:                             ## @X
## BB#0:                        ## %entry
    subl    $12, %esp
    movb    16(%esp), %al
    addb    20(%esp), %al
    jo      LBB0_2

Before we were generating:

_X:                             ## @X
## BB#0:                        ## %entry
    pushl   %ebp
    movl    %esp, %ebp
    subl    $8, %esp
    movb    12(%ebp), %al
    testb   %al, %al
    setge   %cl
    movb    8(%ebp), %dl
    testb   %dl, %dl
    setge   %ah
    cmpb    %cl, %ah
    sete    %cl
    addb    %al, %dl
    testb   %dl, %dl
    setge   %al
    cmpb    %al, %ah
    setne   %al
    andb    %cl, %al
    testb   %al, %al
    jne     LBB0_2

llvm-svn: 122186
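One way such IR arises from C (a hedged sketch, assuming a compiler that provides __builtin_add_overflow; not taken from the commit):

    #include <stdbool.h>

    /* Checked signed byte addition; returns true if the add overflowed. */
    bool add_i8_checked(signed char a, signed char b, signed char *out) {
      return __builtin_add_overflow(a, b, out);
    }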
- Dec 17, 2010
Nate Begeman authored
Remove unnecessary pandn patterns; the 'vnot' patfrag looks through bitcasts. llvm-svn: 122098
- Dec 10, 2010
Nate Begeman authored
Formalize the notion that AVX and SSE are non-overlapping extensions from the compiler's point of view. Per email discussion, we either want to always use VEX-prefixed instructions or never use them, and are taking "HasAVX" to mean "Always use VEX". Passing -mattr=-avx,+sse42 should serve to restore legacy SSE support when desirable. llvm-svn: 121439
- Dec 09, 2010
Eric Christopher authored
the output to the correct register. Fixes a hidden problem uncovered by the last patch where we'd try to DAG combine our MVT::Other node oddly. llvm-svn: 121358
Eric Christopher authored
llvm-svn: 121340
Eric Christopher authored
popping up at O0 when it wasn't folded and the fast allocator would complain. llvm-svn: 121330
- Dec 05, 2010
Chris Lattner authored
result. This allows us to compile:

void *test12(long count) { return new int[count]; }

into:

test12:
    movl    $4, %ecx
    movq    %rdi, %rax
    mulq    %rcx
    movq    $-1, %rdi
    cmovnoq %rax, %rdi
    jmp     __Znam              ## TAILCALL

instead of:

test12:
    movl    $4, %ecx
    movq    %rdi, %rax
    mulq    %rcx
    seto    %cl
    testb   %cl, %cl
    movq    $-1, %rdi
    cmoveq  %rax, %rdi
    jmp     __Znam

Of course it would be even better if the regalloc inverted the cmov to 'cmovoq', which would eliminate the need for the 'movq %rdi, %rax'.

llvm-svn: 120936
Chris Lattner authored
backend that they were all implemented except umul. This one fell back to the default implementation that did a hi/lo multiply and compared the top. Fix this to check the overflow flag that the 'mul' instruction sets, so we can avoid an explicit test. Now we compile:

void *func(long count) { return new int[count]; }

into:

__Z4funcl:                      ## @_Z4funcl
    movl    $4, %ecx            ## encoding: [0xb9,0x04,0x00,0x00,0x00]
    movq    %rdi, %rax          ## encoding: [0x48,0x89,0xf8]
    mulq    %rcx                ## encoding: [0x48,0xf7,0xe1]
    seto    %cl                 ## encoding: [0x0f,0x90,0xc1]
    testb   %cl, %cl            ## encoding: [0x84,0xc9]
    movq    $-1, %rdi           ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
    cmoveq  %rax, %rdi          ## encoding: [0x48,0x0f,0x44,0xf8]
    jmp     __Znam              ## TAILCALL

instead of:

__Z4funcl:                      ## @_Z4funcl
    movl    $4, %ecx            ## encoding: [0xb9,0x04,0x00,0x00,0x00]
    movq    %rdi, %rax          ## encoding: [0x48,0x89,0xf8]
    mulq    %rcx                ## encoding: [0x48,0xf7,0xe1]
    testq   %rdx, %rdx          ## encoding: [0x48,0x85,0xd2]
    movq    $-1, %rdi           ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
    cmoveq  %rax, %rdi          ## encoding: [0x48,0x0f,0x44,0xf8]
    jmp     __Znam              ## TAILCALL

Other than the silly seto+test, this is using the o bit directly, so it's going in the right direction.

llvm-svn: 120935
Chris Lattner authored
select, inserting a not to compensate. Add a missing isZero check that I lost somehow. This improves codegen of:

void *func(long count) { return new int[count]; }

from:

__Z4funcl:                      ## @_Z4funcl
    movl    $4, %ecx            ## encoding: [0xb9,0x04,0x00,0x00,0x00]
    movq    %rdi, %rax          ## encoding: [0x48,0x89,0xf8]
    mulq    %rcx                ## encoding: [0x48,0xf7,0xe1]
    testq   %rdx, %rdx          ## encoding: [0x48,0x85,0xd2]
    movq    $-1, %rdi           ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
    cmoveq  %rax, %rdi          ## encoding: [0x48,0x0f,0x44,0xf8]
    jmp     __Znam              ## TAILCALL ## encoding: [0xeb,A]

to:

__Z4funcl:                      ## @_Z4funcl
    movl    $4, %ecx            ## encoding: [0xb9,0x04,0x00,0x00,0x00]
    movq    %rdi, %rax          ## encoding: [0x48,0x89,0xf8]
    mulq    %rcx                ## encoding: [0x48,0xf7,0xe1]
    cmpq    $1, %rdx            ## encoding: [0x48,0x83,0xfa,0x01]
    sbbq    %rdi, %rdi          ## encoding: [0x48,0x19,0xff]
    notq    %rdi                ## encoding: [0x48,0xf7,0xd7]
    orq     %rax, %rdi          ## encoding: [0x48,0x09,0xc7]
    jmp     __Znam              ## TAILCALL ## encoding: [0xeb,A]

llvm-svn: 120932
Chris Lattner authored
1. generalize (select (x == 0), -1, 0) -> (sign_bit (x - 1)) to:
   (select (x == 0), -1, y) -> (sign_bit (x - 1)) | y
2. Handle the identical pattern that happens with !=:
   (select (x != 0), y, -1) -> (sign_bit (x - 1)) | y

cmov is often high latency and can't fold immediates or memory operands. For example, for (x == 0) ? -1 : 1, before we got:

<   testb   %sil, %sil
<   movl    $-1, %ecx
<   movl    $1, %eax
<   cmovel  %ecx, %eax

now we get:

>   cmpb    $1, %sil
>   sbbl    %eax, %eax
>   orl     $1, %eax

llvm-svn: 120929
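A minimal example (assumed, not from the commit) of the source pattern being combined; with this change the select lowers to cmp/sbb/or rather than a cmov, which cannot fold the -1 immediate:

    int select_neg1_or_y(unsigned x, int y) {
      return x == 0 ? -1 : y;   /* (select (x == 0), -1, y) */
    }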
- Dec 04, 2010
Benjamin Kramer authored
- Also adds a new POPCNT subtarget feature that is currently enabled if the target supports SSE4.2 (nehalem) or SSE4A (barcelona). llvm-svn: 120917
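A hedged illustration (not from the commit): with the popcnt feature enabled, a population-count builtin can be selected to the single popcnt instruction instead of a bit-twiddling sequence.

    int count_bits(unsigned x) {
      return __builtin_popcount(x);   /* may lower to popcntl when the feature is available */
    }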
Benjamin Kramer authored
llvm-svn: 120907
- Dec 01, 2010
Evan Cheng authored
llvm-svn: 120622
Duncan Sands authored
The user (i.e. whoever generated a call to the intrinsic in the first place) is essentially asking for a particular instruction to be placed in the assembler. If that instruction won't execute on the target machine, that's their problem not ours. Two buildbots with processors that don't support SSE3 were barfing on the apm.ll test in CodeGen/X86 because of this assertion. llvm-svn: 120574
Evan Cheng authored
llvm-svn: 120549