- Dec 23, 2010
- Chris Lattner authored
  llvm-svn: 122513
- Benjamin Kramer authored
  llvm-svn: 122495
- Benjamin Kramer authored

      int test(unsigned long a, unsigned long b) { return -(a < b); }

  compiles to

      _test:                              ## @test
              cmpq    %rsi, %rdi          ## encoding: [0x48,0x39,0xf7]
              sbbl    %eax, %eax          ## encoding: [0x19,0xc0]
              ret                         ## encoding: [0xc3]

  instead of

      _test:                              ## @test
              xorl    %ecx, %ecx          ## encoding: [0x31,0xc9]
              cmpq    %rsi, %rdi          ## encoding: [0x48,0x39,0xf7]
              movl    $-1, %eax           ## encoding: [0xb8,0xff,0xff,0xff,0xff]
              cmovael %ecx, %eax          ## encoding: [0x0f,0x43,0xc1]
              ret                         ## encoding: [0xc3]

  llvm-svn: 122451
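
  A quick way to see the trick: in C, a comparison yields 0 or 1, so -(a < b) is 0 or -1, exactly the all-zeros/all-ones value that "cmp; sbb %eax, %eax" materializes from the carry flag. A minimal standalone check (my sketch, not part of the commit; mask_lt is an invented name):

      #include <assert.h>

      /* -(a < b) is 0 when a >= b and -1 (all bits set) when a < b,
         i.e. the value "cmpq; sbbl %eax, %eax" computes from carry. */
      static int mask_lt(unsigned long a, unsigned long b) {
          return -(a < b);
      }

      int main(void) {
          assert(mask_lt(1, 2) == -1);
          assert(mask_lt(2, 1) == 0);
          assert(mask_lt(3, 3) == 0);
          return 0;
      }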
- Dec 21, 2010
- Benjamin Kramer authored

      (add Y, (sete X, 0))  -> cmp X, 1; adc 0, Y
      (add Y, (setne X, 0)) -> cmp X, 1; sbb -1, Y
      (sub (sete X, 0), Y)  -> cmp X, 1; sbb 0, Y
      (sub (setne X, 0), Y) -> cmp X, 1; adc -1, Y

  for

      unsigned foo(unsigned a, unsigned b) {
        if (a == 0) b++;
        return b;
      }

  we now get:

      foo:
              cmpl    $1, %edi
              movl    %esi, %eax
              adcl    $0, %eax
              ret

  instead of:

      foo:
              testl   %edi, %edi
              sete    %al
              movzbl  %al, %eax
              addl    %esi, %eax
              ret

  llvm-svn: 122364
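
  All four rewrites rely on "cmp X, 1" borrowing exactly when X is 0, so afterwards the carry flag holds (X == 0). A tiny standalone check of the source-level identities (my example, not from the commit; the function names are invented):

      #include <assert.h>

      /* (a == 0) and (a != 0) evaluate to 1 or 0; adding them to b is
         what the adc/sbb forms compute via the carry of "cmp X, 1". */
      unsigned add_sete(unsigned a, unsigned b)  { return b + (a == 0); }
      unsigned add_setne(unsigned a, unsigned b) { return b + (a != 0); }

      int main(void) {
          assert(add_sete(0, 5) == 6);   /* a == 0: increment */
          assert(add_sete(7, 5) == 5);
          assert(add_setne(0, 5) == 5);
          assert(add_setne(7, 5) == 6);  /* a != 0: increment */
          return 0;
      }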
- Chris Lattner authored

  something that just glues two nodes together, even if it is sometimes used for flags.

  llvm-svn: 122310
- Dec 20, 2010
- Nate Begeman authored

  Implement feedback from Bruno on making pblendvb an x86-specific ISD node in addition to being an intrinsic, and convert lowering to use it. Hopefully the pattern fragment is doing the right thing with XMM0; looks correct in testing.

  llvm-svn: 122277
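
  For reference, the byte blend this node models is exposed in C as the SSE4.1 intrinsic _mm_blendv_epi8 (in smmintrin.h): each result byte is taken from the second source where the mask byte's high bit is set, which is why the selector must live in XMM0 for the non-VEX encoding. A small usage sketch (mine, not from the commit):

      #include <smmintrin.h> /* SSE4.1; build with -msse4.1 */
      #include <stdio.h>

      int main(void) {
          __m128i a    = _mm_set1_epi8(1);
          __m128i b    = _mm_set1_epi8(2);
          /* pblendvb keys off bit 7 of each mask byte. */
          __m128i mask = _mm_setr_epi8(-1, 0, -1, 0, -1, 0, -1, 0,
                                       -1, 0, -1, 0, -1, 0, -1, 0);
          __m128i r    = _mm_blendv_epi8(a, b, mask);

          unsigned char out[16];
          _mm_storeu_si128((__m128i *)out, r);
          for (int i = 0; i < 16; i++)
              printf("%d ", out[i]); /* prints 2 1 2 1 ... */
          printf("\n");
          return 0;
      }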
- Daniel Dunbar authored
  llvm-svn: 122247
- Daniel Dunbar authored
  llvm-svn: 122246
- Chris Lattner authored

  the same as setcc. Optimize ADDC(0,0,FLAGS) -> SET_CARRY(FLAGS). This is a step towards finishing off PR5443. In the testcase in that bug we now get:

      movq    %rdi, %rax
      addq    %rsi, %rax
      sbbq    %rcx, %rcx
      testb   $1, %cl
      setne   %dl
      ret

  instead of:

      movq    %rdi, %rax
      addq    %rsi, %rax
      movl    $0, %ecx
      adcq    $0, %rcx
      testq   %rcx, %rcx
      setne   %dl
      ret

  llvm-svn: 122219
- Chris Lattner authored

  doesn't, match it back to setb. On a 64-bit version of the testcase before we'd get:

      movq    %rdi, %rax
      addq    %rsi, %rax
      sbbb    %dl, %dl
      andb    $1, %dl
      ret

  now we get:

      movq    %rdi, %rax
      addq    %rsi, %rax
      setb    %dl
      ret

  llvm-svn: 122217
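
  The usual C-level source of this pattern is carry-out detection: after an unsigned add, "sum < a" is exactly the carry bit, which can now be selected to a single setb. A standalone sketch (my example; add_carries is an invented name):

      #include <assert.h>

      /* Returns 1 if a + b wraps (carry out of the add), else 0. */
      unsigned char add_carries(unsigned long a, unsigned long b) {
          unsigned long sum = a + b;
          return sum < a; /* the add's carry flag, i.e. setb */
      }

      int main(void) {
          assert(add_carries(~0UL, 1UL) == 1);
          assert(add_carries(1UL, 2UL) == 0);
          return 0;
      }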
- Chris Lattner authored
  llvm-svn: 122214
- Chris Lattner authored

  their carry dependencies with MVT::Flag operands) and use clean and beautiful EFLAGS dependences instead. We do this by changing the modelling of SBB/ADC to have EFLAGS input and outputs (which is what requires the previous scheduler change) and change X86 ISelLowering to custom lower ADDC and friends down to X86ISD::ADD/ADC/SUB/SBB nodes. With the previous series of changes, this causes no changes in the testsuite, woo.

  llvm-svn: 122213
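
  The ADDC/ADDE nodes being lowered here are what multi-word arithmetic expands to: add the low words, then add the high words plus the carry, the classic add/adc shape. A minimal two-word add in portable C (my sketch, not from the commit):

      #include <assert.h>

      /* 128-bit add from two 64-bit halves: the low add's carry
         (lo < a_lo) feeds the high add -- the ADDC/ADDE chain that
         maps onto x86 add/adc. */
      void add128(unsigned long a_lo, unsigned long a_hi,
                  unsigned long b_lo, unsigned long b_hi,
                  unsigned long *r_lo, unsigned long *r_hi) {
          unsigned long lo = a_lo + b_lo;
          unsigned long carry = lo < a_lo;
          *r_lo = lo;
          *r_hi = a_hi + b_hi + carry;
      }

      int main(void) {
          unsigned long lo, hi;
          add128(~0UL, 0UL, 1UL, 0UL, &lo, &hi);
          assert(lo == 0UL && hi == 1UL); /* carry propagated */
          return 0;
      }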
- Mon P Wang authored

  has run, e.g., prevent creating an i64 node from a v2i64 when i64 is not a legal type.

  llvm-svn: 122206
- Dec 19, 2010
- Chris Lattner authored

  consistently by moving it out of lowering into dag combine. Add some missing patterns for matching away extended versions of setcc_c.

  llvm-svn: 122201
- Chris Lattner authored

  going through the CSE maps to get it.

  llvm-svn: 122196
- Chris Lattner authored

  we don't need -disable-mmx anymore.

  llvm-svn: 122189
- Chris Lattner authored
  llvm-svn: 122187
- Chris Lattner authored

  generate them. Now we compile:

      define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
      entry:
        %0 = tail call %0 @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
        %cmp = extractvalue %0 %0, 1
        br i1 %cmp, label %if.then, label %if.end

  into:

      _X:                                 ## @X
      ## BB#0:                            ## %entry
              subl    $12, %esp
              movb    16(%esp), %al
              addb    20(%esp), %al
              jo      LBB0_2

  Before we were generating:

      _X:                                 ## @X
      ## BB#0:                            ## %entry
              pushl   %ebp
              movl    %esp, %ebp
              subl    $8, %esp
              movb    12(%ebp), %al
              testb   %al, %al
              setge   %cl
              movb    8(%ebp), %dl
              testb   %dl, %dl
              setge   %ah
              cmpb    %cl, %ah
              sete    %cl
              addb    %al, %dl
              testb   %dl, %dl
              setge   %al
              cmpb    %al, %ah
              setne   %al
              andb    %cl, %al
              testb   %al, %al
              jne     LBB0_2

  llvm-svn: 122186
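
  At the source level, overflow-checked addition is what reaches this intrinsic; in present-day Clang/GCC the same llvm.sadd.with.overflow form can be produced directly with __builtin_add_overflow (a builtin that postdates this commit, shown purely as an illustration):

      #include <stdio.h>

      /* Overflow-checked signed 8-bit add; Clang lowers this through
         llvm.sadd.with.overflow.i8. Illustration only -- the builtin
         is newer than this commit. */
      static int checked_add_i8(signed char a, signed char b, signed char *out) {
          return __builtin_add_overflow(a, b, out); /* nonzero on overflow */
      }

      int main(void) {
          signed char r;
          if (checked_add_i8(100, 100, &r))
              printf("overflow\n"); /* 200 does not fit in i8 */
          else
              printf("sum = %d\n", (int)r);
          return 0;
      }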
- Dec 18, 2010
- Rafael Espindola authored
  llvm-svn: 122147
- Rafael Espindola authored
  llvm-svn: 122134
- Rafael Espindola authored
  llvm-svn: 122121
- Dec 17, 2010
- Nate Begeman authored

  Remove unnecessary pandn patterns; the 'vnot' patfrag looks through bitcasts.

  llvm-svn: 122098
- Rafael Espindola authored
  llvm-svn: 122067
- Rafael Espindola authored
  llvm-svn: 122064
- Daniel Dunbar authored

  IsSymbolRefDifferenceFullyResolved, it turns out this does change behavior on enough cases for x86-32 that I would rather wait a bit on it.
  - In practice, we will want to change this eventually, because it only means we generate fewer relocations (it also eliminates the need for the horrible '.set' hack that Darwin requires in some places).

  llvm-svn: 122042
- Daniel Dunbar authored

  superseded and was effectively dead.

  llvm-svn: 122024
- Dec 16, 2010
- Rafael Espindola authored
  llvm-svn: 122005
- Daniel Dunbar authored

  interface.

  llvm-svn: 121981
- Daniel Dunbar authored
  llvm-svn: 121973
- Daniel Dunbar authored
  llvm-svn: 121971
- Daniel Dunbar authored

  the MCCodeEmitter, which seems like a better organization.
  - Also, cleaned up some magic constants while in the area.

  llvm-svn: 121953
- Dec 15, 2010
- Evan Cheng authored
  llvm-svn: 121908
- Dec 13, 2010
- Evan Cheng authored
  llvm-svn: 121677
- Dec 11, 2010
- Benjamin Kramer authored

  to catch cases where n != m with a shift.

  llvm-svn: 121608
- Dec 10, 2010
- Rafael Espindola authored
  llvm-svn: 121471
- Rafael Espindola authored
  llvm-svn: 121461
- Nate Begeman authored
  llvm-svn: 121445
- Nate Begeman authored

  Formalize the notion that AVX and SSE are non-overlapping extensions from the compiler's point of view. Per email discussion, we either want to always use VEX-prefixed instructions or never use them, and are taking "HasAVX" to mean "Always use VEX". Passing -mattr=-avx,+sse42 should serve to restore legacy SSE support when desirable.

  llvm-svn: 121439
- Rafael Espindola authored

      f:
              .cfi_startproc
              nop
              .cfi_endproc

  assembled (on ELF).

  llvm-svn: 121434
- Dec 09, 2010
- Nate Begeman authored
  llvm-svn: 121415