- Dec 20, 2011
Chandler Carruth authored
use the zero-undefined variants of CTTZ and CTLZ. These are just simple patterns for now, there is more to be done to make real world code using these constructs be optimized and codegen'ed properly on X86. The existing tests are spiffed up to check that we no longer generate unnecessary cmov instructions, and that we generate the very important 'xor' to transform bsr which counts the index of the most significant one bit to the number of leading (most significant) zero bits. Also they now check that when the variant with defined zero result is used, the cmov is still produced. llvm-svn: 146974
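A rough C illustration (not taken from the commit or its tests; the function names are invented) of the two shapes the tests distinguish: a count-leading-zeros call whose zero input is excluded, which can lower to a bare bsr plus an xor against 31, and a variant with a defined result for zero, which still needs the cmov.

```c
#include <stdio.h>

/* Caller guarantees x != 0, so __builtin_clz is well defined and the
 * zero-undefined CTLZ node can be used: bsr gives the index of the most
 * significant set bit, and xor-ing that index with 31 turns it into the
 * number of leading zeros, with no cmov. */
static unsigned clz_nonzero(unsigned x) {
    return (unsigned)__builtin_clz(x);
}

/* Here zero gets a defined result (32), so a cmov (or a branch) is still
 * expected in the generated code. */
static unsigned clz_defined(unsigned x) {
    return x ? (unsigned)__builtin_clz(x) : 32u;
}

int main(void) {
    printf("%u %u\n", clz_nonzero(0x00ffffffu), clz_defined(0u)); /* 8 32 */
    return 0;
}
```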
- Oct 26, 2011
Rafael Espindola authored
Patch by Sanjoy Das. llvm-svn: 143064
Rafael Espindola authored
MORESTACK_RET_RESTORE_R10; which are lowered to a RET and a RET followed by a MOV respectively. Having a fake instruction prevents the verifier from seeing a MachineBasicBlock end with a non-terminator (MOV). It also prevents the rather eccentric case of a MachineBasicBlock ending with RET but having successors nevertheless. Patch by Sanjoy Das. llvm-svn: 143062
- Sep 13, 2011
Eli Friedman authored
Fix the assembler strings for a couple of atomic instructions. Doesn't really matter much in practice, but it's a bit cleaner. llvm-svn: 139563
- Sep 07, 2011
Eli Friedman authored
Fix atomic load and store on x86 to pass -verify-machineinstrs (and possibly fix some subtle bugs involving passes which check mayStore()). This isn't exactly ideal, but it is good enough for the moment. llvm-svn: 139245
- Sep 03, 2011
Jakob Stoklund Olesen authored
The explanation about a 0 argument being materialized as xor is no longer valid. Rematerialization will check if EFLAGS is live before clobbering it. The code produced by X86TargetLowering::EmitLoweredSelect does not clobber EFLAGS. This causes one less testb instruction to be generated in the cmov.ll test case. llvm-svn: 139057
- Aug 30, 2011
Rafael Espindola authored
from DYNAMIC_STACKALLOC. Two new pseudo instructions (SEG_ALLOCA_32 and SEG_ALLOCA_64) which will match X86SegAlloca (based on word size) are also added. They will be custom emitted to inject the actual stack handling code. Patch by Sanjoy Das. llvm-svn: 138814
- Aug 26, 2011
Eli Friedman authored
llvm-svn: 138660
- Aug 24, 2011
Eli Friedman authored
llvm-svn: 138478
- Aug 10, 2011
Bruno Cardoso Lopes authored
llvm-svn: 137179
- Jul 27, 2011
Eli Friedman authored
X86ISD::MEMBARRIER does not require SSE2; it doesn't actually generate any code, and all x86 processors will honor the required semantics. llvm-svn: 136249
- Jun 16, 2011
Dan Gohman authored
considered safe enough in this context. llvm-svn: 133159
- May 21, 2011
Benjamin Kramer authored
llvm-svn: 131801
- May 20, 2011
Stuart Hastings authored
rdar://problem/8614450 llvm-svn: 131746
- May 19, 2011
Stuart Hastings authored
llvm-svn: 131654
Stuart Hastings authored
pseudos. rdar://problem/8614450 llvm-svn: 131641
- May 17, 2011
Eric Christopher authored
Finishes off rdar://8470697 llvm-svn: 131458
- May 11, 2011
Eric Christopher authored
Next up: xor and and. Part of rdar://8470697 llvm-svn: 131171
- May 10, 2011
Eric Christopher authored
cut and paste. llvm-svn: 131139
- May 08, 2011
Benjamin Kramer authored
"b + ((a < b) ? 1 : 0)" compiles into cmpl %esi, %edi adcl $0, %esi instead of cmpl %esi, %edi sbbl %eax, %eax andl $1, %eax addl %esi, %eax This saves a register, a false dependency on %eax (Intel's CPUs still don't ignore it) and it's shorter. llvm-svn: 131070
- Feb 17, 2011
Dan Gohman authored
these patterns. llvm-svn: 125759
- Jan 26, 2011
NAKAMURA Takumi authored
llvm-svn: 124272
NAKAMURA Takumi authored
llvm-svn: 124270
- Jan 18, 2011
Eric Christopher authored
the flags. llvm-svn: 123712
- Dec 20, 2010
Chris Lattner authored
doesn't, match it back to setb. On a 64-bit version of the testcase, before we'd get:

```
movq %rdi, %rax
addq %rsi, %rax
sbbb %dl, %dl
andb $1, %dl
ret
```

now we get:

```
movq %rdi, %rax
addq %rsi, %rax
setb %dl
ret
```

llvm-svn: 122217
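A hedged C sketch of the kind of code involved (not the actual testcase; the names are illustrative): computing the carry out of an unsigned add, which the combine now reads with setb instead of the sbbb/andb pair.

```c
#include <stdio.h>

struct add_result {
    unsigned long sum;
    unsigned char carry;
};

/* (a + b) < a is true exactly when the addition wrapped, i.e. when the
 * hardware carry flag was set; the expected codegen is addq followed by
 * setb rather than sbbb + andb. */
static struct add_result add_with_carry(unsigned long a, unsigned long b) {
    struct add_result r;
    r.sum = a + b;
    r.carry = (unsigned char)(r.sum < a);
    return r;
}

int main(void) {
    struct add_result r = add_with_carry(~0ul, 1ul);
    printf("sum=%lu carry=%u\n", r.sum, r.carry); /* sum=0 carry=1 */
    return 0;
}
```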
- Dec 19, 2010
Chris Lattner authored
consistently by moving it out of lowering into dag combine. Add some missing patterns for matching away extended versions of setcc_c. llvm-svn: 122201
- Dec 15, 2010
Evan Cheng authored
llvm-svn: 121908
- Dec 09, 2010
Eric Christopher authored
llvm-svn: 121328
- Nov 28, 2010
Rafael Espindola authored
llvm-svn: 120263
- Nov 27, 2010
Rafael Espindola authored
llvm-svn: 120225
- Nov 01, 2010
Chris Lattner authored
various X86 and ARM instructions that are bitten by this as isCodeGenOnly, as they are. llvm-svn: 117884
- Oct 31, 2010
Chris Lattner authored
and make it a hard error for instructions to not have an asm string. These instructions should be marked isCodeGenOnly. llvm-svn: 117861
- Oct 21, 2010
Michael J. Spencer authored
llvm-svn: 116984
Michael J. Spencer authored
llvm-svn: 116972
- Oct 13, 2010
Rafael Espindola authored
immediates instead of 8 bits ones. llvm-svn: 116410
Rafael Espindola authored
8 bit constants can be used. llvm-svn: 116403
- Oct 12, 2010
Dan Gohman authored
llvm-svn: 116319
- Oct 08, 2010
Chris Lattner authored
reapply: reimplement the second half of the or/add optimization. We should now with no changes. Turns out that one missing "Defs = [EFLAGS]" can upset things a bit. llvm-svn: 116040
Chris Lattner authored
"Reimplement (part of) the or -> add optimization. Matching 'or' into 'add'" With a critical fix: the add pseudos clobber EFLAGS. llvm-svn: 116039