- Jan 16, 2012
-
-
Jakob Stoklund Olesen authored
It is safe to move uses of such registers. llvm-svn: 148259
-
Jakob Stoklund Olesen authored
llvm-svn: 148251
-
Jakob Stoklund Olesen authored
Register masks will be used as a compact representation of large clobber lists. Currently, an x86 call instruction has some 40 operands representing call-clobbered registers. That's more than 1kB of useless operands per call site.

A register mask operand references a bit mask of call-preserved registers; everything else is clobbered. The bit mask will typically come from TargetRegisterInfo::getCallPreservedMask().

By abandoning ImplicitDefs for call-clobbered registers, it also becomes possible to share call instruction descriptions between calling conventions, and we can get rid of the WINCALL* instructions.

This patch introduces the new operand kind. Future patches will add RegMask support to target-independent passes before finally the fixed clobber lists can be removed from call instruction descriptions.

llvm-svn: 148250
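As a rough illustration (not part of the patch) of how a pass might consume such an operand, here is a minimal C++ sketch using the regmask accessors on MachineOperand; the exact iteration style varies by LLVM version. A set bit in the mask means the register is preserved across the call, a clear bit means it is clobbered:

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineOperand.h"

  // Returns true if a register mask operand on the call says PhysReg is
  // clobbered (its bit is clear in the call-preserved mask).
  static bool callClobbersPhysReg(const llvm::MachineInstr &Call,
                                  unsigned PhysReg) {
    for (unsigned i = 0, e = Call.getNumOperands(); i != e; ++i) {
      const llvm::MachineOperand &MO = Call.getOperand(i);
      if (MO.isRegMask() &&
          llvm::MachineOperand::clobbersPhysReg(MO.getRegMask(), PhysReg))
        return true;
    }
    return false; // no regmask claims a clobber (implicit-def operands aside)
  }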
-
David Blaikie authored
llvm-svn: 148230
-
Pete Cooper authored
Changed intrinsic ID operand to a target constant, as it's not used in any arithmetic and so should not be checked in legalisation. llvm-svn: 148228
-
- Jan 15, 2012
-
-
Nadav Rotem authored
We know that the blend instructions only use the MSB, so if the mask is sign-extended then we can convert it into a SHL instruction. This is a common pattern because the type-legalizer sign-extends the i1 type, which is used by the LLVM IR for the condition. Added a new optimization in SimplifyDemandedBits for SIGN_EXTEND_INREG -> SHL. llvm-svn: 148225
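A standalone arithmetic sketch (plain C++, not DAG code) of why the rewrite is sound: when only the most significant bit is consumed, sign-extending an i1 condition and shifting that bit into the MSB agree.

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint32_t cond : {0u, 1u}) {
      // SIGN_EXTEND_INREG from i1: 0 -> 0x00000000, 1 -> 0xFFFFFFFF.
      uint32_t sext = static_cast<uint32_t>(-static_cast<int32_t>(cond & 1));
      // SHL by 31 puts the same condition bit into the MSB position.
      uint32_t shl = cond << 31;
      // A blend only inspects the MSB, and there the two forms agree.
      assert((sext >> 31) == (shl >> 31));
    }
    return 0;
  }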
-
Benjamin Kramer authored
llvm-svn: 148218
-
Benjamin Kramer authored
llvm-svn: 148217
-
Craig Topper authored
llvm-svn: 148205
-
- Jan 14, 2012
-
-
Duncan Sands authored
non-determinism in the 32-bit dragonegg buildbot. Original commit message: Only emit the Leh_func_endN symbol when needed. llvm-svn: 148191
-
Rafael Espindola authored
llvm-svn: 148175
-
Andrew Trick authored
llvm-svn: 148174
-
Andrew Trick authored
llvm-svn: 148173
-
Andrew Trick authored
llvm-svn: 148172
-
Andrew Trick authored
llvm-svn: 148171
-
Andrew Trick authored
llvm-svn: 148170
-
Evan Cheng authored
live across BBs before register allocation. This miscompiled 197.parser when a cmp + b pair was optimized to a cbnz instruction even though the CPSR def is live into a successor:

  cbnz r6, LBB89_12
  ...
LBB89_12:
  ble LBB89_1

The fix consists of two parts:
1) Teach LiveVariables that some unallocatable registers might be live-outs, so don't mark their last use as a kill if they are.
2) The ARM constant-pool island pass shouldn't form cbz / cbnz if the conditional branch does not kill CPSR.

rdar://10676853
llvm-svn: 148168
-
Rafael Espindola authored
llvm-svn: 148156
-
- Jan 13, 2012
-
-
Rafael Espindola authored
llvm-svn: 148150
-
Andrew Trick authored
llvm-svn: 148143
-
Andrew Trick authored
llvm-svn: 148105
-
Andrew Trick authored
llvm-svn: 148103
-
Andrew Trick authored
llvm-svn: 148102
-
Evan Cheng authored
overly conservative. It was concerned about cases where it would prohibit folding simple [r, c] addressing modes, e.g.

  ldr r0, [r2]
  ldr r1, [r2, #4]
=>
  ldr r0, [r2], #4
  ldr r1, [r2]

Change the logic to look for such cases, which allows it to form indexed memory ops more aggressively.

rdar://10674430
llvm-svn: 148086
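For illustration only (C++ source rather than backend code), this is the kind of access pattern whose two loads plus pointer bump the backend can now fold into the post-indexed form shown above; the mapping in the comments is approximate:

  #include <cstdint>

  // Two adjacent loads followed by a base-pointer bump; with indexed memory
  // op formation this lowers roughly to "ldr r0, [r2], #4; ldr r1, [r2]".
  uint32_t sumPairAndAdvance(const uint32_t *&p) {
    uint32_t a = p[0];  // ldr r0, [r2]      (base bumped by #4 afterwards)
    uint32_t b = p[1];  // ldr r1, [r2, #4]  (just [r2] once the bump folds)
    p += 1;             // the increment that folds into the first load
    return a + b;
  }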
-
Bill Wendling authored
llvm-svn: 148065
-
Bill Wendling authored
The registers are placed into the saved registers list in reverse order, which is why the original loop was written to loop backwards. llvm-svn: 148064
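A tiny standalone sketch of that reasoning (hypothetical register numbers): when entries were pushed in reverse, walking the list backwards visits them in their original order.

  #include <cstdio>
  #include <vector>

  int main() {
    std::vector<int> savedRegs;
    for (int reg = 3; reg >= 1; --reg)  // recorded in reverse order: 3, 2, 1
      savedRegs.push_back(reg);
    for (size_t i = savedRegs.size(); i != 0; --i)  // loop backwards...
      std::printf("r%d ", savedRegs[i - 1]);        // ...prints r1 r2 r3
    return 0;
  }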
-
- Jan 12, 2012
-
-
Pete Cooper authored
Added FPOW, FEXP, FLOG to PromoteNode so that custom actions can be set to Promote for those operations. Sorry, no test case yet. llvm-svn: 148050
-
Evan Cheng authored
killed registers are needed below the insertion point, then unset the kill marker. Sorry, I'm not able to find a reduced test case. rdar://10660944 llvm-svn: 148043
-
Evan Cheng authored
llvm-svn: 148033
-
Jakob Stoklund Olesen authored
llvm-svn: 148031
-
Jakob Stoklund Olesen authored
llvm-svn: 147979
-
- Jan 11, 2012
-
-
Jakob Stoklund Olesen authored
This helper method is too simplistic for RAGreedy. llvm-svn: 147976
-
Jakob Stoklund Olesen authored
llvm-svn: 147975
-
Jakob Stoklund Olesen authored
No functional change. llvm-svn: 147972
-
Nadav Rotem authored
When we load the v12i32 type, the GenWidenVectorLoads method generates two loads, v8i32 and v4i32, and attempts to use CONCAT_VECTORS to join them. In this fix I concat undef values to widen the smaller value. The test "widen_load-2.ll" also exposes this bug on AVX. llvm-svn: 147964
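A standalone sketch of the widening idea (plain C++ containers standing in for DAG value types; the names are illustrative): pad the narrower piece with don't-care lanes so both halves have the same width before concatenation.

  #include <array>
  #include <cstddef>
  #include <cstdint>

  using V4  = std::array<uint32_t, 4>;
  using V8  = std::array<uint32_t, 8>;
  using V16 = std::array<uint32_t, 16>;

  // A v12i32 load is split into a v8i32 and a v4i32 piece. CONCAT_VECTORS
  // needs equally sized operands, so the v4i32 piece is first widened with
  // undef lanes; only the first 12 lanes of the result are meaningful.
  static V16 concatWiden(const V8 &lo, const V4 &hi) {
    V8 hiWide{};                           // lanes 4..7 model the undef padding
    for (std::size_t i = 0; i < 4; ++i)
      hiWide[i] = hi[i];
    V16 out{};
    for (std::size_t i = 0; i < 8; ++i) {  // the CONCAT_VECTORS step
      out[i]     = lo[i];
      out[8 + i] = hiWide[i];
    }
    return out;                            // only out[0..11] carry defined data
  }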
-
Chandler Carruth authored
detect a pattern which can be implemented with a small 'shl' embedded in the addressing mode scale. This happens in real code as follows:

  unsigned x = my_accelerator_table[input >> 11];

Here we have some lookup table that we look into using the high bits of 'input'. Each entity in the table is 4 bytes, which means this implicitly gets turned into (once lowered out of a GEP):

  *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));

The shift right followed by a shift left is canonicalized to a smaller shift right and masking off the low bits. That hides the shift right which x86 has an addressing mode designed to support. We now detect masks of this form, and produce the longer shift right followed by the proper addressing mode. In addition to saving a (rather large) instruction, this also reduces stalls in Intel chips on benchmarks I've measured.

In order for all of this to work, one part of the DAG needs to be canonicalized *still further* than it currently is. This involves removing pointless 'trunc' nodes between a zextload and a zext. Without that, we end up generating spurious masks and hiding the pattern.

llvm-svn: 147936
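A standalone check (plain C++) of the arithmetic identity behind the combine: the canonical "shorter shift right plus mask" form and the "shift right then shift left" form that fits x86's scaled addressing compute the same value.

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint32_t input : {0u, 0x7FFu, 0x800u, 0xDEADBEEFu, 0xFFFFFFFFu}) {
      uint32_t masked = (input >> 9) & ~3u;  // what the DAG canonicalizes to
      uint32_t scaled = (input >> 11) << 2;  // shr + shl; the shl fits the scale
      assert(masked == scaled);
    }
    return 0;
  }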
-
Jakob Stoklund Olesen authored
Consider this code:

  int h() {
    int x;
    try {
      x = f();
      g();
    } catch (...) {
      return x+1;
    }
    return x;
  }

The variable x is undefined on the first edge to the landing pad, but it has the f() return value on the second edge to the landing pad.

SplitAnalysis::getLastSplitPoint() would assume that the return value from f() was live into the landing pad when f() throws, which is of course impossible.

Detect these cases, and treat them as if the landing pad wasn't there. This allows spill code to be inserted after the function call to f().

<rdar://problem/10664933>
llvm-svn: 147912
-
Jakob Stoklund Olesen authored
Delete the alternative implementation in LiveIntervalAnalysis. These functions computed the same thing, but SplitAnalysis caches the result. llvm-svn: 147911
-
Evan Cheng authored
the physical registers are not allocatable. llvm-svn: 147902
-
- Jan 10, 2012
-
-
Evan Cheng authored
llvm-svn: 147884
-