- Nov 09, 2010
-
Andrew Trick authored
llvm-svn: 118613
-
Andrew Trick authored
llvm-svn: 118604
-
Dan Gohman authored
in order to fold it into a load. llvm-svn: 118471
-
Dale Johannesen authored
{i64, i64} from matching i128. llvm-svn: 118465
-
- Nov 08, 2010
-
Andrew Trick authored
handle cases in which a register is unavailable for spill code. Adds LiveIntervalUnion::extract. While processing interferences on a live virtual register, reuses the same Query object for each physical reg. llvm-svn: 118423
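Below is a minimal sketch of the query-reuse idea with stand-in types (these are simplified, not the real LiveIntervalUnion or Query classes): one Query object is re-initialized per physical register instead of being reconstructed, so its cached interference state can be managed in place.

```cpp
#include <cstdio>

struct LiveInterval { int vreg; };

struct Query {
  const LiveInterval *VirtReg = nullptr;
  int PhysReg = -1;
  bool Cached = false;
  bool Interferes = false;

  // Re-point the query at a new (vreg, physreg) pair, invalidating the cache.
  void init(const LiveInterval *VR, int PR) {
    VirtReg = VR; PhysReg = PR; Cached = false;
  }

  bool checkInterference() {
    if (!Cached) {
      // Stand-in for the real interference scan over the union.
      Interferes = (VirtReg->vreg % 7 == PhysReg % 7);
      Cached = true;
    }
    return Interferes;
  }
};

int main() {
  LiveInterval VR{42};
  Query Q;  // one object, reused for every candidate physical register
  for (int PReg = 0; PReg < 16; ++PReg) {
    Q.init(&VR, PReg);
    if (Q.checkInterference())
      std::printf("vreg %d interferes with preg %d\n", VR.vreg, PReg);
  }
  return 0;
}
```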
-
Che-Liang Chiou authored
llvm-svn: 118394
-
- Nov 06, 2010
-
Benjamin Kramer authored
llvm-svn: 118342
-
- Nov 05, 2010
-
Duncan Sands authored
to perform the copy, which may involve a lot of memory [*]. It would be good if the fall-back code generated something reasonable, i.e. did the copy in a loop, rather than vast numbers of loads and stores. Add a note about this. Currently target specific code seems to always kick in, so this is more of a theoretical issue than a practical one now that X86 has been fixed. [*] It's amazing how often people pass megabyte-long arrays by copy... llvm-svn: 118275
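The pattern the note warns about is easy to produce from C++; this is an illustrative example (the names are made up, and it is scaled down to 256 KB so the by-value call stays within default stack limits):

```cpp
// Passing a large aggregate by value forces a full copy, which the backend
// lowers to a memcpy -- or, in the fall-back path described above, to a long
// sequence of loads and stores.
struct Huge { char data[1 << 18]; };  // ~256 KB aggregate

long byValue(Huge h) { return h.data[0]; }        // copies the whole struct
long byRef(const Huge &h) { return h.data[0]; }   // copies nothing

int main() {
  static Huge h = {};
  return static_cast<int>(byValue(h) + byRef(h));
}
```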
-
- Nov 04, 2010
-
Rafael Espindola authored
llvm-svn: 118254
-
Rafael Espindola authored
they do :-( llvm-svn: 118250
-
Rafael Espindola authored
llvm-svn: 118249
-
Duncan Sands authored
and as such can be represented by an MVT - the more complicated EVT is not needed. Use MVT for ValVT everywhere. llvm-svn: 118245
-
Jakob Stoklund Olesen authored
This way, InlineSpiller does the same amount of splitting as the standard spiller. Splitting should really be guided by the register allocator, and doesn't belong in the spiller at all. llvm-svn: 118216
-
- Nov 03, 2010
-
Eric Christopher authored
just do it earlier too. llvm-svn: 118195
-
Jakob Stoklund Olesen authored
splitting needs them. llvm-svn: 118194
-
Jakob Stoklund Olesen authored
llvm-svn: 118193
-
Duncan Sands authored
with a SimpleValueType, while an EVT supports equality and inequality comparisons with SimpleValueType. llvm-svn: 118169
-
Duncan Sands authored
value type, so there is no point in passing it around using an EVT. Use the simpler MVT everywhere. Rather than trying to propagate this information maximally through all the code that uses the calling convention machinery, I chose to make a mainly low-impact change instead. llvm-svn: 118167
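The distinction these MVT/EVT commits rely on can be sketched with simplified stand-ins (these are not the real llvm::MVT and llvm::EVT definitions): MVT is a bare enum of simple machine types, while EVT can additionally describe extended types, which is why a value that is guaranteed simple is cheaper to pass around as an MVT.

```cpp
#include <cassert>

enum class MVT { i1, i8, i16, i32, i64, Other };

struct EVT {
  MVT Simple = MVT::Other;
  unsigned ExtendedBits = 0;  // nonzero only for non-simple (extended) types

  bool isSimple() const { return ExtendedBits == 0; }

  // An EVT supports direct equality comparison with a simple value type...
  bool operator==(MVT M) const { return isSimple() && Simple == M; }

  // ...and converting to MVT is only legal when the type is simple.
  MVT getSimpleVT() const { assert(isSimple()); return Simple; }
};

int main() {
  EVT V{MVT::i64, 0};
  assert(V == MVT::i64);    // equality with a SimpleValueType
  MVT M = V.getSimpleVT();  // safe: V is simple
  return M == MVT::i64 ? 0 : 1;
}
```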
-
Eric Christopher authored
this by using an undef as a pointer. Fixes rdar://8625016 llvm-svn: 118164
-
Dan Gohman authored
encounters (and:i64 (shl:i64 (load:i64), 1), 0xffffffff). This fixes rdar://8606584. llvm-svn: 118143
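For illustration, here is a small hypothetical C++ function that lowers to exactly this DAG pattern (an i64 load, shifted left by one, masked to the low 32 bits); actual codegen naturally depends on target and optimization level:

```cpp
#include <cstdint>

// Produces (and:i64 (shl:i64 (load:i64), 1), 0xffffffff).
uint64_t shiftAndMask(const uint64_t *p) {
  return (*p << 1) & 0xffffffffULL;
}
```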
-
Evan Cheng authored
1. Fix the pre-RA scheduler so it doesn't try to push instructions above calls to "optimize for latency". Call instructions don't have the right latency, and this is more likely to introduce spills. 2. Fix the if-converter cost function. For ARM, it should use instruction latencies, not the number of micro-ops, since a multi-latency instruction is completely executed even when its predicate is false. Also, some instructions will be "slower" when they are predicated, because the register def becomes an implicit input. rdar://8598427 llvm-svn: 118135
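As an illustration of the second point, a hedged sketch of an if-conversion profitability check that sums latencies rather than micro-op counts; the types and threshold are invented for the example, and this is not ARM's actual heuristic:

```cpp
struct InstrInfo { unsigned Latency; unsigned MicroOps; };

// A predicated multi-cycle instruction still occupies the pipeline for its
// full latency even when its predicate is false, so the cost of the
// if-converted block is the sum of latencies, not of micro-op counts.
bool shouldIfConvert(const InstrInfo *Instrs, unsigned N,
                     unsigned BranchMispredictCost) {
  unsigned PredicatedCost = 0;
  for (unsigned I = 0; I < N; ++I)
    PredicatedCost += Instrs[I].Latency;  // latency, not MicroOps
  return PredicatedCost < BranchMispredictCost;
}
```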
-
- Nov 02, 2010
-
Andrew Trick authored
rdar://problem/8612856 The anti-dependence breaker needs to check all definitions of the antidependent register to avoid multiple defs of the same new register. llvm-svn: 118032
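One way to realize that constraint, as a simplified sketch (the types are invented; this is not the actual anti-dependency breaker code): refuse to rename unless the register has exactly one definition in the region under consideration.

```cpp
#include <vector>

struct Def { unsigned Reg; unsigned Index; };

// Renaming one def while another def of the same register stays in the
// region would create multiple defs of the new register, so require a
// single def before breaking the anti-dependence by renaming.
bool safeToRename(const std::vector<Def> &Defs, unsigned Reg,
                  unsigned RegionBegin, unsigned RegionEnd) {
  unsigned Count = 0;
  for (const Def &D : Defs)
    if (D.Reg == Reg && D.Index >= RegionBegin && D.Index < RegionEnd)
      ++Count;
  return Count == 1;
}
```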
-
Devang Patel authored
llvm-svn: 118027
-
Devang Patel authored
llvm-svn: 118022
-
Devang Patel authored
llvm-svn: 118020
-
Jakob Stoklund Olesen authored
```
BB#1: derived from LLVM BB %bb.nph28
    Live Ins: %AL
    Predecessors according to CFG: BB#0
        TEST8rr %reg16384<kill>, %reg16384, %EFLAGS<imp-def>; GR8:%reg16384
        JNE_4 <BB#2>, %EFLAGS<imp-use,kill>
        JMP_4 <BB#2>
    Successors according to CFG: BB#2 BB#2
```
These double CFG edges only ever occur in bugpoint-generated code, so there is no need to attempt something clever. llvm-svn: 117992
-
Jakob Stoklund Olesen authored
edges on demand. llvm-svn: 117982
-
Jakob Stoklund Olesen authored
It is legal for an instruction to have two operands using the same register, only one of which is marked as a kill. This is interpreted as a kill. llvm-svn: 117981
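The rule can be stated as a small predicate over simplified operand records (a sketch, not the real MachineOperand API):

```cpp
#include <vector>

struct Operand { unsigned Reg; bool IsUse; bool IsKill; };

// If any use of Reg carries a kill flag, the instruction kills Reg as a
// whole, even when another operand uses the same register without the flag.
bool killsRegister(const std::vector<Operand> &Ops, unsigned Reg) {
  for (const Operand &Op : Ops)
    if (Op.IsUse && Op.Reg == Reg && Op.IsKill)
      return true;  // one kill-flagged use is enough
  return false;
}
```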
-
Jakob Stoklund Olesen authored
source, and let rewrite() clean it up. This way, kill flags on the inserted copies are fixed as well during rewrite(). We can't just assume that all the copies we insert are going to be kills since critical edges into loop headers sometimes require both source and dest to be live out of a block. llvm-svn: 117980
-
- Nov 01, 2010
-
Jakob Stoklund Olesen authored
At least X86FloatingPoint requires correct kill flags after register allocation, and targets using register scavenging benefit. Conservative kill flags are not enough. llvm-svn: 117960
-
Jakob Stoklund Olesen authored
llvm-svn: 117959
-
Bill Wendling authored
at more than those which define CPSR. You can have this situation:

```
(1) subs  ...
(2) sub   r6, r5, r4
(3) movge ...
(4) cmp   r6, 0
(5) movge ...
```

We cannot convert (2) to "subs" because (3) is using the CPSR set by (1). There's an analogous situation here:

```
(1) sub   r1, r2, r3
(2) sub   r4, r5, r6
(3) cmp   r4, ...
(5) movge ...
(6) cmp   r1, ...
(7) movge ...
```

We cannot convert (1) to "subs" because of the intervening use of CPSR. llvm-svn: 117950
-
Jakob Stoklund Olesen authored
give them individual stack slots once they are actually spilled. llvm-svn: 117945
-
Jakob Stoklund Olesen authored
When an instruction refers to a spill slot with a LiveStacks entry, check that the spill slot is live at the instruction. llvm-svn: 117944
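A sketch of what such a check looks like, with simplified stand-in types rather than the real MachineVerifier and LiveStacks interfaces:

```cpp
#include <cstdio>
#include <vector>

struct Range { unsigned Begin, End; };  // half-open [Begin, End)

bool liveAt(const std::vector<Range> &Interval, unsigned Pos) {
  for (const Range &R : Interval)
    if (Pos >= R.Begin && Pos < R.End) return true;
  return false;
}

// Report an error when an instruction references a spill slot whose
// recorded live interval does not cover the instruction's position.
void verifySpillSlotUse(const std::vector<Range> &SlotLiveness,
                        unsigned InstrPos, int Slot) {
  if (!liveAt(SlotLiveness, InstrPos))
    std::fprintf(stderr, "error: spill slot %d used at %u but not live\n",
                 Slot, InstrPos);
}
```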
-
Bill Wendling authored
llvm-svn: 117904
-
- Oct 31, 2010
-
Eric Christopher authored
llvm-svn: 117879
-
Bill Wendling authored
looks like is happening:

Without the peephole optimizer:

```
(1)  sub   r6, r6, #32
     orr   r12, r12, lr, lsl r9
     orr   r2, r2, r3, lsl r10
(x)  cmp   r6, #0
     ldr   r9, LCPI2_10
     ldr   r10, LCPI2_11
(2)  sub   r8, r8, #32
(a)  movge r12, lr, lsr r6
(y)  cmp   r8, #0
LPC2_10:
     ldr   lr, [pc, r10]
(b)  movge r2, r3, lsr r8
```

With the peephole optimizer:

```
     ldr   r9, LCPI2_10
     ldr   r10, LCPI2_11
(1*) subs  r6, r6, #32
(2*) subs  r8, r8, #32
(a*) movge r12, lr, lsr r6
(b*) movge r2, r3, lsr r8
```

(1) is used by (x) for the conditional move at (a). (2) is used by (y) for the conditional move at (b). After the peephole optimizer, the flags resulting from (1*) are ignored and only the flags from (2*) are considered for both conditional moves. llvm-svn: 117876
-
Nicolas Geoffray authored
llvm-svn: 117867
-
- Oct 30, 2010
-
Jakob Stoklund Olesen authored
llvm-svn: 117765
-
Jakob Stoklund Olesen authored
a basic block. llvm-svn: 117764
-