- Mar 27, 2009
Rafael Espindola authored
llvm-svn: 67848
- Mar 13, 2009
Evan Cheng authored
Fix some significant problems with constant pools that resulted in unnecessary padding between constant pool entries, larger-than-necessary alignments (e.g. 8-byte alignment for .literal4 sections), and potentially other issues.
Problems:
1. The ConstantPoolSDNode alignment field is the log2 value of the alignment requirement. This is not consistent with other SDNode variants.
2. The MachineConstantPool alignment field is also a log2 value.
3. However, some places create ConstantPoolSDNodes with alignment values rather than log2 values. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created, but the asm printer groups entries by section, so the offsets are no longer valid; the asm printer nevertheless uses them to determine the padding between entries.
5. The asm printer uses an expensive data structure (a multimap) to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries. This is non-deterministic.
Solutions:
1. The ConstantPoolSDNode alignment field now keeps a non-log2 value.
2. The MachineConstantPool alignment field also keeps a non-log2 value.
3. Functions that create ConstantPool nodes pass in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field; it is replaced with an alignment field. Offsets are not computed when constant pool entries are created; they are computed on the fly in the asm printer and the JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done.
7. The JIT is changed to compute entry offsets on the fly.
llvm-svn: 66875
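To make solution 4 concrete, here is a minimal standalone sketch (not the LLVM code itself) of computing constant pool entry offsets on the fly from per-entry sizes and byte alignments; Entry and computeOffsets are hypothetical names introduced only for this illustration.

    #include <vector>

    // Hypothetical stand-in for a constant pool entry: size and required
    // alignment, both in plain bytes (not log2 values).
    struct Entry {
      unsigned size;
      unsigned alignment;  // power of two
    };

    // Walk the entries in emission order, rounding the running offset up to
    // each entry's alignment; the rounding difference is the padding emitted
    // between entries.
    std::vector<unsigned> computeOffsets(const std::vector<Entry> &pool) {
      std::vector<unsigned> offsets;
      unsigned offset = 0;
      for (const Entry &e : pool) {
        offset = (offset + e.alignment - 1) & ~(e.alignment - 1);
        offsets.push_back(offset);
        offset += e.size;
      }
      return offsets;
    }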
- Mar 04, 2009
Dan Gohman authored
llvm-svn: 66057
Dan Gohman authored
of MachineInstr def operands must be subtracted out. This bug was uncovered by the recent x86 EFLAGS optimization. Before that, the only instructions that ever needed unfolding were things like CMP32rm, where NumDefs is zero. llvm-svn: 66056
- Feb 22, 2009
Evan Cheng authored
Do not consider MMX_MOVD64rr a move instruction. The source register is in GR32, the destination is VR64. They are not compatible. llvm-svn: 65273
- Feb 18, 2009
Dan Gohman authored
llvm-svn: 64891
- Feb 13, 2009
Dale Johannesen authored
There were some that might even matter in X86FastISel. llvm-svn: 64437
Dale Johannesen authored
Modify callers. llvm-svn: 64409
- Feb 11, 2009
Bill Wendling authored
llvm-svn: 64329
- Feb 10, 2009
Evan Cheng authored
llvm-svn: 64186
- Feb 09, 2009
Evan Cheng authored
surprise to some callers, e.g. the register coalescer. For now, add a parameter that tells AnalyzeBranch whether it's safe to modify the mbb. A better solution is out there, but I don't have time to deal with it right now. llvm-svn: 64124
- Feb 06, 2009
Evan Cheng authored
llvm-svn: 63938
Evan Cheng authored
Add TargetInstrInfo::isSafeToMoveRegisterClassDefs. It returns true if it's safe to move an instruction which defines a value in the register class. Replace the pre-splitting-specific IgnoreRegisterClassBarriers with this new hook. llvm-svn: 63936
Dale Johannesen authored
its corresponding getTargetNode. Lots of caller changes. llvm-svn: 63904
- Feb 03, 2009
Bill Wendling authored
created. Specifically, those BuildMIs which use "DebugLoc::getUnknownLoc()". I'll remove them soon. llvm-svn: 63584
- Jan 20, 2009
Evan Cheng authored
llvm-svn: 62600
- Jan 15, 2009
Dan Gohman authored
llvm-svn: 62267
- Jan 09, 2009
Dan Gohman authored
llvm-svn: 61972
- Jan 07, 2009
Dan Gohman authored
llvm-svn: 61841
Dan Gohman authored
llvm-svn: 61836
Dan Gohman authored
X86_COND_B and X86_COND_AE, respectively. llvm-svn: 61835
Dan Gohman authored
converted to LEA64_32r in x86's convertToThreeAddress. This replaces code like this:
    movl %esi, %edi
    inc %edi
with this:
    lea 1(%rsi), %edi
which appears to be beneficial. llvm-svn: 61830
- Jan 05, 2009
Dan Gohman authored
llvm-svn: 61715
- Dec 23, 2008
Dan Gohman authored
llvm-svn: 61356
- Dec 18, 2008
Mon P Wang authored
llvm-svn: 61211
- Dec 05, 2008
Evan Cheng authored
Reason #3 from 60595 doesn't hold true. If we can fold a PIC load from the constpool into a use, the rewrite happens at spill time (not in VirtRegMap). Later on, if the GlobalBaseReg is spilled, the spiller can see that the use references GlobalBaseReg and do the right thing. llvm-svn: 60596
Evan Cheng authored
Effectively undo 60461 in PIC mode, which simply transformed V_SET0 / V_SETALLONES into a load from the constpool in order to fold into restores. This is not safe to do when the PIC base is being used, for a number of reasons:
1. GlobalBaseReg may have been spilled.
2. It may not be live at the use.
3. The spiller doesn't know this is happening, so it won't prevent GlobalBaseReg from being spilled later. (That by itself is a nasty hack; it's needed because we don't insert the reload until later.)
llvm-svn: 60595
- Dec 03, 2008
Dan Gohman authored
parts, and add target-independent code to add/preserve MachineMemOperands. llvm-svn: 60488
Dan Gohman authored
foldMemoryOperand how to "fold" them, by converting them into constant-pool loads. When they aren't folded, they use xorps/cmpeqd, but for example when register pressure is high, they may now be folded as memory operands, which reduces register pressure. Also, mark V_SET0 isAsCheapAsAMove so that two-address-elimination will remat it instead of copying zeros around (V_SETALLONES was already marked). llvm-svn: 60461
- Dec 02, 2008
Bill Wendling authored
llvm-svn: 60385
Bill Wendling authored
llvm-svn: 60383
Bill Wendling authored
- Add support for seto, setno, setc, and setnc instructions. llvm-svn: 60382
- Nov 26, 2008
Bill Wendling authored
the conditional for the BRCOND statement. For instance, it will generate:
    addl %eax, %ecx
    jo LOF
instead of:
    addl %eax, %ecx
    ; About 10 instructions to compare the signs of LHS, RHS, and sum.
    jl LOF
llvm-svn: 60123
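For reference, here is a minimal C++ sketch (not the commit's testcase) of the hand-written sign-comparison overflow check that the jo-based lowering makes unnecessary; signedAddOverflows is a hypothetical helper.

    // Signed addition overflows iff the operands agree in sign and the sum's
    // sign differs: the "compare the signs of LHS, RHS, and sum" idea above.
    bool signedAddOverflows(int lhs, int rhs) {
      int sum = (int)((unsigned)lhs + (unsigned)rhs);  // wrap-around add, no UB
      return ((lhs ^ sum) & (rhs ^ sum)) < 0;
    }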
Dan Gohman authored
llvm-svn: 60095
- Nov 18, 2008
Dan Gohman authored
llvm-svn: 59542
- Oct 27, 2008
Evan Cheng authored
For now, don't split live intervals around x87 stack register barriers. FpGET_ST0_80 must be right after a call instruction (and ADJCALLSTACKUP) so we need to find a way to prevent reload of x87 registers between them. llvm-svn: 58230
- Oct 25, 2008
Nicolas Geoffray authored
llvm-svn: 58141
- Oct 21, 2008
Dan Gohman authored
Where previously LLVM might emit code like this:
    ucomisd %xmm1, %xmm0
    setne %al
    setp %cl
    orb %al, %cl
    jne .LBB4_2
it now emits this:
    ucomisd %xmm1, %xmm0
    jne .LBB4_2
    jp .LBB4_2
It has fewer instructions and uses fewer registers, but it does have more branches. And in the case that this code is followed by a non-fallthrough edge, it may be followed by a jmp instruction, resulting in three branch instructions in sequence. Some effort is made to avoid this situation. To achieve this, X86ISelLowering.cpp now recognizes FCMP_OEQ and FCMP_UNE in lowered form, and replaces them with code that emits two branches, except in the case where it would require converting a fall-through edge to an explicit branch. Also, X86InstrInfo.cpp's branch analysis and transform code now knows how to handle blocks with multiple conditional branches. It uses loops instead of having fixed checks for up to two instructions. It can now analyze and transform code generated from FCMP_OEQ and FCMP_UNE. llvm-svn: 57873
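As a minimal illustration (not LLVM source), consider a double comparison feeding a branch; report is a hypothetical function. The != lowers through FCMP_UNE, which is also true when an operand is NaN, so after the ucomisd the taken path needs both jne (ZF clear) and jp (PF set, i.e. unordered), which is the two-branch lowering described above.

    #include <cstdio>

    void report(double a, double b) {
      if (a != b)  // FCMP_UNE: true if unordered (NaN) or unequal
        std::puts("operands differ (or one is NaN)");
    }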
Dan Gohman authored
the copy instruction from the instruction list before asking the target to create the new instruction. This gets the old instruction out of the way so that it doesn't interfere with the target's rematerialization code. In the case of x86, this helps it find more cases where EFLAGS is not live. Also, in X86InstrInfo.cpp, teach isSafeToClobberEFLAGS to check whether it has reached the end of the block after scanning each instruction, instead of just before. This lets it notice when the end of the block is only two instructions away, without doing any additional scanning. These changes allow rematerialization to clobber EFLAGS in more cases, for example using xor instead of mov to set the return value to zero in the included testcase. llvm-svn: 57872
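The scanning change can be pictured with a simplified, standalone sketch; this is not the actual X86InstrInfo::isSafeToClobberEFLAGS code, and Instr, its fields, and safeToClobberFlags are hypothetical stand-ins. The point is to test for the end of the block after examining each instruction, so a nearby block end is noticed without extra scanning.

    #include <cstddef>
    #include <vector>

    struct Instr {
      bool readsFlags;    // instruction uses EFLAGS
      bool definesFlags;  // instruction redefines EFLAGS
    };

    bool safeToClobberFlags(const std::vector<Instr> &block, std::size_t pos,
                            unsigned window = 4) {
      for (unsigned scanned = 0; scanned <= window; ++scanned) {
        if (pos + scanned >= block.size())
          return true;   // reached the end of the block: flags are dead here
        const Instr &inst = block[pos + scanned];
        if (inst.readsFlags)
          return false;  // a later instruction still needs the flags
        if (inst.definesFlags)
          return true;   // flags are rewritten before any read
      }
      return false;      // scan budget exhausted: conservatively assume live
    }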
- Oct 17, 2008
Dan Gohman authored
shift counts, and patterns that match dynamic shift counts when the subtract is obscured by a truncate node. Add DAGCombiner support for recognizing rotate patterns when the shift counts are defined by truncate nodes. Fix and simplify the code for commuting shld and shrd instructions to work even when the given instruction doesn't have a parent, and when the caller needs a new instruction. These changes allow LLVM to use the shld, shrd, rol, and ror instructions on x86 to replace equivalent code using two shifts and an or in many more cases. llvm-svn: 57662
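As an example of the kind of source pattern this enables (a sketch, not code from the commit; rotl32 is a hypothetical helper), the classic two-shifts-plus-or rotate idiom can now map onto a single rol instruction, even when the shift amount comes through a truncate of a wider value.

    #include <cstdint>

    std::uint32_t rotl32(std::uint32_t x, unsigned n) {
      n &= 31;  // keep both shifts defined, including for n == 0
      return (x << n) | (x >> ((32 - n) & 31));
    }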