- Apr 08, 2009
Bill Wendling authored
builds.
--- Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td
llvm-svn: 68560
- Apr 07, 2009
Rafael Espindola authored
This introduces a small regression in generated code quality in the case where we are just computing addresses, not loading values. Will work on it and on X86-64 support. llvm-svn: 68552
- Mar 28, 2009
Rafael Espindola authored
llvm-svn: 67949
Rafael Espindola authored
of operands in an address in so many places. llvm-svn: 67945
- Mar 27, 2009
Rafael Espindola authored
llvm-svn: 67848
- Mar 13, 2009
Evan Cheng authored
Fix some significant problems with constant pools that resulted in unnecessary padding between constant pool entries, larger-than-necessary alignments (e.g. 8-byte alignment for .literal4 sections), and potentially other issues.
Problems:
1. The ConstantPoolSDNode alignment field is the log2 of the alignment requirement. This is not consistent with other SDNode variants.
2. The MachineConstantPool alignment field is also a log2 value.
3. However, some places create ConstantPoolSDNodes with the alignment value rather than its log2. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created, but the asm printer then groups them by section, which invalidates those offsets. The asm printer nevertheless uses them to determine the padding between entries.
5. The asm printer uses an expensive data structure, a multimap, to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries, which is non-deterministic.
Solutions:
1. The ConstantPoolSDNode alignment field is changed to keep a non-log2 value.
2. The MachineConstantPool alignment field is likewise changed to keep a non-log2 value.
3. Functions that create ConstantPool nodes now pass in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field; it is replaced with an alignment field. Offsets are no longer computed when constant pool entries are created; they are computed on the fly in the asm printer and the JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done.
7. JIT code is changed to compute entry offsets on the fly.
llvm-svn: 66875
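A minimal sketch of the log2 mix-up described in problems 1-3; the variables below are illustrative, not the actual LLVM fields:

    #include <cstdio>

    int main() {
      // The field was meant to hold log2(alignment)...
      unsigned StoredCorrectly = 3;   // log2(8) for an 8-byte requirement
      // ...but some call sites stored the byte alignment itself.
      unsigned StoredByMistake = 8;   // 8 passed where a log2 value was expected
      std::printf("decoded alignments: %u vs %u bytes\n",
                  1u << StoredCorrectly,    // 8 bytes, as intended
                  1u << StoredByMistake);   // 256 bytes, the same kind of inflated figure mentioned above
      return 0;
    }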
- Mar 04, 2009
Dan Gohman authored
llvm-svn: 66057
Dan Gohman authored
of MachineInstr def operands must be subtracted out. This bug was uncovered by the recent x86 EFLAGS optimization. Before that, the only instructions that ever needed unfolding were things like CMP32rm, where NumDefs is zero. llvm-svn: 66056
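A rough sketch of the indexing pitfall being fixed; the helper below is hypothetical and only illustrates why instructions with zero defs masked the bug:

    // A MachineInstr's operand list begins with its defs, so an index into the
    // full operand list and an index that counts only uses differ by NumDefs.
    // For CMP32rm, NumDefs == 0 and the two coincide, which is why the missing
    // subtraction went unnoticed until instructions with defs needed unfolding.
    unsigned useIndexFromOperandIndex(unsigned OpIdx, unsigned NumDefs) {
      return OpIdx - NumDefs; // "subtract out" the leading def operands
    }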
- Feb 22, 2009
Evan Cheng authored
Do not consider MMX_MOVD64rr a move instruction. The source register is in GR32, the destination is in VR64. They are not compatible. llvm-svn: 65273
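A hedged illustration of the underlying rule; the check below is not the actual coalescer code, only a sketch of why a cross-class copy must not be reported as a move:

    #include <cstring>

    // Treating an instruction as a register-to-register move invites the
    // coalescer to merge its source and destination registers, which is only
    // sound when both come from compatible register classes. MMX_MOVD64rr reads
    // a GR32 source and writes a VR64 destination, so this sketch returns false.
    bool sameRegisterClass(const char *SrcRC, const char *DstRC) {
      return std::strcmp(SrcRC, DstRC) == 0; // "GR32" vs "VR64" -> false
    }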
- Feb 18, 2009
Dan Gohman authored
llvm-svn: 64891
- Feb 13, 2009
Dale Johannesen authored
There were some that might even matter in X86FastISel. llvm-svn: 64437
Dale Johannesen authored
Modify callers. llvm-svn: 64409
- Feb 11, 2009
Bill Wendling authored
llvm-svn: 64329
- Feb 10, 2009
Evan Cheng authored
llvm-svn: 64186
- Feb 09, 2009
Evan Cheng authored
surprise to some callers, e.g. the register coalescer. For now, add a parameter that tells AnalyzeBranch whether it's safe to modify the MBB. A better solution is out there, but I don't have time to deal with it right now. llvm-svn: 64124
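For reference, the hook ended up taking roughly the shape sketched below; the stand-in types and the exact parameter list are assumptions, not copied from the commit:

    // Stand-ins for the real LLVM types, declared only so the sketch is
    // self-contained.
    struct MachineBasicBlock;
    struct MachineOperand;
    template <typename T> class SmallVectorImpl;

    struct TargetInstrInfoSketch {
      virtual ~TargetInstrInfoSketch() {}
      // When AllowModify is false, the implementation must not rewrite MBB, so
      // callers such as the register coalescer can analyze branch structure
      // without the block changing underneath them.
      virtual bool AnalyzeBranch(MachineBasicBlock &MBB,
                                 MachineBasicBlock *&TBB,
                                 MachineBasicBlock *&FBB,
                                 SmallVectorImpl<MachineOperand> &Cond,
                                 bool AllowModify = false) const = 0;
    };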
- Feb 06, 2009
Evan Cheng authored
llvm-svn: 63938
Evan Cheng authored
Add TargetInstrInfo::isSafeToMoveRegisterClassDefs. It returns true if it's safe to move an instruction which defines a value in the given register class. Replace the pre-splitting-specific IgnoreRegisterClassBarriers with this new hook. llvm-svn: 63936
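A hedged sketch of the new hook; only its name appears in the message above, so the parameter and the default behaviour here are assumptions:

    // Stand-in for the real LLVM type, so the sketch is self-contained.
    struct TargetRegisterClass;

    struct TargetInstrInfoHookSketch {
      virtual ~TargetInstrInfoHookSketch() {}
      // Returns true if an instruction defining a value in the register class
      // can safely be moved; pre-splitting queries this hook instead of its
      // old IgnoreRegisterClassBarriers special case.
      virtual bool isSafeToMoveRegisterClassDefs(const TargetRegisterClass * /*RC*/) const {
        return true; // a target would override this for problematic classes
      }
    };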
Dale Johannesen authored
its corresponding getTargetNode. Lots of caller changes. llvm-svn: 63904
- Feb 03, 2009
Bill Wendling authored
created. Specifically, those BuildMIs which use "DebugLoc::getUnknownLoc()". I'll remove them soon. llvm-svn: 63584
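A sketch of the kind of call site being described; the helper, opcode, and registers are placeholders, and the 2009-era header paths are assumptions:

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstrBuilder.h" // BuildMI
    #include "llvm/Target/TargetInstrInfo.h"

    using namespace llvm;

    // Hypothetical helper showing BuildMI being handed an explicit DebugLoc;
    // sites without real location info pass DebugLoc::getUnknownLoc() for now.
    void emitCopySketch(MachineBasicBlock &MBB, MachineBasicBlock::iterator MI,
                        const TargetInstrInfo &TII, unsigned Opcode,
                        unsigned DestReg, unsigned SrcReg) {
      DebugLoc DL = DebugLoc::getUnknownLoc(); // placeholder to be removed later
      BuildMI(MBB, MI, DL, TII.get(Opcode), DestReg).addReg(SrcReg);
    }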
- Jan 20, 2009
Evan Cheng authored
llvm-svn: 62600
- Jan 15, 2009
Dan Gohman authored
llvm-svn: 62267
- Jan 09, 2009
Dan Gohman authored
llvm-svn: 61972
- Jan 07, 2009
Dan Gohman authored
llvm-svn: 61841
Dan Gohman authored
llvm-svn: 61836
Dan Gohman authored
X86_COND_B and X86_COND_AE, respectively. llvm-svn: 61835
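For context, a general x86 fact (not taken from the truncated message above): both conditions test the carry flag, so they act as carry-set / carry-clear tests, sketched here:

    // X86_COND_B  ("below")          is taken when CF == 1.
    // X86_COND_AE ("above or equal") is taken when CF == 0.
    bool takenCondB(bool CF)  { return CF;  }
    bool takenCondAE(bool CF) { return !CF; }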
Dan Gohman authored
converted to LEA64_32r in x86's convertToThreeAddress. This replaces code like this:
    movl %esi, %edi
    inc  %edi
with this:
    lea 1(%rsi), %edi
which appears to be beneficial.
llvm-svn: 61830
- Jan 05, 2009
Dan Gohman authored
llvm-svn: 61715
- Dec 23, 2008
Dan Gohman authored
llvm-svn: 61356
- Dec 18, 2008
Mon P Wang authored
llvm-svn: 61211
- Dec 05, 2008
Evan Cheng authored
Reason #3 from 60595 doesn't hold true. If we can fold a PIC load from the constant pool into a use, the rewrite happens at spill time (not in VirtRegMap). Later on, if the GlobalBaseReg is spilled, the spiller can see that the use reads GlobalBaseReg and do the right thing. llvm-svn: 60596
Evan Cheng authored
Effectively undo 60461 in PIC mode, which simply transforms V_SET0 / V_SETALLONES into a load from the constant pool in order to fold into restores. This is not safe to do when the PIC base is being used, for a number of reasons:
1. GlobalBaseReg may have been spilled.
2. It may not be live at the use.
3. The spiller doesn't know this is happening, so it won't prevent GlobalBaseReg from being spilled later. (That by itself is a nasty hack; it's needed because we don't insert the reload until later.)
llvm-svn: 60595
- Dec 03, 2008
Dan Gohman authored
parts, and add target-independent code to add/preserve MachineMemOperands. llvm-svn: 60488
Dan Gohman authored
foldMemoryOperand how to "fold" them, by converting them into constant-pool loads. When they aren't folded, they use xorps/cmpeqd; but when register pressure is high, for example, they may now be folded as memory operands, which reduces register pressure. Also, mark V_SET0 isAsCheapAsAMove so that two-address elimination will rematerialize it instead of copying zeros around (V_SETALLONES was already marked). llvm-svn: 60461
- Dec 02, 2008
Bill Wendling authored
llvm-svn: 60385
Bill Wendling authored
llvm-svn: 60383
Bill Wendling authored
- Add support for seto, setno, setc, and setnc instructions. llvm-svn: 60382
- Nov 26, 2008
Bill Wendling authored
the conditional for the BRCOND statement. For instance, it will generate:
    addl %eax, %ecx
    jo   LOF
instead of:
    addl %eax, %ecx
    ; About 10 instructions to compare the signs of LHS, RHS, and sum.
    jl   LOF
llvm-svn: 60123
Dan Gohman authored
llvm-svn: 60095
- Nov 18, 2008
Dan Gohman authored
llvm-svn: 59542
- Oct 27, 2008
Evan Cheng authored
For now, don't split live intervals around x87 stack register barriers. FpGET_ST0_80 must be right after a call instruction (and ADJCALLSTACKUP), so we need to find a way to prevent reloads of x87 registers between them. llvm-svn: 58230