- May 12, 2010
-
-
Nick Lewycky authored
on RAUW of functions, this is a correctness issue instead of a mere memory usage problem. No testcase until the new MergeFunctions can land. llvm-svn: 103653
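A minimal C++ sketch of the hazard this note points at, assuming a hypothetical cache keyed by Function pointers (FnHashCache and mergeInto are made-up names, and the header path assumes the current LLVM layout): with RAUW on functions, a stale entry becomes a dangling key rather than just wasted memory.

```cpp
// A minimal sketch, not the actual MergeFunctions code, of why a side table
// keyed by Function pointers turns RAUW into a correctness issue: the old
// key dangles once the function is erased. FnHashCache and mergeInto are
// hypothetical names.
#include "llvm/IR/Function.h"
#include <map>

using namespace llvm;

// Hypothetical cache mapping a function to some previously computed data.
static std::map<Function *, unsigned> FnHashCache;

static void mergeInto(Function *Old, Function *New) {
  // Redirect every use of Old (calls, address-taken references) to New.
  Old->replaceAllUsesWith(New);
  // Any entry keyed by Old must be dropped (or rekeyed) before erasing it;
  // otherwise the cache holds a dangling Function*, not just extra memory.
  FnHashCache.erase(Old);
  Old->eraseFromParent();
}
```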
-
Daniel Dunbar authored
llvm-svn: 103651
-
Daniel Dunbar authored
llvm-svn: 103649
-
Daniel Dunbar authored
llvm-svn: 103648
-
Evan Cheng authored
llvm-svn: 103642
-
Jakob Stoklund Olesen authored
The X86 floating point stack pass and others depend on good kill flags. llvm-svn: 103635
-
Daniel Dunbar authored
llvm-svn: 103627
-
Daniel Dunbar authored
llvm-svn: 103616
-
Nathan Jeffords authored
Made a stylistic change to the code/comments related to the unsupported COMDAT selection type IMAGE_COMDAT_SELECT_LARGEST, based on feedback from Anton Korobeynikov. llvm-svn: 103590
-
Duncan Sands authored
llvm-svn: 103586
-
Rafael Espindola authored
llvm-svn: 103576
-
Nathan Jeffords authored
Now, the .linkonce directive is emitted as part of MCSectionCOFF::PrintSwitchToSection instead of AsmPrinter::EmitLinkage, since it is an attribute of the section the symbol was placed into, not of the symbol itself. llvm-svn: 103568
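A rough C++ sketch of that design point, with a made-up ExampleCOFFSection type standing in for MCSectionCOFF: the link-once property travels with the section switch, so the symbol emitter never has to print it.

```cpp
// Illustrative only (not the real MCSectionCOFF code): a simplified section
// printer showing the idea that .linkonce describes the section a symbol was
// placed into, so it belongs with the section switch rather than with the
// symbol's linkage directives.
#include <ostream>
#include <string>

struct ExampleCOFFSection {
  std::string Name;        // e.g. ".text$myfunc"
  bool IsLinkOnce = false; // COMDAT-style "link once" semantics

  void printSwitchToSection(std::ostream &OS) const {
    OS << "\t.section\t" << Name << "\n";
    if (IsLinkOnce)
      OS << "\t.linkonce discard\n"; // emitted with the section, exactly once
  }
};
```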
-
Evan Cheng authored
v1024 = REG_SEQUENCE ...
v1025 = EXTRACT_SUBREG v1024, 5
v1026 = EXTRACT_SUBREG v1024, 6
      = VSTxx <addr>, v1025, v1026
The REG_SEQUENCE ensures the sources that feed into the VST instruction get the right register allocation so they form a large super-register. The extract_subregs will be coalesced away and all would just work:
v1024 = REG_SEQUENCE ...
      = VSTxx <addr>, v1024:5, v1024:6
The problem is that if the coalescer isn't run, the extract_subreg instructions stick around and there is no assurance v1025 and v1026 will get the right registers. As a short-term workaround, teach the NEON pre-allocation pass to transfer the sub-register indices over. An alternative would be to do it in the 2-addr pass when reg_sequences are eliminated, but that *seems* wrong and requires updating liveness information. Another alternative is to do this in the scheduler when the instructions are created, but that would mean the scheduler somehow has to know this is done for correctness reasons. That's yucky as well. So for now, we are leaving this in the target-specific pass. llvm-svn: 103540
-
Evan Cheng authored
llvm-svn: 103539
-
Evan Cheng authored
llvm-svn: 103538
-
Daniel Dunbar authored
llvm-svn: 103535
-
Daniel Dunbar authored
be diced into atoms, and adjust getAtom() to take this into account.
- This fixes relocations to symbols in fixed size literal sections, for example. llvm-svn: 103532
-
Jakob Stoklund Olesen authored
llvm-svn: 103530
-
Dan Gohman authored
llvm-svn: 103529
-
Daniel Dunbar authored
llvm-svn: 103528
-
Daniel Dunbar authored
offset instead of the fixup address as intended. llvm-svn: 103527
-
Daniel Dunbar authored
llvm-svn: 103526
-
Daniel Dunbar authored
llvm-svn: 103525
-
Jakob Stoklund Olesen authored
llvm-svn: 103522
-
Jakob Stoklund Olesen authored
This allows us to add accurate kill markers, something the scavenger likes. Add some more tests from ARM that needed this. llvm-svn: 103521
-
- May 11, 2010
-
-
Dan Gohman authored
create separate virtual registers for CopyFromReg values, so uses of them don't necessarily kill the value. llvm-svn: 103519
-
Evan Cheng authored
llvm-svn: 103513
-
Jakob Stoklund Olesen authored
llvm-svn: 103508
-
Bill Wendling authored
llvm-svn: 103507
-
Jakob Stoklund Olesen authored
closure after allocating all blocks. Add a few more test cases for -regalloc=fast. llvm-svn: 103500
-
Dan Gohman authored
It works in simple cases, but it isn't a general solution. llvm-svn: 103499
-
Duncan Sands authored
to LLVM_LIBRARY_VISIBILITY and introduce LLVM_GLOBAL_VISIBILITY, which is the opposite, for future use by dragonegg. llvm-svn: 103495
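A hedged sketch of how a pair of macros like these is typically defined with GCC's visibility attribute; the exact guards LLVM uses may differ, and the two declared functions are purely hypothetical uses.

```cpp
// A hedged sketch of how such visibility macros are conventionally defined
// with GCC's visibility attribute; the exact guards in LLVM's headers may
// differ from this.
#if defined(__GNUC__) && !defined(_WIN32)
#define LLVM_LIBRARY_VISIBILITY __attribute__((visibility("hidden")))
#define LLVM_GLOBAL_VISIBILITY __attribute__((visibility("default")))
#else
#define LLVM_LIBRARY_VISIBILITY
#define LLVM_GLOBAL_VISIBILITY
#endif

// Hypothetical uses: the first symbol stays internal to the shared library,
// the second is deliberately exported so an external plugin (e.g. dragonegg)
// can resolve it.
LLVM_LIBRARY_VISIBILITY void internalHelper();
LLVM_GLOBAL_VISIBILITY void exportedEntryPoint();
```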
-
Dan Gohman authored
llvm-svn: 103493
-
Dan Gohman authored
and the others use the regular addPassesToEmitFile hook now, and llc no longer needs a bunch of redundant code to handle the whole-file case. llvm-svn: 103492
-
Dan Gohman authored
llvm-svn: 103489
-
Jakob Stoklund Olesen authored
Sorry for the big change. The path leading up to this patch had some TableGen changes that I didn't want to commit before I knew they were useful. They weren't, and this version does not need them.
The fast register allocator now does no liveness calculations. Instead it relies on kill flags provided by isel. (Currently those kill flags are also ignored due to isel bugs). The allocation algorithm is supposed to work with any subset of valid kill flags. More kill flags simply means fewer spills inserted.
Registers are allocated from a working set that contains no aliases. That means most allocations can be done directly without expensive alias checks. When the working set runs out of registers we do the full alias check to find new free registers. llvm-svn: 103488
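A simplified, hypothetical C++ sketch of that allocation loop (none of these names are the real RegAllocFast structures): the fast path pops from an alias-free working set, and only an empty working set triggers the expensive alias scan.

```cpp
// A simplified, hypothetical sketch of the allocation loop described above;
// the working set, alias scan, and register numbers are stand-ins, not the
// real RegAllocFast data structures.
#include <optional>
#include <vector>

struct FastAllocSketch {
  // Registers known to be free and mutually alias-free, so the common-case
  // allocation needs no alias checks at all.
  std::vector<unsigned> WorkingSet;

  // Slow path: walk the whole register file and its alias sets to refill the
  // working set. Left empty here purely to keep the sketch self-contained.
  std::vector<unsigned> fullAliasScanForFreeRegs() { return {}; }

  std::optional<unsigned> allocate() {
    if (WorkingSet.empty())
      WorkingSet = fullAliasScanForFreeRegs(); // expensive, done rarely
    if (WorkingSet.empty())
      return std::nullopt; // caller must spill to free a register
    unsigned PhysReg = WorkingSet.back();
    WorkingSet.pop_back(); // fast path: hand out a register directly
    return PhysReg;
  }
};
```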
-
Dan Gohman authored
Move EmitTargetCodeForMemcpy, EmitTargetCodeForMemset, and EmitTargetCodeForMemmove out of TargetLowering and into SelectionDAGInfo to exercise this. llvm-svn: 103481
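A schematic C++ sketch of where the hooks end up after the move, with deliberately simplified placeholder signatures rather than the real SDValue-based ones.

```cpp
// A schematic sketch of the refactor's shape: target-specific expansion of
// the mem* operations hangs off a SelectionDAGInfo-style object rather than
// TargetLowering. The signatures are simplified placeholders; the real hooks
// take the SelectionDAG, chain, operand SDValues, alignment, and so on.
struct SDHandle {}; // stand-in for the real SDValue

struct TargetSelectionDAGInfoSketch {
  virtual ~TargetSelectionDAGInfoSketch() = default;
  // Return a target-optimized expansion, or an empty handle to fall back to
  // the generic libcall/load-store lowering.
  virtual SDHandle emitTargetCodeForMemcpy() { return {}; }
  virtual SDHandle emitTargetCodeForMemset() { return {}; }
  virtual SDHandle emitTargetCodeForMemmove() { return {}; }
};

// A target overrides only the hooks it cares about.
struct X86SelectionDAGInfoSketch : TargetSelectionDAGInfoSketch {
  SDHandle emitTargetCodeForMemcpy() override {
    // e.g. emit rep;movs-based code when size and alignment allow it.
    return {};
  }
};
```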
-
Daniel Dunbar authored
- This eliminates getAtomForAddress() (which was a linear search) and simplifies getAtom().
- This also fixes some correctness problems where local labels at the same address as non-local labels could be assigned to the wrong atom. llvm-svn: 103480
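A hedged C++ sketch of that lookup change, using illustrative Symbol/Atom/AtomTable types rather than the MC classes: a per-symbol map replaces the linear search by address and also distinguishes labels that share an address.

```cpp
// A hedged sketch of the data-structure change described above: resolve a
// symbol's atom through a per-symbol map instead of a linear search over
// addresses. Symbol, Atom, and AtomTable are illustrative names, not the MC
// classes.
#include <cstdint>
#include <map>

struct Atom; // the defined, non-temporary symbol a fragment belongs to
struct Symbol {
  uint64_t Address;
  bool IsTemporary; // local / assembler-temporary label
};

struct AtomTable {
  // Filled in once while walking symbols in layout order.
  std::map<const Symbol *, const Atom *> AtomFor;

  const Atom *getAtom(const Symbol *S) const {
    auto It = AtomFor.find(S);
    return It == AtomFor.end() ? nullptr : It->second;
  }
  // An address-keyed lookup cannot tell a local label apart from a non-local
  // label defined at the same address, which is the correctness problem a
  // per-symbol map avoids.
};
```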
-
Dan Gohman authored
was unused. TargetMachine::getSubtarget() is used instead. llvm-svn: 103474
-
Kalle Raiskila authored
llvm-svn: 103466
-