- Apr 26, 2011
Devang Patel authored
Observed this while reading code, so I do not have a test case handy here. llvm-svn: 130167
- Apr 25, 2011
Devang Patel authored
A dbg.declare may not be in the entry block, even if it is referring to an incoming argument. However, it is appropriate to emit a DBG_VALUE referring to this incoming argument in the entry block of the MachineFunction. llvm-svn: 130129
- Apr 24, 2011
Rafael Espindola authored
llvm-svn: 130116
Rafael Espindola authored
Fixes PR9787. llvm-svn: 130115
Sebastian Redl authored
llvm-svn: 130095
- Apr 23, 2011
Jay Foad authored
llvm-svn: 130068
Owen Anderson authored
llvm-svn: 130033
Devang Patel authored
llvm-svn: 130028
Jakob Stoklund Olesen authored
Sometimes it is better to split per block, and we missed those cases. llvm-svn: 130025
- Apr 22, 2011
Chris Lattner authored
(rdar://9289512) Fix bugs exposed by the gcc dejagnu testsuite: 1. The load may actually be used by a dead instruction, which would cause an assert. 2. The load may not be used by the current chain of instructions, and we could move it past a side-effecting instruction. Change how we process uses to define the problem away. llvm-svn: 130018
Benjamin Kramer authored
On x86 this allows folding a load into the cmp, greatly reducing register pressure: movzbl (%rdi), %eax; cmpl $47, %eax -> cmpb $47, (%rdi). This shaves 8k off gcc.o on i386. I'll leave applying the patch in README.txt to Chris :) llvm-svn: 130005
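A minimal sketch of the kind of source this helps, with a hypothetical function name; the byte load feeding the compare can now be folded into a single cmpb instead of a separate movzbl plus cmpl:

    // Hypothetical example: compare a loaded byte against a constant.
    // Before: movzbl (%rdi), %eax ; cmpl $47, %eax
    // After:  cmpb $47, (%rdi)
    bool is_slash(const unsigned char *p) {
        return *p == 47;   // 47 is ASCII '/'
    }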
Devang Patel authored
llvm-svn: 130004
Evan Cheng authored
llvm-svn: 129970
Bill Wendling authored
An exception is thrown via a call to __cxa_throw, which we don't expect to return. Therefore, the "true" part of the invoke goes to a BB that has 'unreachable' as its only instruction. This is lowered into an empty MachineBB. The landing pad for this invoke, however, is directly after the "true" MBB. When the empty MBB is removed, the landing pad is directly below the BB with the invoke call. The unconditional branch is removed and then the two blocks are merged together. The testcase is too big for a regression test. <rdar://problem/9305728> llvm-svn: 129965
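A minimal sketch (hypothetical source) of the pattern described above: the throw lowers to an invoke of __cxa_throw, its normal destination contains only 'unreachable' and becomes an empty machine block, and the landing pad sits right after it:

    int caller() {
        try {
            throw 42;       // invoke of __cxa_throw; the normal successor
                            // block contains nothing but 'unreachable'
        } catch (int v) {   // landing pad ends up right after that empty block
            return v;
        }
        return 0;
    }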
- Apr 21, 2011
Devang Patel authored
llvm-svn: 129938
Matt Beaumont-Gay authored
llvm-svn: 129928
Jakob Stoklund Olesen authored
These intervals are allocatable immediately after splitting, but they may be evicted because of later splitting. This is rare, but when it happens they should be split again. The remainder intervals that cannot be allocated after splitting still move directly to spilling. SplitEditor::finish can optionally provide a mapping from new live intervals back to the original interval indexes returned by openIntv(). Each original interval index can map to multiple new intervals after connected components have been separated. Dead code elimination may also add existing intervals to the list. The reverse mapping allows the SplitEditor client to treat the new intervals differently depending on the split region they came from. llvm-svn: 129925
Devang Patel authored
llvm-svn: 129921
Daniel Dunbar authored
Revert the rdar://9289512 change, which broke a couple GCC test suite tests at -O0. llvm-svn: 129914
Jakob Stoklund Olesen authored
llvm-svn: 129883
Jakob Stoklund Olesen authored
TII::isTriviallyReMaterializable() shouldn't depend on any properties of the register being defined by the instruction. Rematerialization is going to create a new virtual register anyway. llvm-svn: 129882
- Apr 20, 2011
Jakob Stoklund Olesen authored
On the x86-64 and thumb2 targets, some registers are more expensive to encode than others in the same register class. Add a CostPerUse field to the TableGen register description, and make it available from TRI->getCostPerUse. This represents the cost of a REX prefix or a 32-bit instruction encoding required by choosing a high register. Teach the greedy register allocator to prefer cheap registers for busy live ranges (as indicated by spill weight). llvm-svn: 129864
Rafael Espindola authored
llvm-svn: 129844
Eric Christopher authored
manually and pass all (now) 4 arguments to the mul libcall. Add a new ExpandLibCall for just this (copied gratuitously from type legalization). Fixes rdar://9292577 llvm-svn: 129842
Daniel Dunbar authored
triple component. llvm-svn: 129838
- Apr 19, 2011
Daniel Dunbar authored
- There is a minor semantic change here (evidenced by the test change) for Darwin triples that have no version component. I debated changing the default behavior of isOSVersionLT, but decided it made more sense for triples to be explicit. llvm-svn: 129802
Bob Wilson authored
Add an avoidWriteAfterWrite() target hook to identify register classes that suffer from write-after-write hazards. For those register classes, try to avoid writing the same register in two consecutive instructions. This is currently disabled by default. We should not spill to avoid hazards! The command line flag -avoid-waw-hazard can be used to enable waw avoidance. llvm-svn: 129772
Jakob Stoklund Olesen authored
This means that the new register allocator can be used with 'clang -mllvm -regalloc=greedy'. llvm-svn: 129764
Eli Friedman authored
unnecessary work where possible. llvm-svn: 129763
Chris Lattner authored
en masse for C++ PODs. On my C++ test file, this cuts the fast isel rejects by 10x and shrinks the generated .s file by 5%. llvm-svn: 129755
- Apr 18, 2011
Eli Friedman authored
llvm-svn: 129720
Devang Patel authored
llvm-svn: 129715
Jakob Stoklund Olesen authored
the spilled register. This is quite common on ARM now that some stores have early-clobber defines. llvm-svn: 129714
Eric Christopher authored
registers for fast allocation a different way. This has us updating used registers only when we're using that exact register. Fixes rdar://9207598 llvm-svn: 129711
Chris Lattner authored
this fixes a few rejects on C++ iterator loops. llvm-svn: 129694
- Apr 17, 2011
Chris Lattner authored
2. Implement rdar://9289501 - fast isel should fold trivial multiplies to shifts. 3. Teach tblgen to handle shift immediates that are different sizes than the shifted operands, eliminating some code from the X86 fast isel backend. 4. Have FastISel::SelectBinaryOp use (the poorly named) FastEmit_ri_ function instead of FastEmit_ri to simplify code. llvm-svn: 129666
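For item 2, a minimal sketch (hypothetical function) of the trivial multiply in question; fast isel can now select the power-of-two multiply as a shift:

    // x * 8 is a multiply by a power of two; fast isel can now emit it as
    // x << 3 (e.g. shll $3, %eax on x86) instead of a real multiply.
    unsigned times8(unsigned x) {
        return x * 8;
    }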