- Oct 13, 2009
- David Goodwin authored (llvm-svn: 84011)
- Sandeep Patel authored (llvm-svn: 84009)
- Ted Kremenek authored (llvm-svn: 84008)
- Devang Patel authored (llvm-svn: 84006)
- Devang Patel authored (llvm-svn: 84004)
- Benjamin Kramer authored (llvm-svn: 84003)
- Devang Patel authored (llvm-svn: 84002)
- Ted Kremenek authored (llvm-svn: 84001)
- Evan Cheng authored (llvm-svn: 84000)
- Dan Gohman authored (llvm-svn: 83999)
- Dan Gohman authored (llvm-svn: 83998)
- Dan Gohman authored: for purposes other than inlining. (llvm-svn: 83997)
- Chris Lattner authored: this will increase the likelihood of common code getting sunk towards the unwind. (llvm-svn: 83996)
- Dan Gohman authored: BasicBlocks, so that it doesn't blindly proceed in the presence of large individual BasicBlocks. This addresses a class of code-size expansion problems. (llvm-svn: 83992)
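As a gloss on the entry above: a minimal standalone C++ sketch of the early-bailout idea, stopping a size estimate as soon as one block (or the running total) gets too large. The struct, function name, and threshold values are all invented for illustration; this is not the actual LLVM code.

```cpp
#include <cstddef>
#include <vector>

// Invented stand-in for llvm::BasicBlock, for illustration only.
struct Block { std::size_t NumInstructions; };

// Sketch: instead of blindly summing the size of every block, bail out as
// soon as a single block (or the running total) exceeds a threshold.
// Threshold values here are made up for the example.
bool withinSizeBudget(const std::vector<Block> &Blocks,
                      std::size_t PerBlockLimit = 100,
                      std::size_t TotalLimit = 500) {
  std::size_t Total = 0;
  for (const Block &B : Blocks) {
    if (B.NumInstructions > PerBlockLimit)
      return false; // one very large block is enough to disqualify
    Total += B.NumInstructions;
    if (Total > TotalLimit)
      return false; // stop scanning early; the budget is already blown
  }
  return true;
}
```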
- Jeffrey Yasskin authored: GlobalValue is destroyed. Function destruction still leaks machine code and can crash on leaked stubs, but this is some progress. (llvm-svn: 83987)
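A hedged sketch of the lifecycle that entry describes: the JIT remembers the machine code emitted for each global value and frees it when the value is destroyed. Every name and type below is invented; this is not the real JIT interface.

```cpp
#include <cstdlib>
#include <unordered_map>

// Invented stand-ins; the real code deals with llvm::GlobalValue and the
// JIT's memory manager.
struct GlobalValueish {};

class JITCodeTracker {
  std::unordered_map<const GlobalValueish *, void *> CodeFor;

public:
  void recordEmitted(const GlobalValueish *GV, void *Code) {
    CodeFor[GV] = Code;
  }

  // Hook run when a value is destroyed: free its machine code instead of
  // leaking it (the commit notes stubs were still leaked at this point).
  void notifyDestroyed(const GlobalValueish *GV) {
    auto It = CodeFor.find(GV);
    if (It != CodeFor.end()) {
      std::free(It->second); // stand-in for releasing JIT memory
      CodeFor.erase(It);
    }
  }
};
```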
- Devang Patel authored: "there is no instruction with attached debug info in this module" does not mean "there is no debug info in this module". :) (llvm-svn: 83984)
- Bob Wilson authored: Patch by Johnny Chen. (llvm-svn: 83983)
- Bob Wilson authored (llvm-svn: 83982)
- Devang Patel authored: Copy metadata when a value is RAUW'd. It is debatable whether this is the right approach for custom metadata in general; however, right now the only custom-metadata user, "dbg", expects this behavior while the front end is constructing LLVM IR with debug info. (llvm-svn: 83977)
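A minimal standalone sketch of the behavior described: replace all uses of a value and carry its custom metadata (such as "dbg") over to the replacement. The toy types and map-based metadata are invented; the real code operates on LLVM's Value and metadata classes.

```cpp
#include <map>
#include <string>
#include <vector>

// Invented toy value: a set of use sites plus named metadata (e.g. "dbg").
struct ToyValue {
  std::vector<ToyValue **> Uses;               // locations pointing at this value
  std::map<std::string, std::string> Metadata; // e.g. {"dbg", "<location>"}
};

// Sketch of replace-all-uses-with that also propagates custom metadata,
// so debug info attached to the old value survives the replacement.
void replaceAllUsesWith(ToyValue &Old, ToyValue &New) {
  for (ToyValue **UseSite : Old.Uses)
    *UseSite = &New; // rewrite every use site
  New.Uses.insert(New.Uses.end(), Old.Uses.begin(), Old.Uses.end());
  Old.Uses.clear();
  // Copy metadata across; entries already present on New are kept.
  New.Metadata.insert(Old.Metadata.begin(), Old.Metadata.end());
}
```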
- Bob Wilson authored (llvm-svn: 83973)
- Nick Lewycky authored (llvm-svn: 83960)
- Nick Lewycky authored: modify through the pointer they're given. (llvm-svn: 83959)
- Daniel Dunbar authored (llvm-svn: 83950)
- Victor Hernandez authored: Memory dependence analysis was incorrectly stopping its scan for stores to a pointer at bitcast uses of a malloc call. It should continue scanning until the malloc call itself, and this patch fixes that. (llvm-svn: 83931)
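A standalone sketch of the corrected scan, under an invented instruction model: a bitcast use of the malloc must not terminate the backward walk; only reaching the malloc call itself does.

```cpp
#include <cstddef>
#include <vector>

// Invented instruction model, for illustration only.
enum class Op { Malloc, Bitcast, Store, Other };
struct Inst {
  Op Kind;
  const Inst *Operand; // bitcast source, or the pointer a store writes to
};

// Look through bitcasts to the value that actually produced the pointer.
const Inst *stripBitcasts(const Inst *V) {
  while (V && V->Kind == Op::Bitcast)
    V = V->Operand;
  return V;
}

// Scan backwards from index `From` for a store into the allocation behind
// `Ptr`. The bug was treating a bitcast use of the malloc as a stopping
// point; the fix keeps scanning until the malloc call itself is reached.
const Inst *findStoreToAlloc(const std::vector<Inst> &Block,
                             std::size_t From, const Inst *Ptr) {
  const Inst *Alloc = stripBitcasts(Ptr);
  for (std::size_t I = From; I-- > 0;) {
    const Inst &Cur = Block[I];
    if (Cur.Kind == Op::Store && stripBitcasts(Cur.Operand) == Alloc)
      return &Cur; // a store into the same allocation
    if (&Cur == Alloc)
      break;       // reached the malloc: nothing earlier can store to it
    // Note: a bitcast of the malloc does NOT terminate the scan.
  }
  return nullptr;
}
```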
- Devang Patel authored (llvm-svn: 83922)
- Devang Patel authored (llvm-svn: 83921)
- Kevin Enderby authored (llvm-svn: 83917)
- Bob Wilson authored: before its reference is only supported on ARM has not been true for a while. In fact, until recently, that was only supported for Thumb. Besides that, CPEs are always a multiple of 4 bytes in size, so inserting a CPE should have no effect on Thumb alignment. (llvm-svn: 83916)
- Kevin Enderby authored: should have been a pointer to a reference. (llvm-svn: 83915)
- Evan Cheng authored (llvm-svn: 83908)
- Oct 12, 2009
- Bob Wilson authored (llvm-svn: 83905)
- Bob Wilson authored: MultiSource/Benchmarks/MiBench/automotive-susan test. The failure has since been masked by an unrelated change (just randomly), so I don't have a testcase for this now. Radar 7291928.

  The situation where this happened is that a constant pool entry (CPE) was placed at a lower address than the load that referenced it. There were in fact two CPEs placed at adjacent addresses, referenced by two loads that were close together in the code. The distance from the loads to the CPEs was right at the limit of what they could handle, so only one of the CPEs could be placed within range. On every iteration, the first CPE was found to be out of range, causing a new CPE to be inserted. The second CPE had been in range, but the newly inserted entry pushed it too far away; the second CPE was then also replaced by a new entry, which in turn pushed the first CPE out of range, and so on.

  Judging from some comments in the code, the initial implementation of this pass did not support CPEs placed _before_ their references. In the case where the CPE is placed at a higher address, the key to making the algorithm terminate is that new CPEs are only inserted at the end of a group of adjacent CPEs. This is implemented by removing a basic block from the "WaterList" once it has been used, and then adding the newly inserted CPE block to the list so that the next insertion will come after it. This avoids the ping-pong effect where CPEs are repeatedly moved to the beginning of a group of adjacent CPEs. It does not work when going backwards, however, because the entries at the end of an adjacent group of CPEs are closer than the CPEs earlier in the group.

  To make this pass terminate, we need to maintain the property that changes happen only in a monotonic fashion. The fix used here is to require that the CPE for a particular constant pool load can only move to lower addresses. This is a very simple change to the code and should not cause any significant degradation in the results. (llvm-svn: 83902)
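The termination argument boils down to a monotonicity check. A hedged sketch (all names invented; the real logic lives in the ARM constant-island pass): when a CPE that sits below its reference must be re-placed, only a strictly lower address is accepted, so placements form a decreasing, bounded sequence and the oscillation described above cannot recur.

```cpp
#include <cstdint>

// Invented stand-in for a constant pool entry and its current placement.
struct CPEPlacement {
  std::uint32_t Address;
};

// Monotonicity rule from the fix: a CPE placed before (below) its reference
// may only move to a strictly lower address. Addresses then form a
// decreasing sequence, bounded below by zero, so the ping-pong between two
// adjacent entries cannot repeat forever.
bool acceptBackwardPlacement(CPEPlacement &Entry, std::uint32_t Proposed) {
  if (Proposed >= Entry.Address)
    return false; // would allow oscillation; reject
  Entry.Address = Proposed;
  return true;
}
```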
- Bob Wilson authored (llvm-svn: 83897)
- Bob Wilson authored (llvm-svn: 83894)
- Bob Wilson authored (llvm-svn: 83874)
- Bob Wilson authored (llvm-svn: 83873)
- Bob Wilson authored (llvm-svn: 83872)
- Dale Johannesen authored: bootstrap of FSF-style PPC, so there is some reason to believe the original bug (which was never analyzed) has been fixed, probably by r82266. (llvm-svn: 83871)
- Dale Johannesen authored (llvm-svn: 83870)