- Apr 16, 2012
-
-
Duncan Sands authored
through the use of 'fpmath' metadata. Currently this only provides a 'fpaccuracy' value, which may be a number in ULPs or the keyword 'fast'; however, the intent is that this will later be extended with additional information about NaNs, infinities, etc. No optimizations have been hooked up to this so far. llvm-svn: 154822
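For illustration only, here is a minimal C++ sketch of attaching such metadata with the LLVM C++ API of roughly this era (top-level headers, metadata operands still plain Values); the function name is mine and none of this is code from the commit. In 3.1-era textual IR the attachment surfaces as !fpmath !0 on the instruction, with !0 = metadata !{ float 2.5 }.

    // Hypothetical sketch (not from this commit): tag a floating-point divide
    // with an 'fpmath' accuracy requirement of 2.5 ULPs.
    #include "llvm/Constants.h"
    #include "llvm/Instructions.h"
    #include "llvm/LLVMContext.h"
    #include "llvm/Metadata.h"
    #include "llvm/Type.h"

    using namespace llvm;

    void tagWithFPMath(BinaryOperator *FDiv) {
      LLVMContext &Ctx = FDiv->getContext();
      // The accuracy operand: a constant float giving the allowed error in ULPs.
      Value *Ops[] = { ConstantFP::get(Type::getFloatTy(Ctx), 2.5) };
      // Build the metadata node and attach it as !fpmath on the instruction.
      FDiv->setMetadata("fpmath", MDNode::get(Ctx, Ops));
    }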
-
Chandler Carruth authored
This is mostly to test the waters. I'd like to get results from LNT build bots and other bots running on non-x86 platforms. This feature has been pretty heavily tested over the last few months by me, and it fixes several of the execution time regressions caused by the inlining work by preventing inlining decisions from radically impacting block layout. I've seen very large improvements in the yacr2 and ackermann benchmarks, along with the expected noise across the benchmark suite whenever code layout changes. I've analyzed all of the regressions and fixed them, or found them to be impossible to fix. See my email to llvmdev for more details. I'd like for this to be in 3.1 as it complements the inliner changes, but if any failures show up or anyone has concerns, it is just a flag flip and so can easily be turned off. I'm switching it on tonight to try and get at least one run through various folks' performance suites in case SPEC or something else has serious issues with it. I'll watch the bots and revert if anything shows up. llvm-svn: 154816
-
Chandler Carruth authored
once we start changing the block layout, so just nuke it. If anyone has ideas about how to craft a code layout agnostic form of the test please let me know. llvm-svn: 154815
-
Duncan Sands authored
from instructions. Chandler doesn't like them being here. llvm-svn: 154813
-
Chandler Carruth authored
rotation. When there is a loop backedge which is an unconditional branch, we will end up with a branch somewhere no matter what. Try placing this backedge in a fallthrough position above the loop header, as that will definitely remove at least one branch from the loop iteration, whereas whole-loop rotation may not. I haven't seen any benchmarks where this is important, but loop-blocks.ll tests for it, so this will be covered when I flip the default. llvm-svn: 154812
-
Duncan Sands authored
and retrieving it from instructions. I don't have a use for this, but it seems logical for it to exist. While there, remove some 'const' markings from methods which are in fact 'const' in practice, but aren't logically 'const'. llvm-svn: 154811
-
Hal Finkel authored
llvm-svn: 154810
-
Richard Barton authored
Add -disassemble support for the -show-inst and -show-encoding capabilities of llvm-mc. Also refactor so that all the MC paraphernalia is created once for all uses as much as possible. The test change accounts for the fact that the default disassembler behaviour has changed with regard to specifying the assembly syntax to use. llvm-svn: 154809
-
Rafael Espindola authored
so we don't want it to show up in the stable 3.1 interface. While at it, add a comment about why LTOCodeGenerator manually creates the internalize pass. llvm-svn: 154807
-
Chandler Carruth authored
laid out in a form with a fallthrough into the header and a fallthrough out of the bottom. In that case, leave the loop alone because any rotation will introduce unnecessary branches. If either side looks like it will require an explicit branch, then the rotation won't add any; do it to ensure the branch occurs outside of the loop (if possible) and to maximize the benefit of the fallthrough at the bottom. llvm-svn: 154806
-
Benjamin Kramer authored
To be used in printing unprintable source in clang diagnostics. Patch by Seth Cantrell, with a minor fix for mingw by me. llvm-svn: 154805
-
Eli Bendersky authored
llvm-svn: 154804
-
Argyrios Kyrtzidis authored
llvm-svn: 154802
-
Craig Topper authored
llvm-svn: 154801
-
Argyrios Kyrtzidis authored
To be used in printing unprintable source in clang diagnostics. Patch by Seth Cantrell! llvm-svn: 154800
-
Craig Topper authored
Change type profile for vpermv back to using operand type for the mask argument to match intrinsic behavior. Add a bitcast to the lowering code to convert mask from v8i32 to v8f32 for vpermps. llvm-svn: 154798
-
Craig Topper authored
Flip the arguments when converting vpermd/vpermps intrinsics into instructions. The intrinsic has the mask as the last operand, but the instruction has it as the second. llvm-svn: 154797
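For context on the operand order being juggled in this and the previous entry: the user-facing AVX2 intrinsics that lower to vpermps/vpermd also take the data vector first and the index (mask) vector last. A small, hypothetical usage sketch follows (names are mine; assumes an AVX2-enabled build, e.g. -mavx2):

    // Hypothetical usage of the AVX2 cross-lane permutes that lower to
    // vpermps/vpermd; each function reverses the eight 32-bit lanes.
    #include <immintrin.h>

    __m256 reverse_lanes_ps(__m256 v) {
      // Each 32-bit index selects a source element; the hardware only
      // consults the low 3 bits of each index.
      const __m256i idx = _mm256_setr_epi32(7, 6, 5, 4, 3, 2, 1, 0);
      // Data vector first, index vector last -- the order the LLVM
      // intrinsic uses, which the lowering swaps to form the instruction.
      return _mm256_permutevar8x32_ps(v, idx);      // vpermps
    }

    __m256i reverse_lanes_epi32(__m256i v) {
      const __m256i idx = _mm256_setr_epi32(7, 6, 5, 4, 3, 2, 1, 0);
      return _mm256_permutevar8x32_epi32(v, idx);   // vpermd
    }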
-
Bill Wendling authored
llvm-svn: 154796
-
Bill Wendling authored
llvm-svn: 154793
-
Sebastian Pop authored
llvm-svn: 154791
-
Hal Finkel authored
llvm-svn: 154788
-
Hal Finkel authored
llvm-svn: 154787
-
Hal Finkel authored
llvm-svn: 154786
-
Chandler Carruth authored
This is a complex change that resulted from a great deal of experimentation with several different benchmarks. The one which proved the most useful is included as a test case, but I don't know that it captures all of the relevant changes, as I didn't have specific regression tests for each; they were more the result of reasoning about what the old algorithm would possibly do wrong. I'm also failing at the moment to craft more targeted regression tests for these changes; if anyone has ideas, it would be welcome.

The first big thing broken with the old algorithm is the idea that we can take a basic block which has a loop-exiting successor and a looping successor and use the looping successor as the layout top in order to get that particular block to be the bottom of the loop after layout. This happens to work in many cases, but not in all.

The second big thing broken was that we didn't try to select the exit which fell into the nearest enclosing loop (to which we exit at all). As a consequence, even if the rotation worked perfectly, it would result in one of two bad layouts. Either the bottom of the loop would get fallthrough, skipping across a nearer enclosing loop and thereby making it discontiguous, or it would be forced to take an explicit jump over the nearest enclosing loop to reach its successor. The point of the rotation is to get fallthrough, so we need it to fall through to the nearest loop it can.

The fix to the first issue is to actually lay out the loop from the loop header, and then rotate the loop so that the correct exiting edge can be a fallthrough edge. This is actually much easier than I anticipated because we can handle all the hard parts of finding a viable rotation before we do the layout. We just store that, and then rotate after layout is finished. No inner loops get split across the post-rotation backedge because we check for them when selecting the rotation.

That fix exposed a latent problem with our exiting block selection: we should allow the backedge to point into the middle of some inner-loop chain, as there is no real penalty to it; the whole point is that it *won't* be a fallthrough edge. This may have blocked the rotation entirely in some cases; I have no idea and no test case, as I've never seen it in practice; it was just noticed by inspection.

Finally, all of these fixes, and studying the loops they produce, highlighted another problem: in rotating loops like this, we sometimes fail to align the destination of these backwards jumping edges. Fix this by actually walking the backwards edges rather than relying on LoopInfo. This fixes regressions on heapsort if block placement is enabled, as well as lots of other cases where the previous logic would introduce an abundance of unnecessary branches into the execution. llvm-svn: 154783
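The exit-selection issue (the second problem above) is easiest to see on a nested loop with more than one way out of the inner loop. Here is a hedged, hypothetical C++ sketch; it is not one of the benchmarks or regression tests referred to above, and all names are mine.

    // Hypothetical example of the exit-selection point above.
    int countKeyRows(const int *const *a, int rows, int cols, int key) {
      int hits = 0;
      for (int r = 0; r < rows; ++r) {      // nearest enclosing loop
        for (int c = 0; c < cols; ++c) {    // inner loop being rotated
          if (a[r][c] < 0)
            return -1;                      // exit that leaves *both* loops
          if (a[r][c] == key) {
            ++hits;
            break;                          // exit into the enclosing loop
          }
        }
        // Rotating the inner loop so that an exit whose target lies in the
        // outer loop (the 'break' or the c == cols test) sits at the bottom
        // lets control fall through to this point and keeps the outer loop
        // contiguous. Making the 'return -1' edge the fallthrough instead
        // would drop the return block into the middle of the outer loop's
        // chain, splitting it or forcing an extra jump over it.
      }
      return hits;
    }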
-
Craig Topper authored
llvm-svn: 154782
-
Craig Topper authored
llvm-svn: 154781
-
Craig Topper authored
Spacing fixes and 80-column fixes. Use 0 instead of 0x80 for undef indices in vpermps/vpermd; the hardware only looks at the lower 3 bits. llvm-svn: 154780
-
Craig Topper authored
llvm-svn: 154778
-
Craig Topper authored
Make member variables of AsmToken private. Remove unnecessary forward declarations. Remove an unnecessary include. llvm-svn: 154775
-
- Apr 15, 2012
-
-
Jakub Staszak authored
llvm-svn: 154773
-
Nadav Rotem authored
Patch by nobled <nobled@dreamwidth.org> llvm-svn: 154772
-
Jakub Staszak authored
llvm-svn: 154771
-
Nadav Rotem authored
Use non-VEX instructions for SSE4. llvm-svn: 154770
-
Duncan Sands authored
llvm-svn: 154766
-
Benjamin Kramer authored
As an example, attach range info to the "invalid instruction" message:

  $ clang -arch arm -c asm.c
  asm.c:2:11: error: invalid instruction
          __asm__("foo r0");
                   ^
  <inline asm>:1:2: note: instantiated into assembly here
          foo r0
          ^~~

llvm-svn: 154765
-
Nadav Rotem authored
llvm-svn: 154764
-
Elena Demikhovsky authored
llvm-svn: 154761
-
NAKAMURA Takumi authored
llvm-svn: 154759
-
NAKAMURA Takumi authored
llvm-svn: 154758
-
- Apr 14, 2012
-
-
Anshuman Dasgupta authored
llvm-svn: 154755
-