- Aug 04, 2014
-
David Blaikie authored
Reapply "DebugInfo: Ensure that all debug location scope chains from instructions within a function, lead to the function itself." Originally reverted in r213432 with flakey failures on an ASan self-host build. After reduction it seems to be the same issue fixed in r213805 (ArgPromo + DebugInfo: Handle updating debug info over multiple applications of argument promotion) and r213952 (by having LiveDebugVariables strip dbg_value intrinsics in functions that are not described by debug info). Though I cannot explain why this failure was flakey... llvm-svn: 214761
-
Matt Arsenault authored
These were just wrong: they used the wrong register classes, and store2 was missing an operand. llvm-svn: 214756
-
Joerg Sonnenberger authored
systems it represents. llvm-svn: 214755
-
Joerg Sonnenberger authored
llvm-svn: 214754
-
Alex Lorenz authored
This flag will be used by the coverage tool to help compute the execution counts for each line in a source file. Differential Revision: http://reviews.llvm.org/D4746 llvm-svn: 214740
-
Eric Christopher authored
nothing subtarget dependent about the intrinsic support in any backend as far as I can tell. llvm-svn: 214738
-
Justin Bogner authored
path::const_iterator claims that it's a bidirectional iterator, but it doesn't satisfy all of the contracts for a bidirectional iterator. For example, N3376 24.2.5 p6 says "If a and b are both dereferenceable, then a == b if and only if *a and *b are bound to the same object", but this doesn't work with how we stash and recreate Components. This means that our use of reverse_iterator on this type is invalid and leads to many of the valgrind errors we're hitting, as explained by Tilmann Scheller here: http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140728/228654.html Instead, we admit that path::const_iterator is only an input_iterator, and implement a second input_iterator for path::reverse_iterator (by changing const_iterator::operator-- to reverse_iterator::operator++). All of the uses of this just traverse once over the path in one direction or the other anyway. llvm-svn: 214737
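A minimal runnable sketch of the contract at issue, using a hypothetical stand-in type (not the actual llvm::sys::path API): a bidirectional iterator requires that equal iterators dereference to the same object, which an iterator that stashes and recreates its value on each step cannot provide.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for an iterator that stashes a copy of the current
// component and hands out references to that stash.
struct StashingIterator {
  const std::vector<std::string> *Components;
  size_t Index;
  std::string Stash; // recreated on each dereference

  const std::string &operator*() {
    Stash = (*Components)[Index]; // rebuild the component
    return Stash;                 // a reference into *this*, not shared storage
  }
};

int main() {
  std::vector<std::string> Parts = {"usr", "lib", "llvm"};
  StashingIterator A{&Parts, 0, {}};
  StashingIterator B{&Parts, 0, {}};
  // A and B point at the same position, yet *A and *B are distinct objects,
  // violating the bidirectional-iterator clause quoted above -- which is why
  // std::reverse_iterator cannot legally be layered on top of this type.
  assert(&*A != &*B);
}
```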
-
Joerg Sonnenberger authored
llvm-svn: 214733
-
Akira Hatanaka authored
This corrects r214672, which was committed to silence a gcc warning. llvm-svn: 214732
-
Joerg Sonnenberger authored
llvm-svn: 214731
-
Matt Arsenault authored
llvm-svn: 214729
-
Matt Arsenault authored
llvm-svn: 214728
-
Eric Christopher authored
allow us to forward all of the standard TargetMachine calls to the subtarget and still return null as we were before. llvm-svn: 214727
-
Joerg Sonnenberger authored
Move the test cases for them into separate files. llvm-svn: 214724
-
Ulrich Weigand authored
This should hopefully fix build bots on other architectures. llvm-svn: 214721
-
Robert Khasanov authored
Instructions: VMOVAPD, VMOVAPS, VMOVDQA8, VMOVDQA16, VMOVDQA32, VMOVDQA64, VMOVDQU8, VMOVDQU16, VMOVDQU32, VMOVDQU64, VMOVUPD, VMOVUPS. Reviewed by Elena Demikhovsky <elena.demikhovsky@intel.com> llvm-svn: 214719
-
Ulrich Weigand authored
In commit r213915, Bill fixed little-endian usage of vmrgh* and vmrgl* by swapping the input arguments. As it turns out, the exact same fix is also required for the vpkuhum/vpkuwum patterns. This fixes another regression in llvmpipe when vector support is enabled. Reviewed by Bill Schmidt. llvm-svn: 214718
-
Aaron Ballman authored
Improve the name of the function parameter; this happens to silence two likely-less-than-useful MSVC warnings: warning C4258: 'I' : definition from the for loop is ignored; the definition from the enclosing scope is used. llvm-svn: 214717
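A hedged illustration of the shadowing pattern behind this warning (hypothetical code, not the actual LLVM function): a for-loop variable sharing its name with a declaration in an enclosing scope.

```cpp
// Hypothetical reduction; under MSVC's conformance options this kind of
// shadowing draws C4258-style diagnostics.
void process(int I) {           // parameter named 'I' ...
  for (int I = 0; I < 4; ++I) { // ... shadowed by the loop variable
    // loop body
  }
  (void)I; // any later 'I' is the parameter, which is easy to misread;
           // renaming the parameter removes the ambiguity
}
```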
-
Ulrich Weigand authored
I ran into some test failures where common code changed vector division by constant into a multiply-high operation (MULHU). But these are not implemented by the back-end, so we failed to recognize the instruction. Fixed by marking MULHU/MULHS as Expand for vector types. llvm-svn: 214716
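For context, a small runnable sketch (illustrative constants) of why common code turns division by a constant into a multiply-high: MULHU computes the high half of the double-width product, which together with a shift implements the division.

```cpp
#include <cstdint>
#include <cstdio>

// MULHU: the high 32 bits of a 32x32 -> 64-bit unsigned multiply.
static uint32_t mulhu(uint32_t A, uint32_t B) {
  return static_cast<uint32_t>((static_cast<uint64_t>(A) * B) >> 32);
}

int main() {
  uint32_t X = 12345;
  // Magic-number division: X / 10 == mulhu(X, 0xCCCCCCCD) >> 3.
  uint32_t Q = mulhu(X, 0xCCCCCCCDu) >> 3;
  printf("%u\n", Q);  // prints 1234
  return Q != X / 10; // sanity check
}
```

If a backend has no instruction for the MULHU node, marking it Expand makes legalization rewrite it before selection ever sees it.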
-
Daniel Sanders authored
llvm-svn: 214715
-
Ulrich Weigand authored
This patch refactors code generation of vector comparisons. This fixes a wrong-code bug for ISD::SETGE for floating-point types, and improves generated code for vector comparisons in general. Specifically, the patch moves all logic deciding how to implement vector comparisons into getVCmpInst, which gets two extra boolean outputs indicating to its caller whether it needs to swap the input operands and/or negate the result of the comparison. Apart from implementing these two modifications as directed by getVCmpInst, there is no need to ever implement vector comparisons in any other manner; in particular, there is never a need to perform two separate comparisons (e.g. one for equal and one for greater-than, as the code used to do before this patch). Reviewed by Bill Schmidt. llvm-svn: 214714
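A hedged sketch of the interface shape described above (the real getVCmpInst lives in the PowerPC backend and its exact signature may differ): one function picks the comparison and tells the caller whether to swap operands and/or negate the result.

```cpp
#include <cstdio>

enum class VCmpPred { EQ, GT, GE /* ... */ };

// Hypothetical: return an opcode ID for the predicate, plus the two
// modifications the caller must apply.
static int getVCmpInstSketch(VCmpPred Pred, bool &SwapOps, bool &Negate) {
  SwapOps = false;
  Negate = false;
  switch (Pred) {
  case VCmpPred::EQ: return 0; // native "equal" compare
  case VCmpPred::GT: return 1; // native "greater-than" compare
  case VCmpPred::GE:           // a >= b  ==  !(b > a)
    SwapOps = true;
    Negate = true;
    return 1;
  }
  return -1;
}

int main() {
  bool Swap, Neg;
  int Opc = getVCmpInstSketch(VCmpPred::GE, Swap, Neg);
  printf("opcode=%d swap=%d negate=%d\n", Opc, Swap, Neg);
}
```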
-
Daniel Sanders authored
Summary: This patch also fixes an issue with the way the Mips assembler enables/disables architecture features. Before this patch, the assembler never disabled feature bits. For example, '.set mips64' followed by '.set mips32r2' would result in the OR of the mips64 and mips32r2 feature bits, which isn't right. Unfortunately this isn't trivial to fix, because there's no easy way to clear feature bits: the algorithm in MCSubtargetInfo (ToggleFeature) only clears the bits that imply the feature being cleared, not the bits implied by that feature (there's a fuller explanation in the code I added). Patch by Matheus Almeida and updated by Toma Tabacu Reviewers: vmedic, matheusalmeida, dsanders Reviewed By: dsanders Subscribers: tomatabacu, llvm-commits Differential Revision: http://reviews.llvm.org/D4123 llvm-svn: 214709
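A runnable sketch of the feature-bit problem (illustrative masks, not the real Mips feature encoding): OR-ing each directive's bits leaves stale bits behind when switching down to a smaller architecture.

```cpp
#include <cstdint>
#include <cstdio>

constexpr uint64_t FeatureMips32r2 = 1u << 0;
constexpr uint64_t FeatureMips64   = (1u << 1) | FeatureMips32r2; // implies 32r2

int main() {
  uint64_t Bits = 0;
  Bits |= FeatureMips64;   // .set mips64
  Bits |= FeatureMips32r2; // .set mips32r2 -- should drop the mips64 bit,
                           // but a plain OR keeps it set
  printf("mips64 bit still set: %d\n", (Bits & (1u << 1)) != 0);
  // prints 1: exactly the wrong 'OR' behaviour described above
}
```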
-
NAKAMURA Takumi authored
llvm-svn: 214708
-
Chandler Carruth authored
use of PACKUS. It's cleaner that way. I looked at implementing clever combine-based folding of PACKUS chains into PSHUFB but it is quite hard and doesn't seem likely to be worth it. The most annoying part would be detecting that the correct masking had been done to use PACKUS-style instructions as a blend operation rather than there being any saturating as is indicated by its name. We generate really nice code for what few test cases I've come up with that aren't completely contrived for this by just directly preferring PSHUFB, and so let's go with that strategy for now. =] llvm-svn: 214707
-
Chandler Carruth authored
patterns of v16i8 shuffles. This implements one of the more important FIXMEs for the SSE2 support in the new shuffle lowering. We now generate the optimal shuffle sequence for truncate-derived shuffles which show up essentially everywhere. Unfortunately, this exposes a weakness in other parts of the shuffle logic -- we can no longer form PSHUFB here. I'll add the necessary support for that and other things in a subsequent commit. llvm-svn: 214702
-
Benjamin Kramer authored
llvm-svn: 214700
-
Kevin Qin authored
This commit broke "make check" for several hours, so revert it. llvm-svn: 214697
-
NAKAMURA Takumi authored
On Cygwin, getpagesize() returns 64k (AllocationGranularity). In r214580, the size of X86GenInstrInfo.inc became 1499136. FIXME: We should reorganize getPageSize() on Win32 again. MapFile allocates addresses at AllocationGranularity, but the view is mapped by physical page. llvm-svn: 214681
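A Windows-only sketch of the two sizes being conflated: the allocation granularity (what mappings must be aligned to, 64 KiB) versus the hardware page size (typically 4 KiB).

```cpp
#include <windows.h>
#include <cstdio>

int main() {
  SYSTEM_INFO SI;
  GetSystemInfo(&SI);
  printf("page size:              %lu\n", SI.dwPageSize);              // e.g. 4096
  printf("allocation granularity: %lu\n", SI.dwAllocationGranularity); // e.g. 65536
}
```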
-
Chandler Carruth authored
I spent some time looking into a better or more principled way to handle this. For example, by detecting arbitrary "unneeded" ORs... But really, there wasn't any point. We just shouldn't build blatantly wrong code so late in the pipeline rather than adding more stages and logic later on to fix it. Avoiding this is just too simple. llvm-svn: 214680
-
Chandler Carruth authored
Fundamentally, there isn't a really portable way to test the constant pool contents. Instead, pin this test to the bare-metal triple. This also makes it a 64-bit triple which allows us to only match a single constant pool rather than two. It can also just hard code the '.' prefix as the format should be stable now that it has a fixed triple. Finally, I've switched it to use CHECK-NEXT to be more precise in the instruction sequence expected and to use variables rather than hard coding decisions by the register allocator. llvm-svn: 214679
-
Peter Zotov authored
llvm-svn: 214677
-
Peter Zotov authored
llvm-svn: 214676
-
Sanjay Patel authored
llvm-svn: 214674
-
Chandler Carruth authored
combines) until they are legal. Doing it the old way could, when the stars align *just* right, cause a node to get into the combine set prior to being legalized. Then, when the same node showed up as an operand to another node later on (but not so much later on that it had been deleted as dead) we would fail to add it back to the worklist thinking it had already been combined. This would in turn cause it to not be legalized. Fortunately, we can also walk the operands looking for uncombined (and thus potentially un-legalized) nodes late. It will still ensure that we walk all operands of all nodes and send all of them through both the legalizer without changes and the combiner at least once (which was the original goal of this). I have a test case for this bug, but it is terribly brittle. For example, it will stop finding the bug the moment I enable the new shuffle lowering. I don't yet have any test case that reliably exercises this bug, and it isn't clear that it will be possible to craft one. It is entirely possible that with the new shuffle lowering the two forms of doing this are precisely equivalent. That doesn't mean we shouldn't take the more conservative approach of insisting on things in the combined set having survived the legalizer. llvm-svn: 214673
-
Saleem Abdulrasool authored
GCC 4.8.2 points out the ambiguity in evaluation of the assertion condition: lib/Target/X86/X86FloatingPoint.cpp:949:49: warning: suggest parentheses around ‘&&’ within ‘||’ [-Wparentheses] assert(STReturns == 0 || isMask_32(STReturns) && N <= 2); llvm-svn: 214672
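One conventional way to silence -Wparentheses is to make the grouping explicit; a compilable sketch of that form, with stand-ins so the fragment builds in isolation (whether this matches the committed fix, later corrected in r214732, is an assumption):

```cpp
#include <cassert>

// Stand-in matching LLVM's MathExtras definition of isMask_32.
static bool isMask_32(unsigned Value) {
  return Value && ((Value + 1) & Value) == 0;
}

void check(unsigned STReturns, unsigned N) {
  // Explicit parentheses make the '&&'-inside-'||' grouping unambiguous.
  assert(STReturns == 0 || (isMask_32(STReturns) && N <= 2));
}

int main() { check(0x3, 2); }
```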
-
Saleem Abdulrasool authored
GCC 4.8.2 objects to the tautological condition in the assert as the unsigned value is guaranteed to be >= 0. Simplify the assertion by dropping the tautological condition. llvm-svn: 214671
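A sketch of the tautology with a hypothetical value name: for an unsigned quantity, the '>= 0' half of the check is always true, so only the upper bound carries information.

```cpp
#include <cassert>

void checkBound(unsigned N) {
  // Before: assert(N >= 0 && N <= 2);  // 'N >= 0' is tautological for unsigned
  assert(N <= 2);                       // after: only the meaningful bound remains
}

int main() { checkBound(1); }
```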
-
Sanjay Patel authored
This is intended to be the minimal change needed to fix PR20354 ( http://llvm.org/bugs/show_bug.cgi?id=20354 ). The check for a vector operation was wrong; we need to check that the fabs itself is not a vector operation. This patch will not generate the optimal code. A constant pool load and 'and' op will be generated instead of just returning a value that we can calculate in advance (as we do for the scalar case). I've put a 'TODO' comment for that here and expect to have that patch ready soon. There is a very similar optimization that we can do in visitFNEG, so I've put another 'TODO' there and expect to have another patch for that too. llvm-svn: 214670
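A worked sketch of the scalar-case point: fabs is just "clear the IEEE-754 sign bit", so for a constant operand the result is computable up front instead of emitting a constant-pool load plus an AND at run time.

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

static double fabsViaBits(double X) {
  uint64_t Bits;
  std::memcpy(&Bits, &X, sizeof(Bits));
  Bits &= ~(1ull << 63); // clear the sign bit -- the 'and' op mentioned above
  std::memcpy(&X, &Bits, sizeof(X));
  return X;
}

int main() {
  printf("%f\n", fabsViaBits(-2.5)); // prints 2.500000
}
```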
-
Gerolf Hoflehner authored
sequence - AArch64 target support This patch turns off madd/msub generation in the DAGCombiner and generates them in the MachineCombiner instead. It replaces the original code sequence with the combined sequence when it is beneficial to do so. When there is no machine model support it always generates the madd/msub instruction. This is also true when the objective is to optimize for code size: when the combined sequence is shorter, it is always chosen and does not get evaluated. When there is a machine model, the combined instruction sequence is evaluated for critical path and resource length using machine trace metrics, and the original code sequence is replaced when it is determined to be faster. rdar://16319955 llvm-svn: 214669
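An illustrative source pattern for the trade-off (assuming standard AArch64 codegen behaviour): a multiply feeding an add can be fused into a single madd, or left as separate mul and add instructions when the machine model says the split sequence is faster.

```cpp
#include <cstdint>

// On AArch64 this can lower either to "madd x0, x0, x1, x2" or to a
// separate "mul" plus "add", depending on which the combiner deems faster.
int64_t mulAdd(int64_t A, int64_t B, int64_t C) {
  return A * B + C;
}
```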
-
- Aug 03, 2014
-
Gerolf Hoflehner authored
sequence - target independent framework When the DAGCombiner selects instruction sequences it could increase the critical path or resource length. For example, on arm64 there are multiply-accumulate instructions (madd, msub). If, e.g., the equivalent multiply-add sequence is not on the critical path, it makes sense to select it instead of the combined, single accumulate instruction (madd/msub). The reason is that the conversion from add+mul to the madd could lengthen the critical path by the latency of the multiply. But the DAGCombiner would always combine and select the madd/msub instruction. This patch uses machine trace metrics to estimate critical path length and resource length of an original instruction sequence vs. a combined instruction sequence, and picks the faster code based on its estimates. This patch only commits the target independent framework that evaluates and selects code sequences. The machine instruction combiner is turned off for all targets and expected to evolve over time by gradually handling DAGCombiner patterns in the target specific code. This framework lays the groundwork for fixing rdar://16319955 llvm-svn: 214666
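A hedged sketch (hypothetical names, not the actual MachineCombiner code) of the selection rule this framework implements: prefer the combined sequence only when the trace-metric estimates say it is no slower.

```cpp
// Estimates produced by machine trace metrics for a candidate sequence.
struct SeqMetrics {
  unsigned CriticalPathLen;
  unsigned ResourceLen;
};

// Pick the combined sequence only if it does not lengthen the critical
// path, breaking ties on resource length.
static bool preferCombined(SeqMetrics Orig, SeqMetrics Combined) {
  if (Combined.CriticalPathLen != Orig.CriticalPathLen)
    return Combined.CriticalPathLen < Orig.CriticalPathLen;
  return Combined.ResourceLen <= Orig.ResourceLen;
}

int main() {
  SeqMetrics MulAdd{/*CriticalPathLen=*/5, /*ResourceLen=*/2};
  SeqMetrics Madd{/*CriticalPathLen=*/6, /*ResourceLen=*/1};
  return preferCombined(MulAdd, Madd); // 0: keep the split sequence
}
```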
-
Saleem Abdulrasool authored
This makes EmitWindowsUnwindTables a virtual function and lowers the implementation of the function to the X86WinCOFFStreamer. This method is a target-specific operation. This enables making the behaviour target-dependent by isolating it entirely to the target-specific streamer. llvm-svn: 214664
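A simplified sketch of the restructuring described (class names abbreviated; the real MC-layer interfaces carry much more state): the base streamer exposes a virtual no-op hook, and only the X86 Win/COFF streamer overrides it.

```cpp
// Base streamer: unwind-table emission is a no-op by default.
class StreamerSketch {
public:
  virtual ~StreamerSketch() = default;
  virtual void EmitWindowsUnwindTables() {}
};

// Target-specific streamer: the only place that actually emits anything.
class X86WinCOFFStreamerSketch : public StreamerSketch {
public:
  void EmitWindowsUnwindTables() override {
    // emit .pdata/.xdata for the pending functions (elided)
  }
};

int main() {
  X86WinCOFFStreamerSketch S;
  StreamerSketch &Base = S;
  Base.EmitWindowsUnwindTables(); // dispatches to the X86 override
}
```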
-