- Jul 16, 2013
-
Benjamin Kramer authored
llvm-svn: 186439
-
Hans Wennborg authored
llvm-svn: 186438
-
Craig Topper authored
llvm-svn: 186437
-
Manman Ren authored
llvm-svn: 186436
-
Jakob Stoklund Olesen authored
These floats all represented block frequencies anyway, so just use the BlockFrequency class directly. Some floating-point computations remain in tryLocalSplit(); they estimate spill weights, which are still floats. llvm-svn: 186435
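A minimal sketch of the idea, assuming a saturating fixed-point frequency type; the class name matches LLVM's, but the members shown here are illustrative, not the actual API:

    #include <cstdint>
    #include <limits>

    // Represent block frequencies as saturating 64-bit integers instead of
    // floats, so sums and comparisons are exact and overflow is well defined.
    class BlockFrequency {
      uint64_t Freq;
    public:
      explicit BlockFrequency(uint64_t F = 0) : Freq(F) {}
      uint64_t getFrequency() const { return Freq; }

      // Saturating addition: clamp at the maximum instead of wrapping.
      BlockFrequency &operator+=(BlockFrequency RHS) {
        uint64_t Before = Freq;
        Freq += RHS.Freq;
        if (Freq < Before) // overflow wrapped around
          Freq = std::numeric_limits<uint64_t>::max();
        return *this;
      }
      bool operator<(BlockFrequency RHS) const { return Freq < RHS.Freq; }
    };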
-
Jakob Stoklund Olesen authored
Original commit message: Remove floating point computations from SpillPlacement.cpp. Patch by Benjamin Kramer! Use the BlockFrequency class instead of floats in the Hopfield network computations. This rescales the node Bias field from a [-2;2] float range to two block frequencies BiasN and BiasP pulling in opposite directions. This construct has a more predictable behavior when block frequencies saturate. The per-node scaling factors are no longer necessary, assuming the block frequencies around a bundle are consistent. This patch can cause the register allocator to make different spilling decisions. The differences should be small. llvm-svn: 186434
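A hedged sketch of the two-bias construct described above; field and function names are assumptions for illustration, not the actual SpillPlacement code:

    #include <cstdint>

    // Replace one float Bias in [-2, 2] with two unsigned frequencies
    // pulling in opposite directions; each side saturates independently.
    struct Node {
      uint64_t BiasP = 0; // pull toward keeping the value in a register
      uint64_t BiasN = 0; // pull toward spilling

      // Saturating add keeps each pull well defined near the maximum.
      static uint64_t satAdd(uint64_t A, uint64_t B) {
        uint64_t S = A + B;
        return S < A ? ~uint64_t(0) : S;
      }

      // The node prefers a register if the positive pull dominates; no
      // per-node scaling factors are needed when the block frequencies
      // around a bundle are consistent.
      bool preferReg(uint64_t LinksP, uint64_t LinksN) const {
        return satAdd(BiasP, LinksP) > satAdd(BiasN, LinksN);
      }
    };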
-
Daniel Jasper authored
The fundamental concept is: Format as if the braced init list was a function call (with parentheses replaced by braces). If there is no name/type before the opening brace (e.g. if the braced list is nested), assume a zero-length identifier just before the opening brace. This behavior is gated on a new style flag, which for now replaces the SpacesInBracedLists style flag. Activate this style flag for Google style to reflect recent style guide changes. llvm-svn: 186433
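For illustration, this is the kind of output the new behavior produces (a sketch; exact output depends on the rest of the style options):

    #include <map>
    #include <vector>

    // Braced init lists formatted as if they were function calls with
    // braces instead of parentheses: no interior spaces, call-style wrapping.
    std::vector<int> v{1, 2, 3};
    auto pairs = std::map<int, const char *>{{1, "one"}, {2, "two"}};
    // A nested braced list behaves like a call on a zero-length identifier:
    int matrix[2][2] = {{1, 2}, {3, 4}};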
-
Juergen Ributzka authored
Use PMIN/PMAX for UGE/ULE vector comparisons to reduce the number of required instructions. This trick also works for UGT/ULT, but there is no advantage in doing so: it wouldn't reduce the number of instructions, and it would actually reduce performance. Reviewer: Ben. radar:5972691 llvm-svn: 186432
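A sketch of the trick with SSE2 intrinsics (illustrative, not the backend lowering code): for unsigned elements, a >= b exactly when max(a, b) == a, so UGE can be lowered as PMAXUB plus PCMPEQB instead of a longer sign-bit-flipping sequence.

    #include <emmintrin.h> // SSE2

    // Unsigned byte-wise a >= b: true lanes become 0xFF, false lanes 0x00.
    static inline __m128i cmp_uge_epu8(__m128i a, __m128i b) {
      // max(a, b) == a  <=>  a >= b (unsigned), with no sign fixup needed.
      return _mm_cmpeq_epi8(_mm_max_epu8(a, b), a);
    }

    // ULE is symmetric: min(a, b) == a  <=>  a <= b (unsigned).
    static inline __m128i cmp_ule_epu8(__m128i a, __m128i b) {
      return _mm_cmpeq_epi8(_mm_min_epu8(a, b), a);
    }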
-
Peter Collingbourne authored
Differential Revision: http://llvm-reviews.chandlerc.com/D1149 llvm-svn: 186431
-
Marshall Clow authored
llvm-svn: 186430
-
Juergen Ributzka authored
llvm-svn: 186429
-
Rui Ueyama authored
llvm-svn: 186428
-
Rui Ueyama authored
llvm-svn: 186427
-
Reid Kleckner authored
This is to support parsing UTF16 response files in LLVM/lib/Option for lld and clang. Reviewers: hans Differential Revision: http://llvm-reviews.chandlerc.com/D1138 llvm-svn: 186426
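A self-contained sketch of the conversion step, hand-rolled for illustration only (UTF-16LE input after the byte-order mark is stripped, Basic Multilingual Plane only, surrogate pairs omitted for brevity); the patch itself adds a proper conversion wrapper to LLVM's Support library:

    #include <cstdint>
    #include <string>

    // Convert a UTF-16LE buffer to UTF-8, covering only the BMP.
    std::string utf16leToUtf8(const uint8_t *Buf, size_t Len) {
      std::string Out;
      for (size_t I = 0; I + 1 < Len; I += 2) {
        uint16_t C = Buf[I] | (uint16_t(Buf[I + 1]) << 8);
        if (C < 0x80) {                       // 1-byte sequence
          Out += char(C);
        } else if (C < 0x800) {               // 2-byte sequence
          Out += char(0xC0 | (C >> 6));
          Out += char(0x80 | (C & 0x3F));
        } else {                              // 3-byte sequence
          Out += char(0xE0 | (C >> 12));
          Out += char(0x80 | ((C >> 6) & 0x3F));
          Out += char(0x80 | (C & 0x3F));
        }
      }
      return Out;
    }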
-
Hal Finkel authored
For safety, the inliner cannot decrease the alignment on an alloca when merging it with another. I've included two variants of the test case for this: one with DataLayout available, and one without. When DataLayout is not available, if only one of the allocas uses the default alignment (getAlignment() == 0), then they cannot be safely merged. llvm-svn: 186425
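A sketch of the merge-safety rule, as illustrative pseudologic rather than the inliner's exact code: merging may only increase the alignment a user can rely on, and without DataLayout an alignment of 0 means "ABI default", which cannot be compared against an explicit alignment.

    #include <cstdint>

    // Decide whether alloca B may share alloca A's storage.
    // Align == 0 means "default alignment", resolvable only with DataLayout.
    bool canMergeAllocas(uint64_t AlignA, uint64_t AlignB, bool HasDataLayout) {
      if (!HasDataLayout) {
        // Without DataLayout, 0 is unknown: merging an explicit alignment
        // with a default one could silently decrease effective alignment.
        if ((AlignA == 0) != (AlignB == 0))
          return false;
        return AlignA >= AlignB;
      }
      // With DataLayout, defaults resolve to concrete values first
      // (resolution elided in this sketch), then compare as below.
      return AlignA >= AlignB;
    }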
-
Dmitry Vyukov authored
Now it's possible to write more precise suppressions, e.g. "^foo$" won't match "blafoobar". llvm-svn: 186424
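For example, a suppressions file entry can now be anchored (a sketch of the matcher behavior):

    # Anchored: suppresses races in a function named exactly "foo",
    # but no longer matches "blafoobar".
    race:^foo$
    # Unanchored substring matching still works as before:
    race:foo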
-
Rafael Espindola authored
With this change, llvm-ar can remove the temporary file on Windows too. llvm-svn: 186423
-
Samuel Benzaquen authored
Summary: Add support for CXXCtorInitializer and TemplateArgument types to ASTNodeKind. This change is to support more matchers from clang/ASTMatchers/ASTMatchers.h in the dynamic layer (clang/ASTMatchers/Dynamic). Reviewers: klimek CC: cfe-commits Differential Revision: http://llvm-reviews.chandlerc.com/D1143 llvm-svn: 186422
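A hedged sketch of what this enables in the dynamic layer; the names follow the clang ASTMatchers API of that era, but treat the exact signatures as assumptions:

    #include "clang/AST/ASTTypeTraits.h"
    #include "clang/AST/DeclCXX.h"

    using clang::ast_type_traits::ASTNodeKind;

    // CXXCtorInitializer and TemplateArgument now have first-class kinds,
    // so dynamic matchers can be type-checked and dispatched on them.
    void example() {
      ASTNodeKind K =
          ASTNodeKind::getFromNodeKind<clang::CXXCtorInitializer>();
      (void)K.asStringRef(); // e.g. for diagnostics in the dynamic parser
    }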
-
Fariborz Jahanian authored
ArrayRef'ize the parameters of Sema::ActOnAtEnd. Patch by Robert Wilhelm. llvm-svn: 186421
-
Nadav Rotem authored
Process groups of stores in chunks of 16. llvm-svn: 186420
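A sketch of the chunking pattern, which bounds compile time by analyzing at most 16 candidate stores at once; this is illustrative, not the vectorizer's exact code, and "Analyze" is a placeholder for its pairing logic:

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <vector>

    using Store = void *; // stands in for the vectorizer's store handle

    // Analyze at most 16 candidate stores at a time so quadratic pairing
    // logic inside the callback stays cheap on very long store chains.
    void processInChunks(const std::vector<Store> &Stores,
                         const std::function<void(size_t, size_t)> &Analyze) {
      const size_t ChunkSize = 16;
      for (size_t Begin = 0; Begin < Stores.size(); Begin += ChunkSize) {
        size_t End = std::min(Begin + ChunkSize, Stores.size());
        Analyze(Begin, End); // half-open range [Begin, End)
      }
    }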
-
Hongbin Zheng authored
Ensure that the scalar write access that corresponds to the result of a load instruction appears after the generic read access that corresponds to that load instruction. llvm-svn: 186419
-
Hongbin Zheng authored
llvm-svn: 186418
-
Hongbin Zheng authored
llvm-svn: 186417
-
Aaron Watry authored
The assembly optimizations were making unsafe assumptions about which address spaces had which identifiers. Also, fix vload/vstore with 64-bit pointers. This was broken previously on Radeon SI. This version still only has assembly versions of int/uint 2/4/8/16 for global loads and stores on R600, but it does it in a way that would be very easily extended to private/local/constant and could also be handled easily on other architectures.
v2:
1) Leave v[load|store]_impl.ll in generic/lib
2) Remove vload_if.ll and vstore_if.ll interfaces
3) Fix address+offset calculations
4) Remove offset from assembly arg list
llvm-svn: 186416
-
Aaron Watry authored
This commit gets us back to pure CLC and fixes offset calculations. The next commit will re-enable the assembly implementation for R600, fix bugs related to 64-bit address spaces, and also fix the incorrect assumption that address space identifiers are the same in all architectures. llvm-svn: 186415
-
Rafael Espindola authored
llvm-svn: 186414
-
Reid Kleckner authored
llvm-svn: 186413
-
Manuel Klimek authored
llvm-svn: 186412
-
Manuel Klimek authored
As every match call can recursively call back into the memoized match via a nested traversal matcher (for example: stmt(hasAncestor(stmt(hasDescendant(stmt(hasDescendant(stmt()))))))), and every memoization step might clear the cache, we must not store iterators into the result cache when calling match on a submatcher. llvm-svn: 186411
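A sketch of the hazard and the fix, simplified to a string-keyed map; the real cache is keyed on the matcher and node:

    #include <map>
    #include <string>

    // Simplified memoization cache for match results.
    static std::map<std::string, bool> Cache;

    static bool matchImpl(const std::string &Key); // may recurse, mutating Cache

    static bool memoizedMatch(const std::string &Key) {
      auto It = Cache.find(Key);
      if (It != Cache.end())
        return It->second;
      // Hazard: holding `It` (or any iterator into Cache) across the call
      // below is unsafe, because nested traversal matchers re-enter this
      // function and a memoization step may clear the cache.
      bool Result = matchImpl(Key); // recursion can invalidate iterators
      Cache[Key] = Result;          // store via a fresh lookup instead
      return Result;
    }

    // Stand-in for real matcher logic: recurses one level for "outer".
    static bool matchImpl(const std::string &Key) {
      return Key == "outer" ? memoizedMatch("inner") : !Key.empty();
    }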
-
Alexey Samsonov authored
llvm-svn: 186410
-
Ulrich Weigand authored
[APFloat] PR16573: Avoid losing mantissa bits in ppc_fp128 to double truncation

When truncating to a format with fewer mantissa bits, APFloat::convert will perform a right shift of the mantissa by the difference of the precision of the two formats. Usually, this will result in just the mantissa bits needed for the target format.

One special situation is if the input number is denormal. In this case, the right shift may discard significant bits. This is usually not a problem, since truncating a denormal usually results in zero (underflow) after normalization anyway, because the target format's exponent range is usually smaller than the source format's.

However, there is one case where the latter property does not hold: when truncating from ppc_fp128 to double. In particular, truncating a ppc_fp128 whose first double of the pair is denormal should result in just that first double, not zero. The current code however performs an excessive right shift, resulting in lost result bits. This is then caught in the APFloat::normalize call performed by APFloat::convert and causes an assertion failure.

This patch checks for the scenario of truncating a denormal, and attempts to (possibly partially) replace the initial mantissa right shift by decrementing the exponent, if doing so will still result in a valid *target format* exponent.

Index: test/CodeGen/PowerPC/pr16573.ll
===================================================================
--- test/CodeGen/PowerPC/pr16573.ll (revision 0)
+++ test/CodeGen/PowerPC/pr16573.ll (revision 0)
@@ -0,0 +1,11 @@
+; RUN: llc < %s | FileCheck %s
+
+target triple = "powerpc64-unknown-linux-gnu"
+
+define double @test() {
+  %1 = fptrunc ppc_fp128 0xM818F2887B9295809800000000032D000 to double
+  ret double %1
+}
+
+; CHECK: .quad -9111018957755033591
+
Index: lib/Support/APFloat.cpp
===================================================================
--- lib/Support/APFloat.cpp (revision 185817)
+++ lib/Support/APFloat.cpp (working copy)
@@ -1956,6 +1956,23 @@
     X86SpecialNan = true;
   }
 
+  // If this is a truncation of a denormal number, and the target semantics
+  // has larger exponent range than the source semantics (this can happen
+  // when truncating from PowerPC double-double to double format), the
+  // right shift could lose result mantissa bits. Adjust exponent instead
+  // of performing excessive shift.
+  if (shift < 0 && isFiniteNonZero()) {
+    int exponentChange = significandMSB() + 1 - fromSemantics.precision;
+    if (exponent + exponentChange < toSemantics.minExponent)
+      exponentChange = toSemantics.minExponent - exponent;
+    if (exponentChange < shift)
+      exponentChange = shift;
+    if (exponentChange < 0) {
+      shift -= exponentChange;
+      exponent += exponentChange;
+    }
+  }
+
   // If this is a truncation, perform the shift before we narrow the storage.
   if (shift < 0 && (isFiniteNonZero() || category==fcNaN))
     lostFraction = shiftRight(significandParts(), oldPartCount, -shift);
llvm-svn: 186409
-
Alexey Samsonov authored
llvm-svn: 186408
-
Richard Osborne authored
Previously an asm operand with no operand modifier would give the error "invalid operand in inline asm". llvm-svn: 186407
-
Tim Northover authored
We'd forgotten to provide string representations for the special ARMISD atomic nodes; this adds them in. No effect on CodeGen, just makes the output of "-view-whatever-dags" slightly more readable. llvm-svn: 186406
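The change is just a name table; a sketch of the usual pattern follows (the opcode values and the exact node list here are assumptions for illustration):

    // Typical shape of a getTargetNodeName implementation: map each target
    // node opcode to a string so -view-*-dags and debug dumps are readable.
    const char *getTargetNodeNameSketch(unsigned Opcode) {
      switch (Opcode) {
      case 1: // stands in for ARMISD::ATOMADD64_DAG
        return "ARMISD::ATOMADD64_DAG";
      case 2: // stands in for ARMISD::ATOMSUB64_DAG
        return "ARMISD::ATOMSUB64_DAG";
      default:
        return nullptr; // unnamed nodes print as unknown
      }
    }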
-
Richard Sandiford authored
llvm-svn: 186405
-
Alexey Samsonov authored
llvm-svn: 186404
-
Vladimir Medic authored
llvm-svn: 186403
-
Daniel Jasper authored
This fixes an incorrect detection that led to a formatting error.
Before: some_var = function (*some_pointer_var)[0];
After: some_var = function(*some_pointer_var)[0];
llvm-svn: 186402
-
Richard Sandiford authored
CodeGen support will come later. llvm-svn: 186401
-
Dmitry Vyukov authored
Intercepting it makes it process pending signals before returning. llvm-svn: 186400
-