- Mar 23, 2017
-
Krzysztof Parzyszek authored
This is another step towards implementing register classes with parametrized register/spill sizes. Differential Revision: https://reviews.llvm.org/D31299 llvm-svn: 298652
-
Jessica Paquette authored
Remove an unused lambda capture that made some bots unhappy. llvm-svn: 298651
-
Alex Shlyapnikov authored
Summary: This change addresses https://github.com/google/sanitizers/issues/766. I tested the change with make check-asan and the newly added test case. Reviewers: ygribov, kcc, alekseyshl Subscribers: kubamracek, llvm-commits Patch by mrigger Differential Revision: https://reviews.llvm.org/D30384 llvm-svn: 298650
-
Reid Kleckner authored
Summary: When dumping these records from an object file section, we should use only one type database. However, when dumping from a PDB, we should use two: one for the type stream and one for the IPI stream.

Certain type records that normally live in the .debug$T object file section get moved over to the IPI stream of the PDB file and they get new indices. So far, I've noticed that the MSVC linker always moves these records into IPI:
- LF_FUNC_ID
- LF_MFUNC_ID
- LF_STRING_ID
- LF_SUBSTR_LIST
- LF_BUILDINFO
- LF_UDT_MOD_SRC_LINE

These records have index fields that can point into TPI or IPI. In particular, LF_SUBSTR_LIST and LF_BUILDINFO point to LF_STRING_ID records to describe compilation command lines.

I've modified the dumper to have an optional pointer to the item DB, and to do type name lookup of these fields in that DB. See printItemIndex. The result is that our pdbdump-headers.test is more faithful to the PDB contents and the output is less confusing.

Reviewers: ruiu
Subscribers: amccarth, zturner, llvm-commits
Differential Revision: https://reviews.llvm.org/D31309
llvm-svn: 298649
-
Jessica Paquette authored
The old candidate collection method in the outliner caused some very large regressions in compile time on large tests. For MultiSource/Benchmarks/7zip it caused a 284.07 s or 1156% increase in compile time. On average, using the SingleSource/MultiSource tests, it caused an average increase of 8 seconds in compile time (something like 1000%). This commit replaces that candidate collection method with a new one which only visits each node in the tree once. This reduces the worst compile time increase (still 7zip) to a 0.542 s overhead (22%) and the average compile time increase on SingleSource and MultiSource to 0.018 s (4%). llvm-svn: 298648
-
Dehao Chen authored
Summary: This is the test added for https://reviews.llvm.org/D31217 Reviewers: tejohnson, mehdi_amini Reviewed By: tejohnson Subscribers: cfe-commits, Prazek Differential Revision: https://reviews.llvm.org/D31219 llvm-svn: 298647
-
Dehao Chen authored
Summary: Loop unrolling and icp will make the sample profile annotation much harder in the backend, so disable these 2 optimizations in the ThinLTO compile phase. Will add a test in cfe in a separate patch. Reviewers: tejohnson Reviewed By: tejohnson Subscribers: mehdi_amini, llvm-commits, Prazek Differential Revision: https://reviews.llvm.org/D31217 llvm-svn: 298646
-
Richard Smith authored
llvm-svn: 298645
-
Craig Topper authored
[InstCombine] Remove some code from visitAnd that dealt with trying to reduce the LHS of a sub to 0. Now that we call ShrinkDemandedConstant on the RHS of the sub, this should be fully handled by SimplifyDemandedInstructionBits. This code doesn't trigger on any in-tree regressions, but it did before ShrinkDemandedConstant was added to the RHS. llvm-svn: 298644
-
Erich Keane authored
GCC has the alloc_align attribute, which is similar to assume_aligned, except that the attribute's parameter is the index of the integer parameter that specifies the alignment of the returned pointer. These are the required LLVM changes to make this happen. Differential Revision: https://reviews.llvm.org/D29598 llvm-svn: 298643
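For context, a minimal usage sketch of the attribute (the wrapper function below is hypothetical, not part of the patch): the attribute argument is the 1-based index of the parameter that supplies the alignment of the returned pointer.

  #include <stdlib.h>

  // alloc_align(2): parameter 2 carries the alignment of the returned pointer,
  // so callers and the optimizer may assume the result is aligned accordingly.
  __attribute__((alloc_align(2)))
  void *my_alloc(size_t size, size_t alignment) {
    void *p = nullptr;
    // posix_memalign (POSIX) wants a power-of-two alignment that is a multiple of sizeof(void*).
    if (posix_memalign(&p, alignment, size) != 0)
      return nullptr;
    return p;
  }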
-
Adrian Prantl authored
It is not guaranteed that the memory used for MachineBasicBlocks in the previous MachineFunction hasn't been freed, so holding on to a pointer to the last function's blocks isn't correct. In particular, I have observed the sret.ll testcase failing because the first BasicBlock in the new function happened to be allocated at the exact same memory as the previously saved (and since deleted) PrevInstBB. llvm-svn: 298642
-
Gil Rapaport authored
The new test asserts that scalarized memory operations get memcheck metadata added even if the loop is only unrolled. Differential Revision: https://reviews.llvm.org/D30972 llvm-svn: 298641
-
Anna Thomas authored
Using AssemblyAnnotationWriter, the LVI printer prints annotations for instructions and basic blocks only. So, we explicitly need to print LVI info for the arguments of the function (these are values and not instructions). llvm-svn: 298640
-
Teresa Johnson authored
Summary: Clang companion patch to LLVM patch D31027, which adds support for emitting a minimized bitcode file for use in the thin link step. Add a cc1 option -fthin-link-bitcode=<file> to trigger this behavior. Depends on D31027. Reviewers: mehdi_amini, pcc Subscribers: cfe-commits, Prazek Differential Revision: https://reviews.llvm.org/D31050 llvm-svn: 298639
-
Teresa Johnson authored
Summary: The cumulative size of the bitcode files for a very large application can be huge, particularly with -g. In a distributed build environment, all of these files must be sent to the remote build node that performs the thin link step, and this can exceed size limits. The thin link actually only needs the summary along with a bitcode symbol table. Until we have a proper bitcode symbol table, simply stripping the debug metadata results in significant size reduction.

Add support for an option to additionally emit minimized bitcode modules, just for use in the thin link step, which for now just strips all debug metadata. I plan to add a cc1 option so this can be invoked easily during the compile step.

However, care must be taken to ensure that these minimized thin link bitcode files produce the same index as with the original bitcode files, as these original bitcode files will be used in the backends. Specifically:

1) The module hash used for caching is typically produced by hashing the written bitcode, and we want to include the hash that would correspond to the original bitcode file. This is because we want to ensure that changes in the stripped portions affect caching. Added plumbing to emit the same module hash in the minimized thin link bitcode file.

2) The module paths in the index are constructed from the module ID of each thin linked bitcode, which typically is automatically generated from the input file path. This is the path used for finding the modules to import from, and obviously we need this to point to the original bitcode files. Added gold-plugin support to take a suffix replacement during the thin link that is used to override the identifier on the MemoryBufferRef constructed from the loaded thin link bitcode file. The assumption is that the build system can specify that the minimized bitcode file has a name that is similar but uses a different suffix (e.g. out.thinlink.bc instead of out.o).

Added various tests to ensure that we get identical index files out of the thin link step.

Reviewers: mehdi_amini, pcc
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D31027
llvm-svn: 298638
-
Eric Christopher authored
llvm-svn: 298637
-
Kostya Kortchinsky authored
Summary: Scudo didn't have any test using multiple threads. Add one, borrowed from lsan. Reviewers: kcc, alekseyshl Reviewed By: alekseyshl Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D31297 llvm-svn: 298636
-
Erich Keane authored
Correct class-template deprecation behavior

Based on the comment in the test, and my reading of the standard, a deprecated warning should be issued in the following case:

  template<typename T>
  [[deprecated]] class Foo{};

  Foo<int> f;

This was not the case, because the ClassTemplateSpecializationDecl creation did not also copy the deprecated attribute. Note: I did NOT audit the complete set of attributes to see WHICH ones should be copied, so instead I simply copy ONLY the deprecated attribute.

Previous DiffRev: https://reviews.llvm.org/D27486, was reverted. This patch fixes the issues brought up here by the reverter: https://reviews.llvm.org/rL298410

Differential Revision: https://reviews.llvm.org/D31245
llvm-svn: 298634
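Spelled out as a compilable reproduction sketch (a hypothetical translation unit, not the patch's test; the attribute is placed after the class-key here, and the warning comment reflects the expected behavior):

  template <typename T>
  class [[deprecated]] Foo {};

  int main() {
    Foo<int> f;  // with this fix, the implicit specialization inherits the attribute
                 // and Clang emits a -Wdeprecated-declarations warning here
    (void)f;
    return 0;
  }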
-
Nirav Dave authored
Summary: Fixes pr32329. Reviewers: spatel, craig.topper Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D31286 llvm-svn: 298633
-
Rui Ueyama authored
llvm-svn: 298632
-
Zhaoshi Zheng authored
Given the case below:

  %y = shl %x, n
  %z = ashr %y, m

when n = m, SCEV models it as sext(trunc(x)). This patch tries to handle the case where n > m by using sext(mul(trunc(x), 2^(n-m))) as the SCEV expression. llvm-svn: 298631
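A small brute-force check of that algebra (my own sketch, not code from the patch), using W = 16, n = 5, m = 3 and helpers that mimic shl/ashr/trunc/sext on 16-bit values; it assumes the usual two's-complement, arithmetic-right-shift behavior:

  #include <cassert>
  #include <cstdint>

  // shl on i16 with wraparound
  static int16_t shl16(int16_t v, int s) {
    return static_cast<int16_t>(static_cast<uint16_t>(v) << s);
  }
  // arithmetic shift right on i16
  static int16_t ashr16(int16_t v, int s) { return static_cast<int16_t>(v >> s); }
  // sext(trunc(v, bits)): keep the low `bits` bits, then sign-extend back to 16 bits
  static int16_t sext_trunc(int16_t v, int bits) {
    return ashr16(shl16(v, 16 - bits), 16 - bits);
  }

  int main() {
    const int n = 5, m = 3;  // the n > m case
    for (int x = 0; x < 65536; ++x) {
      int16_t v = static_cast<int16_t>(x);
      int16_t lhs = ashr16(shl16(v, n), m);              // ashr(shl(x, n), m)
      int16_t rhs = sext_trunc(shl16(v, n - m), 16 - m); // sext(mul(trunc(x), 2^(n-m)))
      assert(lhs == rhs);
    }
    return 0;
  }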
-
Zhaoshi Zheng authored
llvm-svn: 298630
-
Zhaoshi Zheng authored
llvm-svn: 298629
-
Eric Christopher authored
stored on X86TargetLowering. llvm-svn: 298628
-
Eric Christopher authored
llvm-svn: 298627
-
Adrian McCarthy authored
Reverting until I can figure out the root cause. Revert "Re-land: Make NativeExeSymbol a concrete subclass of NativeRawSymbol [PDB]" This reverts commit f461a70cc376f0f91c8b4917be79479cc86330a5. llvm-svn: 298626
-
Adrian McCarthy authored
Use the -color-output option explicitly to eliminate the ANSI color codes in pdb-native-summary.test. (The default should have done this.) llvm-svn: 298625
-
Pirama Arumuga Nainar authored
Summary: The true and false operands for the CMOV are operands 0 and 1. ARMISelLowering.cpp::computeKnownBits was looking at operands 1 and 2 instead. This can cause CMOV instructions to be incorrectly folded into BFI if the value set by the CMOV is another CMOV, whose known bits are computed incorrectly. This patch fixes the issue and adds a test case. Reviewers: kristof.beyls, jmolloy Subscribers: llvm-commits, aemerson, srhines, rengolin Differential Revision: https://reviews.llvm.org/D31265 llvm-svn: 298624
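To illustrate the rule at play (a generic sketch, not the ARM backend code): the bits known for a CMOV/select are only those on which both value operands agree, so reading the wrong pair of operands produces bogus known bits, which is what allowed the bad BFI fold.

  #include <cassert>
  #include <cstdint>

  struct KnownBits16 {
    uint16_t Zero;  // bits known to be 0
    uint16_t One;   // bits known to be 1
  };

  // Known bits of a two-way select: intersect what is known about the two values
  // that can actually be chosen (operands 0 and 1 of the CMOV node).
  KnownBits16 knownBitsOfSelect(KnownBits16 T, KnownBits16 F) {
    return { static_cast<uint16_t>(T.Zero & F.Zero),
             static_cast<uint16_t>(T.One & F.One) };
  }

  int main() {
    KnownBits16 T{static_cast<uint16_t>(~0x00FF), 0x00FF};  // arm 1: constant 0x00FF
    KnownBits16 F{static_cast<uint16_t>(~0x000F), 0x000F};  // arm 2: constant 0x000F
    KnownBits16 K = knownBitsOfSelect(T, F);
    assert(K.One == 0x000F);                           // low nibble is 1 in both arms
    assert(K.Zero == static_cast<uint16_t>(~0x00FF));  // high byte is 0 in both arms
    return 0;
  }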
-
Adrian McCarthy authored
The new test should pass on all platforms now that llvm-pdbdump has the `-color-output` option. This moves exe symbol-specific method implementations out of NativeRawSymbol into a concrete subclass. Also adds implementations for hasCTypes and hasPrivateSymbols and a simple test to ensure the native reader can access the summary information for the executable from the PDB. Original Differential Revision: https://reviews.llvm.org/D31059 llvm-svn: 298623
-
Argyrios Kyrtzidis authored
Even if we exclude plain reference occurrences, we should include relation-based references, like the 'base' one. rdar://31010737 llvm-svn: 298622
-
Marek Kurdej authored
Summary: This further improves r295303: [clang-tidy] Ignore spaces between globs in the Checks option. It now trims all whitespace, not only spaces, and correctly computes the offset of the checks list (taking the size before trimming). Reviewers: alexfh Reviewed By: alexfh Subscribers: cfe-commits, JDevlieghere Differential Revision: https://reviews.llvm.org/D30567 llvm-svn: 298621
-
Matthew Simpson authored
This patch adds support for vectorizing GEPs. Previously, we only generated vector GEPs on-demand when creating gather or scatter operations. All GEPs from the original loop were scalarized by default, and if a pointer was to be stored to memory, we would have to build up the pointer vector with insertelement instructions.

With this patch, we will vectorize all GEPs that haven't already been marked for scalarization. The patch refines collectLoopScalars to more exactly identify the scalar GEPs; the function now more closely resembles collectLoopUniforms. The patch also moves vector GEP creation out of vectorizeMemoryInstruction and into the main vectorization loop, so the vector GEPs needed for gather and scatter operations will have already been generated before vectorizing the memory accesses.

Differential Revision: https://reviews.llvm.org/D30710
llvm-svn: 298620
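As a concrete illustration (an assumed example, not taken from the patch) of a loop where a GEP's result is itself the stored value, so the vectorizer needs a vector of pointers rather than rebuilding one with insertelement instructions:

  // When this loop is vectorized, the values written to `out` form a vector of
  // pointers; a vectorized GEP produces that vector directly.
  void collect_ptrs(int **out, int *base, int n) {
    for (int i = 0; i < n; ++i)
      out[i] = &base[4 * i];  // the address &base[4*i] is the stored value itself
  }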
-
Alexander Kornienko authored
Fixes https://bugs.llvm.org/show_bug.cgi?id=27641. llvm-svn: 298619
-
Marshall Clow authored
llvm-svn: 298618
-
Michael Kruse authored
Provide a common way for testing if a statement contains something, for both region and block statements. First user is RegionGenerator::addOperandToPHI. Suggested-by: Tobias Grosser <tobias@grosser.es> llvm-svn: 298617
-
Simon Pilgrim authored
Add support for widening narrow shuffle masks so we can directly extract from the relevant input vector of the shuffle. llvm-svn: 298616
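A rough sketch of the idea (my own illustration, not the DAG-combine code): if every adjacent pair of narrow mask elements selects an aligned, consecutive pair of source elements, the mask can be rewritten over elements twice as wide, which makes it straightforward to see which single input vector an extraction should read from.

  #include <array>
  #include <optional>

  // Widen an 8-element shuffle mask over narrow elements into a 4-element mask
  // over elements of twice the width. Returns nullopt if the mask doesn't pair up.
  std::optional<std::array<int, 4>> widenMask(const std::array<int, 8> &M) {
    std::array<int, 4> Wide{};
    for (int i = 0; i < 4; ++i) {
      int Lo = M[2 * i], Hi = M[2 * i + 1];
      if (Lo < 0 || Lo % 2 != 0 || Hi != Lo + 1)  // need an aligned, consecutive pair
        return std::nullopt;
      Wide[i] = Lo / 2;
    }
    return Wide;
  }

  // Example: the mask {8,9,10,11,12,13,14,15} (all of the second input) widens to
  // {4,5,6,7}, so the shuffle reads entirely from one input vector.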
-
Matthew Simpson authored
The code for generating scalar base pointers in vectorizeMemoryInstruction is not needed. We currently scalarize all GEPs and maintain the scalarized values in VectorLoopValueMap. The GEP cloning in this unneeded code is the same as that in scalarizeInstruction. The test cases that changed as a result of this patch did so because we were able to reuse the scalarized GEP that we previously generated instead of cloning a new one. Differential Revision: https://reviews.llvm.org/D30587 llvm-svn: 298615
-
Tim Shen authored
Summary: Add tests for all atomic operations for powerpc64le, so that all changes can be easily examined. Reviewers: kbarton, hfinkel, echristo Subscribers: mehdi_amini, nemanjai, llvm-commits Differential Revision: https://reviews.llvm.org/D31285 llvm-svn: 298614
-
Alex Shlyapnikov authored
Summary: sysconf(_SC_PAGESIZE) is called very early during sanitizer init, and any instrumented code (the sysconf() wrapper/interceptor will likely be instrumented) that calls back into the sanitizer before init is done will most surely crash. 2nd attempt, now with glibc version checks (D31092 was reverted). Reviewers: eugenis Subscribers: kubamracek, llvm-commits Differential Revision: https://reviews.llvm.org/D31221 llvm-svn: 298613
-
Derek Schuff authored
This fix is a follow-up to a previous change that stored value types as signed integers in memory. In the future, once the yaml<->wasm binary patch lands, we can add test coverage for this kind of thing. Differential Revision: https://reviews.llvm.org/D31227 Patch by Sam Clegg llvm-svn: 298612
-