- Feb 03, 2015
-
Zachary Turner authored
llvm-svn: 227998
-
DeLesley Hutchins authored
These checks detect potential deadlocks caused by inconsistent lock ordering. The checks are implemented under the -Wthread-safety-beta flag. llvm-svn: 227997
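For context, a minimal sketch of the inconsistent ordering such checks are after; the annotated Mutex wrapper below is hypothetical, though the attribute spellings are Clang's real thread safety annotations:

```cpp
// Hypothetical annotated mutex; Clang's thread safety attributes are real,
// but this wrapper type is illustrative only.
struct __attribute__((capability("mutex"))) Mutex {
  void lock() __attribute__((acquire_capability())) { /* acquire elided */ }
  void unlock() __attribute__((release_capability())) { /* release elided */ }
};

Mutex mu1;
Mutex mu2 __attribute__((acquired_after(mu1)));  // declared order: mu1, then mu2

void consistent() {
  mu1.lock();
  mu2.lock();    // matches the declared order
  mu2.unlock();
  mu1.unlock();
}

void inverted() {
  mu2.lock();
  mu1.lock();    // opposite order: the potential-deadlock pattern flagged here
  mu1.unlock();
  mu2.unlock();
}
```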
-
Greg Fitzgerald authored
This reverts r227994 llvm-svn: 227996
-
Colin LeMahieu authored
llvm-svn: 227995
-
Greg Fitzgerald authored
Before this patch, the CMake build assumed LIT_EXECUTABLE pointed to a Python script, not an executable. If you were to pass in an executable, such as the result of py2exe on lit.py, the build would fall over. With this patch, the CMake build assumes LIT_EXECUTABLE is an executable. You can continue setting it to lit.py, but it will now use its shebang to find a Python interpreter. Differential Revision: http://reviews.llvm.org/D7315 llvm-svn: 227994
-
Colin LeMahieu authored
llvm-svn: 227993
-
Adam Nemet authored
LoopVectorizationLegality::{getNumLoads,getNumStores} should forward to LoopAccessAnalysis now. Thanks to Takumi for noticing this! llvm-svn: 227992
-
Jingyue Wu authored
making the style consistent with the rest llvm-svn: 227991
-
Marek Olsak authored
This can happen when a REV instruction is commuted.

The trick is not to define the _vi versions of instructions, which has these consequences:
- code generation will always fail if a pseudo cannot be lowered (very useful to catch bugs where an unsupported instruction somehow makes it to the printer)
- ability to query if a pseudo can be lowered, which is done in commuteOpcode to prevent REV from commuting to non-REV on VI

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227990
-
Marek Olsak authored
The getCommute* functions are only used with pseudos, so this commit doesn't change anything. The issue with missing non-rev versions of shift instructions on VI will be fixed separately.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227989
-
Marek Olsak authored
- V_MAC_LEGACY_F32 exists on VI, but it's VOP3-only.
- Define CVT_PK opcodes which are different between SI and VI. These are unused. The idea is to define all chip differences.

v2: keep V_MUL_LO_U32

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227988
-
Marek Olsak authored
These are VOP2 on SI and VOP3 on VI, and their pseudos are neither, which can be a problem. In order to make isVOP2 and isVOP3 queries behave as expected, the encoding must be determined first. This doesn't fix any known issue, but better safe than sorry.

v2: add and use getMCOpcodeFromPseudo

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227987
-
Marek Olsak authored
This fixes a hang when using an empty geometry shader.

v2:
- don't add s_nop when followed by s_waitcnt
- cosmetic changes

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227986
-
Sanjay Patel authored
r224330 introduced a bug by misinterpreting the "FeatureVectorUAMem" bit. The commit log says that change did not affect anything, but that's not correct. That change allowed SSE instructions to have unaligned mem operands folded into math ops, and that's not allowed in the default specification for any SSE variant. The bug is exposed when compiling for an AVX-capable CPU that had this feature flag but without enabling AVX codegen. Another mistake in r224330 was not adding the feature flag to all AVX CPUs; the AMD chips were excluded.

This is part of the fix for PR22371 ( http://llvm.org/bugs/show_bug.cgi?id=22371 ). This feature bit is SSE-specific, so I've renamed it to "FeatureSSEUnalignedMem". Changed the existing test case for the feature bit to reflect the new name and renamed the test file itself to better reflect the feature. Added runs to fold-vex.ll to check for the failing codegen.

Note that the feature bit is not set by default on any CPU because it may require a configuration register setting to enable the enhanced unaligned behavior. llvm-svn: 227983
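As a sketch of the folding rule at issue (the function name is hypothetical): under plain SSE, an unaligned load must stay a separate instruction rather than being folded into a math op's memory operand, since folded SSE memory operands require 16-byte alignment.

```cpp
#include <xmmintrin.h>

// When targeting a plain-SSE CPU, the unaligned load should remain a movups;
// folding it into addps' memory operand would require 16-byte alignment that
// the default SSE spec does not guarantee here.
__m128 add_from_unaligned(const float *p, __m128 v) {
  __m128 loaded = _mm_loadu_ps(p);  // unaligned load
  return _mm_add_ps(loaded, v);     // must not fold the load on plain SSE
}
```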
-
Nico Weber authored
Thou shall not jump into SEH blocks. Jumping out of SEH __try and __excepts is A-ok. Jumping out of __finally blocks is B-ok (msvc doesn't error about it, but warns that it has undefined behavior). I've checked that clang's behavior with this patch matches msvc's behavior. We don't have the warning on jumping out of a __finally yet, see the FIXME in the test. clang also currently crashes on codegen for a jump out of a __finally block, see PR22414 comment 7.

I also added a few tests for the interaction of indirect jumps and SEH blocks. MSVC doesn't support indirect jumps, so there's no way to know if clang behaves the same way as msvc here. clang's behavior with this patch does make sense to me, but maybe it could be argued that it should be more permissive (see FIXME in the indirect jump tests -- shout if you have an opinion on this). llvm-svn: 227982
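A minimal sketch of the rules being enforced (MSVC SEH extensions; the function and labels are hypothetical):

```cpp
void jumps(int n) {
  __try {
    if (n) goto done;  // OK: jumping out of a __try is allowed
  } __except (1) {
  }

  // goto inside;      // rejected: jumping INTO an SEH block
  __try {
    // inside: ;
  } __finally {
    // goto done;      // MSVC allows this with an undefined-behavior warning;
                       // clang's version of that warning is still a FIXME
  }
done:
  return;
}
```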
-
Bill Schmidt authored
llvm-svn: 227981
-
Bill Schmidt authored
llvm-svn: 227980
-
Rafael Espindola authored
Patch by İsmail Dönmez! llvm-svn: 227979
-
Bill Schmidt authored
llvm-svn: 227978
-
Bill Schmidt authored
llvm-svn: 227977
-
Bill Schmidt authored
This patch is a third attempt to properly handle the local-dynamic and global-dynamic TLS models.

In my original implementation, calls to __tls_get_addr were hidden from view until the asm-printer phase, at which point the underlying branch-and-link instruction was created with proper relocations. This mostly worked well, but I used some repellent techniques to ensure that the TLS_GET_ADDR nodes at the SD and MI levels correctly received input from GPR3 and produced output into GPR3. This proved to work badly in the presence of multiple TLS variable accesses, with the copies to and from GPR3 being scheduled incorrectly and generally creating havoc.

In r221703, I addressed that problem by representing the calls to __tls_get_addr as true calls during instruction lowering. This had the advantage of removing all of the bad hacks and relying on the existing call machinery to properly glue the copies in place. It looked like this was going to be the right way to go.

However, as a side effect of the recent discovery of problems with linker optimizations for TLS, we discovered cases of suboptimal code generation with this strategy. The problem comes when tls_get_addr is called for the same address, and there is a resulting CSE opportunity. It turns out that in such cases MachineCSE will common the addis/addi instructions that set up the input value to tls_get_addr, but will not common the calls themselves. MachineCSE does not have any machinery to common idempotent calls. This is perfectly sensible, since presumably this would be done at the IR level, and introducing calls in the back end isn't commonplace. In any case, we end up with two calls to __tls_get_addr when one would suffice, and that isn't good.

I presumed that the original design would have allowed commoning of the machine-specific nodes that hid the __tls_get_addr calls, so as suggested by Ulrich Weigand, I went back to that design and cleaned it up so that the copies were properly held together by glue nodes. However, it turned out that this didn't work either: the presence of copies to physical registers kept the machine-specific nodes from being commoned also.

All of which leads to the design presented here. This is a return to the original design, except that no attempt is made to introduce copies to and from GPR3 during instruction lowering. Virtual registers are used until prior to register allocation. At that point, a special pass is run that identifies the machine-specific nodes that hide the tls_get_addr calls and introduces the copies to and from GPR3 around them. The register allocator then coalesces these copies away. With this design, MachineCSE succeeds in commoning tls_get_addr calls where possible, and we get nice optimal code generation (better than GCC at the moment, which does not common these calls).

One additional problem must be dealt with: after introducing the mentions of the physical register GPR3, the aggressive anti-dependence breaker sees opportunities to improve scheduling by selecting a different register instead. Flags must be used on the instruction descriptions to tell the anti-dependence breaker to keep its hands in its pockets.

One thing missing from the original design was recording a definition of the link register on the GET_TLS_ADDR nodes. Doing this was found to be insufficient to force a stack frame to be created, which led to looping behavior because two different LR values were stored at the same address. This appears to have been an oversight in PPCFrameLowering::determineFrameLayout(), which is repaired here. Because MustSaveLR() returns true for calls to __builtin_return_address, this changed the expected behavior of test/CodeGen/PowerPC/retaddr2.ll, which now stacks a frame but formerly did not. I've fixed the test case to reflect this.

There are existing TLS tests to catch regressions; the checks in test/CodeGen/PowerPC/tls-store2.ll proved to be too restrictive in the face of instruction scheduling with these changes, so I fixed that up. I've added a new test case based on the PrettyStackTrace module that demonstrated the original problem. This checks that we get correct code generation and that CSE of the calls to __tls_get_addr has taken place. llvm-svn: 227976
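To make the CSE point concrete, a sketch (variable and function hypothetical) of code where one __tls_get_addr call should now suffice:

```cpp
// Compiled with -fPIC for PowerPC, accesses to tls_counter use the
// global-dynamic model: an addis/addi address setup plus a call to
// __tls_get_addr.
__thread int tls_counter;

int bump_and_read(int x) {
  tls_counter += x;      // first access: sets up and calls __tls_get_addr
  if (x > 0)
    return tls_counter;  // second access: MachineCSE should now common the
                         // call instead of emitting a second one
  return 0;
}
```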
-
Rafael Auler authored
Currently, no one owns script::Parser buffers, yet ELFLinkingContext gets updated with StringRef pointers to data inside Parser buffers. Since this buffer is locally owned inside GnuLdDriver::evalLinkerScript(), as soon as this function finishes, all pointers in ELFLinkingContext that come from linker scripts become invalid. The problem is that we need someone to own the linker script data structures and, since ELFLinkingContext transports references to linker script data, we can simply make it also own all linker script data. Differential Revision: http://reviews.llvm.org/D7323 llvm-svn: 227975
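A minimal sketch of the dangling-reference hazard (the parser type and script text are hypothetical; only StringRef is the real LLVM type):

```cpp
#include "llvm/ADT/StringRef.h"
#include <string>

struct ScriptParser {             // stand-in for script::Parser
  std::string buffer;             // owns the linker script text
  llvm::StringRef entryName() {   // views into buffer, does not copy
    return llvm::StringRef(buffer).substr(0, 4);
  }
};

llvm::StringRef dangling() {
  ScriptParser parser{"main SECTIONS { /* ... */ }"};
  return parser.entryName();      // parser (and buffer) die here: dangling view
}
// The fix is the moral equivalent of having ELFLinkingContext own the parser,
// and thus the buffer, for as long as the StringRefs are in use.
```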
-
Eric Fiselier authored
llvm-svn: 227974
-
Eric Fiselier authored
Summary: This patch just adds the variable templates in <experimental/system_error>.
See: https://rawgit.com/cplusplus/fundamentals-ts/v1/fundamentals-ts.html#syserror

Reviewers: jroelofs, danalbert, K-ballo, mclow.lists
Reviewed By: mclow.lists
Subscribers: chandlerc, cfe-commits
Differential Revision: http://reviews.llvm.org/D7353
llvm-svn: 227973
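A minimal sketch of the _v variable templates the TS specifies for <experimental/system_error>, shown in a plain namespace rather than the library's own:

```cpp
#include <system_error>

namespace sketch {
template <class T>
constexpr bool is_error_code_enum_v = std::is_error_code_enum<T>::value;
template <class T>
constexpr bool is_error_condition_enum_v =
    std::is_error_condition_enum<T>::value;
}

static_assert(!sketch::is_error_code_enum_v<int>,
              "int is not an error code enum");
static_assert(sketch::is_error_condition_enum_v<std::errc>,
              "std::errc is an error condition enum");
```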
-
Sanjay Patel authored
This test was checking for lack of a "movaps" (an aligned load) rather than a "movups" (an unaligned load). It also included a store which complicated the checking. Add specific CPU runs to prevent subtarget feature flag overrides from inhibiting this optimization. llvm-svn: 227972
-
Jon Roelofs authored
EricWF has updated the compilers on his buildbots. Hopefully they won't crash now. llvm-svn: 227971
-
Tobias Grosser authored
llvm-svn: 227970
-
Bruno Cardoso Lopes authored
Improve EXTRACT_VECTOR_ELT DAG combine to catch conversion patterns between x86mmx and i32 with more layers of indirection.

Before:
  movq2dq %mm0, %xmm0
  movd %xmm0, %eax

After:
  movd %mm0, %eax

llvm-svn: 227969
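A hedged guess at the kind of source that reaches this pattern (the function is hypothetical): extracting the low 32 bits of an MMX value.

```cpp
#include <mmintrin.h>

int low32(__m64 v) {
  return _mm_cvtsi64_si32(v);  // with the improved combine, ideally a single
                               // movd %mm0, %eax
}
```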
-
Alexander Potapenko authored
llvm-svn: 227968
-
Alexander Potapenko authored
and make them global so that they're not removed by `strip -x`. llvm-svn: 227967
-
Renato Golin authored
Also, disabling BuiltinLongJmpTest, as it fails for ARM and PPC as well. Patch by Christophe Lyon. llvm-svn: 227966
-
Renato Golin authored
For the time being, it is still hardcoded to support only the 39 VA bits variant. I plan to work on supporting the 42 and 48 VA bits variants, but I don't have access to such hardware at the moment. Patch by Christophe Lyon. llvm-svn: 227965
-
Hafiz Abid Qadeer authored
On Windows, the signal handler is reset to default once a signal is received. This causes pressing ctrl-c twice on the console (or pressing the suspend button twice in the Eclipse IDE, which uses SIGINT to stop the debuggee) to crash lldb-mi on Windows. There is a very tiny window (after the signal handler is called and before we restore the handler) where the default handler will be in place, but this is hardly a problem in practice, as IDEs generally disable their suspend button once it has been pressed. llvm-svn: 227964
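The workaround described here is the classic re-arm-in-handler idiom; a minimal sketch (handler name and session plumbing hypothetical):

```cpp
#include <csignal>

// Re-install the handler immediately: Windows resets it to SIG_DFL as soon
// as a signal is delivered, so without this a second SIGINT gets the default
// handling and kills the process.
void sigint_handler(int) {
  std::signal(SIGINT, sigint_handler);  // re-arm first, then handle
  // ... notify the session that the debuggee should be suspended (elided) ...
}

int main() {
  std::signal(SIGINT, sigint_handler);
  // ... run the lldb-mi event loop (elided) ...
  return 0;
}
```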
-
Craig Topper authored
[X86] Make fxsave64/fxrstor64/xsave64/xrstor64/xsaveopt64 parseable in AT&T syntax. Also make them the default output. llvm-svn: 227963
-
Craig Topper authored
[X86] Add Requires<[In64BitMode]> around MOVSX64rr32/MOVSX64rm32. This makes it more strictly mutexed with the ARPL instruction in 32-bit mode. Helps with some disassembler changes I'm experimenting with. Should be NFC. llvm-svn: 227962
-
Denis Protivensky authored
Added relocations to perform function calls with and without passing arguments. ARM-only, Thumb-only, and mixed-mode code generation are supported. Only simple veneers (direct instruction modification) are supported for ARM-Thumb interwork. Differential Revision: http://reviews.llvm.org/D7223 llvm-svn: 227961
-
Pavel Labath authored
llvm-svn: 227960
-
Yury Gribov authored
Differential Revision: http://reviews.llvm.org/D7294 llvm-svn: 227959
-
Hafiz Abid Qadeer authored
This patch fixes execution of CLI commands in MI mode. The CLI commands are executed using the "-interpreter-exec" command. The bug was in the CMICmnLLDBDebugSessionInfo class, which contained the following members: SBProcess, SBTarget, SBDebugger and SBListener. CLI commands don't update these members, so they can hold incorrect (or obsolete) references and cause errors. My patch removes these members and uses getters that provide an up-to-date instance every time one is used. Patch from Ilia K <ki.stfu@gmail.com>. Approved by Greg. llvm-svn: 227958
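A minimal sketch of the before/after shape of the fix; the class and method names below are illustrative, not the actual CMICmnLLDBDebugSessionInfo interface:

```cpp
#include "lldb/API/SBDebugger.h"
#include "lldb/API/SBProcess.h"
#include "lldb/API/SBTarget.h"

// Before: cached lldb::SBTarget / lldb::SBProcess members went stale when a
// CLI command (run via -interpreter-exec) changed the session behind our back.
class SessionInfo {
  lldb::SBDebugger m_debugger;  // the one long-lived object we keep

public:
  // After: fetch a fresh instance on every use instead of caching it.
  lldb::SBTarget GetTarget() { return m_debugger.GetSelectedTarget(); }
  lldb::SBProcess GetProcess() { return GetTarget().GetProcess(); }
};
```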
-
Daniel Jasper authored
llvm-svn: 227957
-