- Mar 24, 2013
Jakob Stoklund Olesen authored
The types of register variables no longer need to be specified in output patterns. llvm-svn: 177845
Jakob Stoklund Olesen authored
Also update the documentation, since Sparc is the nicest backend and is used as an example in WritingAnLLVMBackend. llvm-svn: 177835
- Mar 23, 2013
Hal Finkel authored
In order for the new ZERO register to be used with MC, etc. we need to specify its register number (0). Thanks to Kai for reporting the problem! llvm-svn: 177833
Hal Finkel authored
In preparation for using the new register scavenger capability for providing more than one register simultaneously, specifically note functions that have spilled VRSAVE (currently, this can happen only in functions that use the setjmp intrinsic). As with CR spilling, such functions will need to provide two emergency spill slots to the scavenger. No functionality change intended. llvm-svn: 177832
Hal Finkel authored
I recently added a BCL instruction definition as part of implementing SjLj support. This can also be used to MCize bcl emission in the asm printer. No functionality change intended. llvm-svn: 177830
Jakob Stoklund Olesen authored
The SelectionDAG graph has MVT type labels, not register classes, so this makes it clearer what is happening. This notation is also robust against adding more types to the IntRegs register class. llvm-svn: 177829
Hal Finkel authored
These spilling functions will eventually make use of the register scavenger, however, they'll do so by taking advantage of PEI's virtual-register-based delayed scavenging mechanism. As a result, these function parameters will not be used, and can be removed. No functionality change intended. llvm-svn: 177827
Hal Finkel authored
The LR register is unconditionally reserved, and its spilling and restoration is handled by the prologue/epilogue code. As a result, it is never explicitly spilled by the register allocator. No functionality change intended. llvm-svn: 177823
Hal Finkel authored
This patch lets the register scavenger make use of multiple spill slots in order to guarantee that it will be able to provide multiple registers simultaneously. To support this, the RS's API has changed slightly: setScavengingFrameIndex / getScavengingFrameIndex have been replaced by addScavengingFrameIndex / isScavengingFrameIndex / getScavengingFrameIndices. In forthcoming commits, the PowerPC backend will use this capability in order to implement the spilling of condition registers, and some special-purpose registers, without relying on r0 being reserved. In some cases, spilling these registers requires two GPRs: one for addressing and one to hold the value being transferred. llvm-svn: 177774
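A minimal illustrative sketch of the reshaped API (hypothetical backend code, not any specific in-tree target; the helper name and slot parameters are made up, and a recent LLVM tree is assumed): a target's frame-lowering code can now register several emergency spill slots by calling addScavengingFrameIndex() once per slot.

    // Hypothetical helper (not from the patch): reserve NumSlots emergency
    // spill slots so the scavenger can provide several registers at once,
    // e.g. one GPR for addressing and one to hold the value being moved.
    #include "llvm/CodeGen/MachineFrameInfo.h"
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/RegisterScavenging.h"
    #include "llvm/Support/Alignment.h"
    using namespace llvm;

    static void reserveScavengingSlots(MachineFunction &MF, RegisterScavenger *RS,
                                       unsigned NumSlots, uint64_t SlotSize,
                                       Align SlotAlign) {
      MachineFrameInfo &MFI = MF.getFrameInfo();
      for (unsigned I = 0; I != NumSlots; ++I) {
        // Each slot becomes an emergency spill slot the scavenger may hand out.
        int FI = MFI.CreateStackObject(SlotSize, SlotAlign, /*isSpillSlot=*/false);
        RS->addScavengingFrameIndex(FI); // replaces the old setScavengingFrameIndex
      }
    }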
- Mar 22, 2013
Jyotsna Verma authored
llvm-svn: 177747
Ulrich Weigand authored
We currently have a duplicated set of call instruction patterns depending on the ABI to be followed (Darwin vs. Linux). This is a bit odd; while the different ABIs will result in different instruction sequences, the actual instructions themselves ought to be independent of the ABI. And in fact it turns out that the only nontrivial difference between the two sets of patterns is that in the PPC64 Linux ABI, the instruction used for indirect calls is marked to take X11 as an extra input register (which is indeed used only with that ABI to hold an incoming environment pointer for nested functions). However, this does not need to be hard-coded at the .td pattern level; instead, the C++ code expanding calls can simply add that use, just like it adds uses for argument registers anyway. No change in generated code is expected. llvm-svn: 177735
Ulrich Weigand authored
Currently, the sub-operand of a memrr address that corresponds to what hardware considers the base register is called "offreg", while the sub-operand that corresponds to the offset is called "ptrreg". To avoid confusion, this patch simply swaps the names of those two sub-operands and updates all uses. No functional change is intended. llvm-svn: 177734
Ulrich Weigand authored
PPCTargetLowering::getPreIndexedAddressParts currently provides the base part of a memory address in the offset result, and the offset part in the base result. That swap is then undone again when an MI instruction is generated (in PPCDAGToDAGISel::Select for loads, and using .td Pat patterns for stores). This patch reverts this double swap, so that common code and the back end are in sync as to which part of the address is the base and which is the offset. To avoid performance regressions in certain cases, target code now checks whether the choice of base register would be rejected for pre-inc accesses by common code, and attempts to swap base and offset again in such cases. (Overall, this means that pre-inc accesses are now generated *more* frequently than before.) llvm-svn: 177733
Ulrich Weigand authored
The iaddroff ComplexPattern is supposed to recognize displacement expressions that have been processed by a SelectAddressRegImm, which means it needs to accept TargetConstant and TargetGlobalAddress nodes. Currently, it erroneously also accepts some other nodes, in particular Constant and PPCISD::Lo. While this problem is currently latent, it would cause wrong-code bugs with a follow-on patch I'm about to commit, so this patch tightens the ComplexPattern. The equivalent change is made in PPCDAGToDAGISel::Select, where pre-inc load patterns are handled (as opposed to store patterns, the loads are handled in C++ code without making use of the .td ComplexPattern). llvm-svn: 177732
Ulrich Weigand authored
The xaddroff pattern is currently (mistakenly) used to recognize the *base* register in pre-inc store patterns. This patch replaces those uses by ptr_rc_nor0 (as is elsewhere done to match the base register of an address), and removes the now unused ComplexPattern. llvm-svn: 177731
Michel Danzer authored
Fixes wrong lighting in some corner cases with r600g and radeonsi, e.g. manifested by failure of two piglit/glean tests and intermittent black patches in many apps. Tested on SI and RS880. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=62012 [radeonsi] Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58150 [r600g] NOTE: This is a candidate for the Mesa stable branch. Reviewed-by: Christian König <christian.koenig@amd.com> llvm-svn: 177730
Jack Carter authored
For Mips branches, an 18-bit signed offset (the 16-bit offset field shifted left 2 bits) is added to the address of the instruction following the branch (not the branch itself), i.e. the instruction in the branch delay slot, to form a PC-relative effective target address. Previously, the code generator did not perform the shift of the immediate branch offset, which resulted in a wrong instruction opcode. This patch fixes the issue. Contributor: Vladimir Medic llvm-svn: 177687
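As a quick illustration of the offset arithmetic described above (an explanatory snippet written for this note, not code from the patch):

    // Illustrative only: form a MIPS PC-relative branch target. The 16-bit
    // immediate is sign-extended and scaled by 4 (shifted left 2 bits) to give
    // an 18-bit signed byte offset, which is added to the address of the
    // instruction in the delay slot (the one following the branch).
    #include <cstdint>

    uint64_t mipsBranchTarget(uint64_t BranchAddr, int16_t Imm16) {
      int32_t ByteOffset = int32_t(Imm16) * 4; // sign-extend, then scale by 4 (<< 2)
      return BranchAddr + 4 + ByteOffset;      // +4: address of the delay slot
    }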
Jack Carter authored
This patch uses the generated instruction info tables to identify memory (load/store) instructions. After a successful match, and based on the operand type and size, it adds additional instructions to the output. Contributor: Vladimir Medic llvm-svn: 177685
Hal Finkel authored
As Jakob pointed out in his review of r177423, having a shared ZERO register between the 32- and 64-bit register classes causes this odd G8RC_NOX0_and_GPRC_NOR0 class to be created. As recommended, this adds a ZERO8 register which differentiates the 32- and 64-bit zeros. No functionality change intended. llvm-svn: 177683
Hal Finkel authored
Thanks to Jakob for isolating the underlying problem from the test case in r177423. The original commit had introduced asymmetric copy operations, but these turned out to be a work-around to the real problem (the use of == instead of hasSubClassEq in PPCCTRLoops). llvm-svn: 177679
- Mar 21, 2013
Jack Carter authored
The .set directive in the Mips assembler can be used to set the value of a symbol to an expression. This changes the symbol's value and type to conform to the expression's. Syntax: .set symbol, expression This patch implements the parsing of the above syntax and enables the parser to use defined symbols when parsing operands. Contributor: Vladimir Medic llvm-svn: 177667
Hal Finkel authored
This implements SJLJ lowering on PPC, making the Clang functions __builtin_{setjmp/longjmp} functional on PPC platforms. The implementation strategy is similar to that on X86, with the exception that a branch-and-link variant is used to get the right jump address. Credit goes to Bill Schmidt for suggesting the use of the unconditional bcl form (instead of the regular bl instruction) to limit return-address-cache pollution. Benchmarking the speed at -O3 of:
    static jmp_buf env_sigill;
    void foo() { __builtin_longjmp(env_sigill,1); }
    main() {
      ...
      for (int i = 0; i < c; ++i) {
        if (__builtin_setjmp(env_sigill)) { goto done; } else { foo(); }
        done:;
      }
      ...
    }
vs. the same code using the libc setjmp/longjmp functions on a P7 shows that this builtin implementation is ~4x faster with Altivec enabled and ~7.25x faster with Altivec disabled. This comparison is somewhat unfair because the libc version must also save/restore the VSX registers which we don't yet support. llvm-svn: 177666
Hal Finkel authored
Although there is only one Altivec VRSAVE register, it is a member of a register class, and we need the ability to spill it. Because this register is normally callee-preserved and handled by special code this has never before been necessary. However, this capability will be required by a forthcoming commit adding SjLj support. llvm-svn: 177654
Hal Finkel authored
The old code used to lower FRAMEADDR tried to replicate the logic in the real frame-lowering code that determines whether or not the frame pointer (r31) will be used. When it seemed as though the frame pointer would not be used, the stack pointer (r1) was used instead. Unfortunately, because the stack size is not yet known, this does not work. Instead, this change introduces new always-reserved pseudo-registers (FP and FP8) that are replaced during prologue insertion with the real frame-pointer register (either r1 or r31). It is important that this intrinsic always return a valid frame address because it is used by Clang to store the frame address as part of code generation for __builtin_setjmp. llvm-svn: 177653
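A rough sketch of the approach (illustrative only; this is not the exact PPC implementation, and it assumes the usual SelectionDAG headers and PPC register definitions are available): FRAMEADDR is lowered to a copy from the reserved FP/FP8 pseudo-register, which prologue insertion later rewrites to r1 or r31 once the final frame layout is known.

    // Illustrative sketch, not the in-tree code: lower ISD::FRAMEADDR via the
    // always-reserved frame-pointer pseudo-register so the result stays valid
    // regardless of whether r1 or r31 is eventually chosen.
    static SDValue lowerFRAMEADDRSketch(SDValue Op, SelectionDAG &DAG,
                                        bool IsPPC64) {
      SDLoc DL(Op);
      EVT PtrVT = IsPPC64 ? MVT::i64 : MVT::i32;
      unsigned FrameReg = IsPPC64 ? PPC::FP8 : PPC::FP; // pseudo, not r1/r31 yet
      SDValue FrameAddr =
          DAG.getCopyFromReg(DAG.getEntryNode(), DL, FrameReg, PtrVT);
      unsigned Depth = cast<ConstantSDNode>(Op.getOperand(0))->getZExtValue();
      while (Depth--) // __builtin_frame_address(n): follow saved frame pointers
        FrameAddr = DAG.getLoad(PtrVT, DL, DAG.getEntryNode(), FrameAddr,
                                MachinePointerInfo());
      return FrameAddr;
    }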
Renato Golin authored
NEON is not IEEE 754 compliant, so we should avoid lowering single-precision floating point operations with NEON unless unsafe-math is turned on. The equivalent VFP instructions are IEEE 754 compliant, but on some cores they're much slower, so some archs/OSes, such as Swift and Darwin, might still request it to be on by default. llvm-svn: 177651
Jakob Stoklund Olesen authored
llvm-svn: 177611
Jakob Stoklund Olesen authored
It's not yet clear if these instructions need a more careful model. llvm-svn: 177599
Jakob Stoklund Olesen authored
This is used for all the expensive system instructions. llvm-svn: 177598
- Mar 20, 2013
Jakob Stoklund Olesen authored
llvm-svn: 177592
Jakob Stoklund Olesen authored
llvm-svn: 177591
Michael Liao authored
- After moving the logic that recognizes vector shifts with a scalar amount from DAG combining into DAG lowering, we declare all vector shifts as custom even when a vector shift on AVX is legal. As a result, the cost model needs special tuning to identify these legal cases. llvm-svn: 177586
Jakob Stoklund Olesen authored
llvm-svn: 177540
Jakob Stoklund Olesen authored
llvm-svn: 177539
Michael Liao authored
- Move SRA/SRL/SHL lowering support from DAG combining to DAG lowering to support extended 256-bit integers on AVX but not AVX2. llvm-svn: 177478
Michael Liao authored
- Prepare for moving logic from DAG combining into DAG lowering. There's no functionality change. llvm-svn: 177477
Michael Liao authored
- no functionality change llvm-svn: 177476
Chad Rosier authored
Patch by Stepan Dyatkovskiy <stpworld@narod.ru> rdar://13457826 llvm-svn: 177463
Jakob Stoklund Olesen authored
llvm-svn: 177461
Jakob Stoklund Olesen authored
llvm-svn: 177460
Jakob Stoklund Olesen authored
llvm-svn: 177459