- Apr 30, 2012
-
-
Bill Wendling authored
Allow the "SplitCriticalEdge" function to split the edge to a landing pad. If the pass is *sure* that it thinks it knows what it's doing, then it may go ahead and specify that the landing pad can have its critical edge split. The loop unswitch pass is one of these passes. It will split the critical edges of all edges coming from a loop to a landing pad not within the loop. Doing so will retain important loop analysis information, such as loop simplify. llvm-svn: 155817
-
Bill Wendling authored
llvm-svn: 155816
-
Eli Bendersky authored
- Add comments
- Change field names to be more reasonable
- Fix indentation and naming to conform to coding conventions
- Remove unnecessary includes / replace them by forward declarations
llvm-svn: 155815
-
Bill Wendling authored
llvm-svn: 155813
-
Craig Topper authored
llvm-svn: 155811
-
Pete Cooper authored
Copied all the VEX prefix encoding code from X86MCCodeEmitter to the x86 JIT emitter. Needs some major refactoring, as these two code emitters are almost identical. llvm-svn: 155810
-
Rafael Espindola authored
inputs. llvm-svn: 155809
-
- Apr 29, 2012
-
-
Jakub Staszak authored
llvm-svn: 155800
-
Craig Topper authored
llvm-svn: 155799
-
Craig Topper authored
llvm-svn: 155798
-
Kalle Raiskila authored
llvm-svn: 155797
-
Benjamin Kramer authored
llvm-svn: 155795
-
Eli Bendersky authored
llvm-svn: 155793
-
Eli Bendersky authored
if !ForceInterpreter). It has no effect (apart from a memory leak...) llvm-svn: 155792
-
Benjamin Kramer authored
llvm-svn: 155791
-
Eli Bendersky authored
llvm-svn: 155790
-
Craig Topper authored
llvm-svn: 155787
-
Craig Topper authored
llvm-svn: 155786
-
Craig Topper authored
Mark the default cases of MVT::getVectorElementType and MVT::getVectorNumElements as unreachable to reduce code size. llvm-svn: 155785
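The pattern being applied, as a small self-contained sketch (illustrative case values, not the real MVT tables):

    #include "llvm/Support/ErrorHandling.h"

    // Marking an impossible default case as unreachable lets the compiler
    // drop the fall-through and return paths entirely, shrinking the code.
    static unsigned getVectorNumElementsSketch(unsigned SimpleTy) {
      switch (SimpleTy) {
      case 0: return 2;  // e.g. a v2i64-like type (illustrative only)
      case 1: return 4;  // e.g. a v4i32-like type
      default: llvm_unreachable("Not a vector MVT!");
      }
    }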
-
- Apr 28, 2012
-
-
Jakob Stoklund Olesen authored
We don't compute spill weights until after coalescing anyway. llvm-svn: 155766
-
Jakob Stoklund Olesen authored
llvm-svn: 155765
-
Benjamin Kramer authored
This way we can enable the POD-like class optimization for a lot more classes, saving ~120k of code in clang/i386/Release+Asserts when selfhosting. llvm-svn: 155761
-
Benjamin Kramer authored
llvm-svn: 155760
-
Jakob Stoklund Olesen authored
The code could search past the end of the basic block when there was already a constant pool entry after the block. Test case with giant basic block in SingleSource/UnitTests/Vector/constpool.c llvm-svn: 155753
-
Andrew Trick authored
This time, also fix the caller of AddGlue to properly handle incomplete chains. AddGlue had failure modes, but shamefully hid them from its caller. Its luck ran out. Fixes rdar://11314175: BuildSchedUnits assert. llvm-svn: 155749
-
Jim Grosbach authored
Make sure when parsing the Thumb1 sp+register ADD instruction that the source and destination operands match. In thumb2, just use the wide encoding if they don't. In Thumb1, issue a diagnostic. rdar://11219154 llvm-svn: 155748
-
Jim Grosbach authored
Make the operand order of the instruction match that of the asm syntax. llvm-svn: 155747
-
Derek Schuff authored
llvm-svn: 155746
-
Derek Schuff authored
On x86-32, structure return via sret lets the callee pop the hidden pointer argument off the stack, which the caller then re-pushes. However, if the calling convention is fastcc, a register is used instead, and the caller should not adjust the stack. This is implemented with a check of IsTailCallConvention in X86TargetLowering::LowerCall and is now checked properly in X86FastISel::DoSelectCall as well. llvm-svn: 155745
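For context, a sketch of the kind of source that exercises this path (illustrative only; the commit itself concerns LLVM's IR-level fastcc convention, not a C attribute):

    // On x86-32, a struct returned by value travels through a hidden sret
    // pointer. Under the default convention the callee pops that pointer
    // ("ret $4") and the caller re-pushes the slot; under fastcc the pointer
    // arrives in a register and no stack adjustment may happen.
    struct Big { int a, b, c, d; };

    Big makeBig() {
      Big B = {1, 2, 3, 4};  // callee writes the result through the hidden
      return B;              // pointer supplied by the caller
    }

    int use() {
      Big B = makeBig();     // caller adjusts the stack only in the
      return B.a;            // callee-pops (non-fastcc) case
    }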
-
Jakob Stoklund Olesen authored
Previously, ARMConstantIslandPass would conservatively compute the address of an aligned basic block as:

    RoundUpToAlignment(Offset + UnknownPadding)

This worked fine for the layout algorithm itself, but it could fool the verify() function because it accounts for alignment padding twice: once when adding the worst case UnknownPadding, and again by rounding up the fictional block offset. This meant that when optimizeThumb2Instructions would shrink an instruction, the conservative distance estimate could grow. That shouldn't be possible since the worst case alignment padding was already included. This patch drops the use of RoundUpToAlignment, and depends only on worst case padding to compute conservative block offsets. This has the weird effect that the computed offset for an aligned block may not be aligned. The important difference is that shrinking an instruction can never cause the estimated distance between two instructions to grow. The estimated distance is always larger than the real distance that only the assembler knows. <rdar://problem/11339352> llvm-svn: 155744
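A toy illustration of the double counting, with hypothetical numbers and a local copy of the rounding helper:

    #include <cassert>
    #include <cstdint>

    // Local stand-in for RoundUpToAlignment, for illustration.
    static uint64_t roundUpToAlignment(uint64_t Value, uint64_t Align) {
      return (Value + Align - 1) / Align * Align;
    }

    int main() {
      // Hypothetical: a block ends at offset 7, and up to 3 bytes of unknown
      // padding may precede the next 4-byte-aligned block.
      uint64_t Offset = 7, WorstPadding = 3, Align = 4;

      // The worst-case padding already covers alignment: 7 + 3 = 10 is a
      // safe upper bound on the next block's offset.
      assert(Offset + WorstPadding == 10);

      // Rounding the already-padded bound up again counts alignment twice,
      // inflating the estimate to 12.
      assert(roundUpToAlignment(Offset + WorstPadding, Align) == 12);
      return 0;
    }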
-
Andrew Trick authored
This definitely caused a regression with ARM -mno-thumb. llvm-svn: 155743
-
Craig Topper authored
llvm-svn: 155742
-
Chad Rosier authored
x == -y --> x+y == 0
x != -y --> x+y != 0
On x86, the generated code goes from
    negl %esi
    cmpl %esi, %edi
    je .LBB0_2
to
    addl %esi, %edi
    je .L4
This case is correctly handled for ARM with "cmn". Patch by Manman Ren. rdar://11245199 PR12545 llvm-svn: 155739
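The kind of source that triggers the fold (illustrative):

    // Comparing x against -y normally costs a negate plus a compare; folding
    // to (x + y) == 0 drops the negation and maps onto ARM's cmn.
    bool eq(int x, int y) { return x == -y; }  // becomes (x + y) == 0
    bool ne(int x, int y) { return x != -y; }  // becomes (x + y) != 0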
-
- Apr 27, 2012
-
-
Michael J. Spencer authored
llvm-svn: 155735
-
Craig Topper authored
llvm-svn: 155733
-
Evan Cheng authored
llvm-svn: 155732
-
Hal Finkel authored
Target specific types should not be vectorized. As a practical matter, these types are already register matched (at least in the x86 case), and codegen does not always work correctly (at least in the ppc case, and this is not worth fixing because ppc_fp128 is currently broken and will probably go away soon). llvm-svn: 155729
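A sketch of the kind of guard this implies (the helper name is hypothetical; the type predicates are real LLVM Type methods):

    #include "llvm/Type.h"

    // Hypothetical helper: refuse to vectorize types the target already
    // treats specially, since they are register-matched (x86) or broken
    // under codegen (ppc_fp128).
    static bool isVectorizableTy(llvm::Type *Ty) {
      if (Ty->isPPC_FP128Ty() || Ty->isX86_MMXTy() || Ty->isX86_FP80Ty())
        return false;
      return true;
    }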
-
David Blaikie authored
llvm-svn: 155727
-
David Blaikie authored
llvm-svn: 155726
-
Dan Gohman authored
llvm-svn: 155725
-