- Nov 16, 2011
-
-
Bob Wilson authored
There may be many invokes that share one landing pad, and the previous code would record the landing pad once for each invoke. Besides the wasted effort, a pair of volatile loads gets inserted every time the landing pad is processed. The rest of the code can get optimized away when a landing pad is processed repeatedly, but the volatile loads remain, resulting in code like:
LBB35_18:
Ltmp483:
    ldr r2, [r7, #-72]
    ldr r2, [r7, #-68]
    ldr r2, [r7, #-72]
    ldr r2, [r7, #-68]
    ldr r2, [r7, #-72]
    ldr r2, [r7, #-68]
    ldr r2, [r7, #-72]
    ldr r2, [r7, #-68]
    ldr r2, [r7, #-72]
    ldr r2, [r7, #-68]
    ldr r2, [r7, #-72]
    ldr r2, [r7, #-68]
    ldr r2, [r7, #-72]
    ldr r2, [r7, #-68]
    ldr r2, [r7, #-72]
    ldr r2, [r7, #-68]
    ldr r4, [r7, #-72]
    ldr r2, [r7, #-68]
llvm-svn: 144787
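As a rough illustration of the fix described above, here is a minimal standalone C++ sketch, not the actual LLVM SjLj lowering code; the names LandingPad, Invoke, and emitSetupLoads are invented for the example. It only shows the deduplication pattern: record each shared landing pad once so its setup loads are emitted a single time rather than once per invoke.

```cpp
#include <iostream>
#include <unordered_set>
#include <vector>

struct LandingPad { int id; };
struct Invoke { LandingPad *lpad; };

// Stands in for the pair of volatile loads inserted per landing pad.
static void emitSetupLoads(const LandingPad &lp) {
  std::cout << "emit setup loads for landing pad " << lp.id << "\n";
}

static void processInvokes(const std::vector<Invoke> &invokes) {
  std::unordered_set<const LandingPad *> recorded;
  for (const Invoke &inv : invokes) {
    // Record each landing pad only once, even when many invokes share it.
    if (!recorded.insert(inv.lpad).second)
      continue;
    emitSetupLoads(*inv.lpad);
  }
}

int main() {
  LandingPad shared{0};
  std::vector<Invoke> invokes = {{&shared}, {&shared}, {&shared}};
  processInvokes(invokes); // prints the setup line once, not three times
}
```

Running the sketch prints the setup line once even though three invokes share the pad, mirroring how deduplication avoids the repeated volatile loads shown in the assembly above.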
-
Craig Topper authored
llvm-svn: 144784
-
Bob Wilson authored
This same basic code was in the older version of the SjLj exception handling, but it was removed in the recent revisions to that code. It needs to be there. rdar://problem/10444602 llvm-svn: 144782
-
Bob Wilson authored
The EmitBasePointerRecalculation function has 2 problems, one minor and one fatal. The minor problem is that it inserts the code at the setjmp instead of in the dispatch block. The fatal problem is that at the point where this code runs, we don't know whether there will be a base pointer, so the entire function is a no-op. The base pointer recalculation needs to be handled as it was before, by inserting a pseudo instruction that gets expanded late. Most of the support for the old approach is still here, but it no longer has any connection to the eh_sjlj_dispatchsetup intrinsic. Clean up the parts related to the intrinsic and just generate the pseudo instruction directly. rdar://problem/10444602 llvm-svn: 144781
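To make the "pseudo instruction expanded late" idea concrete, here is a self-contained C++ sketch under stated assumptions: it is not the ARM backend, and the opcode, block representation, and helper names are invented. It only illustrates emitting a placeholder before the base-pointer decision is known and expanding it once the frame layout is final.

```cpp
#include <iostream>
#include <string>
#include <vector>

enum class Op { Normal, BasePtrRecalcPseudo };
struct Inst { Op op; std::string text; };

// Early lowering: insert a placeholder in the dispatch block, because at this
// point we do not yet know whether the frame will have a base pointer.
static void emitDispatchBlock(std::vector<Inst> &block) {
  block.push_back({Op::BasePtrRecalcPseudo, "<recalc base pointer>"});
  block.push_back({Op::Normal, "branch to landing-pad selector"});
}

// Late expansion: the frame layout is final, so the pseudo can become a real
// instruction when a base pointer exists, or a no-op when it does not.
static void expandPseudos(std::vector<Inst> &block, bool hasBasePointer) {
  for (Inst &inst : block) {
    if (inst.op != Op::BasePtrRecalcPseudo)
      continue;
    if (hasBasePointer)
      inst = {Op::Normal, "mov r6, sp   ; recompute the base pointer"};
    else
      inst = {Op::Normal, "nop          ; no base pointer in this frame"};
  }
}

int main() {
  std::vector<Inst> block;
  emitDispatchBlock(block);
  expandPseudos(block, /*hasBasePointer=*/true);
  for (const Inst &inst : block)
    std::cout << inst.text << "\n";
}
```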
-
Craig Topper authored
llvm-svn: 144777
-
Evan Cheng authored
llvm-svn: 144776
-
Nick Lewycky authored
llvm-svn: 144774
-
Nick Lewycky authored
looking at the size of the pointee. Fixes PR11390! llvm-svn: 144773
-
Evan Cheng authored
If the 2addr instruction has other kills, don't move it below any other uses since we don't want to extend other live ranges. llvm-svn: 144772
-
Evan Cheng authored
RescheduleKillAboveMI() must backtrack to before the rescheduled DBG_VALUE instructions. rdar://10451185 llvm-svn: 144771
-
Eli Friedman authored
llvm-svn: 144769
-
Eli Friedman authored
llvm-svn: 144768
-
Eli Friedman authored
Add a couple asserts so it will be easier to debug if we accidentally pass indexed loads/stores to the legalizer. llvm-svn: 144767
-
Michael J. Spencer authored
llvm-svn: 144759
-
Kostya Serebryany authored
llvm-svn: 144758
-
Michael J. Spencer authored
llvm-svn: 144757
-
Michael J. Spencer authored
llvm-svn: 144756
-
Michael J. Spencer authored
llvm-svn: 144755
-
Kostya Serebryany authored
llvm-svn: 144748
-
Owen Anderson authored
llvm-svn: 144747
-
Andrew Trick authored
Fixes PR11375: Different results for 'clang++ huh.cpp'... llvm-svn: 144746
-
Chad Rosier authored
llvm-svn: 144743
-
Jakob Stoklund Olesen authored
This will widen 32-bit register vmov instructions to 64-bit when possible. The 64-bit vmovd instructions can then be translated to NEON vorr instructions by the execution dependency fix pass. The copies are only widened if they are marked as clobbering the whole D-register. llvm-svn: 144734
-
Eric Christopher authored
failure during bootstrap with it turned on. llvm-svn: 144731
-
Chad Rosier authored
%arrayidx135 = getelementptr inbounds [4 x [4 x [4 x [4 x i32]]]]* %M0, i32 0, i64 0
%arrayidx136 = getelementptr inbounds [4 x [4 x [4 x i32]]]* %arrayidx135, i32 0, i64 %idxprom134
Prior to this commit, the GEP instruction that defines %arrayidx136 thought that %arrayidx135 was a trivial kill. The GEP that defines %arrayidx135 doesn't generate any code and thus %M0 gets folded into the second GEP. Thus, we need to look through GEPs with all zero indices. rdar://10443319 llvm-svn: 144730
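As an illustration of "looking through GEPs with all zero indices", here is a self-contained C++ sketch; it is not the actual FastISel code, and the tiny Value struct and helper names merely stand in for LLVM's IR classes.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Minimal stand-ins for IR values: a value is either a plain definition or a
// GEP with a base value and a list of constant indices.
struct Value {
  std::string name;
  Value *gepBase;               // non-null if this value is a GEP
  std::vector<long> gepIndices; // its indices, if it is a GEP
};

static bool hasAllZeroIndices(const Value &v) {
  for (long idx : v.gepIndices)
    if (idx != 0)
      return false;
  return true;
}

// Peel off GEPs that add nothing to the address, so a use of the outer GEP is
// attributed to the real underlying definition rather than to an intermediate
// all-zero GEP that emits no code (and so cannot be a trivial kill).
static const Value *lookThroughZeroGEPs(const Value *v) {
  while (v->gepBase && hasAllZeroIndices(*v))
    v = v->gepBase;
  return v;
}

int main() {
  Value m0{"%M0", nullptr, {}};
  Value idx135{"%arrayidx135", &m0, {0, 0}};     // all-zero GEP: generates no code
  Value idx136{"%arrayidx136", &idx135, {0, 7}}; // real GEP built on top of it
  std::cout << "base of " << idx136.name << " resolves to "
            << lookThroughZeroGEPs(idx136.gepBase)->name << "\n"; // prints %M0
}
```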
-
Jim Grosbach authored
For example,
    vld1.f64 {d2-d5}, [r2,:128]!
Should be equivalent to:
    vld1.f64 {d2,d3,d4,d5}, [r2,:128]!
It's not documented syntax in the ARM ARM, but it is consistent with what's accepted for VLDM/VSTM and is unambiguous in meaning, so it's a good thing to support. rdar://10451128 llvm-svn: 144727
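For illustration only, the following self-contained C++ sketch (not the ARM assembly parser; expandDRegRange is an invented helper) shows the equivalence being described: a range like d2-d5 names exactly the registers d2,d3,d4,d5.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Expand a D-register range such as d2-d5 into the explicit register list.
static std::vector<std::string> expandDRegRange(int first, int last) {
  std::vector<std::string> regs;
  for (int i = first; i <= last; ++i)
    regs.push_back("d" + std::to_string(i));
  return regs;
}

int main() {
  // {d2-d5} denotes the same registers as {d2,d3,d4,d5}.
  std::vector<std::string> regs = expandDRegRange(2, 5);
  bool first = true;
  for (const std::string &reg : regs) {
    std::cout << (first ? "" : ",") << reg;
    first = false;
  }
  std::cout << "\n"; // prints: d2,d3,d4,d5
}
```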
-
- Nov 15, 2011
-
-
Devang Patel authored
llvm-svn: 144724
-
Jim Grosbach authored
llvm-svn: 144722
-
Nadav Rotem authored
llvm-svn: 144721
-
Nadav Rotem authored
llvm-svn: 144720
-
Chris Lattner authored
llvm-svn: 144719
-
Chris Lattner authored
llvm-svn: 144716
-
NAKAMURA Takumi authored
llvm-svn: 144714
-
Jim Grosbach authored
llvm-svn: 144713
-
Chris Lattner authored
llvm-svn: 144711
-
Jim Grosbach authored
llvm-svn: 144710
-
Jim Grosbach authored
llvm-svn: 144709
-
Chris Lattner authored
llvm-svn: 144708
-
Pete Cooper authored
by later instructions. Only done for DEC64m right now. Fixes <rdar://problem/6172640> llvm-svn: 144705
-