- Jul 28, 2009
-
David Goodwin authored
llvm-svn: 77329
-
Evan Cheng authored
llvm-svn: 77305
-
Evan Cheng authored
- This change also makes it possible to switch between ARM / Thumb on a per-function basis. - Fixed the thumb2 routine which expands reg + arbitrary immediate; it was using ARM so_imm logic. - Use movw and movt to do reg + imm when profitable. - Other code clean ups and minor optimizations. llvm-svn: 77300
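The movw/movt pair mentioned in this entry materializes a 32-bit value as two 16-bit halves: movw writes the low half (zeroing the rest) and movt writes the high half. A minimal sketch of that split, in plain Python rather than LLVM code; the function name here is invented for illustration:

```python
def movw_movt_split(imm32):
    """Split a 32-bit immediate into the two 16-bit halves that an
    ARM/Thumb2 movw/movt pair would materialize."""
    lo = imm32 & 0xFFFF           # movw rd, #lo  (rd = lo, upper 16 bits zeroed)
    hi = (imm32 >> 16) & 0xFFFF   # movt rd, #hi  (rd[31:16] = hi, low half kept)
    return lo, hi

lo, hi = movw_movt_split(0x12345678)
assert (hi << 16) | lo == 0x12345678  # the pair reconstructs the immediate
```

Two instructions thus cover any 32-bit constant, which is why it can beat a constant-pool load for reg + imm when the immediate does not fit ARM's rotated-8-bit so_imm encoding.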
-
Dan Gohman authored
to a few tests where it is required for the expected transformation. llvm-svn: 77290
-
David Goodwin authored
llvm-svn: 77275
-
Daniel Dunbar authored
llvm-svn: 77272
-
- Jul 27, 2009
-
Dan Gohman authored
LangRef.html changes for details. llvm-svn: 77259
-
David Goodwin authored
llvm-svn: 77199
-
Sanjiv Gupta authored
Test case to check that a separate section is created for a global variable specified with a section attribute. llvm-svn: 77195
-
Dan Gohman authored
after their associated opcodes rather than before. This makes them a little easier to read. llvm-svn: 77194
-
Chris Lattner authored
llvm-svn: 77192
-
- Jul 26, 2009
-
Chris Lattner authored
llvm-svn: 77116
-
Chris Lattner authored
for now. Make the section switching directives more consistent by not including \n and including \t for them all. llvm-svn: 77107
-
Chris Lattner authored
and make it more aggressive. We now put const int G2 __attribute__((weak)) = 42; into the text (readonly) segment like gcc; previously we put it into the data (readwrite) segment. llvm-svn: 77104
-
Bob Wilson authored
Patch by Anton Korzh, with some modifications from me. llvm-svn: 77101
-
- Jul 25, 2009
-
Chris Lattner authored
Thanks to Rafael for the great example. llvm-svn: 77083
-
Dan Gohman authored
the step value as unsigned, the start value and the addrec itself still need to be treated as signed. llvm-svn: 77078
-
Chris Lattner authored
on darwin with ".cstring" instead of ".section __TEXT,__cstring". They are the same and the former is better. Remove this because this is no longer magic pixie dust in the frontend. llvm-svn: 77055
-
Dan Gohman authored
analyzing add recurrences. llvm-svn: 77034
-
Evan Cheng authored
llvm-svn: 77031
-
Evan Cheng authored
Before:
    adr r12, #LJTI3_0_0
    ldr pc, [r12, +r0, lsl #2]
LJTI3_0_0:
    .long LBB3_24
    .long LBB3_30
    .long LBB3_31
    .long LBB3_32
After:
    adr r12, #LJTI3_0_0
    add pc, r12, +r0, lsl #2
LJTI3_0_0:
    b.w LBB3_24
    b.w LBB3_30
    b.w LBB3_31
    b.w LBB3_32
This has several advantages. 1. This will make it easier to optimize this to a TBB / TBH instruction + (smaller) table. 2. This eliminates the need for the ugly asm printer hack to force the address into thumb addresses (bit 0 is one). 3. Same codegen for pic and non-pic. 4. This eliminates the need to align the table so the constant pool island pass won't have to over-estimate the size. Based on my calculation, the latter is probably slightly faster as well since ldr pc with shifter address is very slow. That is, it should be a win as long as the HW implementation can do a reasonable job of branch-predicting the second branch. llvm-svn: 77024
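Advantage 1 in this entry refers to Thumb2's compact table-branch forms. As a rough model (plain Python with invented names, not LLVM code), TBB indexes a table of byte-sized halfword offsets and branches to PC plus twice the selected offset, so each table slot costs 1 byte instead of the 4 bytes of a .long jump-table entry:

```python
def tbb_target(pc, offset_table, index):
    """Rough model of Thumb2 TBB: branch to pc + 2 * zero-extended byte
    offset. 'pc' here stands for the PC value the instruction reads
    (an assumption of this sketch), and offset_table holds byte-sized
    halfword offsets to the branch targets."""
    return pc + 2 * offset_table[index]

# four targets at increasing halfword distances from the read PC
table = [2, 10, 14, 18]
assert tbb_target(0x1000, table, 0) == 0x1004
assert tbb_target(0x1000, table, 3) == 0x1024
```

The b.w-based table above is a stepping stone toward this: once every entry is a branch at a fixed stride, shrinking the table to byte or halfword offsets for TBB / TBH is a local rewrite.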
-
Evan Cheng authored
llvm-svn: 77020
-
Evan Cheng authored
llvm-svn: 77007
-
Evan Cheng authored
llvm-svn: 77006
-
- Jul 24, 2009
-
Eli Friedman authored
There's still a strict-aliasing violation here, but I don't feel like dealing with that right now... llvm-svn: 77005
-
Eric Christopher authored
format and add an extract/insert test. llvm-svn: 76994
-
Evan Cheng authored
llvm-svn: 76954
-
Chris Lattner authored
a sad mistake that is regretted. :) llvm-svn: 76935
-
Richard Osborne authored
but pass when run against r76652. llvm-svn: 76923
-
Dan Gohman authored
llvm-svn: 76920
-
Evan Cheng authored
llvm-svn: 76909
-
- Jul 23, 2009
-
Evan Cheng authored
Also fixed up code to fully use the SoImm field for ADR on ARM mode. llvm-svn: 76890
-
Andreas Bolka authored
llvm-svn: 76880
-
Chris Lattner authored
llvm-svn: 76868
-
Chris Lattner authored
llvm-svn: 76864
-
Chris Lattner authored
also apply to vectors. This allows us to compile this:
    #include <emmintrin.h>
    __m128i a(__m128 a, __m128 b) { return a==a & b==b; }
    __m128i b(__m128 a, __m128 b) { return a!=a | b!=b; }
to:
    _a:
        cmpordps %xmm1, %xmm0
        ret
    _b:
        cmpunordps %xmm1, %xmm0
        ret
with clang instead of to a ton of horrible code. llvm-svn: 76863
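The combine in this entry rests on the IEEE-754 fact that x == x is false exactly when x is NaN, so (a==a) & (b==b) is the "ordered" predicate and (a!=a) | (b!=b) is "unordered" — one cmpordps / cmpunordps per lane. A scalar illustration of that identity (plain Python, one lane only):

```python
import math

def cmp_ord(a, b):
    # ordered: true iff neither operand is NaN -- per-lane cmpordps
    return (a == a) and (b == b)

def cmp_unord(a, b):
    # unordered: true iff either operand is NaN -- per-lane cmpunordps
    return (a != a) or (b != b)

nan = float("nan")
assert cmp_ord(1.0, 2.0) and not cmp_ord(nan, 2.0)
assert cmp_unord(nan, 2.0) and not cmp_unord(1.0, 2.0)
# agrees with the explicit NaN test
assert cmp_ord(1.0, nan) == (not math.isnan(1.0) and not math.isnan(nan))
```

Applying the same fcmp identities lane-wise is what lets the vector case collapse to a single compare instruction.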
-
Chris Lattner authored
with negative tests: this test wasn't checking what it thought it was because it was grepping .bc, not .ll. llvm-svn: 76861
-
Chris Lattner authored
llvm-svn: 76860
-
Chris Lattner authored
llvm-svn: 76853
-
Chris Lattner authored
llvm-svn: 76852
-