- Apr 25, 2013
-
-
Akira Hatanaka authored
Patch by Zoran Jovanovic. llvm-svn: 180241
-
Akira Hatanaka authored
Patch by Zoran Jovanovic. llvm-svn: 180238
-
Tom Stellard authored
Fixes test/CodeGen/R600/setcc.ll llvm-svn: 180231
-
Tom Stellard authored
The libelf implementation that is distributed here: http://www.mr511.de/software/english.html will not parse sections that are marked SHT_NULL. llvm-svn: 180230
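For context, readers of an ELF section header table commonly treat SHT_NULL as an unused entry and skip it, which is why a section emitted with that type effectively disappears. A minimal sketch of that behavior using the standard <elf.h> names (the linked libelf's real internals may differ; 'shdrs' and 'count' are assumed inputs):

    #include <elf.h>
    #include <cstdio>

    // Sketch: walk a section header table and skip SHT_NULL entries,
    // which is why a section marked SHT_NULL never reaches the consumer.
    void listSections(const Elf32_Shdr *shdrs, unsigned count) {
      for (unsigned i = 0; i < count; ++i) {
        if (shdrs[i].sh_type == SHT_NULL)
          continue; // treated as an unused/empty header
        std::printf("section %u: type=%u size=%u\n", i,
                    (unsigned)shdrs[i].sh_type, (unsigned)shdrs[i].sh_size);
      }
    }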
-
- Apr 23, 2013
-
-
Jyotsna Verma authored
llvm-svn: 180145
-
Jyotsna Verma authored
No functionality change. llvm-svn: 180144
-
Stephen Lin authored
Add more tests for r179925 to verify correct handling of signext/zeroext; strengthen condition check to require actual MVT::i32 virtual register types, just in case (no actual functionality change) llvm-svn: 180138
-
Stephen Lin authored
llvm-svn: 180136
-
Jyotsna Verma authored
llvm-svn: 180133
-
Bill Schmidt authored
No functional change intended. llvm-svn: 180131
-
Akira Hatanaka authored
No intended changes in functionality. llvm-svn: 180130
-
Owen Anderson authored
DAGCombine should not aggressively fold SEXT(VSETCC(...)) into a wider VSETCC without first checking the target's vector boolean contents. This exposed an issue with PowerPC AltiVec where it appears it was setting the wrong vector boolean contents. The included change fixes the PowerPC tests, and was OK'd by Hal. llvm-svn: 180129
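For background, "vector boolean contents" describes how a target materializes the true lanes of a vector compare: as 1, or as all-ones (-1). The sketch below (hypothetical helper names, a single scalar lane standing in for a vector lane) shows why the two conventions diverge once the compare result is sign-extended to a wider element type:

    #include <cstdint>
    #include <cstdio>

    // Two conventions a target may use for the lanes of a vector compare
    // result ("vector boolean contents"). Hypothetical names, one lane shown.
    int8_t cmpLaneZeroOrOne(int8_t a, int8_t b)    { return a < b ? 1 : 0; }
    int8_t cmpLaneZeroOrNegOne(int8_t a, int8_t b) { return a < b ? -1 : 0; }

    int main() {
      // Sign-extend each 8-bit boolean lane to 16 bits, as SEXT(VSETCC) would.
      int16_t wideOne    = cmpLaneZeroOrOne(3, 7);    // widens to 0x0001
      int16_t wideNegOne = cmpLaneZeroOrNegOne(3, 7); // widens to 0xffff (all-ones)
      std::printf("0/1  convention widens to 0x%04x\n", (unsigned)(uint16_t)wideOne);
      std::printf("0/-1 convention widens to 0x%04x\n", (unsigned)(uint16_t)wideNegOne);
      // Folding SEXT(VSETCC) into a single wider VSETCC therefore has to know
      // which form the target produces for the wider compare; per the change
      // above, DAGCombine now consults the target's vector boolean contents
      // before performing that fold.
      return 0;
    }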
-
Vincent Lejeune authored
llvm-svn: 180124
-
Vincent Lejeune authored
llvm-svn: 180123
-
Jyotsna Verma authored
for absolute/absolute-set addressing modes. llvm-svn: 180120
-
Tim Northover authored
AArch64 always demands a register-scavenger, so the pointer should never be NULL. However, in the spirit of paranoia, we'll assert it before use just in case. llvm-svn: 180080
-
Matt Arsenault authored
The value isn't actually used, and setting it emits a COFF specific directive. llvm-svn: 180064
-
Eric Christopher authored
or the C++ files themselves. This enables people to use just a C compiler to interoperate with LLVM. llvm-svn: 180063
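As a general illustration (made-up names, not the actual LLVM-C headers), a header stays usable from a plain C compiler when it exposes only C-compatible declarations and gives them C linkage for C++ consumers:

    /* example.h - usable from both C and C++ translation units. */
    #ifndef EXAMPLE_H
    #define EXAMPLE_H

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* Only C-compatible pieces here: no classes, templates, overloads,
     * default arguments, or inline C++ code. */
    typedef struct ExampleContext ExampleContext; /* opaque to C callers */
    ExampleContext *example_create(void);
    void example_dispose(ExampleContext *ctx);

    #ifdef __cplusplus
    } /* extern "C" */
    #endif

    #endif /* EXAMPLE_H */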
-
Chad Rosier authored
Disp will always be one of MCSymbolRefExpr or MCConstantExpr, and never NULL. llvm-svn: 180059
-
Chad Rosier authored
the MCParsedAsmOperand. Part of rdar://13663589 llvm-svn: 180054
-
- Apr 22, 2013
-
-
Chad Rosier authored
llvm-svn: 180044
-
Akira Hatanaka authored
llvm-svn: 180040
-
Akira Hatanaka authored
shifted by the same amount and the shift amount is smaller than the element size. llvm-svn: 180039
-
Chad Rosier authored
now taken care of by the frontend, which allows us to parse arbitrary C/C++ variables. Part of rdar://13663589 llvm-svn: 180037
-
Chad Rosier authored
change intended. Part of rdar://13663589 llvm-svn: 180028
-
Eric Christopher authored
set below. llvm-svn: 180015
-
Eric Christopher authored
llvm-svn: 180014
-
Stepan Dyatkovskiy authored
-- C.4 and C.5 statements, when NSAA is not equal to SP.
-- C.1.cp statement for VA functions.

Note: there are no VFP CPRCs in a variadic procedure.

Before this patch, "NSAA != 0" meant "don't use GPRs anymore". But there are some exceptions in the AAPCS:
1. For a non-variadic function: allocate all VFP regs for CPRCs. When all VFPs are allocated, further CPRCs are sent to the stack, while non-CPRCs may still be allocated in GPRs.
2. For variadic functions, check that all params use GPRs and then the stack. No exceptions, no CPRCs here.
llvm-svn: 180011
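To make the distinction concrete, here is a small source-level illustration (assumed signatures, not from the patch): under the VFP (hard-float) AAPCS the doubles of a fixed-signature callee are CPRC candidates for the VFP registers, whereas in a variadic callee the same doubles are passed in GPRs and then on the stack.

    #include <cstdarg>

    // Non-variadic: these doubles are CPRCs, so they are candidates for the
    // VFP registers (d0-d7); once those run out, remaining CPRCs go to the
    // stack while integer arguments may still be assigned GPRs.
    double fixed_args(double a, double b, int n) {
      return a + b + n;
    }

    // Variadic: no VFP CPRCs at all. Every argument, including the doubles
    // read back below, travels in GPRs (r0-r3) and then on the stack.
    double sum_variadic(int count, ...) {
      va_list ap;
      va_start(ap, count);
      double total = 0.0;
      for (int i = 0; i < count; ++i)
        total += va_arg(ap, double);
      va_end(ap);
      return total;
    }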
-
Jim Grosbach authored
Rather than just splitting the input type and hoping for the best, apply a bit more cleverness. Just splitting the types until the source is legal often leads to an illegal result type, which is then widened, and a scalarization step is introduced, which leads to truly horrible code generation. With the loop vectorizer, these sorts of operations are much more common, and so it's worth extra effort to do them well.

Add a legalization hook for the operands of a TRUNCATE node, which will be encountered after the result type has been legalized, but if the operand type is still illegal. If simple splitting of both types ends up with the result type of each half still being legal, just do that (v16i16 -> v16i8 on ARM, for example). If, however, that would result in an illegal result type (v8i32 -> v8i8 on ARM, for example), we can get more clever with power-of-two vectors. Specifically, split the input type, but also widen the result element size, then concatenate the halves and truncate again. For example, on ARM, to perform a "%res = v8i8 trunc v8i32 %in" we transform to:
  %inlo = v4i32 extract_subvector %in, 0
  %inhi = v4i32 extract_subvector %in, 4
  %lo16 = v4i16 trunc v4i32 %inlo
  %hi16 = v4i16 trunc v4i32 %inhi
  %in16 = v8i16 concat_vectors v4i16 %lo16, v4i16 %hi16
  %res  = v8i8 trunc v8i16 %in16
This allows instruction selection to generate three VMOVN instructions instead of a sequence of moves, stores and loads.

Update the ARMTargetTransformInfo to take this improved legalization into account.

Consider the simplified IR:

  define <16 x i8> @test1(<16 x i32>* %ap) {
    %a = load <16 x i32>* %ap
    %tmp = trunc <16 x i32> %a to <16 x i8>
    ret <16 x i8> %tmp
  }

  define <8 x i8> @test2(<8 x i32>* %ap) {
    %a = load <8 x i32>* %ap
    %tmp = trunc <8 x i32> %a to <8 x i8>
    ret <8 x i8> %tmp
  }

Previously, we would generate the truly hideous:

        .syntax unified
        .section __TEXT,__text,regular,pure_instructions
        .globl  _test1
        .align  2
  _test1:                                 @ @test1
  @ BB#0:
        push    {r7}
        mov     r7, sp
        sub     sp, sp, #20
        bic     sp, sp, #7
        add     r1, r0, #48
        add     r2, r0, #32
        vld1.64 {d24, d25}, [r0:128]
        vld1.64 {d16, d17}, [r1:128]
        vld1.64 {d18, d19}, [r2:128]
        add     r1, r0, #16
        vmovn.i32       d22, q8
        vld1.64 {d16, d17}, [r1:128]
        vmovn.i32       d20, q9
        vmovn.i32       d18, q12
        vmov.u16        r0, d22[3]
        strb    r0, [sp, #15]
        vmov.u16        r0, d22[2]
        strb    r0, [sp, #14]
        vmov.u16        r0, d22[1]
        strb    r0, [sp, #13]
        vmov.u16        r0, d22[0]
        vmovn.i32       d16, q8
        strb    r0, [sp, #12]
        vmov.u16        r0, d20[3]
        strb    r0, [sp, #11]
        vmov.u16        r0, d20[2]
        strb    r0, [sp, #10]
        vmov.u16        r0, d20[1]
        strb    r0, [sp, #9]
        vmov.u16        r0, d20[0]
        strb    r0, [sp, #8]
        vmov.u16        r0, d18[3]
        strb    r0, [sp, #3]
        vmov.u16        r0, d18[2]
        strb    r0, [sp, #2]
        vmov.u16        r0, d18[1]
        strb    r0, [sp, #1]
        vmov.u16        r0, d18[0]
        strb    r0, [sp]
        vmov.u16        r0, d16[3]
        strb    r0, [sp, #7]
        vmov.u16        r0, d16[2]
        strb    r0, [sp, #6]
        vmov.u16        r0, d16[1]
        strb    r0, [sp, #5]
        vmov.u16        r0, d16[0]
        strb    r0, [sp, #4]
        vldmia  sp, {d16, d17}
        vmov    r0, r1, d16
        vmov    r2, r3, d17
        mov     sp, r7
        pop     {r7}
        bx      lr

        .globl  _test2
        .align  2
  _test2:                                 @ @test2
  @ BB#0:
        push    {r7}
        mov     r7, sp
        sub     sp, sp, #12
        bic     sp, sp, #7
        vld1.64 {d16, d17}, [r0:128]
        add     r0, r0, #16
        vld1.64 {d20, d21}, [r0:128]
        vmovn.i32       d18, q8
        vmov.u16        r0, d18[3]
        vmovn.i32       d16, q10
        strb    r0, [sp, #3]
        vmov.u16        r0, d18[2]
        strb    r0, [sp, #2]
        vmov.u16        r0, d18[1]
        strb    r0, [sp, #1]
        vmov.u16        r0, d18[0]
        strb    r0, [sp]
        vmov.u16        r0, d16[3]
        strb    r0, [sp, #7]
        vmov.u16        r0, d16[2]
        strb    r0, [sp, #6]
        vmov.u16        r0, d16[1]
        strb    r0, [sp, #5]
        vmov.u16        r0, d16[0]
        strb    r0, [sp, #4]
        ldm     sp, {r0, r1}
        mov     sp, r7
        pop     {r7}
        bx      lr

Now, however, we generate the much more straightforward:

        .syntax unified
        .section __TEXT,__text,regular,pure_instructions
        .globl  _test1
        .align  2
  _test1:                                 @ @test1
  @ BB#0:
        add     r1, r0, #48
        add     r2, r0, #32
        vld1.64 {d20, d21}, [r0:128]
        vld1.64 {d16, d17}, [r1:128]
        add     r1, r0, #16
        vld1.64 {d18, d19}, [r2:128]
        vld1.64 {d22, d23}, [r1:128]
        vmovn.i32       d17, q8
        vmovn.i32       d16, q9
        vmovn.i32       d18, q10
        vmovn.i32       d19, q11
        vmovn.i16       d17, q8
        vmovn.i16       d16, q9
        vmov    r0, r1, d16
        vmov    r2, r3, d17
        bx      lr

        .globl  _test2
        .align  2
  _test2:                                 @ @test2
  @ BB#0:
        vld1.64 {d16, d17}, [r0:128]
        add     r0, r0, #16
        vld1.64 {d18, d19}, [r0:128]
        vmovn.i32       d16, q8
        vmovn.i32       d17, q9
        vmovn.i16       d16, q8
        vmov    r0, r1, d16
        bx      lr

llvm-svn: 179989
-
- Apr 21, 2013
-
-
Jakob Stoklund Olesen authored
Arguments after the fixed arguments never use the floating point registers. llvm-svn: 179987
-
Jakob Stoklund Olesen authored
Don't ignore the high 32 bits of the immediate. llvm-svn: 179985
-
Tim Northover authored
This allows common sp-offsets to be part of the instruction and is probably faster on modern CPUs too. llvm-svn: 179977
-
Jakob Stoklund Olesen authored
With a little help from the frontend, it looks like the standard va_* intrinsics can do the job. Also clean up an old bitcast hack in LowerVAARG that dealt with unaligned double loads. Load SDNodes can specify an alignment now. Still missing: Calling varargs functions with float arguments. llvm-svn: 179961
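For reference, the "standard va_* intrinsics" are what the frontend emits for <cstdarg>-style callees; a minimal example of the kind of function this lowering now handles (integer-only, since float varargs are listed as still missing):

    #include <cstdarg>

    // The frontend lowers va_start/va_end to the llvm.va_start/llvm.va_end
    // intrinsics and either expands each va_arg read itself or leaves a
    // va_arg for the target's LowerVAARG to handle.
    long sum_ints(int count, ...) {
      va_list ap;
      va_start(ap, count);
      long total = 0;
      for (int i = 0; i < count; ++i)
        total += va_arg(ap, int);
      va_end(ap);
      return total;
    }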
-
- Apr 20, 2013
-
-
Tim Northover authored
Previously, when spilling 64-bit paired registers, an LDMIA with both a FrameIndex and an offset was produced. This kind of instruction shouldn't exist, and the extra operand was being confused with the predicate, causing aborts later on. This removes the invalid 0-offset from the instruction being produced. llvm-svn: 179956
-
Tim Northover authored
llvm-svn: 179952
-
Tim Northover authored
I think it's almost impossible to fold atomic fences profitably under LLVM/C++11 semantics. As a result, this is now unused and just cluttering up the target interface. llvm-svn: 179940
-
Tim Northover authored
llvm-svn: 179939
-
Hal Finkel authored
The getSwappedPredicate function can be used in other places (such as in improvements to the PPCCTRLoops pass). Instead of trapping it as a static function in PPCInstrInfo, move it into PPCPredicates with other predicate-related things. No functionality change intended. llvm-svn: 179926
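For illustration, a predicate "swap" is what a compare needs when its operands are commuted: (a PRED b) must equal (b SWAPPED_PRED a). A minimal stand-alone analogue with a made-up enum (the real helper operates on PPC::Predicate values):

    #include <cassert>

    enum class Pred { EQ, NE, LT, LE, GT, GE }; // hypothetical condition codes

    // Return P' such that (a P b) == (b P' a) -- e.g. LT becomes GT.
    Pred getSwappedPred(Pred P) {
      switch (P) {
      case Pred::EQ: return Pred::EQ;
      case Pred::NE: return Pred::NE;
      case Pred::LT: return Pred::GT;
      case Pred::GT: return Pred::LT;
      case Pred::LE: return Pred::GE;
      case Pred::GE: return Pred::LE;
      }
      assert(false && "unknown predicate");
      return P;
    }

Keeping a helper like this in a shared predicates header, rather than as a static function in one .cpp file, is what lets other passes such as PPCCTRLoops reuse it.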
-
Stephen Lin authored
Add CodeGen support for functions that always return arguments via a new parameter attribute 'returned', which is taken advantage of in target-independent tail call opportunity detection and in ARM call lowering (when placed on an integral first parameter). llvm-svn: 179925
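A source-level sketch of the pattern the attribute captures (hypothetical functions; whether a given parameter actually gets marked 'returned' is up to the frontend or an analysis): a callee that always hands back its first argument lets a caller that returns the call's result keep the value in the argument register and treat the call as a tail-call opportunity.

    #include <cstddef>
    #include <cstring>

    // Always returns its first argument; in IR the parameter can carry the
    // new attribute (e.g. "i8* returned %dst"), promising result == %dst.
    char *fill(char *dst, char value, std::size_t n) {
      std::memset(dst, value, n);
      return dst;
    }

    // Because fill's result is known to be 'dst', this wrapper is a
    // tail-call opportunity of the kind described above: nothing about the
    // return value needs recomputing after the call.
    char *zero(char *dst, std::size_t n) {
      return fill(dst, 0, n);
    }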
-
Stephen Lin authored
llvm-svn: 179913
-