- Jun 28, 2010
-
Devang Patel authored
llvm-svn: 106990
-
Devang Patel authored
Use a named MDNode, llvm.dbg.sp, to collect subprogram info. This will be used to emit debug info for the local variables of deleted functions. llvm-svn: 106989
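Illustration (not part of the commit): a minimal sketch of how a pass could append a subprogram descriptor to a named node like llvm.dbg.sp, written against the current LLVM C++ API with an invented helper name:

    // Minimal sketch, not the actual patch: append a subprogram descriptor to
    // the llvm.dbg.sp named metadata node so it stays reachable even after the
    // function it describes is deleted.
    #include "llvm/IR/Metadata.h"
    #include "llvm/IR/Module.h"

    // recordSubprogram is an illustrative helper name, not part of LLVM.
    void recordSubprogram(llvm::Module &M, llvm::MDNode *Subprogram) {
      // Creates the named node on first use, then appends the operand.
      llvm::NamedMDNode *SPs = M.getOrInsertNamedMetadata("llvm.dbg.sp");
      SPs->addOperand(Subprogram);
    }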
-
Jim Grosbach authored
llvm-svn: 106988
-
Chandler Carruth authored
much as we already do for allocation function lookup. Explicitly check access for the function we actually select in one case where that check was previously missing but was hidden behind the blanket diagnostics for all overload candidates. This fixes PR7436. llvm-svn: 106986
-
- Jun 27, 2010
-
Devang Patel authored
Do not forget the last element, the function, while creating a subprogram definition MDNode from a subprogram declaration MDNode. llvm-svn: 106985
-
Rafael Espindola authored
llvm-svn: 106984
-
Anders Carlsson authored
Correctly destroy reference temporaries with global storage. Remove ErrorUnsupported call when binding a global reference to a non-lvalue. Fixes PR7326. llvm-svn: 106983
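Illustration (hypothetical source, not from the commit or PR7326): a reference temporary with global storage whose destructor must run at program exit:

    // Hypothetical example of a reference temporary with global storage.
    struct Logger {
      Logger();
      ~Logger();            // must run at program exit for the extended temporary
    };

    // The temporary Logger() lives as long as GlobalLog, so CodeGen has to emit
    // a global-storage destruction path for it (e.g. via __cxa_atexit).
    const Logger &GlobalLog = Logger();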
-
Anders Carlsson authored
llvm-svn: 106982
-
Anders Carlsson authored
llvm-svn: 106981
-
Anders Carlsson authored
llvm-svn: 106980
-
Chris Lattner authored
large integers, the first inserted value would always create an 'or X, 0'. Even though this is trivially zapped by instcombine, don't bother creating this pointless instruction. llvm-svn: 106979
-
Chris Lattner authored
llvm-svn: 106978
-
Chris Lattner authored
have CGF create and make accessible standard int32, int64 and intptr types. This fixes a ton of 80-column violations introduced by LLVMContextification and cleans up stuff a lot. llvm-svn: 106977
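Illustration (sketch with invented names, not the actual CodeGenFunction interface): compute the common integer types once and reuse them everywhere:

    // Sketch only: cache frequently used integer types so codegen code does not
    // have to spell out llvm::Type::getInt32Ty(Ctx) at every use site.
    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Type.h"

    struct CommonIntTypes {   // illustrative name, not the real CGF members
      llvm::Type *Int32Ty;
      llvm::Type *Int64Ty;
      llvm::Type *IntPtrTy;
      CommonIntTypes(llvm::LLVMContext &Ctx, const llvm::DataLayout &DL)
          : Int32Ty(llvm::Type::getInt32Ty(Ctx)),
            Int64Ty(llvm::Type::getInt64Ty(Ctx)),
            IntPtrTy(DL.getIntPtrType(Ctx)) {}
    };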
-
Chris Lattner authored
llvm-svn: 106976
-
Chris Lattner authored
(potentially after unwrapping it from a struct) do it without going through memory. We now compile:

    struct DeclGroup { unsigned NumDecls; };
    int foo(DeclGroup D) { return D.NumDecls; }

into:

    %struct.DeclGroup = type { i32 }

    define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
    entry:
      %D = alloca %struct.DeclGroup, align 4            ; <%struct.DeclGroup*> [#uses=2]
      %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
      %coerce.val.ii = trunc i64 %0 to i32              ; <i32> [#uses=1]
      store i32 %coerce.val.ii, i32* %coerce.dive
      %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
      %tmp1 = load i32* %tmp                            ; <i32> [#uses=1]
      ret i32 %tmp1
    }

instead of:

    %struct.DeclGroup = type { i32 }

    define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
    entry:
      %D = alloca %struct.DeclGroup, align 4            ; <%struct.DeclGroup*> [#uses=2]
      %tmp = alloca i64                                  ; <i64*> [#uses=2]
      %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
      store i64 %0, i64* %tmp
      %1 = bitcast i64* %tmp to i32*                    ; <i32*> [#uses=1]
      %2 = load i32* %1, align 1                        ; <i32> [#uses=1]
      store i32 %2, i32* %coerce.dive
      %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
      %tmp2 = load i32* %tmp1                           ; <i32> [#uses=1]
      ret i32 %tmp2
    }

... which is quite a bit less terrifying.

llvm-svn: 106975
-
Chris Lattner authored
    struct DeclGroup { unsigned NumDecls; };
    int foo(DeclGroup D) { return D.NumDecls; }

to:

    %struct.DeclGroup = type { i32 }

    define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
    entry:
      %D = alloca %struct.DeclGroup, align 4            ; <%struct.DeclGroup*> [#uses=2]
      %tmp = alloca i64                                  ; <i64*> [#uses=2]
      store i64 %0, i64* %tmp
      %1 = bitcast i64* %tmp to %struct.DeclGroup*      ; <%struct.DeclGroup*> [#uses=1]
      %2 = load %struct.DeclGroup* %1, align 1          ; <%struct.DeclGroup> [#uses=1]
      store %struct.DeclGroup %2, %struct.DeclGroup* %D
      %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
      %tmp2 = load i32* %tmp1                           ; <i32> [#uses=1]
      ret i32 %tmp2
    }

which caused fast isel bailouts due to the FCA load/store of %2. Now we generate this just blissful code:

    %struct.DeclGroup = type { i32 }

    define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
    entry:
      %D = alloca %struct.DeclGroup, align 4            ; <%struct.DeclGroup*> [#uses=2]
      %tmp = alloca i64                                  ; <i64*> [#uses=2]
      %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
      store i64 %0, i64* %tmp
      %1 = bitcast i64* %tmp to i32*                    ; <i32*> [#uses=1]
      %2 = load i32* %1, align 1                        ; <i32> [#uses=1]
      store i32 %2, i32* %coerce.dive
      %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i32*> [#uses=1]
      %tmp2 = load i32* %tmp1                           ; <i32> [#uses=1]
      ret i32 %tmp2
    }

This avoids fastisel bailing out and is groundwork for a future patch. This reduces bailouts on CGStmt.ll to 911 from 935.

llvm-svn: 106974
-
Chris Lattner authored
IR when handling X86-64 by-value struct stuff. For example, we used to compile this:

    struct DeclGroup { unsigned NumDecls; };
    int foo(DeclGroup D);
    void bar(DeclGroup *D) { foo(*D); }

into:

    define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) ssp nounwind {
    entry:
      %D.addr = alloca %struct.DeclGroup*, align 8      ; <%struct.DeclGroup**> [#uses=2]
      %agg.tmp = alloca %struct.DeclGroup, align 4      ; <%struct.DeclGroup*> [#uses=2]
      %tmp3 = alloca i64                                ; <i64*> [#uses=2]
      store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
      %tmp = load %struct.DeclGroup** %D.addr           ; <%struct.DeclGroup*> [#uses=1]
      %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
      %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*    ; <i8*> [#uses=1]
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
      %0 = bitcast i64* %tmp3 to %struct.DeclGroup*     ; <%struct.DeclGroup*> [#uses=1]
      %1 = load %struct.DeclGroup* %agg.tmp             ; <%struct.DeclGroup> [#uses=1]
      store %struct.DeclGroup %1, %struct.DeclGroup* %0, align 1
      %2 = load i64* %tmp3                              ; <i64> [#uses=1]
      call void @_Z3foo9DeclGroup(i64 %2)
      ret void
    }

which would cause fastisel to bail out due to the first class aggregate load %1. With this patch we now compile it into the (still awful):

    define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind ssp noredzone {
    entry:
      %D.addr = alloca %struct.DeclGroup*, align 8      ; <%struct.DeclGroup**> [#uses=2]
      %agg.tmp = alloca %struct.DeclGroup, align 4      ; <%struct.DeclGroup*> [#uses=2]
      %tmp3 = alloca i64                                ; <i64*> [#uses=2]
      store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
      %tmp = load %struct.DeclGroup** %D.addr           ; <%struct.DeclGroup*> [#uses=1]
      %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
      %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*    ; <i8*> [#uses=1]
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
      %coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0 ; <i32*> [#uses=1]
      %0 = bitcast i64* %tmp3 to i32*                   ; <i32*> [#uses=1]
      %1 = load i32* %coerce.dive                       ; <i32> [#uses=1]
      store i32 %1, i32* %0, align 1
      %2 = load i64* %tmp3                              ; <i64> [#uses=1]
      %call = call i32 @_Z3foo9DeclGroup(i64 %2) noredzone ; <i32> [#uses=0]
      ret void
    }

which doesn't bail out. On CGStmt.ll, this reduces fastisel bail outs from 958 to 935, and is the precursor of better things to come.

llvm-svn: 106973
-
Jordy Rose authored
Implicitly compare symbolic expressions to zero when they're being used as constraints. Part of PR7491. llvm-svn: 106972
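Illustration (hypothetical analyzer input, not from the commit or PR7491): a symbolic expression used directly as a branch condition implies a comparison against zero:

    // Hypothetical input for the static analyzer: taking the true branch should
    // record the constraint (n + 1) != 0, i.e. an implicit comparison to zero.
    void test(int *p, int n) {
      if (n + 1)     // symbolic expression used directly as a constraint
        *p = 0;      // on this path the analyzer can assume n + 1 != 0
    }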
-
Chris Lattner authored
llvm-svn: 106971
-
Chris Lattner authored
load/store nonsense in the epilog. For example, for:

    int foo(int X) { int A[100]; return A[X]; }

we used to generate:

    %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
    %tmp1 = load i32* %arrayidx             ; <i32> [#uses=1]
    store i32 %tmp1, i32* %retval
    %0 = load i32* %retval                  ; <i32> [#uses=1]
    ret i32 %0
    }

which codegen'd to this code:

    _foo:                                   ## @foo
    ## BB#0:                                ## %entry
      subq  $408, %rsp                      ## imm = 0x198
      movl  %edi, 400(%rsp)
      movl  400(%rsp), %edi
      movslq  %edi, %rax
      movl  (%rsp,%rax,4), %edi
      movl  %edi, 404(%rsp)
      movl  404(%rsp), %eax
      addq  $408, %rsp                      ## imm = 0x198
      ret

Now we generate:

    %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
    %tmp1 = load i32* %arrayidx             ; <i32> [#uses=1]
    ret i32 %tmp1
    }

and:

    _foo:                                   ## @foo
    ## BB#0:                                ## %entry
      subq  $408, %rsp                      ## imm = 0x198
      movl  %edi, 404(%rsp)
      movl  404(%rsp), %edi
      movslq  %edi, %rax
      movl  (%rsp,%rax,4), %eax
      addq  $408, %rsp                      ## imm = 0x198
      ret

This actually does matter, cutting out 2000 lines of IR from CGStmt.ll for example. Another interesting effect is that altivec.h functions which are dead now get dce'd by the inliner. Hence all the changes to builtins-ppc-altivec.c to ensure the calls aren't dead.

llvm-svn: 106970
-
Chris Lattner authored
llvm-svn: 106969
-
Chris Lattner authored
llvm-svn: 106968
-
Chris Lattner authored
llvm-svn: 106967
-
Chris Lattner authored
rdar://7530813: This avoids generating two gep's for common array operations. Before we would generate something like:

    %tmp = load i32* %X.addr                ; <i32> [#uses=1]
    %arraydecay = getelementptr inbounds [100 x i32]* %A, i32 0, i32 0 ; <i32*> [#uses=1]
    %arrayidx = getelementptr inbounds i32* %arraydecay, i32 %tmp ; <i32*> [#uses=1]
    %tmp1 = load i32* %arrayidx             ; <i32> [#uses=1]

Now we generate:

    %tmp = load i32* %X.addr                ; <i32> [#uses=1]
    %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i32 %tmp ; <i32*> [#uses=1]
    %tmp1 = load i32* %arrayidx             ; <i32> [#uses=1]

Less IR is better at -O0.

llvm-svn: 106966
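Illustration (hypothetical C source, not from the commit): the kind of array access that produces the decay-plus-index GEP pair shown above:

    // Hypothetical source of the shape described above: the array-to-pointer
    // decay GEP and the index GEP can be folded into a single GEP.
    int foo(int X) {
      int A[100];
      return A[X];
    }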
-
Ted Kremenek authored
llvm-svn: 106964
-
Chris Lattner authored
we're done diddling around with the index stuff. Use a cheaper type comparison. llvm-svn: 106963
-
Chris Lattner authored
llvm-svn: 106962
-
Chris Lattner authored
of being in CGF. No functionality change. llvm-svn: 106961
-
Chris Lattner authored
it for now. llvm-svn: 106960
-
Benjamin Kramer authored
llvm-svn: 106959
-
- Jun 26, 2010
-
Chris Lattner authored
llvm-svn: 106958
-
Chris Lattner authored
code so we can use it from VisitUnaryMinus. llvm-svn: 106957
-
Chris Lattner authored
rdar://7221421: As part of this, pull together trapv handling into the same enum. This also adds support for NSW multiplies. This also makes PCH disagreement on overflow behavior silent, since it really doesn't matter except for warnings and codegen (no macros get defined etc). llvm-svn: 106956
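Illustration (example is not from the commit): the same signed multiply lowers differently depending on the overflow mode being unified here:

    // Illustrative only: signed multiply whose lowering depends on the mode.
    //   default : emitted as an NSW multiply (signed overflow is undefined)
    //   -fwrapv : plain wrapping multiply
    //   -ftrapv : overflow-checked multiply that traps at runtime
    int scale(int a, int b) {
      return a * b;
    }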
-
Chris Lattner authored
rdar://7432000: While I'm in there, adjust pointer to member adjustments as well. llvm-svn: 106955
-
Benjamin Kramer authored
llvm-svn: 106954
-
Kenneth Uildriks authored
Partial specialization test should not depend on the order of specialization operations or the names assigned to the specialized functions. llvm-svn: 106953
-
Rafael Espindola authored
This produces terrible but correct code. llvm-svn: 106952
-
Bob Wilson authored
regressions.

    --- Reverse-merging r106939 into '.':
    U    test/CodeGen/Thumb2/thumb2-ifcvt3.ll
    U    lib/CodeGen/IfConversion.cpp

llvm-svn: 106951
-
Chris Lattner authored
llvm-svn: 106950
-
Anders Carlsson authored
llvm-svn: 106949
-