- Jul 29, 2010
-
Douglas Gregor authored
qualifiers) when checking a K&R function definition against a previous prototype. Fixes <rdar://problem/8193107>. llvm-svn: 109751
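Assuming the elided first line of this entry says that qualifiers are ignored when the parameter types are compared, a minimal hypothetical reduction (not the original test case for the radar) might look like this:

/* The prototype's parameter is const-qualified, the K&R definition's is
 * not; after stripping qualifiers the two declarations should be
 * accepted as compatible rather than diagnosed as conflicting. */
int f(const int x);

int f(x)
    int x;
{
    return x;
}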
-
Howard Hinnant authored
llvm-svn: 109750
-
Douglas Gregor authored
before looking for conversions to pointer type. Fixes <rdar://problem/8248780>. llvm-svn: 109749
-
Douglas Gregor authored
Reword the empty struct/union warning in C to note that such structs and unions have size 0 in C, size 1 in C++. Put this warning under -Wc++-compat. llvm-svn: 109748
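A hypothetical translation unit showing the construct in question; compiled as C with -Wc++-compat, the reworded warning would fire on both definitions:

/* Empty struct/union definitions: size 0 in C but size 1 in C++,
 * which is exactly the difference the reworded warning points out. */
struct empty { };
union nothing { };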
-
Rafael Espindola authored
memory when one of the original BBs is destroyed. llvm-svn: 109747
-
Benjamin Kramer authored
llvm-svn: 109746
-
Benjamin Kramer authored
llvm-svn: 109745
-
John McCall authored
it establishes a context and does a complaining diff. Also make sure we unify the prelude and postlude of a diff after a block-diff call. llvm-svn: 109744
-
John McCall authored
structurally identical. llvm-svn: 109743
-
John McCall authored
any differences we see. This should only happen if there are "non-structural" differences between the instructions, i.e. differences which wouldn't cause diff to return true. llvm-svn: 109742
-
John McCall authored
in despite not ever incrementing any path costs, so that the only nonzero costs arose from the all-left path in the first column. Anyway. Perform the diff starting from the beginning of the block to avoid capturing (say) loads of allocas. Vastly improves diff results on code that hasn't been mem2reg'ed. llvm-svn: 109741
-
John McCall authored
llvm-svn: 109740
-
John McCall authored
diff of a function. There's a lot of cruft in the current version, and it's pretty far from perfect, but it's usable. Currently only capable of comparing functions. Currently ignores metadata. Currently ignores most attributes of functions and instructions. Patches welcome. llvm-svn: 109739
-
Chris Lattner authored
struct a { struct c { double x; int y; } x[1]; };
void foo(struct a A) { }

into:

define void @foo(double %A.coerce0, i32 %A.coerce1) nounwind {
entry:
  %A = alloca %struct.a, align 8                 ; <%struct.a*> [#uses=1]
  %0 = bitcast %struct.a* %A to %struct.c*       ; <%struct.c*> [#uses=2]
  %1 = getelementptr %struct.c* %0, i32 0, i32 0 ; <double*> [#uses=1]
  store double %A.coerce0, double* %1
  %2 = getelementptr %struct.c* %0, i32 0, i32 1 ; <i32*> [#uses=1]
  store i32 %A.coerce1, i32* %2

instead of:

define void @foo(double %A.coerce0, i64 %A.coerce1) nounwind {
entry:
  %A = alloca %struct.a, align 8                 ; <%struct.a*> [#uses=1]
  %0 = bitcast %struct.a* %A to %0*              ; <%0*> [#uses=2]
  %1 = getelementptr %0* %0, i32 0, i32 0        ; <double*> [#uses=1]
  store double %A.coerce0, double* %1
  %2 = getelementptr %0* %0, i32 0, i32 1        ; <i64*> [#uses=1]
  store i64 %A.coerce1, i64* %2

I only do this now because I never want to look at this code again :)
llvm-svn: 109738
-
Chris Lattner authored
small integer + padding as that small integer. On code like:

struct c { double x; int y; };
void bar(struct c C) { }

This means that we compile to:

define void @bar(double %C.coerce0, i32 %C.coerce1) nounwind {
entry:
  %C = alloca %struct.c, align 8                 ; <%struct.c*> [#uses=2]
  %0 = getelementptr %struct.c* %C, i32 0, i32 0 ; <double*> [#uses=1]
  store double %C.coerce0, double* %0
  %1 = getelementptr %struct.c* %C, i32 0, i32 1 ; <i32*> [#uses=1]
  store i32 %C.coerce1, i32* %1

instead of:

define void @bar(double %C.coerce0, i64 %C.coerce1) nounwind {
entry:
  %C = alloca %struct.c, align 8                 ; <%struct.c*> [#uses=3]
  %0 = bitcast %struct.c* %C to %0*              ; <%0*> [#uses=2]
  %1 = getelementptr %0* %0, i32 0, i32 0        ; <double*> [#uses=1]
  store double %C.coerce0, double* %1
  %2 = getelementptr %0* %0, i32 0, i32 1        ; <i64*> [#uses=1]
  store i64 %C.coerce1, i64* %2

which gives SRoA heartburn.

This implements rdar://5711709, a nice low number :)
llvm-svn: 109737
-
Jordy Rose authored
llvm-svn: 109736
-
Chris Lattner authored
llvm-svn: 109735
-
Jordy Rose authored
Use a LazyCompoundVal to handle initialization with a string literal, rather than copying each character. llvm-svn: 109734
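A hypothetical example of the kind of code affected; with this change the analyzer binds the whole initializer as a single LazyCompoundVal instead of modeling one store per character:

/* Array initialized from a string literal. */
void init(void) {
    char buf[6] = "hello";
    (void)buf;
}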
-
Chris Lattner authored
have a "coerce to" type which often matches the default lowering of Clang type to LLVM IR type, but the coerce case can be handled by making them not be the same.

This simplifies things and fixes issues where X86-64 abi lowering would return coerce after making preferred types exactly match up. This caused us to compile:

typedef float v4f32 __attribute__((__vector_size__(16)));
v4f32 foo(v4f32 X) {
  return X+X;
}

into this code at -O0:

define <4 x float> @foo(<4 x float> %X.coerce) nounwind {
entry:
  %retval = alloca <4 x float>, align 16         ; <<4 x float>*> [#uses=2]
  %coerce = alloca <4 x float>, align 16         ; <<4 x float>*> [#uses=2]
  %X.addr = alloca <4 x float>, align 16         ; <<4 x float>*> [#uses=3]
  store <4 x float> %X.coerce, <4 x float>* %coerce
  %X = load <4 x float>* %coerce                 ; <<4 x float>> [#uses=1]
  store <4 x float> %X, <4 x float>* %X.addr
  %tmp = load <4 x float>* %X.addr               ; <<4 x float>> [#uses=1]
  %tmp1 = load <4 x float>* %X.addr              ; <<4 x float>> [#uses=1]
  %add = fadd <4 x float> %tmp, %tmp1            ; <<4 x float>> [#uses=1]
  store <4 x float> %add, <4 x float>* %retval
  %0 = load <4 x float>* %retval                 ; <<4 x float>> [#uses=1]
  ret <4 x float> %0
}

Now we get:

define <4 x float> @foo(<4 x float> %X) nounwind {
entry:
  %X.addr = alloca <4 x float>, align 16         ; <<4 x float>*> [#uses=3]
  store <4 x float> %X, <4 x float>* %X.addr
  %tmp = load <4 x float>* %X.addr               ; <<4 x float>> [#uses=1]
  %tmp1 = load <4 x float>* %X.addr              ; <<4 x float>> [#uses=1]
  %add = fadd <4 x float> %tmp, %tmp1            ; <<4 x float>> [#uses=1]
  ret <4 x float> %add
}

This implements rdar://8248065
llvm-svn: 109733
-
Chris Lattner authored
Before we'd compile the example into something like:

  %coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
  %1 = bitcast <4 x float>* %coerce.dive2 to <2 x double>* ; <<2 x double>*> [#uses=1]
  %2 = load <2 x double>* %1, align 1            ; <<2 x double>> [#uses=1]
  ret <2 x double> %2

Now we produce:

  %coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
  %0 = load <4 x float>* %coerce.dive2, align 1  ; <<4 x float>> [#uses=1]
  ret <4 x float> %0

llvm-svn: 109732
-
Chris Lattner authored
with return values, improving stuff that returns __m128 etc. llvm-svn: 109731
-
Chris Lattner authored
llvm-svn: 109730
-
Chris Lattner authored
for return values too. Instead of compiling something like:

struct foo { int *X; float *Y; };

struct foo test(struct foo *P) { return *P; }

to:

%1 = type { i64, i64 }

define %1 @test(%struct.foo* %P) nounwind {
entry:
  %retval = alloca %struct.foo, align 8          ; <%struct.foo*> [#uses=2]
  %P.addr = alloca %struct.foo*, align 8         ; <%struct.foo**> [#uses=2]
  store %struct.foo* %P, %struct.foo** %P.addr
  %tmp = load %struct.foo** %P.addr              ; <%struct.foo*> [#uses=1]
  %tmp1 = bitcast %struct.foo* %retval to i8*    ; <i8*> [#uses=1]
  %tmp2 = bitcast %struct.foo* %tmp to i8*       ; <i8*> [#uses=1]
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
  %0 = bitcast %struct.foo* %retval to %1*       ; <%1*> [#uses=1]
  %1 = load %1* %0, align 1                      ; <%1> [#uses=1]
  ret %1 %1
}

We now get the result more type safe, with:

define %struct.foo @test(%struct.foo* %P) nounwind {
entry:
  %retval = alloca %struct.foo, align 8          ; <%struct.foo*> [#uses=2]
  %P.addr = alloca %struct.foo*, align 8         ; <%struct.foo**> [#uses=2]
  store %struct.foo* %P, %struct.foo** %P.addr
  %tmp = load %struct.foo** %P.addr              ; <%struct.foo*> [#uses=1]
  %tmp1 = bitcast %struct.foo* %retval to i8*    ; <i8*> [#uses=1]
  %tmp2 = bitcast %struct.foo* %tmp to i8*       ; <i8*> [#uses=1]
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
  %0 = load %struct.foo* %retval                 ; <%struct.foo> [#uses=1]
  ret %struct.foo %0
}

That memcpy is completely terrible, but I don't know how to fix it.
llvm-svn: 109729
-
Chris Lattner authored
improve codegen for vaarg or something, because its codepath is getting preferred types now. llvm-svn: 109728
-
Daniel Dunbar authored
llvm-svn: 109727
-
Chris Lattner authored
compute its own preferred types instead of having CGT compute them then pass them (circuitously) down into ABIInfo. llvm-svn: 109726
-
Daniel Dunbar authored
llvm-svn: 109725
-
Chris Lattner authored
llvm-svn: 109724
-
Chris Lattner authored
things as TargetData, ASTContext, LLVMContext etc. Stop passing them through so many APIs. llvm-svn: 109723
-
Chris Lattner authored
This will simplify a bunch of code, coming up next. llvm-svn: 109722
-
Daniel Dunbar authored
llvm-svn: 109721
-
Daniel Dunbar authored
llvm-svn: 109720
-
Ted Kremenek authored
Teach GRExprEngine::VisitLValue() about FloatingLiteral, ImaginaryLiteral, and CharacterLiteral. Fixes an assertion failure reported in PR 7675. llvm-svn: 109719
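The real trigger lives in PR 7675; as a stand-in, here is a hypothetical snippet (not the original reproducer) that simply contains the three literal kinds VisitLValue() now handles:

/* FloatingLiteral, ImaginaryLiteral, and CharacterLiteral; the GNU
 * imaginary-constant suffix is used for the ImaginaryLiteral. */
void literals(void) {
    float f = 1.0f;
    _Complex double z = 2.0i;
    char c = 'a';
    (void)f; (void)z; (void)c;
}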
-
Eric Christopher authored
angst. llvm-svn: 109718
-
Daniel Dunbar authored
- This works, but won't handle crashes on stack overflow, or signals delivered to a thread other than the one that crashed. The latter is particularly annoying on Darwin, because SIGABRT tends to go to the main thread. llvm-svn: 109717
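As a generic illustration only (this is not the code the entry adds), the usual signal-plus-longjmp recovery pattern makes the cited limitations concrete: the handler runs on the crashing thread's own stack, so a stack overflow leaves it nowhere to run, and a signal routed to another thread never reaches the jump buffer that guards the work.

#include <setjmp.h>
#include <signal.h>

/* Hypothetical sketch of signal-based crash recovery. */
static sigjmp_buf recover_point;

static void on_crash(int sig) {
    (void)sig;
    siglongjmp(recover_point, 1);   /* unwind back to run_guarded() */
}

int run_guarded(void (*work)(void)) {
    signal(SIGSEGV, on_crash);
    if (sigsetjmp(recover_point, 1) != 0)
        return 0;                   /* crashed and recovered */
    work();
    return 1;                       /* completed normally */
}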
-
Howard Hinnant authored
llvm-svn: 109716
-
Jakob Stoklund Olesen authored
multiple defs, like t2LDRSB_POST. The first def could accidentally steal the physreg that the second, tied def was required to be allocated to. Now, the tied use-def is treated more like an early clobber, and the physreg is reserved before allocating the other defs. This would never be a problem when the tied def was the only def, which is the usual case. This fixes MallocBench/gs for thumb2 -O0. llvm-svn: 109715
-
Jakob Stoklund Olesen authored
llvm-svn: 109714
-
Ted Kremenek authored
Check for an invalid SourceLocation in clang_getCursor(). This avoids a possible assertion failure in SourceManager in the call to Lexer::GetBeginningOfToken(). Fixes <rdar://problem/8244873>. llvm-svn: 109713
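A minimal libclang sketch of the call path involved; the file name and the out-of-range line/column are placeholders chosen to produce a possibly invalid location, not values from the original report:

#include <clang-c/Index.h>

int main(void) {
    CXIndex idx = clang_createIndex(0, 0);
    CXTranslationUnit tu =
        clang_parseTranslationUnit(idx, "input.c", NULL, 0, NULL, 0, 0);
    CXFile file = clang_getFile(tu, "input.c");
    /* An out-of-range position can yield an invalid source location. */
    CXSourceLocation loc = clang_getLocation(tu, file, 100000, 1);
    CXCursor cur = clang_getCursor(tu, loc);  /* now checked, no assert */
    (void)cur;
    clang_disposeTranslationUnit(tu);
    clang_disposeIndex(idx);
    return 0;
}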
-
Johnny Chen authored
llvm-svn: 109712
-