- Aug 28, 2010
-
John McCall authored
kind. Fixes PR7252. llvm-svn: 112383
-
Dan Gohman authored
llvm-svn: 112382
-
Dan Gohman authored
llvm-svn: 112381
-
Ted Kremenek authored
llvm-svn: 112380
-
Chris Lattner authored
insertp[sd] $0, which is a noop. Before:

    _f32:                           ## @f32
        pshufd   $1, %xmm1, %xmm2
        pshufd   $1, %xmm0, %xmm3
        addss    %xmm2, %xmm3
        addss    %xmm1, %xmm0
                                    ## kill: XMM0<def> XMM0<kill> XMM0<def>
        insertps $0, %xmm0, %xmm0
        insertps $16, %xmm3, %xmm0
        ret

After:

    _f32:                           ## @f32
        movdqa   %xmm0, %xmm2
        addss    %xmm1, %xmm2
        pshufd   $1, %xmm1, %xmm1
        pshufd   $1, %xmm0, %xmm3
        addss    %xmm1, %xmm3
        movdqa   %xmm2, %xmm0
        insertps $16, %xmm3, %xmm0
        ret

The extra movs are due to a random (poor) scheduling decision.

llvm-svn: 112379
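For reference, insertps with an immediate of 0 selects element 0 of the source and writes it to element 0 of the destination with no zero mask, so when source and destination are the same register it is an identity operation. A minimal sketch using SSE4.1 intrinsics (illustrative, not code from the commit):

    #include <smmintrin.h>  // SSE4.1

    // _mm_insert_ps(a, b, 0x00) copies element 0 of b into element 0 of a
    // with no zeroing; with a == b this is exactly the redundant
    // "insertps $0, %xmm0, %xmm0" the change stops emitting.
    __m128 identity_insert(__m128 x) {
      return _mm_insert_ps(x, x, 0x00);
    }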
-
Chris Lattner authored
when the top elements of a vector are undefined. This happens all the time for X86-64 ABI stuff because only the low 2 elements of a 4 element vector are defined. For example, on:

    _Complex float f32(_Complex float A, _Complex float B) {
      return A+B;
    }

We used to produce (with SSE2, SSE4.1+ uses insertps):

    _f32:                           ## @f32
        movdqa   %xmm0, %xmm2
        addss    %xmm1, %xmm2
        pshufd   $16, %xmm2, %xmm2
        pshufd   $1, %xmm1, %xmm1
        pshufd   $1, %xmm0, %xmm0
        addss    %xmm1, %xmm0
        pshufd   $16, %xmm0, %xmm1
        movdqa   %xmm2, %xmm0
        unpcklps %xmm1, %xmm0
        ret

We now produce:

    _f32:                           ## @f32
        movdqa   %xmm0, %xmm2
        addss    %xmm1, %xmm2
        pshufd   $1, %xmm1, %xmm1
        pshufd   $1, %xmm0, %xmm3
        addss    %xmm1, %xmm3
        movaps   %xmm2, %xmm0
        unpcklps %xmm3, %xmm0
        ret

This implements rdar://8368414

llvm-svn: 112378
-
Chris Lattner authored
a new EltStride variable instead of reusing the NumElems variable for a non-obvious purpose. No functionality change. llvm-svn: 112377
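A hypothetical sketch of that style of cleanup (names illustrative, not the actual LLVM code): a pairwise-combining loop reads better with a dedicated stride counter than with a destructively halved element count:

    #include <cstddef>

    // Combine adjacent pairs with an explicit EltStride, leaving NumElems
    // to mean what its name says throughout.
    void pairwiseCombine(float *Elts, std::size_t NumElems) {
      for (std::size_t EltStride = NumElems / 2; EltStride != 0; EltStride /= 2)
        for (std::size_t i = 0; i != EltStride; ++i)
          Elts[i] += Elts[i + EltStride];
    }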
-
Michael J. Spencer authored
According to the Microsoft documentation here:
http://msdn.microsoft.com/en-us/library/ms724284%28VS.85%29.aspx
this cast used in lib/System/Win32/Path.inc:

    __int64 ft = *reinterpret_cast<__int64*>(&fi.ftLastWriteTime);

should not be done. The documentation says: "Do not cast a pointer to a FILETIME structure to either a ULARGE_INTEGER* or __int64* value because it can cause alignment faults on 64-bit Windows."

llvm-svn: 112376
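The same documentation page recommends copying the two 32-bit halves into a ULARGE_INTEGER instead. A minimal sketch of that conversion (function name is illustrative):

    #include <windows.h>

    // Copy the FILETIME fields by value; no pointer reinterpretation, so no
    // alignment fault on 64-bit Windows.
    unsigned long long fileTimeToU64(const FILETIME &ft) {
      ULARGE_INTEGER ui;
      ui.LowPart  = ft.dwLowDateTime;
      ui.HighPart = ft.dwHighDateTime;
      return ui.QuadPart;
    }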
-
Chris Lattner authored
and hasn't kept up with ToT. Approved by Anton. llvm-svn: 112375
-
Chris Lattner authored
llvm-svn: 112374
-
Gabor Greif authored
llvm-svn: 112373
-
Gabor Greif authored
llvm-svn: 112372
-
Nick Lewycky authored
llvm-svn: 112370
-
Gabor Greif authored
this is still failing; need to come up with a fix (but we are in good company, as the first gcc version to pass this test will be v4.6) llvm-svn: 112369
-
Gabor Greif authored
llvm-svn: 112367
-
Gabor Greif authored
llvm-svn: 112366
-
Gabor Greif authored
llvm-svn: 112365
-
Benjamin Kramer authored
llvm-svn: 112364
-
Benjamin Kramer authored
llvm-svn: 112363
-
Argyrios Kyrtzidis authored
For large floats/integers, APFloat/APInt will allocate memory from the heap to represent these numbers. Unfortunately, when we use a BumpPtrAllocator to allocate IntegerLiteral/FloatingLiteral nodes, the memory associated with the APFloat/APInt values will never get freed.

I introduce the class 'APNumericStorage' which uses ASTContext's allocator for memory allocation and is used internally by FloatingLiteral/IntegerLiteral.

Fixes rdar://7637185

llvm-svn: 112361
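The hazard, sketched in simplified form (not the actual Clang code): a BumpPtrAllocator releases its slabs in bulk and never runs destructors, so an APInt wide enough to heap-allocate its words leaks them:

    #include "llvm/ADT/APInt.h"
    #include "llvm/Support/Allocator.h"
    #include <new>

    void leak() {
      llvm::BumpPtrAllocator Alloc;
      void *Mem = Alloc.Allocate(sizeof(llvm::APInt), alignof(llvm::APInt));
      // 128 bits exceeds APInt's inline 64-bit storage, so the words are
      // heap-allocated by APInt itself.
      llvm::APInt *AP = new (Mem) llvm::APInt(128, 1);
      (void)AP;
      // ~APInt() never runs: the bump allocator reclaims Mem wholesale and
      // the heap-allocated words are orphaned.
    }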
-
John McCall authored
llvm-svn: 112360
-
John McCall authored
llvm-svn: 112359
-
John McCall authored
an object of type I, if the current access target is protected when named in a class N, consider the friends of the classes P where I <= P <= N and where a notional member of N would be non-forbidden in P. llvm-svn: 112358
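A hypothetical example of code this rule is meant to accept, where the friend is declared in an intermediate class between the object's type and the class in which the member is protected:

    class D;

    class A {
    protected:
      int x;
    };

    class B : public A {
      friend void peek(D &);  // friend of the intermediate class only
    };

    class D : public B {};

    // peek is a friend of neither A nor D, but B satisfies
    // I (= D) <= B <= N (= A), so its friendship permits the access.
    void peek(D &d) { d.x = 0; }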
-
Bob Wilson authored
llvm-svn: 112357
-
Chris Lattner authored
being actively maintained, improved, or extended. llvm-svn: 112356
-
Chris Lattner authored
I'm aware of, aren't maintained, and LVI will be replacing their value. nlewycky approved this on irc. llvm-svn: 112355
-
Chris Lattner authored
llvm-svn: 112354
-
Chris Lattner authored
llvm-svn: 112353
-
Chris Lattner authored
llvm-svn: 112352
-
Chris Lattner authored
llvm-svn: 112351
-
Chris Lattner authored
llvm-svn: 112350
-
Chris Lattner authored
llvm-svn: 112349
-
Bruno Cardoso Lopes authored
Also teach this logic how to handle target-specific shuffles if needed; this is necessary while searching recursively for zeroed scalar elements in vector shuffle operands. llvm-svn: 112348
-
Gabor Greif authored
llvm-svn: 112347
-
Gabor Greif authored
llvm-svn: 112346
-
Chris Lattner authored
like this:

    struct S { float A, B, C, D; };
    struct S g;
    struct S bar() {
      struct S A = g;
      ++A.B;
      A.A = 42;
      return A;
    }

we now generate:

    _bar:                           ## @bar
    ## BB#0:                        ## %entry
        movq     _g@GOTPCREL(%rip), %rax
        movss    12(%rax), %xmm0
        pshufd   $16, %xmm0, %xmm0
        movss    4(%rax), %xmm2
        movss    8(%rax), %xmm1
        pshufd   $16, %xmm1, %xmm1
        unpcklps %xmm0, %xmm1
        addss    LCPI1_0(%rip), %xmm2
        pshufd   $16, %xmm2, %xmm2
        movss    LCPI1_1(%rip), %xmm0
        pshufd   $16, %xmm0, %xmm0
        unpcklps %xmm2, %xmm0
        ret

instead of:

    _bar:                           ## @bar
    ## BB#0:                        ## %entry
        movq     _g@GOTPCREL(%rip), %rax
        movss    12(%rax), %xmm0
        pshufd   $16, %xmm0, %xmm0
        movss    4(%rax), %xmm2
        movss    8(%rax), %xmm1
        pshufd   $16, %xmm1, %xmm1
        unpcklps %xmm0, %xmm1
        addss    LCPI1_0(%rip), %xmm2
        movd     %xmm2, %eax
        shlq     $32, %rax
        addq     $1109917696, %rax  ## imm = 0x42280000
        movd     %rax, %xmm0
        ret

llvm-svn: 112345
-
Duncan Sands authored
they hit the rest of the system. llvm-svn: 112344
-
Chris Lattner authored
element insertion from the pieces that feed into the vector. This handles a pattern that occurs frequently due to code generated for the x86-64 ABI. We now compile something like this:

    struct S { float A, B, C, D; };
    struct S g;
    struct S bar() {
      struct S A = g;
      ++A.A;
      ++A.C;
      return A;
    }

into all nice vector operations:

    _bar:                           ## @bar
    ## BB#0:                        ## %entry
        movq     _g@GOTPCREL(%rip), %rax
        movss    LCPI1_0(%rip), %xmm1
        movss    (%rax), %xmm0
        addss    %xmm1, %xmm0
        pshufd   $16, %xmm0, %xmm0
        movss    4(%rax), %xmm2
        movss    12(%rax), %xmm3
        pshufd   $16, %xmm2, %xmm2
        unpcklps %xmm2, %xmm0
        addss    8(%rax), %xmm1
        pshufd   $16, %xmm1, %xmm1
        pshufd   $16, %xmm3, %xmm2
        unpcklps %xmm2, %xmm1
        ret

instead of icky integer operations:

    _bar:                           ## @bar
        movq     _g@GOTPCREL(%rip), %rax
        movss    LCPI1_0(%rip), %xmm1
        movss    (%rax), %xmm0
        addss    %xmm1, %xmm0
        movd     %xmm0, %ecx
        movl     4(%rax), %edx
        movl     12(%rax), %esi
        shlq     $32, %rdx
        addq     %rcx, %rdx
        movd     %rdx, %xmm0
        addss    8(%rax), %xmm1
        movd     %xmm1, %eax
        shlq     $32, %rsi
        addq     %rax, %rsi
        movd     %rsi, %xmm1
        ret

This resolves rdar://8360454

llvm-svn: 112343
-
Nick Lewycky authored
llvm-svn: 112342
-
Dan Gohman authored
doesn't currently support dealing with this. llvm-svn: 112341
-