- Aug 28, 2010
-
Chris Lattner authored
being actively maintained, improved, or extended. llvm-svn: 112356
-
Chris Lattner authored
I'm aware of, aren't maintained, and LVI will be replacing their value. nlewycky approved this on IRC. llvm-svn: 112355
-
Chris Lattner authored
llvm-svn: 112354
-
Chris Lattner authored
llvm-svn: 112353
-
Chris Lattner authored
llvm-svn: 112352
-
Chris Lattner authored
llvm-svn: 112351
-
Chris Lattner authored
llvm-svn: 112350
-
Chris Lattner authored
llvm-svn: 112349
-
Bruno Cardoso Lopes authored
Also teach this logic how to handle target-specific shuffles if needed; this is necessary while searching recursively for zeroed scalar elements in vector shuffle operands. llvm-svn: 112348
-
Gabor Greif authored
llvm-svn: 112347
-
Gabor Greif authored
llvm-svn: 112346
-
Chris Lattner authored
like this:

  struct S { float A, B, C, D; };
  struct S g;
  struct S bar() {
    struct S A = g;
    ++A.B;
    A.A = 42;
    return A;
  }

we now generate:

  _bar:                   ## @bar
  ## BB#0:                ## %entry
          movq    _g@GOTPCREL(%rip), %rax
          movss   12(%rax), %xmm0
          pshufd  $16, %xmm0, %xmm0
          movss   4(%rax), %xmm2
          movss   8(%rax), %xmm1
          pshufd  $16, %xmm1, %xmm1
          unpcklps %xmm0, %xmm1
          addss   LCPI1_0(%rip), %xmm2
          pshufd  $16, %xmm2, %xmm2
          movss   LCPI1_1(%rip), %xmm0
          pshufd  $16, %xmm0, %xmm0
          unpcklps %xmm2, %xmm0
          ret

instead of:

  _bar:                   ## @bar
  ## BB#0:                ## %entry
          movq    _g@GOTPCREL(%rip), %rax
          movss   12(%rax), %xmm0
          pshufd  $16, %xmm0, %xmm0
          movss   4(%rax), %xmm2
          movss   8(%rax), %xmm1
          pshufd  $16, %xmm1, %xmm1
          unpcklps %xmm0, %xmm1
          addss   LCPI1_0(%rip), %xmm2
          movd    %xmm2, %eax
          shlq    $32, %rax
          addq    $1109917696, %rax       ## imm = 0x42280000
          movd    %rax, %xmm0
          ret

llvm-svn: 112345
-
Duncan Sands authored
they hit the rest of the system. llvm-svn: 112344
-
Chris Lattner authored
element insertion from the pieces that feed into the vector. This handles a pattern that occurs frequently due to code generated for the x86-64 ABI. We now compile something like this:

  struct S { float A, B, C, D; };
  struct S g;
  struct S bar() {
    struct S A = g;
    ++A.A;
    ++A.C;
    return A;
  }

into all nice vector operations:

  _bar:                   ## @bar
  ## BB#0:                ## %entry
          movq    _g@GOTPCREL(%rip), %rax
          movss   LCPI1_0(%rip), %xmm1
          movss   (%rax), %xmm0
          addss   %xmm1, %xmm0
          pshufd  $16, %xmm0, %xmm0
          movss   4(%rax), %xmm2
          movss   12(%rax), %xmm3
          pshufd  $16, %xmm2, %xmm2
          unpcklps %xmm2, %xmm0
          addss   8(%rax), %xmm1
          pshufd  $16, %xmm1, %xmm1
          pshufd  $16, %xmm3, %xmm2
          unpcklps %xmm2, %xmm1
          ret

instead of icky integer operations:

  _bar:                   ## @bar
          movq    _g@GOTPCREL(%rip), %rax
          movss   LCPI1_0(%rip), %xmm1
          movss   (%rax), %xmm0
          addss   %xmm1, %xmm0
          movd    %xmm0, %ecx
          movl    4(%rax), %edx
          movl    12(%rax), %esi
          shlq    $32, %rdx
          addq    %rcx, %rdx
          movd    %rdx, %xmm0
          addss   8(%rax), %xmm1
          movd    %xmm1, %eax
          shlq    $32, %rsi
          addq    %rax, %rsi
          movd    %rsi, %xmm1
          ret

This resolves rdar://8360454

llvm-svn: 112343
-
Nick Lewycky authored
llvm-svn: 112342
-
Dan Gohman authored
doesn't currently support dealing with this. llvm-svn: 112341
-
Dan Gohman authored
llvm-svn: 112340
-
Gabor Greif authored
llvm-svn: 112339
-
Gabor Greif authored
llvm-svn: 112338
-
Dan Gohman authored
llvm-svn: 112337
-
Bob Wilson authored
llvm-svn: 112336
-
Ted Kremenek authored
Update the test case, with a comment noting that the correct behavior should be investigated later. Now the behavior is at least consistent. llvm-svn: 112335
-
Ted Kremenek authored
Explicitly handle CXXExprWithTemporaries during CFG construction by just visiting the subexpression. While we don't do anything intelligent right now, this obviates a bogus -Wunreachable-code warning reported in PR 6130. llvm-svn: 112334
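As a hedged illustration of the bug class (a hypothetical reduction; the actual PR 6130 test case may differ): returning an object with a destructor wraps the return value in a CXXExprWithTemporaries node, and mishandling that node during CFG construction can make perfectly reachable code look dead.

  // Hypothetical reduction; not necessarily the PR 6130 case.
  // Each returned std::string is wrapped in CXXExprWithTemporaries;
  // before this change the CFG builder could treat the code after
  // the if as unreachable and warn spuriously.
  #include <string>
  std::string pick(bool flag) {
    if (flag)
      return "yes";
    return "no";  // falsely flagged as unreachable before the fix
  }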
-
Gabor Greif authored
reordering and redefinition issues may still linger; I plan to nail them next. llvm-svn: 112333
-
Benjamin Kramer authored
llvm-svn: 112332
-
Greg Clayton authored
llvm-svn: 112331
-
Douglas Gregor authored
a constructor. llvm-svn: 112330
-
Bob Wilson authored
the special values that for ARM would be used with IB or DA modes. Fall through and consider materializing a new base address if it would be profitable. llvm-svn: 112329
-
Johnny Chen authored
llvm-svn: 112328
-
Johnny Chen authored
breakpoint by FileSpec and line number and exercises some FileSpec APIs. Also, RUN_STOPPED is a bad assert name; RUN_SUCCEEDED is better. llvm-svn: 112327
-
Gabor Greif authored
llvm-svn: 112326
-
Owen Anderson authored
Add a prototype of a new peephole optimizing pass that uses LazyValueInfo to simplify PHIs and selects. This pass addresses the missed optimizations from PR2581 and PR4420. llvm-svn: 112325
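To illustrate the kind of opportunity such a pass targets (a made-up example, not the actual PR2581 or PR4420 test cases): inside a branch guarded by a comparison, value analysis knows the comparison's result, so a select (or the PHI it becomes) on the same condition can be folded away.

  // Illustrative only; not the PR2581/PR4420 test cases.
  // Within the taken branch, x < 0 is known to hold, so the
  // select always yields -x and a value-aware peephole can fold it.
  int abs_if_negative(int x) {
    if (x < 0)
      return x < 0 ? -x : x;  // condition is provably true here
    return x;
  }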
-
Sean Callanan authored
debugger to insert self-contained functions for use by expressions (mainly for error-checking). In order to support detecting whether a crash occurred in one of these helpers -- currently our preferred way of reporting that an error-check failed -- I added a bit of support for getting the extent of a JITted function in addition to just its base. llvm-svn: 112324
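A minimal sketch of why the extent matters (the type and function names below are assumptions for illustration, not LLDB's actual API): with both a base address and a size, a fault pc can be attributed to a specific JITted helper by a simple range check.

  // Sketch only; JittedFunction and CrashedInHelper are assumed
  // names, not LLDB API. The extent turns "is pc at the base?"
  // into "is pc anywhere inside [base, base + size)?".
  #include <cstdint>
  struct JittedFunction { uint64_t base; uint64_t size; };
  bool CrashedInHelper(uint64_t pc, const JittedFunction &f) {
    return pc >= f.base && pc - f.base < f.size;
  }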
-
Owen Anderson authored
llvm-svn: 112323
-
Bob Wilson authored
all the other LDM/STM instructions. This fixes asm printer crashes when compiling with -O0. I've changed one of the NEON tests (vst3.ll) to run with -O0 to check this in the future.

Prior to this change VLDM/VSTM used addressing mode #5, but not really. The offset field was used to hold a count of the number of registers being loaded or stored, and the AM5 opcode field was expanded to specify the IA or DB mode, instead of the standard ADD/SUB specifier. Much of the backend was not aware of these special cases.

The crashes occurred when rewriting a frameindex caused the AM5 offset field to be changed so that it did not have a valid submode. I don't know exactly what changed to expose this now. Maybe we've never done much with -O0 and NEON. Regardless, there's no longer any reason to keep a count of the VLDM/VSTM registers, so we can use addressing mode #4 and clean things up in a lot of places. llvm-svn: 112322
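For readers unfamiliar with the trick being retired, here is a hypothetical sketch of the old packing scheme (illustrative code, not LLVM's actual helpers): the AM5 offset field carried a register count, and the add/sub bit was repurposed as an IA/DB selector.

  // Hypothetical sketch of the retired encoding, not LLVM code.
  // Addressing mode #5 normally holds an add/sub bit plus an
  // offset; VLDM/VSTM reused the offset as a register count and
  // the bit as an increment-after / decrement-before selector.
  #include <cassert>
  enum AM5Submode { AM5_IA = 0, AM5_DB = 1 };
  unsigned encodeVLDMOperand(AM5Submode mode, unsigned numRegs) {
    assert(numRegs <= 0xFF && "count must fit in the offset field");
    return (unsigned(mode) << 8) | numRegs;
  }
  AM5Submode getSubmode(unsigned imm) { return AM5Submode((imm >> 8) & 1); }
  unsigned getRegCount(unsigned imm)  { return imm & 0xFF; }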
-
Chris Lattner authored
llvm-svn: 112321
-
Sebastian Redl authored
llvm-svn: 112320
-
Sebastian Redl authored
llvm-svn: 112319
-
Sebastian Redl authored
llvm-svn: 112318
-
Chris Lattner authored
llvm-svn: 112317
-