- Feb 26, 2008
  - Bill Wendling authored
    llvm-svn: 47600
  - Arnold Schwaighofer authored
    GOT-style position independent code. Before, only tail calls to protected/hidden functions within the same module were optimized. Now all function calls are tail call optimized. llvm-svn: 47594
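    A minimal C sketch of the case this enables (function names invented, not from the commit): the callee is an ordinary external function reached through the GOT/PLT under position independent code, yet the call can still be lowered as a tail call.

        /* Hypothetical example: `external_helper` lives in another module,
           so under GOT-style PIC it is called indirectly via the GOT/PLT.
           Per this change, the call can still be tail call optimized. */
        extern int external_helper(int x);

        int caller(int x) {
            return external_helper(x + 1);  /* call in tail position */
        }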
  - Arnold Schwaighofer authored
    calls. Before, arguments that could overwrite each other were explicitly lowered to a stack slot, not giving the register allocator a chance to optimize. Now a sequence of copyto/copyfrom virtual registers ensures that arguments are loaded into (virtual) registers before they are lowered to the stack slot (where they might overwrite each other). Also, parameter stack slots are marked mutable for (potentially) tail calling functions. llvm-svn: 47593
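    A sketch of the hazard described above (names invented): when a tail call passes the caller's own incoming arguments in a different order, each outgoing argument is stored into a stack slot that another outgoing argument may still need to read.

        extern int callee(int a, int b);

        int caller(int a, int b) {
            /* For a tail call, `b` is stored into a's incoming slot and `a`
               into b's. Loading both into (virtual) registers first avoids
               reading a slot that has already been overwritten. */
            return callee(b, a);
        }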
- Feb 25, 2008
  - Dan Gohman authored
    pointed out that this isn't correct at -O0. llvm-svn: 47575
  - Dale Johannesen authored
    llvm-svn: 47573
  - Dan Gohman authored
    {S,U}MUL_LOHI with an unused high value. llvm-svn: 47569
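    In source terms, the pattern looks like an ordinary multiply: only the low half of the product is used, so if the node has been lowered to a {S,U}MUL_LOHI, its high result is dead.

        unsigned low_half_only(unsigned a, unsigned b) {
            /* Only the low 32 bits of the product are used; a MUL_LOHI
               computing the high half as well would be wasted work. */
            return a * b;
        }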
  - Dan Gohman authored
    result into a MUL late in the X86 codegen process. ISD::MUL is once again Legal on X86, so this is no longer needed. And the hack was suboptimal; see PR1874 for details. llvm-svn: 47567
  - Dan Gohman authored
    a SignBitIsZero function to simplify a common use case. llvm-svn: 47561
  - Dale Johannesen authored
    of TokenFactor underneath chain (seems to be enough). llvm-svn: 47554
- Feb 24, 2008
  - Bill Wendling authored
    %r3 on PPC) in their ASM files. However, it's hard for humans to read during debugging. Adding a new field to the register data that lets you specify a different name to be printed than the one that goes into the ASM file -- %x3 instead of %r3, for instance. llvm-svn: 47534
- Feb 23, 2008
  - Scott Michel authored
    for CellSPU modifications:
    - SPUInstrInfo.td refactoring: "multiclass" really is _your_ friend.
    - Other improvements based on the refactoring effort in SPUISelLowering.cpp, esp. in SPUISelLowering::PerformDAGCombine(), where zero-amount shifts and rotates are now eliminated; other scalar-to-vector-to-scalar silliness is also eliminated.
    - 64-bit operations are being implemented; the _muldi3.c gcc runtime now compiles and generates the right code. More work still needs to be done.
    llvm-svn: 47532
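    For the zero-amount shift/rotate elimination mentioned above, the identity being exploited is simply that shifting by zero returns the operand unchanged; a trivial C sketch:

        unsigned shift_by_zero(unsigned x) {
            return x << 0;  /* a shift by zero folds away to just x */
        }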
  - Evan Cheng authored
    llvm-svn: 47524
  - Evan Cheng authored
    No need to recognize a load from a fixed argument slot as re-materializable; LiveIntervalAnalysis already handles it as a special case. llvm-svn: 47522
- Feb 22, 2008
  - Dale Johannesen authored
    stuff into ParamAttrsList.h. Per feedback from ParamAttrs changes. llvm-svn: 47504
  - Dale Johannesen authored
    llvm-svn: 47483
  - Evan Cheng authored
    llvm-svn: 47476
  - Chris Lattner authored
    instead of with mmx registers. This horribleness is apparently done by gcc to avoid having to insert emms in places that really should have it. This is the second half of rdar://5741668. llvm-svn: 47474
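    A hedged sketch of the kind of function involved: one whose return value has an MMX type. Per the commit, such values are returned the way GCC returns them rather than in an mmx register, so callers are not forced to execute emms.

        #include <mmintrin.h>

        /* The packed v4i16 result flows out through the return value. */
        __m64 add_packed(__m64 a, __m64 b) {
            return _mm_add_pi16(a, b);
        }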
  - Chris Lattner authored
    GCC apparently does this, and code depends on not having to do emms when this happens. This is x86-64 only so far; the second half should handle x86-32. rdar://5741668 llvm-svn: 47470
- Feb 21, 2008
  - Eli Friedman authored
    new things. llvm-svn: 47458
  - Chris Lattner authored
    llvm-svn: 47431
  - Andrew Lenharth authored
    Atomic op support. If any gcc test uses __sync builtins, it might start failing on archs that haven't implemented them yet. llvm-svn: 47430
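    For reference, the GCC __sync builtins look like the example below; __sync_fetch_and_add is one representative (the commit does not say which builtins each arch implements).

        int counter;

        int bump(void) {
            /* Atomically adds 1 to counter and returns the old value;
               presumably lowered via the new atomic op support on targets
               that implement it, and failing on those that don't yet. */
            return __sync_fetch_and_add(&counter, 1);
        }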
- Feb 20, 2008
  - Evan Cheng authored
    llvm-svn: 47400
  - Evan Cheng authored
    llvm-svn: 47385
  - Anton Korobeynikov authored
    llvm-svn: 47375
  - Anton Korobeynikov authored
    llvm-svn: 47370
  - Anton Korobeynikov authored
    llvm-svn: 47369
  - Anton Korobeynikov authored
    llvm-svn: 47367
  - Evan Cheng authored
    llvm-svn: 47354
  - Evan Cheng authored
    llvm-svn: 47351
- Feb 19, 2008
  - Andrew Lenharth authored
    llvm-svn: 47337
  - Chris Lattner authored
    This compiles test-nofold.ll into:
        _test:
            movl $15, %ecx
            andl 4(%esp), %ecx
            testl %ecx, %ecx
            movl $42, %eax
            cmove %ecx, %eax
            ret
    instead of:
        _test:
            movl 4(%esp), %eax
            movl %eax, %ecx
            andl $15, %ecx
            testl $15, %eax
            movl $42, %eax
            cmove %ecx, %eax
            ret
    llvm-svn: 47330
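    A C sketch of what test-nofold.ll plausibly computes, reconstructed from the assembly above rather than from the original .ll file:

        int test(int x) {
            /* Before: x was both and-ed and separately test-ed; after: the
               single `andl` result feeds both the compare and the select. */
            return (x & 15) ? 42 : 0;
        }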
  - Evan Cheng authored
    llvm-svn: 47300
  - Evan Cheng authored
    - When the DAG combiner is folding a bit convert into a BUILD_VECTOR, it should check whether it's essentially a SCALAR_TO_VECTOR. Avoid turning (v8i16) <10, u, u, u> into <10, 0, u, u, u, u, u, u>. Instead, simply convert it to a SCALAR_TO_VECTOR of the proper type.
    - X86 now normalizes SCALAR_TO_VECTOR to (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC.
    llvm-svn: 47290
- Feb 18, 2008
  - Dan Gohman authored
    on x86-32, since i64 itself is not a Legal type. Also, update some comments. llvm-svn: 47282
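    An illustration of the distinction (plain C, nothing commit-specific): on x86-32 the operands below are i64, which is not a Legal type, so the multiply is broken apart during type legalization rather than reaching instruction selection intact.

        unsigned long long mul64(unsigned long long a, unsigned long long b) {
            return a * b;  /* i64 multiply: expanded on 32-bit x86 */
        }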
  - Chris Lattner authored
    llvm-svn: 47280
  - Nate Begeman authored
    llvm-svn: 47279
  - Chris Lattner authored
    llvm-svn: 47278
  - Dan Gohman authored
    has plain one-result scalar integer multiplication instructions. This avoids expanding such instructions into MUL_LOHI sequences that must be special-cased at isel time, and avoids the problem with that code that prevented memory operands from being folded. This fixes PR1874, addressing the most common case. The uncommon cases of optimizing multiply-high operations will require work in DAGCombiner. llvm-svn: 47277
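    Two C sketches of the cases described above. The common case is an ordinary multiply, now selectable as a plain one-result MUL (so a memory operand can be folded again); the uncommon case, computing only the high half, is what still needs DAGCombiner work.

        /* Common case: plain multiply; the load from *p can be folded
           into the MUL instruction again. */
        unsigned mul(unsigned a, unsigned *p) {
            return a * *p;
        }

        /* Uncommon case: only the high 32 bits of the product are used. */
        unsigned mulhi(unsigned a, unsigned b) {
            return (unsigned)(((unsigned long long)a * b) >> 32);
        }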
- Feb 17, 2008
  - Chris Lattner authored
    llvm-svn: 47237
- Feb 16, 2008
  - Andrew Lenharth authored
    I cannot find a libgcc function for this builtin. Therefore, it is expanded to a no-op (which is how it used to be treated). If someone who knows the x86 backend better than I do could tell me how to get a lock prefix on an instruction, that would be nice, to complete x86 support. llvm-svn: 47213