- Apr 13, 2008
-
Duncan Sands authored
the result IRBuilder. Patch by Dominic Hamon. llvm-svn: 49604
-
Chris Lattner authored
not # of operands as an input. llvm-svn: 49599
-
Anton Korobeynikov authored
llvm-svn: 49593
-
- Apr 12, 2008
-
Arnold Schwaighofer authored
optimized x86-64 (and x86) calls so that they work (at least for my test cases). Should fix the following problems:
Problem 1: When I introduced the optimized handling of arguments for tail-called functions (using a sequence of copyto/copyfrom virtual registers instead of always lowering to the top of the stack), I did not handle byval arguments correctly; they did not work at all.
Problem 2: On x86-64, after the arguments of the tail-called function are moved to their registers (which include ESI/RSI etc.), tail call optimization performs byval lowering, which causes the xSI, xDI and xCX registers to be overwritten. This patch handles that by moving the arguments to virtual registers first; after the byval lowering, the arguments are moved from those virtual registers back to RSI/RDI/RCX. llvm-svn: 49584
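
For context, a minimal C++ illustration (not taken from the patch) of the kind of call this commit affects: an aggregate passed by value is lowered as a byval argument, and a call in tail position is a candidate for the optimized tail-call argument handling described above.

    // Illustrative only: an aggregate large enough to be passed byval
    // (as a pointer to a stack copy) rather than in registers.
    struct Big { int a[16]; };

    int callee(Big b);        // byval argument on x86/x86-64

    int caller(Big b) {
        return callee(b);     // call in tail position: a tail-call optimization
                              // candidate that also carries a byval argument
    }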
-
Duncan Sands authored
llvm-svn: 49583
-
Dan Gohman authored
on any current target and aren't optimized in DAGCombiner. Instead of using intermediate nodes, expand the operations immediately, choosing between simple loads/stores, target-specific code, and library calls. Previously, the code to emit optimized code for these operations was only used at initial SelectionDAG construction time; now it is used at all times. This fixes some cases where rep;movs was being used for small copies where simple loads/stores would be better.
This also cleans up the code that checks for alignments less than 4; let the targets make that decision instead of doing it in target-independent code. This allows x86 to use rep;movs in low-alignment cases.
Also, this fixes a bug that resulted in the use of rep;stos for memsets of 0 with a non-constant memory size when the alignment was at least 4. It's better to use the library in this case, which can be significantly faster when the size is large.
This also preserves more SourceValue information when memory intrinsics are lowered into simple loads/stores. llvm-svn: 49572
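
As a rough illustration of the trade-off the commit describes (example code is mine, not from the patch): a small, constant-size copy is best expanded inline into plain loads/stores, while a fill of unknown size is often better left as a library call.

    #include <cstring>

    // Small, constant-size copy: best expanded inline into simple loads/stores.
    void copySmall(char *dst, const char *src) {
        std::memcpy(dst, src, 8);
    }

    // Zero-fill with a size only known at run time: better left as a call into
    // the C library, which can be significantly faster for large buffers.
    void zeroDynamic(char *dst, std::size_t n) {
        std::memset(dst, 0, n);
    }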
-
Dan Gohman authored
8-byte-aligned data. llvm-svn: 49571
-
Nate Begeman authored
llvm-svn: 49569
-
Nate Begeman authored
llvm-svn: 49568
-
Evan Cheng authored
llvm-svn: 49566
-
- Apr 11, 2008
-
Chris Lattner authored
llvm-svn: 49548
-
Evan Cheng authored
llvm-svn: 49544
-
Evan Cheng authored
llvm-svn: 49543
-
Evan Cheng authored
A use of implicit_def is not part of the live interval. Create empty intervals for such uses when the live interval is being spilled. llvm-svn: 49542
-
Gabor Greif authored
llvm-svn: 49524
-
Owen Anderson authored
of calls and less aggressive with non-readnone calls. llvm-svn: 49516
-
Evan Cheng authored
llvm-svn: 49513
-
Evan Cheng authored
llvm-svn: 49512
-
Dan Gohman authored
llvm-svn: 49504
-
Owen Anderson authored
wrong order. llvm-svn: 49499
-
- Apr 10, 2008
-
Dan Gohman authored
llvm-svn: 49496
-
Dan Gohman authored
in addition to integer expressions. Rewrite GetOrEnforceKnownAlignment as a ComputeMaskedBits problem, moving all of its special alignment knowledge to ComputeMaskedBits as low-zero-bits knowledge. Also, teach ComputeMaskedBits a few basic things about Mul and PHI instructions.
This improves ComputeMaskedBits-based simplifications in a few cases, but more noticeably it significantly improves instcombine's alignment detection for loads, stores, and memory intrinsics. llvm-svn: 49492
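
The underlying idea is simple: if the low k bits of an address are known to be zero, the address is provably aligned to 2^k. A minimal sketch of that relationship (hypothetical helper, not the LLVM API; it follows the convention where a set bit in the mask means "known to be zero"):

    #include <cstdint>

    // Count the consecutive low bits known to be zero in a pointer value and
    // turn them into a provable alignment: k known-zero low bits => 2^k bytes.
    uint64_t alignmentFromKnownZeroBits(uint64_t KnownZero) {
        unsigned TrailingOnes = 0;
        while (TrailingOnes < 63 && ((KnownZero >> TrailingOnes) & 1))
            ++TrailingOnes;
        return uint64_t(1) << TrailingOnes;
    }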
-
Evan Cheng authored
llvm-svn: 49491
-
Evan Cheng authored
llvm-svn: 49469
-
Chris Lattner authored
llvm-svn: 49466
-
Chris Lattner authored
llvm-svn: 49465
-
Chris Lattner authored
MOVZQI2PQIrr. This would be better handled as a dag combine (with the goal of eliminating the bitconvert) but I don't know how to do that safely. Thoughts welcome. llvm-svn: 49463
-
Evan Cheng authored
Teach the branch folding pass about implicit_def instructions. Unfortunately we can't just eliminate them, since the register scavenger expects every register use to be defined. However, we can delete them when there are no intra-block uses. Carefully remove some implicit_def's, which enables more blocks to be optimized away. llvm-svn: 49461
-
Chris Lattner authored
This is not safe for all inputs. llvm-svn: 49458
-
- Apr 09, 2008
-
Evan Cheng authored
- Added insert_subreg coalescing support. llvm-svn: 49448
-
Dan Gohman authored
llvm-svn: 49446
-
Dan Gohman authored
is needed for the x86-64 ABI handling of structs that contain floating-point members and are returned by value. llvm-svn: 49441
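
For reference, a small example of the case mentioned (the struct below is just an illustration, not from the patch): under the x86-64 ABI, a struct whose members are all floating-point is returned by value in SSE (XMM) registers rather than the integer registers, so return lowering has to track the per-member classification.

    // All-floating-point struct: the x86-64 ABI returns it by value in
    // XMM registers rather than RAX/RDX.
    struct Point { double x, y; };

    Point midpoint(Point a, Point b) {
        return { (a.x + b.x) / 2.0, (a.y + b.y) / 2.0 };
    }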
-
Dan Gohman authored
llvm-svn: 49440
-
Chris Lattner authored
figuring out the suffix to use. Implement pow(2,x) -> exp2(x). llvm-svn: 49437
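
Shown on C++ source for illustration, the pow(2,x) -> exp2(x) simplification turns code like the first function into the second:

    #include <cmath>

    double before(double x) { return std::pow(2.0, x); }  // what the source writes
    double after(double x)  { return std::exp2(x); }       // what the simplifier can emit instead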
-
Chris Lattner authored
long double and simplify the code. llvm-svn: 49435
-
Devang Patel authored
llvm-svn: 49430
-
Owen Anderson authored
GVN and into its own pass. llvm-svn: 49419
-
Owen Anderson authored
llvm-svn: 49418
-
Chris Lattner authored
particular value but variable type. llvm-svn: 49416
-
Evan Cheng authored
llvm-svn: 49415
-