- Mar 13, 2009
-
-
Chris Lattner authored
for i32/i64 expressions (we could also do i16 on CPUs where i16 lea is fast, but I didn't add this). On the example, we now generate:

_test:
        movl    4(%esp), %eax
        cmpl    $42, (%eax)
        setl    %al
        movzbl  %al, %eax
        leal    4(%eax,%eax,8), %eax
        ret

instead of:

_test:
        movl    4(%esp), %eax
        cmpl    $41, (%eax)
        movl    $4, %ecx
        movl    $13, %eax
        cmovg   %ecx, %eax
        ret

llvm-svn: 66869
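A plausible source-level shape for the testcase (assumed here; the commit only shows the assembly) is a select between the constants 13 and 4. With the setl result x in {0,1}, the result is 9*x + 4, which the single leal 4(%eax,%eax,8) computes:

    /* Assumed reconstruction of the testcase: a select of two constants
     * whose difference (9) and base (4) fit LEA's scale+offset form. */
    int test(int *p) {
        return *p < 42 ? 13 : 4;  /* 13 = 9*1 + 4, 4 = 9*0 + 4 */
    }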
-
Chris Lattner authored
example to:

_test:
        movl    4(%esp), %eax
        cmpl    $41, (%eax)
        setg    %al
        movzbl  %al, %eax
        orl     $4294967294, %eax
        ret

instead of:

        movl    4(%esp), %eax
        cmpl    $41, (%eax)
        movl    $4294967294, %ecx
        movl    $4294967295, %eax
        cmova   %ecx, %eax
        ret

which is smaller in code size and faster. rdar://6668608

llvm-svn: 66868
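The corresponding source pattern (an assumed reconstruction, not shown in the commit) is a select between -1 (4294967295) and -2 (4294967294); the two constants differ only in bit 0, so OR-ing the setg bit into the common value replaces the cmov:

    /* Assumed reconstruction: the constants differ only in the low bit,
     * so the compare result can be OR'ed into 0xFFFFFFFE directly. */
    int test(int *p) {
        return *p > 41 ? -1 : -2;
    }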
-
Bill Wendling authored
llvm-svn: 66867
-
Bill Wendling authored
llvm-svn: 66866
-
Dan Gohman authored
operands can't both be fully folded at the same time. For example, in the included testcase, a global variable is added to the result of an add of two values. The global variable wants RIP-relative addressing, so it can't share the address with another base register, but it's still possible to fold the initial add. llvm-svn: 66865
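A sketch of the kind of code this covers (an assumed example, not the testcase from the commit): the address is global + (a + b); the global insists on RIP-relative addressing and so owns the base, but the inner add can still fold into the index:

    /* Assumed example: on x86-64, "g" wants RIP-relative addressing and
     * cannot share a base register with another value, yet "a + b" can
     * still fold into the address computation. */
    extern int g[256];

    int f(long a, long b) {
        return g[a + b];
    }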
-
Dale Johannesen authored
codegen (speculative execution). llvm-svn: 66859
-
Chris Lattner authored
llvm-svn: 66850
-
Chris Lattner authored
to the stack. This shrinks all LLVM tools by 9k and improves reentrancy. llvm-svn: 66847
-
Chris Lattner authored
llvm-svn: 66845
-
Dan Gohman authored
llvm-svn: 66843
-
Dale Johannesen authored
right; it did the wrong thing when there are exactly 11 non-debug instructions followed by debug info. Remove a FIXME, since it has apparently been fixed along the way. llvm-svn: 66840
-
Gabor Greif authored
llvm-svn: 66839
-
Evan Cheng authored
llvm-svn: 66838
-
- Mar 12, 2009
-
-
Daniel Dunbar authored
single-character writes. llvm-svn: 66827
-
Duncan Sands authored
in the Ada testcase. Reverting this only covers up the real problem, which is a nasty conceptual difficulty in the phi elimination pass: when eliminating phi nodes in landing pads, the register copies need to come before the invoke, not at the end of the basic block, which is too late... See PR3784. llvm-svn: 66826
-
Scott Michel authored
llvm-svn: 66825
-
Dale Johannesen authored
sorting of ConstantInts; unreinvent the wheel. llvm-svn: 66824
-
Bob Wilson authored
refers to the "prefix" directory, i.e., one level above "bin". LLVMGCCPATH is used as the directory containing the llvm-gcc executable, so add a "/bin" suffix to get from LLVMGCCDIR to LLVMGCCPATH. llvm-svn: 66823
-
Gabor Greif authored
access each with a fixed negative index from op_end(). This has two important implications:
- getUser() will work faster, because there are fewer iterations for the waymarking algorithm to perform. This is important when running various analyses that want to determine the callers of basic blocks.
- getSuccessor() now runs faster, because the indirection via OperandList is not necessary: the Uses corresponding to the successors are at a fixed offset from "this".
The price we pay is the slightly more complicated logic in User's operator delete, as it has to pick up the information whether it is freeing the memory of an originally unconditional BranchInst or of a BranchInst that was originally conditional but has been shortened to unconditional. I was not able to come up with a nicer solution to this problem. (And rest assured, I tried *a lot*.) Similar reorderings will follow for InvokeInst and CallInst. After that, some optimizations to pred_iterator and CallSite will fall out naturally.

llvm-svn: 66815
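A minimal C++ sketch of the layout idea (simplified invented types, not the actual LLVM classes): the Use array is co-allocated immediately before the object, so op_end() coincides with "this" and the successors, stored last, are reachable at a fixed negative index without loading an OperandList pointer:

    #include <cassert>
    #include <new>

    // Simplified sketch, not the real LLVM classes: Uses are co-allocated
    // directly *before* the User object, so op_end() coincides with "this"
    // and the successors (stored last) sit at a fixed negative offset.
    struct Value {};
    struct Use { Value *Val; };

    struct FakeBranch {
        unsigned NumOps;  // 1 = unconditional, 3 = {cond, succ0, succ1}

        Use *op_end() { return reinterpret_cast<Use *>(this); }
        unsigned getNumSuccessors() const { return NumOps == 1 ? 1 : 2; }

        // Fixed negative index from op_end(): no OperandList indirection.
        Value *getSuccessor(unsigned i) {
            assert(i < getNumSuccessors());
            return op_end()[(int)i - (int)getNumSuccessors()].Val;
        }

        // Hypothetical factory: one allocation holds the Uses, then the
        // object, mirroring LLVM's hung-off operand scheme.
        static FakeBranch *createUnconditional(Value *Succ) {
            void *Mem = ::operator new(sizeof(Use) + sizeof(FakeBranch));
            Use *U = static_cast<Use *>(Mem);
            U[0].Val = Succ;
            FakeBranch *B = new (U + 1) FakeBranch;
            B->NumOps = 1;
            return B;
        }
    };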
-
Evan Cheng authored
Re-apply 66024 with fixes:
1. Fixed indirect call to immediate address assembly.
2. Fixed JIT encoding by making the address pc-relative.
llvm-svn: 66803
-
Dale Johannesen authored
llvm-svn: 66800
-
Chris Lattner authored
llvm-svn: 66798
-
Evan Cheng authored
llvm-svn: 66797
-
Evan Cheng authored
llvm-svn: 66795
-
Duncan Sands authored
llvm-svn: 66791
-
Gabor Greif authored
llvm-svn: 66790
-
Gabor Greif authored
llvm-svn: 66788
-
Owen Anderson authored
llvm-svn: 66780
-
Chris Lattner authored
related transformations out of target-specific dag combine into the ARM backend. These were added by Evan in r37685 with no testcases and only seem to help ARM (e.g. test/CodeGen/ARM/select_xform.ll). Add some simple X86-specific (for now) DAG combines that turn things like cond ? 8 : 0 -> (zext(cond) << 3). This happens frequently with the recently added cp constant select optimization, but is a very general xform. For example, we now compile the second example in const-select.ll to:

_test:
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        seta    %al
        movzbl  %al, %eax
        movl    4(%esp), %ecx
        movsbl  (%ecx,%eax,4), %eax
        ret

instead of:

_test:
        movl    4(%esp), %eax
        leal    4(%eax), %ecx
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        cmovbe  %eax, %ecx
        movsbl  (%ecx), %eax
        ret

This passes multisource and dejagnu.

llvm-svn: 66779
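In source form, the combine's target pattern is simply a select between a power of two and zero (a made-up example, not const-select.ll itself):

    // Made-up illustration: with the i1 condition c in {0,1},
    // "c ? 8 : 0" is exactly (zext(c) << 3), so no cmov is needed.
    int sel(int a, int b) {
        return a < b ? 8 : 0;
    }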
-
Chris Lattner authored
llvm-svn: 66778
-
Evan Cheng authored
Enable Chris' value propagation change. It makes known sign, zero, and one bit information available for values that are live out of basic blocks. The goal is to eliminate unnecessary sext, zext, and truncate instructions on values that are live-in to blocks. This does not handle PHI nodes yet. llvm-svn: 66777
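A hypothetical example of the kind of redundancy this targets: the known-zero high bits of v are propagated across the block boundary, so the trailing mask (a trunc/zext in the IR) becomes provably unnecessary:

    // Hypothetical example: bits 8..31 of "v" are known zero in every
    // predecessor, so once that fact is available as live-in information,
    // the final "& 0xFF" (an IR trunc/zext pair) can be removed.
    unsigned narrow(unsigned x, bool flag) {
        unsigned v = x & 0xFF;  // high bits known zero
        if (flag)
            v |= 1;             // preserves the known-zero bits
        return v & 0xFF;        // redundant given the live-in facts
    }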
-
Evan Cheng authored
On x86, if the only use of an i64 load is an i64 store, generate a double (f64) load and store pair instead. llvm-svn: 66776
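At the source level, the pattern is a plain 64-bit memory-to-memory copy (an assumed example): on 32-bit x86 this previously needed two 32-bit load/store pairs, and can now use one double-width load and store:

    // Assumed example of the pattern: an i64 load whose only use is an
    // i64 store.  On x86-32 this copy can use one f64 load/store (e.g.
    // SSE movsd) instead of two 32-bit memory operations.
    void copy64(long long *dst, const long long *src) {
        *dst = *src;
    }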
-
Chris Lattner authored
llvm-svn: 66775
-
Chris Lattner authored
llvm-svn: 66773
-
Bill Wendling authored
llvm-svn: 66770
-
Nick Lewycky authored
llvm-svn: 66766
-
Nick Lewycky authored
Remove the explicit "if OS = Darwin" test around the setting of -m32/-m64. llvm-svn: 66765
-
Sanjiv Gupta authored
llvm-svn: 66763
-
Sanjiv Gupta authored
Banksel optimization is now based on the section names of symbols, since the symbols in one section will always be put into one bank. llvm-svn: 66761
-
Dale Johannesen authored
llvm-svn: 66751
-