- Jan 10, 2008
-
Chris Lattner authored
llvm-svn: 45841
-
Chris Lattner authored
llvm-svn: 45838
-
Chris Lattner authored
llvm-svn: 45837
-
Duncan Sands authored
Likewise fix up a bunch of other libcalls. While there, I remove NEG_F32 and NEG_F64 since they are not used anywhere. This fixes 9 Ada ACATS failures. llvm-svn: 45833
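For context, a hedged C++ sketch of what "fixing up a libcall" amounts to: a target's TargetLowering subclass points an RTLIB entry at the runtime routine the platform actually provides. This uses present-day LLVM spellings (RTLIB::Libcall, setLibcallName); the target class, the entry, and the routine name below are illustrative assumptions, not the ones touched by this commit.

    #include "llvm/CodeGen/TargetLowering.h"
    using namespace llvm;

    // Hypothetical target, for illustration only.
    class MyTargetLowering : public TargetLowering {
    public:
      explicit MyTargetLowering(const TargetMachine &TM) : TargetLowering(TM) {
        // A libcall "entry" is just a mapping from an RTLIB enum value to the
        // name (and calling convention) of the runtime routine to call.
        setLibcallName(RTLIB::SREM_I64, "__moddi3");
      }
    };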
-
Evan Cheng authored
llvm-svn: 45831
-
Evan Cheng authored
llvm-svn: 45830
-
Chris Lattner authored
x86 backend where instructions were not marked mayStore/mayLoad, and perf issues where instructions were not marked neverHasSideEffects. It would be really nice if we could write patterns for copy instructions. I have audited all the x86 instructions down to MOVDQAmr. The flags on others and on other targets are probably not right in all cases, but no clients that are enabled by default currently use this info. llvm-svn: 45829
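For illustration, a minimal C++ sketch of the kind of client that depends on these flags, written with present-day LLVM names (MachineInstr::mayLoad() and friends; the 2008-era accessors were spelled differently). The helper name is hypothetical.

    #include "llvm/CodeGen/MachineInstr.h"

    // An instruction that may read or write memory, or whose side effects are
    // not modeled, cannot be hoisted, sunk, or deleted without further proof.
    static bool canMoveFreely(const llvm::MachineInstr &MI) {
      return !MI.mayLoad() && !MI.mayStore() && !MI.hasUnmodeledSideEffects();
    }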
-
Chris Lattner authored
llvm-svn: 45826
-
Chris Lattner authored
llvm-svn: 45825
-
Chris Lattner authored
inferred from the instr patterns. llvm-svn: 45824
-
Chris Lattner authored
llvm-svn: 45821
-
Chris Lattner authored
instructions (with patterns) that load memory marked, for example. llvm-svn: 45818
-
Chris Lattner authored
or being side-effect free. llvm-svn: 45816
-
Owen Anderson authored
llvm-svn: 45815
-
Evan Cheng authored
llvm-svn: 45813
-
Dale Johannesen authored
because assembler/linker can't cope with weak absolutes. PR 1880. llvm-svn: 45811
-
Owen Anderson authored
MachineRegisterInfo. Once all clients are switched over, the former will be going away. llvm-svn: 45805
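As a hedged sketch of what "switched over" client code looks like, using present-day LLVM names (the helper below is hypothetical): virtual-register bookkeeping is reached through the MachineFunction's MachineRegisterInfo rather than the older interface.

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"

    // Create a fresh virtual register of the given class via MachineRegisterInfo.
    llvm::Register makeVReg(llvm::MachineFunction &MF,
                            const llvm::TargetRegisterClass *RC) {
      llvm::MachineRegisterInfo &MRI = MF.getRegInfo();
      return MRI.createVirtualRegister(RC);
    }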
-
Owen Anderson authored
copies is made. llvm-svn: 45799
-
Evan Cheng authored
Do not use the stack pointer directly; issue a copyfromreg instead. Otherwise we can end up with something like ADD32ri %esp, x, which the two-address pass won't like. llvm-svn: 45798
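A rough C++ sketch of the idea, using present-day SelectionDAG names (the 2008 signatures differed slightly); the helper and its parameters are illustrative assumptions:

    #include "llvm/CodeGen/SelectionDAG.h"

    // Read the stack pointer through a CopyFromReg node instead of referencing
    // the physical register directly, so later passes (e.g. two-address
    // conversion) see a proper copy rather than ADD32ri %esp, x.
    llvm::SDValue readStackPointer(llvm::SelectionDAG &DAG, const llvm::SDLoc &DL,
                                   unsigned SPReg, llvm::EVT PtrVT) {
      return DAG.getCopyFromReg(DAG.getEntryNode(), DL, SPReg, PtrVT);
    }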
-
Owen Anderson authored
llvm-svn: 45797
-
rdar://5676945
Chris Lattner authored
than hardware supported type will be scalarized, so we can infer their alignment from that info. We now codegen pr1845 into:
    _boolVectorSelect:
        lbz r2, 0(r3)
        stb r2, -16(r1)
        blr
llvm-svn: 45796
-
Evan Cheng authored
llvm-svn: 45792
-
Owen Anderson authored
llvm-svn: 45791
-
Evan Cheng authored
llvm-svn: 45787
-
- Jan 09, 2008
-
Owen Anderson authored
Clean up StrongPHIElimination a bit, and add some more comments to the internal structures. There's still more work to do on this front. llvm-svn: 45783
-
Duncan Sands authored
llvm-svn: 45781
-
Owen Anderson authored
llvm-svn: 45775
-
Owen Anderson authored
llvm-svn: 45774
-
Owen Anderson authored
llvm-svn: 45773
-
Chris Lattner authored
llvm-svn: 45768
-
Chris Lattner authored
llvm-svn: 45766
-
Chris Lattner authored
void test(long long *P) { *P ^= 1; }
into just:
    _test:
        movl 4(%esp), %eax
        xorl $1, (%eax)
        ret
instead of code like this:
    _test:
        movl 4(%esp), %ecx
        xorl $1, (%ecx)
        movl 4(%ecx), %edx
        movl %edx, 4(%ecx)
        ret
llvm-svn: 45762
-
- Jan 08, 2008
-
Owen Anderson authored
llvm-svn: 45759
-
Duncan Sands authored
on 64-bit builds. Analysis and original patch by Török Edwin. Code audit found another place with the same problem, also fixed here. llvm-svn: 45746
-
Chris Lattner authored
llvm-svn: 45745
-
Chris Lattner authored
the code generated is not wonderful. This turns a miscompilation into a code quality bug (noted in the ppc readme). This fixes PR642, which is over 2 years old (!). Nate, please review this. llvm-svn: 45742
-
Owen Anderson authored
llvm-svn: 45738
-
Evan Cheng authored
llvm-svn: 45734
-
Evan Cheng authored
llvm-svn: 45733
-
Bill Wendling authored
llvm-svn: 45731
-