- Jan 10, 2008
-
Evan Cheng authored
llvm-svn: 45830
-
Chris Lattner authored
x86 backend where instructions were not marked maystore/mayload, and perf issues where instructions were not marked neverHasSideEffects. It would be really nice if we could write patterns for copy instructions. I have audited all the x86 instructions down to MOVDQAmr. The flags on others and on other targets are probably not right in all cases, but no clients that are enabled by default currently use this info.
llvm-svn: 45829
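For context, these are the flags that target-independent passes consult when deciding whether an instruction touches memory or is removable when dead. A minimal sketch of one consumer, assuming the present-day MachineInstr query API (the 2008 spellings may differ):

    #include "llvm/CodeGen/MachineInstr.h"

    // Sketch: mayLoad()/mayStore() reflect the mayload/maystore bits from
    // the instruction's TableGen definition; hasUnmodeledSideEffects() is
    // the inverse of the neverHasSideEffects marking audited above.
    static bool isSafeToDeleteIfDead(const llvm::MachineInstr &MI) {
      // An instruction that writes memory or has unmodeled side effects
      // must be kept even when its register results go unused.
      return !MI.mayStore() && !MI.hasUnmodeledSideEffects();
    }

Dead-machine-instruction elimination is exactly the kind of client that regresses when these bits are wrong, which is why the audit matters.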
-
Evan Cheng authored
llvm-svn: 45828
-
Chris Lattner authored
llvm-svn: 45827
-
Chris Lattner authored
llvm-svn: 45826
-
Chris Lattner authored
llvm-svn: 45825
-
Chris Lattner authored
inferred from the instr patterns.
llvm-svn: 45824
-
Chris Lattner authored
llvm-svn: 45823
-
Chris Lattner authored
llvm-svn: 45822
-
Chris Lattner authored
llvm-svn: 45821
-
Chris Lattner authored
llvm-svn: 45819
-
Chris Lattner authored
instructions (with patterns) that load memory marked, for example.
llvm-svn: 45818
-
Chris Lattner authored
Also, instructions whose patterns contain any SDNPMayLoad nodes read memory.
llvm-svn: 45817
-
Chris Lattner authored
or being side-effect free.
llvm-svn: 45816
-
Owen Anderson authored
llvm-svn: 45815
-
Evan Cheng authored
llvm-svn: 45814
-
Evan Cheng authored
llvm-svn: 45813
-
Evan Cheng authored
Add an isImmutable bit to StackObject. Fixed stack objects are immutable (within the function) unless specified otherwise.
llvm-svn: 45812
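Roughly how the bit surfaces through MachineFrameInfo in today's tree (a sketch with illustrative size/offset values, not the exact 2008 interface):

    #include "llvm/CodeGen/MachineFrameInfo.h"

    // Sketch: fixed stack objects (e.g. incoming-argument slots) can be
    // created immutable; passes may then treat loads from them as
    // invariant within the function.
    int createArgSlot(llvm::MachineFrameInfo &MFI) {
      int FI = MFI.CreateFixedObject(/*Size=*/4, /*SPOffset=*/8,
                                     /*IsImmutable=*/true);
      if (MFI.isImmutableObjectIndex(FI)) {
        // No store in this function can clobber FI, so a reload from it
        // is a candidate for folding or redundancy elimination.
      }
      return FI;
    }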
-
Dale Johannesen authored
because the assembler/linker can't cope with weak absolutes. PR1880.
llvm-svn: 45811
-
Owen Anderson authored
MachineRegisterInfo. Once all clients are switched over, the former will be going away.
llvm-svn: 45805
-
Chris Lattner authored
The first only returns definitions of a register, the second only returns uses, the third returns both.
llvm-svn: 45803
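In the current MachineRegisterInfo API these three walks correspond to the def/use/reg iterator ranges; a hedged sketch:

    #include "llvm/CodeGen/MachineRegisterInfo.h"

    // Sketch: count the three flavors of accesses to Reg using the
    // range-based spellings from the present-day API.
    void countAccesses(llvm::MachineRegisterInfo &MRI, llvm::Register Reg) {
      unsigned Defs = 0, Uses = 0, All = 0;
      for (llvm::MachineInstr &MI : MRI.def_instructions(Reg)) {
        (void)MI; // only instructions that define Reg
        ++Defs;
      }
      for (llvm::MachineInstr &MI : MRI.use_instructions(Reg)) {
        (void)MI; // only instructions that read Reg
        ++Uses;
      }
      for (llvm::MachineInstr &MI : MRI.reg_instructions(Reg)) {
        (void)MI; // definitions and uses together
        ++All;
      }
      (void)Defs; (void)Uses; (void)All;
    }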
-
Owen Anderson authored
copies is made.
llvm-svn: 45799
-
Evan Cheng authored
Do not use the stack pointer directly; issue a copyfromreg instead. Otherwise we can end up with something like ADD32ri %esp, x, which the two-address pass won't like.
llvm-svn: 45798
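The pattern behind the fix, sketched against the present-day SelectionDAG API: read the stack pointer through a CopyFromReg node, so later passes see an ordinary value rather than a physical register baked into an instruction operand:

    #include "llvm/CodeGen/SelectionDAG.h"
    #include "llvm/CodeGen/TargetLowering.h"

    // Sketch: materialize the stack pointer as an SDValue via CopyFromReg
    // instead of naming the physical register directly, so two-address
    // rewriting never sees e.g. ADD32ri %esp, x.
    llvm::SDValue readStackPointer(llvm::SelectionDAG &DAG,
                                   const llvm::SDLoc &dl,
                                   const llvm::TargetLowering &TLI) {
      llvm::Register SPReg = TLI.getStackPointerRegisterToSaveRestore();
      return DAG.getCopyFromReg(DAG.getEntryNode(), dl, SPReg,
                                TLI.getPointerTy(DAG.getDataLayout()));
    }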
-
Owen Anderson authored
llvm-svn: 45797
-
Chris Lattner authored (rdar://5676945)
than hardware-supported type will be scalarized, so we can infer their alignment from that info. We now codegen pr1845 into:

_boolVectorSelect:
        lbz r2, 0(r3)
        stb r2, -16(r1)
        blr

llvm-svn: 45796
-
Chris Lattner authored
llvm-svn: 45795
-
Evan Cheng authored
llvm-svn: 45792
-
Owen Anderson authored
llvm-svn: 45791
-
Evan Cheng authored
llvm-svn: 45787
-
- Jan 09, 2008
-
Owen Anderson authored
Clean up StrongPHIElimination a bit, and add some more comments to the internal structures. There's still more work to do on this front.
llvm-svn: 45783
-
Duncan Sands authored
llvm-svn: 45781
-
Chris Lattner authored
llvm-svn: 45780
-
Owen Anderson authored
llvm-svn: 45775
-
Owen Anderson authored
llvm-svn: 45774
-
Owen Anderson authored
llvm-svn: 45773
-
Evan Cheng authored
llvm-svn: 45772
-
Chris Lattner authored
llvm-svn: 45770
-
Chris Lattner authored
llvm-svn: 45768
-
Chris Lattner authored
llvm-svn: 45766
-
Chris Lattner authored
void test(long long *P) { *P ^= 1; }

into just:

_test:
        movl    4(%esp), %eax
        xorl    $1, (%eax)
        ret

instead of code like this:

_test:
        movl    4(%esp), %ecx
        xorl    $1, (%ecx)
        movl    4(%ecx), %edx
        movl    %edx, 4(%ecx)
        ret

llvm-svn: 45762
-