Dec 16, 2005

- Jim Laskey authored (llvm-svn: 24748)
- Chris Lattner authored (llvm-svn: 24747)
- Nate Begeman authored (llvm-svn: 24746): ... so that tablegen can infer all types.
- Chris Lattner authored (llvm-svn: 24745)
- Chris Lattner authored (llvm-svn: 24744)
- Chris Lattner authored (llvm-svn: 24743)
- Chris Lattner authored (llvm-svn: 24742)
- Chris Lattner authored (llvm-svn: 24741)
- Chris Lattner authored (llvm-svn: 24740)
- Chris Lattner authored (llvm-svn: 24739)
- Chris Lattner authored (llvm-svn: 24738): With this, Regression/CodeGen/SparcV8/basictest.ll now passes. Let's hear it for regression tests :)
- Chris Lattner authored (llvm-svn: 24737): ... again.
- Chris Lattner authored (llvm-svn: 24736)
- Chris Lattner authored (llvm-svn: 24735)
- Chris Lattner authored (llvm-svn: 24734)
- Chris Lattner authored (llvm-svn: 24733)
- Chris Lattner authored (llvm-svn: 24732)
- Chris Lattner authored (llvm-svn: 24731)
- Chris Lattner authored (llvm-svn: 24730)
- Chris Lattner authored (llvm-svn: 24729): ... line.
- Chris Lattner authored (llvm-svn: 24728): ... should work in all permutations.
- Chris Lattner authored (llvm-svn: 24727)
Dec 15, 2005

- Evan Cheng authored (llvm-svn: 24726): * Handling extload (1 bit -> 8 bit) and removing the C++ code that handles 1-bit zextload.
- Chris Lattner authored (llvm-svn: 24725): ... if after legalize. This fixes IA64 failures.
- Evan Cheng authored (llvm-svn: 24724): ... leaaddr.
- Evan Cheng authored (llvm-svn: 24723)
- Evan Cheng authored (llvm-svn: 24722)
- Evan Cheng authored (llvm-svn: 24721)
Dec 14, 2005

- Nate Begeman authored (llvm-svn: 24720)
- Nate Begeman authored (llvm-svn: 24719): ... from the DAGToDAG cpp file. This adds pattern support for vector and scalar fma, which passes test/Regression/CodeGen/PowerPC/fma.ll, and does the right thing in the presence of -disable-excess-fp-precision. Allows us to match:

      void %foo(<4 x float>* %a) {
      entry:
        %tmp1 = load <4 x float>* %a
        %tmp2 = mul <4 x float> %tmp1, %tmp1
        %tmp3 = add <4 x float> %tmp2, %tmp1
        store <4 x float> %tmp3, <4 x float>* %a
        ret void
      }

  As:

      _foo:
              li r2, 0
              lvx v0, r2, r3
              vmaddfp v0, v0, v0, v0
              stvx v0, r2, r3
              blr

  Or, with llc -disable-excess-fp-precision:

      _foo:
              li r2, 0
              lvx v0, r2, r3
              vxor v1, v1, v1
              vmaddfp v1, v0, v0, v1
              vaddfp v0, v1, v0
              stvx v0, r2, r3
              blr
- Nate Begeman authored (llvm-svn: 24718): ... are matching.
- Evan Cheng authored (llvm-svn: 24717)
- Evan Cheng authored (llvm-svn: 24716)
- Evan Cheng authored (llvm-svn: 24715)
- Chris Lattner authored (llvm-svn: 24714)
- Evan Cheng authored (llvm-svn: 24713): ... OtherVT; it cannot be compared to the type of the 1st operand, which is an integer type.
- Chris Lattner authored (llvm-svn: 24712): ... load. This reduces the number of worklist iterations and avoids missing optimizations that depend on folding things into sext_inreg nodes (which aren't supported by all targets). Tested by Regression/CodeGen/X86/extend.ll:test2.
- Chris Lattner authored (llvm-svn: 24711)
- Reid Spencer authored (llvm-svn: 24710): ... in the last patch.
- Chris Lattner authored (llvm-svn: 24709): Allow (zext (truncate)) to apply after legalize if the target supports AND (which all do). This compiles

      short %foo() {
        %tmp.0 = load ubyte* %X              ; <ubyte> [#uses=1]
        %tmp.3 = cast ubyte %tmp.0 to short  ; <short> [#uses=1]
        ret short %tmp.3
      }

  to:

      _foo:
              movzbl _X, %eax
              ret

  instead of:

      _foo:
              movzbl _X, %eax
              movzbl %al, %eax
              ret

  thanks to Evan for pointing this out.