- Dec 23, 2005

- Chris Lattner authored
  llvm-svn: 24978
- Chris Lattner authored
  to represent the int part (because it's always 32-bits)
  llvm-svn: 24976
- Chris Lattner authored
  llvm-svn: 24975
- Chris Lattner authored
  llvm-svn: 24974

- Dec 22, 2005

- Chris Lattner authored
  llvm-svn: 24967
- Chris Lattner authored
  llvm-svn: 24965
- Chris Lattner authored
  llvm-svn: 24964
- Chris Lattner authored
  llvm-svn: 24956
- Duraid Madina authored
  whimper out of doing things the Right Way, and hack up a generic 'BRCALL' instruction that gets generated when calls are lowered. This gets selected by hand in the DAG isel, where it gets turned into real (i.e. in tablegen) br.call instructions.
  BUG: this dies on void calls, but seems to work otherwise?
  llvm-svn: 24952
- Duraid Madina authored
  llvm-svn: 24951
- Duraid Madina authored
  llvm-svn: 24950
- Duraid Madina authored
  llvm-svn: 24948
- Duraid Madina authored
  BUG: calling printf(string, float) will load the float into the wrong register, completely forget about loading the string, etc.
  llvm-svn: 24947
- Duraid Madina authored
  to IA64ISD
  llvm-svn: 24946
- Duraid Madina authored
  i.e. r1/r12/rp are saved/restored regardless of scheduling/luck
  TODO: calls to external symbols, indirect (function descriptor) calls, performance (we're being paranoid right now)
  BUG: the code for handling calls to vararg functions breaks if FP args are passed (this will make printf() go haywire so a bunch of tests will fail)
  BUG: this seems to trigger some legalize nastiness
  llvm-svn: 24942
- Duraid Madina authored
  SPARCv8. (we copy sparcv8's workaround for tablegen not being nice about ISD::CALL/TAILCALL)
  llvm-svn: 24941
- Duraid Madina authored
  llvm-svn: 24939
- Evan Cheng authored
  llvm-svn: 24935
- Evan Cheng authored
  llvm-svn: 24934
- Evan Cheng authored
  llvm-svn: 24922
- Evan Cheng authored
  * Teach DAG combiner about X86ISD::SETCC by adding a TargetLowering hook.
  llvm-svn: 24921
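
  The llvm-svn: 24921 entry above describes wiring the generic DAG combiner to a target callback so that target-only nodes such as X86ISD::SETCC can still be simplified. The sketch below is a self-contained toy model of that hook pattern, not the actual 2005 LLVM code; the class and function names only mirror the real PerformDAGCombine-style interface, and the opcode and node types are invented for illustration.

    // Toy model of a target DAG-combine hook: the generic combiner knows
    // nothing about target-specific nodes, so it defers them to a virtual
    // TargetLowering-style callback that each target overrides.
    #include <cstdio>
    #include <vector>

    // Hypothetical opcode space: generic opcodes first, target opcodes after.
    enum Opcode { GenericAdd, GenericSetCC, FirstTargetOpcode, X86SetCC };

    struct Node {
      Opcode Op;
      bool Simplified;
    };

    // Stand-in for TargetLowering: targets override the combine hook.
    struct TargetLoweringBase {
      virtual ~TargetLoweringBase() = default;
      // Return true if the node was rewritten (simplified).
      virtual bool PerformDAGCombine(Node &) const { return false; }
    };

    // Stand-in for X86TargetLowering: knows how to handle its own SETCC node.
    struct X86TargetLowering : TargetLoweringBase {
      bool PerformDAGCombine(Node &N) const override {
        if (N.Op == X86SetCC) {       // target-specific knowledge lives here
          N.Simplified = true;
          return true;
        }
        return false;
      }
    };

    // Stand-in for the generic DAG combiner: it handles generic opcodes
    // itself and hands anything target-specific to the hook.
    void CombineAll(std::vector<Node> &Nodes, const TargetLoweringBase &TLI) {
      for (Node &N : Nodes) {
        if (N.Op >= FirstTargetOpcode)
          TLI.PerformDAGCombine(N);   // the hook this commit adds, in spirit
        // ...generic combines for generic opcodes would go here...
      }
    }

    int main() {
      std::vector<Node> Nodes = {{GenericAdd, false}, {X86SetCC, false}};
      X86TargetLowering TLI;
      CombineAll(Nodes, TLI);
      std::printf("X86 SETCC node simplified: %s\n",
                  Nodes[1].Simplified ? "yes" : "no");
    }

  The point of the design is that the combiner itself stays target-independent: only the hook knows what an X86-specific node means.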

- Dec 21, 2005

- Evan Cheng authored
  llvm-svn: 24920
- Jim Laskey authored
  llvm-svn: 24919
- Evan Cheng authored
  bytes to pop off stack.
  * Added support for X86 SETCC.
  llvm-svn: 24917
- Chris Lattner authored
  llvm-svn: 24901
- Chris Lattner authored
  llvm-svn: 24900
- Chris Lattner authored
  that were overloaded to work before and after the stackifier runs. With the new clean world, it is possible to write patterns for these instructions: woo! This also adds a few simple patterns here and there, though there are a lot still missing. These should be easy to add though. :) See the comments under "Floating Point Stack Support" for more details on the new world order. This patch has absolutely no effect on the generated code, woo!
  llvm-svn: 24899
- Chris Lattner authored
  llvm-svn: 24898
- Chris Lattner authored
  llvm-svn: 24896
- Evan Cheng authored
  llvm-svn: 24889
- Evan Cheng authored
  for Darwin.
  * Added lowering hook for ISD::RET. It inserts CopyToRegs for the return value (or store / fld / copy to ST(0) for a floating point value). This eliminates the need to write C++ code to handle RET with a variable number of operands.
  llvm-svn: 24888
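
  The llvm-svn: 24888 entry above is about a single return-lowering hook choosing the ABI return location by value type, instead of hand-written C++ for every form of RET. Below is a deliberately tiny, self-contained toy in the same spirit; it is not the real X86 backend code, and the ValType/LowerReturn names are invented for illustration.

    // Toy illustration of a RET lowering hook: one routine inspects the
    // type of the return value and picks the right "copy into the ABI
    // return location" action (integer register vs. x87 ST(0) vs. nothing).
    #include <cstdio>
    #include <initializer_list>

    enum class ValType { Int32, Float64, Void };

    // Stand-in for the lowering hook described in the entry above.
    const char *LowerReturn(ValType VT) {
      switch (VT) {
      case ValType::Int32:   return "copy return value to EAX";
      case ValType::Float64: return "store / fld / copy return value to ST(0)";
      case ValType::Void:    return "no return value to copy";
      }
      return "unknown value type";
    }

    int main() {
      for (ValType VT : {ValType::Int32, ValType::Float64, ValType::Void})
        std::printf("%s\n", LowerReturn(VT));
    }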

- Dec 20, 2005

- Evan Cheng authored
  llvm-svn: 24886
- Evan Cheng authored
  llvm-svn: 24884
- Chris Lattner authored
  Only run lower-allocations and lower-select for the simple isel
  llvm-svn: 24881
- Chris Lattner authored
  For example, instead of emitting this:

      test:
          save -40112, %o6, %o6    ;; imm too large
          add %i6, -40016, %o0     ;; imm too large
          call caller
          nop
          restore %g0, %g0, %g0
          retl
          nop

  emit this:

      test:
          sethi 4194264, %g1
          or %g1, 848, %g1
          save %o6, %g1, %o6
          sethi 4194264, %g1
          add %g1, %i6, %g1
          add %i1, 944, %o0
          call caller
          nop
          restore %g0, %g0, %g0
          retl
          nop

  which doesn't cause the assembler to barf. (A short arithmetic check of this sethi/or split appears at the end of this section.)
  llvm-svn: 24880
- Evan Cheng authored
  llvm-svn: 24879
- Evan Cheng authored
  llvm-svn: 24877
- Nate Begeman authored
  llvm-svn: 24874
- Nate Begeman authored
  us to load and store vectors directly at a pointer (offset of zero) by using r0 as the base register. This also requires some asm printer work to satisfy the darwin assembler. For

      void %foo(<4 x float> * %a) {
      entry:
          %tmp1 = load <4 x float> * %a;
          %tmp2 = add <4 x float> %tmp1, %tmp1
          store <4 x float> %tmp2, <4 x float> *%a
          ret void
      }

  We now produce:

      _foo:
          lvx v0, 0, r3
          vaddfp v0, v0, v0
          stvx v0, 0, r3
          blr

  Instead of:

      _foo:
          li r2, 0
          lvx v0, r2, r3
          vaddfp v0, v0, v0
          stvx v0, r2, r3
          blr

  llvm-svn: 24872
- Nate Begeman authored
  llvm-svn: 24871
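
  As a quick check of the sethi/or pair in the llvm-svn: 24880 entry above: SPARC's sethi writes a 22-bit immediate into bits 31..10 of a register and clears the low 10 bits, while the arithmetic instructions only take a 13-bit signed immediate, which is why -40112 has to be materialized in %g1 first. The snippet below just reproduces that split numerically; the variable names and the little program itself are mine, not anything from the LLVM sources.

    // Recompute the sethi/or operands for the frame constant -40112 used
    // in the SPARC example above: high 22 bits go to sethi, low 10 to or.
    #include <cstdint>
    #include <cstdio>

    int main() {
      const int32_t Imm = -40112;                  // too big for a 13-bit signed field
      const uint32_t Bits = static_cast<uint32_t>(Imm);
      const uint32_t Hi22 = Bits >> 10;            // sethi operand
      const uint32_t Lo10 = Bits & 0x3FF;          // or operand
      std::printf("sethi %u, %%g1 ; or %%g1, %u, %%g1\n", Hi22, Lo10);
      std::printf("round-trip matches: %s\n",
                  ((Hi22 << 10) | Lo10) == Bits ? "yes" : "no");
    }

  Running it prints sethi 4194264 and or 848, which are exactly the operands in the emitted code above.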