- May 17, 2006
-
Chris Lattner authored
ISD::CALL node, then custom lower that. This means that we only have to handle LEGAL call operands/results, not every possible type. This allows us to simplify the call code, shrinking it by about 1/3. llvm-svn: 28339
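Roughly, the pattern is: instead of a target implementing LowerCallTo itself, it marks ISD::CALL as Custom and handles it in LowerOperation after legalize has run. A minimal sketch in the style of that era's TargetLowering API (the class name MyTargetLowering and the helper LowerCALL are illustrative placeholders, not the committed code):

    // In the target's TargetLowering constructor: ask the common code to
    // build a generic ISD::CALL node and hand it back for custom lowering.
    setOperationAction(ISD::CALL, MVT::Other, Custom);

    // Because legalize runs first, only legal operand/result types reach
    // this switch -- no per-type cases for i1/i8/i16 promotions needed.
    SDOperand MyTargetLowering::LowerOperation(SDOperand Op,
                                               SelectionDAG &DAG) {
      switch (Op.getOpcode()) {
      default: assert(0 && "Unhandled custom lowering!");
      case ISD::CALL:
        return LowerCALL(Op, DAG);  // hypothetical helper: place args in
                                    // registers/stack slots, emit the call
      }
    }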
-
Chris Lattner authored
produce it. llvm-svn: 28338
-
- May 16, 2006
-
Chris Lattner authored
llvm-svn: 28335
-
Chris Lattner authored
llvm-svn: 28334
-
Chris Lattner authored
llvm-svn: 28333
-
Chris Lattner authored
handling. This makes the lower argument code significantly simpler (we only need to handle legal argument types). Incidentally, this also implements support for vector argument registers, so long as they are not on the stack. llvm-svn: 28331
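The same custom-lowering trick, sketched for incoming arguments (hedged: ISD::FORMAL_ARGUMENTS matches these commits, but the registration shown and the per-argument handling are an illustrative assumption):

    // Registered in the target's TargetLowering constructor.  Legalize
    // expands or promotes illegal argument types before this fires, so
    // the target only deals with legal ones -- including vector types,
    // which now work as long as they arrive in registers rather than on
    // the stack.
    setOperationAction(ISD::FORMAL_ARGUMENTS, MVT::Other, Custom);

    // In LowerOperation, each formal argument is then either copied out
    // of its incoming physical register into a virtual register, or
    // loaded from its fixed stack slot:
    //   case ISD::FORMAL_ARGUMENTS:
    //     return LowerFORMAL_ARGUMENTS(Op, DAG);  // hypothetical helper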
-
Andrew Lenharth authored
llvm-svn: 28330
-
Andrew Lenharth authored
llvm-svn: 28329
-
Chris Lattner authored
arguments at once. llvm-svn: 28327
-
Chris Lattner authored
llvm-svn: 28326
-
Evan Cheng authored
llvm-svn: 28324
-
Chris Lattner authored
llvm-svn: 28321
-
Chris Lattner authored
it doesn't currently use/maintain the chain properly. Also, make the X86ISelLowering.cpp file 80-col clean. llvm-svn: 28320
-
Vladimir Prus authored
can just add lib/Target to TableGen includes. llvm-svn: 28318
-
Chris Lattner authored
This code should be emitted after legalize, so it can't be in sdisel. Note that the EmitFunctionEntryCode hook should be updated to operate on the DAG. The X86 backend is the only one currently using this hook. llvm-svn: 28315
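For context, the hook being discussed looks roughly like this (signatures approximated from that period's SelectionDAGISel; treat them as assumptions):

    // A per-function hook targets may override to emit special code at
    // function entry.  Today it emits MachineInstrs directly, which is
    // why it runs outside the DAG; making it operate on the DAG instead
    // would let the emitted code go through legalize, as this note asks.
    virtual void EmitFunctionEntryCode(Function &Fn, MachineFunction &MF) {}

    // X86 is the only current user:
    void X86DAGToDAGISel::EmitFunctionEntryCode(Function &Fn,
                                                MachineFunction &MF) {
      // ... build entry-block code for the function here ...
    }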
-
Chris Lattner authored
llvm-svn: 28314
-
Chris Lattner authored
for each argument. llvm-svn: 28313
-
Chris Lattner authored
llvm-svn: 28311
-
Rafael Espindola authored
llvm-svn: 28310
-
Reid Spencer authored
Add an additional catch block to ensure that this function can't throw any exceptions, even ones we're not expecting. llvm-svn: 28309
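The guard being added is the classic catch-all pattern; a self-contained illustration (not the actual code from this commit):

    #include <exception>
    #include <iostream>

    // A function with a no-throw contract: known exception types are
    // reported, and a final catch(...) swallows anything unexpected so
    // nothing can propagate to the caller.
    void mustNotThrow() {
      try {
        // ... work that may throw ...
      } catch (const std::exception &e) {
        std::cerr << "error: " << e.what() << '\n';
      } catch (...) {
        std::cerr << "error: unknown exception\n";
      }
    }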
-
- May 15, 2006
-
Chris Lattner authored
llvm-svn: 28307
-
Chris Lattner authored
llvm-svn: 28303
-
Rafael Espindola authored
llvm-svn: 28301
-
- May 14, 2006
-
Chris Lattner authored
it out of 'ExecutionEngine::create'. This fixes a problem reported by coverity. llvm-svn: 28293
-
Chris Lattner authored
llvm-svn: 28292
-
Chris Lattner authored
handle it. Just silently fail. llvm-svn: 28291
-
Chris Lattner authored
llvm-svn: 28290
-
Chris Lattner authored
llvm-svn: 28289
-
Chris Lattner authored
llvm-svn: 28287
-
Chris Lattner authored
llvm-svn: 28286
-
Evan Cheng authored
llvm-svn: 28284
-
Chris Lattner authored
llvm-svn: 28283
-
- May 13, 2006
-
Evan Cheng authored
llvm-svn: 28279
-
Evan Cheng authored
llvm-svn: 28278
-
Chris Lattner authored
bitfield now gives this code:

    _plus:
            lwz r2, 0(r3)
            rlwimi r2, r2, 0, 1, 31
            xoris r2, r2, 32768
            stw r2, 0(r3)
            blr

instead of this:

    _plus:
            lwz r2, 0(r3)
            srwi r4, r2, 31
            slwi r4, r4, 31
            addis r4, r4, -32768
            rlwimi r2, r4, 0, 0, 0
            stw r2, 0(r3)
            blr

this can obviously still be improved.
llvm-svn: 28275
-
Chris Lattner authored
llvm-svn: 28274
-
Chris Lattner authored
currently very limited, but can be extended in the future. For example, we now compile:

    uint %test30(uint %c1) {
            %c2 = cast uint %c1 to ubyte
            %c3 = xor ubyte %c2, 1
            %c4 = cast ubyte %c3 to uint
            ret uint %c4
    }

to:

    _xor:
            movzbl 4(%esp), %eax
            xorl $1, %eax
            ret

instead of:

    _xor:
            movb $1, %al
            xorb 4(%esp), %al
            movzbl %al, %eax
            ret

More impressively, we now compile:

    struct B { unsigned bit : 1; };
    void xor(struct B *b) { b->bit = b->bit ^ 1; }

To (X86/PPC):

    _xor:
            movl 4(%esp), %eax
            xorl $-2147483648, (%eax)
            ret
    _xor:
            lwz r2, 0(r3)
            xoris r2, r2, 32768
            stw r2, 0(r3)
            blr

instead of (X86/PPC):

    _xor:
            movl 4(%esp), %eax
            movl (%eax), %ecx
            movl %ecx, %edx
            shrl $31, %edx          # TRUNCATE
            movb %dl, %dl
            xorb $1, %dl
            movzbl %dl, %edx
            andl $2147483647, %ecx
            shll $31, %edx
            orl %ecx, %edx
            movl %edx, (%eax)
            ret
    _xor:
            lwz r2, 0(r3)
            srwi r4, r2, 31
            xori r4, r4, 1
            rlwimi r2, r4, 31, 0, 0
            stw r2, 0(r3)
            blr

This implements InstCombine/cast.ll:test30.
llvm-svn: 28273
-
Chris Lattner authored
Fix a nasty bug in the memcmp optimizer where we used the wrong variable! llvm-svn: 28269
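A hedged illustration of the bug class, not the actual simplify-libcalls code: when a memcmp of constant length 1 is folded to a byte difference, using the same pointer twice instead of both makes every comparison report "equal":

    // Intended fold for memcmp(p, q, 1):
    int foldedMemcmp1(const unsigned char *p, const unsigned char *q) {
      return *p - *q;      // correct: compares both buffers
      // return *p - *p;   // wrong-variable bug: always 0 ("equal")
    }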
-
Chris Lattner authored
llvm-svn: 28268
-
Chris Lattner authored
llvm-svn: 28267
-