- Nov 10, 2005
-
-
Chris Lattner authored
llvm-svn: 24278
-
Chris Lattner authored
llvm-svn: 24275
-
Chris Lattner authored
l1__2E_str_1:   ; '.str_1'
        .asciz "foo"

not:

        .align 0
l1__2E_str_1:   ; '.str_1'
        .asciz "foo"

llvm-svn: 24273
-
Chris Lattner authored
add support for .asciz, and enable it by default. If your target assembler doesn't support .asciz, just set AscizDirective to null in your asmprinter. This compiles C strings to:

l1__2E_str_1:   ; '.str_1'
        .asciz "foo"

instead of:

l1__2E_str_1:   ; '.str_1'
        .ascii "foo\000"

llvm-svn: 24272
-
Chris Lattner authored
Switch the allnodes list from a vector of pointers to an ilist of nodes. This eliminates the vector, allows constant time removal of a node from a graph, and makes iteration over the all nodes list stable when adding nodes to the graph. llvm-svn: 24263
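For illustration only (this is not LLVM's actual ilist, and the names are made up), a minimal sketch of why an intrusive doubly-linked list gives constant-time removal: each node carries its own links, so unlinking it needs no searching and leaves iterators to other nodes valid:

    // Minimal intrusive-list sketch; hypothetical names, not the real ilist.
    struct Node {
      Node *Prev = nullptr, *Next = nullptr;
      // ... node payload ...
    };

    void removeFromList(Node *N, Node *&Head) {
      if (N->Prev) N->Prev->Next = N->Next; else Head = N->Next;
      if (N->Next) N->Next->Prev = N->Prev;
      N->Prev = N->Next = nullptr;   // O(1); no vector to shift or re-scan
    }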
-
- Nov 09, 2005
-
-
Chris Lattner authored
llvm-svn: 24261
-
Chris Lattner authored
llvm-svn: 24259
-
Chris Lattner authored
llvm-svn: 24258
-
Chris Lattner authored
llvm-svn: 24256
-
Chris Lattner authored
allocator from 23s to 11s on kc++ in debug mode. llvm-svn: 24255
-
Chris Lattner authored
eliminates almost one node per block in common cases. llvm-svn: 24254
-
Chris Lattner authored
turn power-of-two multiplies into shifts early to improve compile time. llvm-svn: 24253
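As a hedged sketch of that strength reduction (illustrative code, not the SelectionDAG combine itself), a multiply by a power-of-two constant can be rewritten as a shift:

    #include <cstdint>

    // x * C  ==>  x << log2(C)  when C is a power of two.
    uint64_t mulByConstant(uint64_t X, uint64_t C) {
      if (C != 0 && (C & (C - 1)) == 0) {      // C is a power of two
        unsigned K = 0;
        while ((uint64_t(1) << K) != C) ++K;   // K = log2(C)
        return X << K;                         // shift instead of multiply
      }
      return X * C;
    }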
-
Chris Lattner authored
llvm-svn: 24252
-
Chris Lattner authored
Change the ValueList array for each node to be shared instead of individually allocated. Further, in the common case where a node has a single value, just reference an element from a small array. This is a small compile-time win. llvm-svn: 24251
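A minimal sketch of the sharing idea, using made-up names rather than the real SelectionDAG structures: multi-value lists are uniqued in a shared pool, and the common single-value case just points at an entry of a small static table, so no per-node allocation is needed:

    #include <set>
    #include <vector>

    enum ValueType { i1, i8, i16, i32, i64, f32, f64, NumValueTypes };

    // One-element lists for the common single-value case.
    static const ValueType SingleVTs[NumValueTypes] = { i1, i8, i16, i32, i64, f32, f64 };
    // Uniqued storage shared by all nodes with the same multi-value list.
    static std::set<std::vector<ValueType>> VTListPool;

    const ValueType *getValueTypeList(const std::vector<ValueType> &VTs) {
      if (VTs.size() == 1)
        return &SingleVTs[VTs[0]];                  // no allocation
      return VTListPool.insert(VTs).first->data();  // shared, uniqued copy
    }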
-
- Nov 08, 2005
-
-
Chris Lattner authored
Switch the operandlist/valuelist from being vectors to being just an array. This saves 12 bytes from SDNode, but doesn't speed things up substantially (our graphs apparently already fit within the cache on my g5). In any case this reduces memory usage. llvm-svn: 24249
-
Chris Lattner authored
llvm-svn: 24247
-
Chris Lattner authored
set and eliminating the need to iterate whenever something is removed (which can be really slow in some cases). Thx to Jim for pointing out something silly I was getting stuck on. :) llvm-svn: 24241
-
- Nov 07, 2005
-
-
Jim Laskey authored
llvm-svn: 24231
-
- Nov 06, 2005
-
-
Chris Lattner authored
llvm-svn: 24227
-
Nate Begeman authored
alignment information appropriately. Includes code for PowerPC to support fixed-size allocas with alignment larger than the stack. Support for arbitrarily aligned dynamic allocas coming soon. llvm-svn: 24224
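A minimal sketch of the offset rounding this involves (hypothetical helper, assuming the alignment is a power of two; not the actual PowerPC frame-lowering code): the object's frame offset is rounded up to the requested alignment:

    #include <cstdint>

    uint64_t alignTo(uint64_t Offset, uint64_t Align) {
      return (Offset + Align - 1) & ~(Align - 1);  // round up to a multiple of Align
    }
    // e.g. alignTo(20, 16) == 32: a 16-byte-aligned alloca is placed at offset 32.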
-
- Nov 05, 2005
-
-
Jim Laskey authored
llvm-svn: 24188
-
- Nov 04, 2005
-
-
Jim Laskey authored
llvm-svn: 24187
-
Jim Laskey authored
llvm-svn: 24180
-
- Nov 02, 2005
-
-
Nate Begeman authored
XCode's indenting. llvm-svn: 24159
-
Chris Lattner authored
may fix PR652. Thanks to Andrew for tracking down the problem. llvm-svn: 24145
-
- Oct 31, 2005
-
-
Jim Laskey authored
1. Embed and not inherit vector for NodeGroup.
2. Iterate operands and not uses (performance.)
3. Some long pending comment changes.

llvm-svn: 24119
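A tiny sketch of point 1 above (hypothetical names, not the actual NodeGroup code): hold the vector as a data member instead of deriving from it, keeping the container an implementation detail:

    #include <cstddef>
    #include <vector>

    class SDNodeStub;                     // stand-in for the real node type

    class NodeGroup {
      std::vector<SDNodeStub *> Members;  // embedded, not inherited
    public:
      void add(SDNodeStub *N) { Members.push_back(N); }
      std::size_t size() const { return Members.size(); }
    };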
-
- Oct 30, 2005
-
-
Chris Lattner authored
a special case hack for X86, make the hack more general: if an incoming argument register is not used in any block other than the entry block, don't copy it to a vreg. This helps us compile code like this:

%struct.foo = type { int, int, [0 x ubyte] }

int %test(%struct.foo* %X) {
        %tmp1 = getelementptr %struct.foo* %X, int 0, uint 2, int 100
        %tmp = load ubyte* %tmp1                ; <ubyte> [#uses=1]
        %tmp2 = cast ubyte %tmp to int          ; <int> [#uses=1]
        ret int %tmp2
}

to:

_test:
        lbz r3, 108(r3)
        blr

instead of:

_test:
        lbz r2, 108(r3)
        or r3, r2, r2
        blr

The (dead) copy emitted to copy r3 into a vreg for extra-block uses was increasing the live range of r3 past the load, preventing the coalescing. This implements CodeGen/PowerPC/reg-coallesce-simple.ll

llvm-svn: 24115
-
Chris Lattner authored
generating results in vregs that will need them. In the case of something like this: CopyToReg((add X, Y), reg1024), we no longer emit code like this:

reg1025 = add X, Y
reg1024 = reg1025

Instead, we emit:

reg1024 = add X, Y

Whoa! :)

llvm-svn: 24111
-
Chris Lattner authored
This implements test/Regression/CodeGen/PowerPC/mul-neg-power-2.ll, producing:

_foo:
        slwi r2, r3, 1
        subfic r3, r2, 63
        blr

instead of:

_foo:
        mulli r2, r3, -2
        addi r3, r2, 63
        blr

llvm-svn: 24106
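A hedged sketch of the arithmetic behind the better sequence (illustrative function, not the actual DAG combine): a multiply by a negative power of two becomes a shift whose result is then subtracted, which is what the slwi/subfic pair does for x * -2 + 63 above, i.e. 63 - (x << 1):

    #include <cstdint>

    // X * -(1 << K), computed as a shift followed by a negate.
    int64_t mulNegPow2(int64_t X, unsigned K) {
      return -(int64_t)((uint64_t)X << K);   // assumes no overflow
    }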
-
- Oct 27, 2005
-
-
Chris Lattner authored
VT as the killing one. This fixes PR491. llvm-svn: 24034
-
Chris Lattner authored
llvm-svn: 24029
-
- Oct 26, 2005
-
-
Chris Lattner authored
llvm-svn: 24019
-
Nate Begeman authored
FP_TO_SINT is preferred to a larger FP_TO_UINT. This seems to be begging for a TLI.isOperationCustom() helper function. llvm-svn: 23992
-
- Oct 25, 2005
-
-
Chris Lattner authored
llvm-svn: 23980
-
- Oct 24, 2005
-
-
Chris Lattner authored
in the future, remove it. llvm-svn: 23952
-
- Oct 23, 2005
-
-
Jeff Cohen authored
pointer marking the end of the list, the zero *must* be cast to the pointer type. An un-cast zero is a 32-bit int, and at least on x86_64, gcc will not extend the zero to 64 bits, thus allowing the upper 32 bits to be random junk. The new END_WITH_NULL macro may be used to annotate such a function so that GCC (version 4 or newer) will detect the use of un-casted zero at compile time. llvm-svn: 23888
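A small self-contained illustration of the pitfall (generic C++, not the LLVM code in question; END_WITH_NULL itself is the GCC sentinel annotation mentioned above): the terminating argument of a variadic call must be a pointer-typed null, not a bare 0:

    #include <cstdarg>
    #include <cstdio>

    // Prints every string argument until a null pointer sentinel is reached.
    void printAll(const char *First, ...) {
      va_list Args;
      va_start(Args, First);
      for (const char *S = First; S; S = va_arg(Args, const char *))
        std::puts(S);
      va_end(Args);
    }

    int main() {
      printAll("a", "b", (const char *)0);   // correct: sentinel is a real null pointer
      // printAll("a", "b", 0);              // buggy: 0 is a 32-bit int, so the upper
      //                                     // bits of the 64-bit slot may be junk
    }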
-
Andrew Lenharth authored
llvm-svn: 23886
-
- Oct 22, 2005
-
-
Chris Lattner authored
the input is that type, this caused a failure on gs on X86 last night. Move the hard checks into Build[US]Div since that is where decisions like this should be made. llvm-svn: 23881
-
- Oct 21, 2005
-
-
Chris Lattner authored
2005-10-21-longlonggtu.ll. llvm-svn: 23875
-
Chris Lattner authored
For example, we can now join things like [0-30:0)[31-40:1)[52-59:2) with [40-60:0) if the 52-59 range is defined by a copy from the 40-60 range. The resultant range ends up being [0-30:0)[31-60:1). This fires a lot throughout the test suite (e.g. shrinking bc from 19492 -> 18509 machineinstrs) though most gains are smaller (e.g. about 50 copies eliminated from crafty). llvm-svn: 23866
-