- Mar 13, 2009
-
Rafael Espindola authored
llvm-svn: 66922
-
Chris Lattner authored
codegen to the same thing as integer truncates to i8 (the top bits are just undefined). This implements rdar://6667338 llvm-svn: 66902
-
Bill Wendling authored
instructions. Prevent that if we don't want implicit uses of SSE. llvm-svn: 66877
-
Evan Cheng authored
Fix some significant problems with constant pools that resulted in unnecessary padding between constant pool entries, larger-than-necessary alignments (e.g. 8-byte alignment for .literal4 sections), and potentially other issues.

Problems:
1. The ConstantPoolSDNode alignment field is the log2 value of the alignment requirement. This is not consistent with other SDNode variants.
2. The MachineConstantPool alignment field is also a log2 value.
3. However, some places create ConstantPoolSDNodes with alignment values rather than log2 values. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created. However, the asm printer groups them by section, so the offsets are no longer valid; yet the asm printer uses them to determine the size of padding between entries.
5. The asm printer uses an expensive multimap data structure to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries. This is non-deterministic.

Solutions:
1. The ConstantPoolSDNode alignment field is changed to keep the non-log2 value.
2. The MachineConstantPool alignment field is also changed to keep the non-log2 value.
3. Functions that create ConstantPool nodes pass in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field; it is replaced with an alignment field. Offsets are no longer computed when constant pool entries are created; they are computed on the fly in the asm printer and the JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done.
7. The JIT code is changed to compute entry offsets on the fly.

llvm-svn: 66875
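As an illustration of the on-the-fly offset computation, here is a minimal sketch in C (not LLVM's actual code; the entry sizes and alignments are made up). Each entry's offset is just the running section size rounded up to that entry's non-log2 alignment, so no padding has to be recorded ahead of time:

```c
#include <stdio.h>

/* Hypothetical sketch, mirroring solutions 6 and 7 above: the offset of
   each constant pool entry is the current section size rounded up to the
   entry's (non-log2, power-of-two) alignment. */
static unsigned align_to(unsigned offset, unsigned alignment) {
    return (offset + alignment - 1) & ~(alignment - 1);
}

int main(void) {
    /* Made-up entries: a 4-byte float, a 16-byte SSE vector, an 8-byte
       double, each with natural alignment. */
    unsigned sizes[]  = {4, 16, 8};
    unsigned aligns[] = {4, 16, 8};
    unsigned offset = 0;
    for (int i = 0; i < 3; ++i) {
        offset = align_to(offset, aligns[i]); /* pad only as needed */
        printf("entry %d at offset %u\n", i, offset);
        offset += sizes[i];
    }
    return 0;
}
```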
-
Chris Lattner authored
for i32/i64 expressions (we could also do i16 on cpus where i16 lea is fast, but I didn't add this). On the example, we now generate:

    _test:
        movl    4(%esp), %eax
        cmpl    $42, (%eax)
        setl    %al
        movzbl  %al, %eax
        leal    4(%eax,%eax,8), %eax
        ret

instead of:

    _test:
        movl    4(%esp), %eax
        cmpl    $41, (%eax)
        movl    $4, %ecx
        movl    $13, %eax
        cmovg   %ecx, %eax
        ret

llvm-svn: 66869
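For reference, the new sequence computes the select arithmetically: setl leaves 0 or 1 in %eax, and leal 4(%eax,%eax,8) evaluates 9 * flag + 4, i.e. 13 or 4. A C function of roughly this shape (my reconstruction, not necessarily the original testcase) exercises the fold:

```c
/* Hypothetical reconstruction of source producing the assembly above:
   a select between the constants 13 and 4 becomes setl + lea, since
   9 * (*p < 42) + 4 is 13 when the condition holds and 4 otherwise. */
int test(int *p) {
    return *p < 42 ? 13 : 4;
}
```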
-
Chris Lattner authored
example to:

    _test:
        movl    4(%esp), %eax
        cmpl    $41, (%eax)
        setg    %al
        movzbl  %al, %eax
        orl     $4294967294, %eax
        ret

instead of:

        movl    4(%esp), %eax
        cmpl    $41, (%eax)
        movl    $4294967294, %ecx
        movl    $4294967295, %eax
        cmova   %ecx, %eax
        ret

which is smaller in code size and faster. rdar://6668608

llvm-svn: 66868
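Note that 4294967294 is the unsigned encoding of -2 and 4294967295 of -1: the two results differ only in bit 0, so or'ing the setcc bit with -2 selects between them branchlessly. A hypothetical source-level reconstruction:

```c
/* Hypothetical reconstruction: -1 and -2 differ only in bit 0, so the
   select lowers to setg + movzbl + or with -2 (0xFFFFFFFE). */
int test(int *p) {
    return *p > 41 ? -1 : -2;
}
```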
-
Dan Gohman authored
operands can't both be fully folded at the same time. For example, in the included testcase, a global variable is added to the sum of two values. The global variable wants RIP-relative addressing, so it can't share the address with another base register, but it's still possible to fold the initial add. llvm-svn: 66865
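A hypothetical sketch of the pattern (names invented, not the included testcase): on x86-64 the global wants RIP-relative addressing, so its address cannot share the base register with another value, yet the initial add can still be folded into the address computation:

```c
/* Hypothetical example: the address of a global added to the sum of two
   values. g wants RIP-relative addressing and so cannot share the
   address with another base register, but a + b can still be folded. */
extern int g;

long f(long a, long b) {
    return a + b + (long)&g;
}
```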
-
- Mar 12, 2009
-
Evan Cheng authored
Re-apply 66024 with fixes: 1. Fixed the assembly printing of indirect calls to an immediate address. 2. Fixed the JIT encoding by making the address pc-relative. llvm-svn: 66803
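The case in question is a call through a constant, absolute address, which in C looks like the following (the address is made up, for illustration only):

```c
/* Hypothetical illustration of the pattern the fixes above concern: a
   call through a constant, absolute address (the value is arbitrary). */
typedef void (*fn_t)(void);

void call_fixed(void) {
    fn_t f = (fn_t)0x1000; /* made-up immediate address */
    f();                   /* an indirect call to an immediate address */
}
```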
-
Chris Lattner authored
related transformations out of target-specific DAG combine into the ARM backend. These were added by Evan in r37685 with no testcases and only seem to help ARM (e.g. test/CodeGen/ARM/select_xform.ll). Add some simple X86-specific (for now) DAG combines that turn things like cond ? 8 : 0 -> (zext(cond) << 3). This happens frequently with the recently added cp constant select optimization, but is a very general xform. For example, we now compile the second example in const-select.ll to:

    _test:
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        seta    %al
        movzbl  %al, %eax
        movl    4(%esp), %ecx
        movsbl  (%ecx,%eax,4), %eax
        ret

instead of:

    _test:
        movl    4(%esp), %eax
        leal    4(%eax), %ecx
        movsd   LCPI2_0, %xmm0
        ucomisd 8(%esp), %xmm0
        cmovbe  %eax, %ecx
        movsbl  (%ecx), %eax
        ret

This passes MultiSource and dejagnu.

llvm-svn: 66779
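The general form of the new combine, as a hypothetical C-level sketch: a select between a power of two and zero reduces to a shift of the zero-extended condition bit, avoiding both the branch and the cmov:

```c
/* Sketch of the xform: when cond is 0 or 1, cond ? 8 : 0 equals
   ((int)cond) << 3, so no branch or conditional move is needed. */
int sel(int a, int b) {
    int cond = a < b;     /* materialized as 0 or 1 via setcc */
    return cond ? 8 : 0;  /* combined to zext(cond) << 3 */
}
```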
-
Chris Lattner authored
llvm-svn: 66778
-
Evan Cheng authored
On x86, if the only use of an i64 load is an i64 store, generate a double load and store instead. llvm-svn: 66776
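The typical trigger is a 64-bit memory-to-memory copy on 32-bit x86; a hypothetical example:

```c
/* Hypothetical trigger: on 32-bit x86, the i64 load below is used only
   by the i64 store, so it can become one double (f64) load/store pair
   instead of two 32-bit integer load/store pairs. */
void copy64(long long *dst, const long long *src) {
    *dst = *src;
}
```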
-
Sanjiv Gupta authored
llvm-svn: 66763
-
Sanjiv Gupta authored
The banksel optimization is now based on the section names of symbols, since symbols in one section will always be placed in one bank. llvm-svn: 66761
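As a rough sketch (hypothetical helper, not the PIC16 backend's actual code), the check reduces to comparing section names: a banksel is only needed when the next symbol lives in a different section:

```c
#include <string.h>

/* Hypothetical sketch: symbols in the same section always land in the
   same bank, so a banksel is redundant unless the section changes. */
int banksel_needed(const char *prev_section, const char *cur_section) {
    return strcmp(prev_section, cur_section) != 0;
}
```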
-
Dan Gohman authored
assembly text output uses an indirect call ("call *") instead of a direct call. llvm-svn: 66735
-
- Mar 11, 2009
-
Rafael Espindola authored
llvm-svn: 66725
-
Bill Wendling authored
floating point instructions that are explicitly specified by the user. llvm-svn: 66719
-
Duncan Sands authored
linkage, so remove it. llvm-svn: 66690
-
Mon P Wang authored
llvm-svn: 66684
-
Chris Lattner authored
llvm-svn: 66660
-
Duncan Sands authored
linkage: this linkage type only applies to declarations, but ODR is only relevant to globals with definitions. llvm-svn: 66650
-
Mon P Wang authored
llvm-svn: 66645
-
Chris Lattner authored
llvm-svn: 66642
-
- Mar 10, 2009
-
Sanjiv Gupta authored
llvm-svn: 66540
-
Dan Gohman authored
llvm-svn: 66515
-
Dan Gohman authored
llvm-svn: 66508
-
- Mar 09, 2009
-
Evan Cheng authored
The ARM target now also recognizes triples like thumbv6-apple-darwin and sets Thumb mode and the architecture subversion. Eventually Thumb triples will go away and be replaced with function notes. llvm-svn: 66435
-
Evan Cheng authored
ARM isLegalAddressImmediate should check whether the type is a simple type, now that the optimizer can create values of funky scalar types. llvm-svn: 66429
-
- Mar 08, 2009
-
Chris Lattner authored
llvm-svn: 66382
-
Evan Cheng authored
llvm-svn: 66365
-
Chris Lattner authored
llvm-svn: 66360
-
Chris Lattner authored
llvm-svn: 66359
-
- Mar 07, 2009
-
Duncan Sands authored
and extern_weak_odr. These are the same as the non-odr versions, except that they indicate that the global will only be overridden by an *equivalent* global.

In C, a function with weak linkage can be overridden by a function which behaves completely differently. This means that IP passes have to skip weak functions, since any deductions made from the function definition might be wrong: the definition could be replaced by something completely different at link time. This is not allowed in C++, thanks to the ODR (One Definition Rule): if a function is replaced by another at link time, then the new function must be the same as the original function.

If a language knows that a function or other global can only be overridden by an equivalent global, it can give it the weak_odr linkage type, and the optimizers will understand that it is alright to make deductions based on the function body. The code generators, on the other hand, map weak and weak_odr linkage to the same thing.

llvm-svn: 66339
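A hypothetical C illustration of the problem plain weak linkage poses for interprocedural passes (weak_odr itself is an IR-level notion):

```c
/* Hypothetical illustration: this weak definition may be replaced at
   link time by a function that behaves completely differently, so an
   optimizer must not fold calls based on this body (e.g. assume the
   call returns 42). weak_odr promises any replacement is equivalent,
   which makes such deductions legal again. */
__attribute__((weak)) int answer(void) {
    return 42;
}

int caller(void) {
    return answer(); /* must remain a real call under plain weak linkage */
}
```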
-
Dan Gohman authored
the same way the "test" instruction does in overflow cases, so eliminating the test is only safe when those bits aren't needed, as is the case for COND_E and COND_NE, or if it can be proven that no overflow will occur. For now, just restrict the optimization to COND_E and COND_NE and don't do any overflow analysis. llvm-svn: 66318
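Concretely, in a hypothetical example like the one below, the zero flag produced by the add already answers the == 0 question, so the separate test can be removed; a signed comparison such as < 0 would also consult the sign and overflow flags, which add sets differently from test when the addition overflows:

```c
/* Hypothetical example: ZF from the "add" already decides "!= 0", so the
   separate "test" is removable (COND_E/COND_NE). A signed check such as
   "sum < 0" also reads SF/OF, which "add" sets per the overflowing
   result while "test" would not, hence the restriction above. */
int nonzero_sum(int a, int b) {
    int sum = a + b;
    return sum != 0;
}
```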
-
- Mar 05, 2009
-
Dan Gohman authored
INC64_32r and INC64_16r, because these instructions are encoded differently on x86-64. This fixes JIT regressions on x86-64 in kimwitu++ and others. llvm-svn: 66207
-
Dan Gohman authored
The extra operand didn't appear to cause any trouble, but it was erroneous regardless. llvm-svn: 66206
-
Dan Gohman authored
negative one, as subtracts of immediates are canonicalized to adds. llvm-svn: 66180
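For example (my illustration): a source-level x - 1 reaches instruction selection as x + (-1), so patterns meant to catch decrements must match the add form:

```c
/* Illustration of the canonicalization: the subtraction of an immediate
   reaches the backend as an add of -1, so backend patterns must match
   adds of negative constants rather than subs of positive ones. */
int decrement(int x) {
    return x - 1; /* canonicalized to x + (-1) */
}
```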
-
- Mar 04, 2009
-
Dan Gohman authored
llvm-svn: 66058
-
Dan Gohman authored
llvm-svn: 66057
-
Dan Gohman authored
of MachineInstr def operands must be subtracted out. This bug was uncovered by the recent x86 EFLAGS optimization. Before that, the only instructions that ever needed unfolding were things like CMP32rm, where NumDefs is zero. llvm-svn: 66056
-
Evan Cheng authored
llvm-svn: 66024
-