- Nov 12, 2007
Owen Anderson authored
Target maintainers: please check that the instructions for your target are correctly marked. llvm-svn: 44012
- Nov 11, 2007
Anton Korobeynikov authored
This makes DwarfRegNum accept a list of numbers instead. Added three different "flavours", but this is only lightly tested on x86-32/linux. Please check other subtargets if possible. llvm-svn: 43997
- Nov 10, 2007
Dale Johannesen authored
dealing with types whose size & alignment are different on different subtargets. Use it for x86 f80. llvm-svn: 43988
Arnold Schwaighofer authored
llvm-svn: 43978
- Nov 09, 2007
Evan Cheng authored
llvm-svn: 43955
Dale Johannesen authored
llvm-svn: 43950
Evan Cheng authored
Then:
        call "L1$pb"
"L1$pb":
        popl %eax
        ...
LBB1_1: # entry
        imull $4, %ecx, %ecx
        leal LJTI1_0-"L1$pb"(%eax), %edx
        addl LJTI1_0-"L1$pb"(%ecx,%eax), %edx
        jmpl *%edx
        .align 2
        .set L1_0_set_3,LBB1_3-LJTI1_0
        .set L1_0_set_2,LBB1_2-LJTI1_0
        .set L1_0_set_5,LBB1_5-LJTI1_0
        .set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
        .long L1_0_set_3
        .long L1_0_set_2

Now:
        call "L1$pb"
"L1$pb":
        popl %eax
        ...
LBB1_1: # entry
        addl LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
        jmpl *%eax
        .align 2
        .set L1_0_set_3,LBB1_3-"L1$pb"
        .set L1_0_set_2,LBB1_2-"L1$pb"
        .set L1_0_set_5,LBB1_5-"L1$pb"
        .set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
        .long L1_0_set_3
        .long L1_0_set_2

llvm-svn: 43924
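For context, a hedged C++ illustration (not taken from the commit): a dense switch like the one below is the typical source pattern lowered to a jump table such as LJTI1_0 above. With entries made relative to the picbase "L1$pb", the imull/leal pair collapses into a single addl.

    // Hypothetical example: four dense cases produce a four-entry jump table.
    int classify(int v) {
      switch (v) {
        case 0: return 7;
        case 1: return 13;
        case 2: return 21;
        case 3: return 42;   // each case label becomes one .long entry
        default: return 0;
      }
    }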
Dale Johannesen authored
llvm-svn: 43918
- Nov 07, 2007
Dale Johannesen authored
Would somebody not on Darwin please make sure this doesn't break anything. Exception handling failures would be the most likely symptom. llvm-svn: 43844
Dale Johannesen authored
Much improvement in exception handling. llvm-svn: 43794
- Nov 06, 2007
Rafael Espindola authored
Thanks for the suggestions, Bill :-) llvm-svn: 43742
- Nov 05, 2007
Evan Cheng authored
less than 16. This is a temporary solution until dynamic stack alignment is implemented. llvm-svn: 43703
Duncan Sands authored
should only affect x86 when using long double. Now 12/16 bytes are output for long double globals (the exact amount depends on the alignment). This brings globals in line with the rest of LLVM: the space reserved for an object is now always the ABI size. One tricky point is that only 10 bytes should be output for long double if it is a field in a packed struct, which is the reason for the additional argument to EmitGlobalConstant. llvm-svn: 43688
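A hedged C++ illustration of the distinction the commit relies on (the packed-struct exception it mentions applies at the LLVM IR level and is not shown here):

    #include <cstdio>

    // The x87 format carries 10 bytes of data, but the ABI size (what
    // sizeof reports, and what is now reserved for a global) is 12 on
    // x86-32 and 16 on x86-64; the remainder is padding.
    long double g = 1.0L;

    int main() {
      std::printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
      return 0;
    }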
- Nov 04, 2007
Chris Lattner authored
Evan, please review this. llvm-svn: 43680
Chris Lattner authored
regs on x86-64. llvm-svn: 43669
- Nov 02, 2007
Evan Cheng authored
llvm-svn: 43646
Chris Lattner authored
llvm-svn: 43642
Evan Cheng authored
llvm-svn: 43630
- Nov 01, 2007
Bill Wendling authored
llvm-svn: 43609
- Oct 31, 2007
Rafael Espindola authored
and by restructuring the X86 version. Now I just have to move this to a common place :-) llvm-svn: 43554
Rafael Espindola authored
Now both subtargets define getMaxInlineSizeThreshold and the expansion uses it. This should not change the generated code. llvm-svn: 43552
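A minimal sketch of how an expansion might consult such a subtarget hook; getMaxInlineSizeThreshold is the name from the commit, but the surrounding types here are simplified stand-ins, not the actual LLVM code:

    struct Subtarget {
      unsigned getMaxInlineSizeThreshold() const { return 64; }
    };

    // Small copies (or ones explicitly forced inline) are expanded to
    // loads/stores; everything larger becomes a library call.
    bool shouldExpandMemcpyInline(const Subtarget &ST, unsigned Size,
                                  bool AlwaysInline) {
      return AlwaysInline || Size <= ST.getMaxInlineSizeThreshold();
    }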
Dale Johannesen authored
llvm-svn: 43535
- Oct 30, 2007
Dale Johannesen authored
CVTTPD2PI, CVTTPS2PI, CVTPI2PD, CVTPI2PS. llvm-svn: 43523
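These four instructions correspond to the usual MMX/SSE conversion intrinsics. A hedged illustration, assuming an x86 target with MMX and SSE2 enabled:

    #include <emmintrin.h>  // also pulls in xmmintrin.h

    __m64   trunc_pd(__m128d v)          { return _mm_cvttpd_pi32(v); }   // CVTTPD2PI
    __m64   trunc_ps(__m128 v)           { return _mm_cvttps_pi32(v); }   // CVTTPS2PI
    __m128d widen_pd(__m64 v)            { return _mm_cvtpi32_pd(v); }    // CVTPI2PD
    __m128  insert_ps(__m128 a, __m64 v) { return _mm_cvtpi32_ps(a, v); } // CVTPI2PS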
Duncan Sands authored
llvm-svn: 43500
Dale Johannesen authored
llvm-svn: 43488
- Oct 29, 2007
Evan Cheng authored
transformation. Previously, it was restricted by requiring that the load have only one use. Now the restriction is loosened by allowing setcc uses to be "extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq). llvm-svn: 43465
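A hypothetical source pattern of the kind this helps (reconstructed for illustration, not from the commit): the load of *p feeds both a compare and a widening use, so rewriting the compare in terms of the extended value lets both users share one load, which can then be folded into the cmp.

    int f(short *p, int acc) {
      short x = *p;
      if (x == 42)        // setcc x, 42, eq -> setcc sext(x), sext(42), eq
        return acc + x;   // sign-extended use of the same load
      return acc;
    }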
Evan Cheng authored
llvm-svn: 43446
Chris Lattner authored
llvm-svn: 43444
Chris Lattner authored
b/h/w/k/q inline asm memory modifiers, which are just ignored. This fixes PR1748 and CodeGen/X86/2007-10-28-inlineasm-q-modifier.ll llvm-svn: 43430
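A hedged illustration in the spirit of the referenced test (not its exact contents): GCC-style operand modifiers such as %q0 are meaningful on registers, but on a memory operand they are simply ignored and the plain address is printed.

    void store_flag(int *p) {
      asm volatile("movl $1, %q0"   // 'q' on a memory operand is a no-op
                   : "=m"(*p));
    }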
- Oct 28, 2007
Evan Cheng authored
llvm-svn: 43420
- Oct 26, 2007
Anton Korobeynikov authored
registers in the case when the frame pointer was eliminated. This should fix miscellaneous random EH-related crashes when code is compiled with -fomit-frame-pointer. Thanks, Duncan, for nailing this bug! llvm-svn: 43381
Evan Cheng authored
Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free. e.g. Turns this loop:

LBB1_1: # entry.bb_crit_edge
        xorl %ecx, %ecx
        xorw %dx, %dx
        movw %dx, %si
LBB1_2: # bb
        movl L_X$non_lazy_ptr, %edi
        movw %si, (%edi)
        movl L_Y$non_lazy_ptr, %edi
        movw %dx, (%edi)
        addw $4, %dx
        incw %si
        incl %ecx
        cmpl %eax, %ecx
        jne LBB1_2 # bb

into

LBB1_1: # entry.bb_crit_edge
        xorl %ecx, %ecx
        xorw %dx, %dx
LBB1_2: # bb
        movl L_X$non_lazy_ptr, %esi
        movw %cx, (%esi)
        movl L_Y$non_lazy_ptr, %esi
        movw %dx, (%esi)
        addw $4, %dx
        incl %ecx
        cmpl %eax, %ecx
        jne LBB1_2 # bb

llvm-svn: 43375
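A hypothetical source loop reconstructed from the assembly above (not taken from the commit): X advances by 1 and Y by 4 per iteration. Since truncating i32 to i16 is free on x86, the 32-bit loop counter can double as the stride-1 induction variable, eliminating %si.

    short X, Y;

    void step(int n) {
      for (int i = 0; i < n; ++i) {
        X = (short)i;        // stride 1: reuses the loop counter after the fix
        Y = (short)(4 * i);  // stride 4: keeps its own 16-bit IV (%dx)
      }
    }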
- Oct 22, 2007
Dan Gohman authored
by the recent {U,S}MUL_LOHI changes. llvm-svn: 43230
Evan Cheng authored
llvm-svn: 43212
- Oct 21, 2007
Dale Johannesen authored
Fixes 5550319. llvm-svn: 43205
- Oct 20, 2007
Evan Cheng authored
llvm-svn: 43194
- Oct 19, 2007
Evan Cheng authored
Turn a store folding instruction into a load folding instruction. e.g.

        xorl %edi, %eax
        movl %eax, -32(%ebp)
        movl -36(%ebp), %eax
        orl %eax, -32(%ebp)
=>
        xorl %edi, %eax
        orl -36(%ebp), %eax
        mov %eax, -32(%ebp)

This enables the unfolding optimization for a subsequent instruction which will also eliminate the newly introduced store instruction. llvm-svn: 43192
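A hedged source-level reconstruction of the pattern above (illustrative, not from the commit):

    // The xor result lands in a stack slot, then gets or'ed with another
    // slot. Folding the load into the or (rather than folding the store)
    // leaves a plain trailing store, which a later unfolding pass can
    // remove if the value stays in a register.
    int combine(int a, int d, int e) {
      int t = a ^ d;   // xorl %edi, %eax
      t |= e;          // orl -36(%ebp), %eax after the change
      return t;
    }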
Rafael Espindola authored
To do this it is necessary to add an "always inline" argument to the memcpy node. For completeness I have also added this argument to the memmove and memset nodes. I have also added getMem* functions, because the extra argument makes it cumbersome to use getNode and because I get confused by it :-) llvm-svn: 43172
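A hedged sketch with simplified stand-in types, not the actual 2007 SelectionDAG API: it shows why a dedicated getMem* helper beats a generic getNode call once an AlwaysInline flag joins the operand list.

    struct SDVal { int Id = 0; };

    // Generic builder: operands are positional, so a bool flag is easy to
    // misplace among them.
    template <typename... Ops>
    SDVal getNode(unsigned Opcode, Ops...) {
      return SDVal{static_cast<int>(Opcode)};
    }

    // Dedicated helper: each argument is named once, and AlwaysInline
    // cannot be silently swapped with the alignment operand.
    SDVal getMemcpy(SDVal Chain, SDVal Dst, SDVal Src, SDVal Size,
                    unsigned Align, bool AlwaysInline) {
      (void)Align;
      return getNode(/*placeholder MEMCPY opcode*/ 1u,
                     Chain, Dst, Src, Size, AlwaysInline);
    }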
Evan Cheng authored
- Added getOpcodeAfterMemoryUnfold(). It doesn't unfold an instruction, but only returns the opcode of the instruction post unfolding.
- Fix some copy+paste bugs.
llvm-svn: 43153
- Oct 18, 2007
Evan Cheng authored
llvm-svn: 43150