- Mar 02, 2008
Andrew Lenharth authored
llvm-svn: 47800
-
- Mar 01, 2008
Andrew Lenharth authored
llvm-svn: 47799
-
Andrew Lenharth authored
llvm-svn: 47798
-
Andrew Lenharth authored
Add lock prefix support to x86. Also add the instructions necessary for the atomic ops. They are still marked pseudo, since I cannot figure out what format to use, but they are the correct opcode. llvm-svn: 47795
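A minimal C sketch of the kind of lock-prefixed read-modify-write involved, assuming x86 and a GCC-compatible compiler; the helper name atomic_fetch_add_i32 is illustrative only:

    /* Sketch only: a lock-prefixed xadd, the sort of instruction the new
       pseudo opcodes stand for. */
    static int atomic_fetch_add_i32(int *p, int v) {
        __asm__ __volatile__("lock; xaddl %0, %1"
                             : "+r"(v), "+m"(*p)
                             :
                             : "memory");
        return v;   /* xadd leaves the previous value of *p in the register */
    }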
-
- Feb 21, 2008
Andrew Lenharth authored
Atomic op support. If any gcc test uses __sync builtins, it might start failing on archs that haven't implemented them yet. llvm-svn: 47430
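A minimal C example of the __sync builtins in question, assuming a GCC-compatible compiler on a target with the support described above:

    #include <stdio.h>

    static int counter = 0;

    int main(void) {
        int old = __sync_fetch_and_add(&counter, 5);              /* atomic add, returns the old value */
        int ok  = __sync_bool_compare_and_swap(&counter, 5, 42);  /* compare-and-swap 5 -> 42 */
        printf("old=%d ok=%d counter=%d\n", old, ok, counter);    /* old=0 ok=1 counter=42 */
        return 0;
    }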
-
- Feb 20, 2008
Evan Cheng authored
llvm-svn: 47400
-
Evan Cheng authored
llvm-svn: 47351
-
- Feb 19, 2008
Chris Lattner authored
This compiles test-nofold.ll into:

    _test:
        movl    $15, %ecx
        andl    4(%esp), %ecx
        testl   %ecx, %ecx
        movl    $42, %eax
        cmove   %ecx, %eax
        ret

instead of:

    _test:
        movl    4(%esp), %eax
        movl    %eax, %ecx
        andl    $15, %ecx
        testl   $15, %eax
        movl    $42, %eax
        cmove   %ecx, %eax
        ret

llvm-svn: 47330
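The .ll source is not reproduced above; a rough C equivalent of the pattern being compiled (names and constants are only inferred from the assembly) would be:

    /* Select on (x & 15), reusing the masked value: one andl feeds both
       the testl and the cmove in the new code sequence. */
    unsigned test(unsigned x) {
        unsigned t = x & 15;
        return t == 0 ? t : 42;
    }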
-
- Feb 07, 2008
Evan Cheng authored
Before:

    _main:
        subq    $8, %rsp
        leaq    _X(%rip), %rax
        movsd   8(%rax), %xmm1
        movss   _X(%rip), %xmm0
        call    _t
        xorl    %ecx, %ecx
        movl    %ecx, %eax
        addq    $8, %rsp
        ret

Now:

    _main:
        subq    $8, %rsp
        movsd   _X+8(%rip), %xmm1
        movss   _X(%rip), %xmm0
        call    _t
        xorl    %ecx, %ecx
        movl    %ecx, %eax
        addq    $8, %rsp
        ret

Notice there is another idiotic codegen issue that needs to be fixed asap:

        xorl    %ecx, %ecx
        movl    %ecx, %eax

llvm-svn: 46850
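A hypothetical reconstruction of the source, with the layout of X and the signature of t inferred only from the assembly (float at offset 0, double at offset 8):

    struct S { float f; double d; };
    struct S X = { 1.0f, 2.0 };

    void t(float a, double b);

    int main(void) {
        t(X.f, X.d);   /* movss _X(%rip), %xmm0 and movsd _X+8(%rip), %xmm1 */
        return 0;
    }

    void t(float a, double b) { (void)a; (void)b; }   /* stub so the sketch links; external in the real test */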
-
- Feb 03, 2008
Nate Begeman authored
llvm-svn: 46681
-
- Jan 23, 2008
Duncan Sands authored
precision integers. This won't actually work (and most of the code is dead) unless the new legalization machinery is turned on. While there, I rationalized the handling of i1, and removed some bogus (and unused) sextload patterns. For i1, this could result in microscopically better code for some architectures (not X86). It might also result in worse code if annotating with AssertZExt nodes turns out to be more harmful than helpful. llvm-svn: 46280
-
- Jan 17, 2008
Chris Lattner authored
1. Legalize now always promotes truncstore of i1 to i8.
2. Remove patterns and gunk related to truncstore i1 from targets.
3. Rename the StoreXAction stuff to TruncStoreAction in TLI.
4. Make the TLI TruncStoreAction table a 2d table to handle from/to conversions.
5. Mark a wide variety of invalid truncstores as such in various targets, e.g.
   X86 currently doesn't support truncstore of any of its integer types.
6. Add legalize support for truncstores with invalid value input types.
7. Add a dag combine transform to turn store(truncate) into truncstore when safe.

The latter allows us to compile CodeGen/X86/storetrunc-fp.ll to:

    _foo:
        fldt    20(%esp)
        fldt    4(%esp)
        faddp   %st(1)
        movl    36(%esp), %eax
        fstps   (%eax)
        ret

instead of:

    _foo:
        subl    $4, %esp
        fldt    24(%esp)
        fldt    8(%esp)
        faddp   %st(1)
        fstps   (%esp)
        movl    40(%esp), %eax
        movss   (%esp), %xmm0
        movss   %xmm0, (%eax)
        addl    $4, %esp
        ret

llvm-svn: 46140
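A C-level sketch of roughly what CodeGen/X86/storetrunc-fp.ll exercises, assuming the usual x87 long double on 32-bit x86:

    /* The sum is truncated to float only at the store, which is exactly the
       store(truncate) -> truncstore case handled by the new dag combine. */
    void foo(long double a, long double b, float *p) {
        *p = (float)(a + b);
    }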
-
- Jan 15, 2008
Chris Lattner authored
Move definition of 'trap' sdnode up from x86 instrinfo to targetselectiondag.td. llvm-svn: 46017
-
Chris Lattner authored
llvm-svn: 46015
-
Anton Korobeynikov authored
llvm-svn: 46012
-
Anton Korobeynikov authored
as well as PPC codegen. llvm-svn: 46001
-
- Jan 11, 2008
Chris Lattner authored
llvm-svn: 45870
-
Chris Lattner authored
llvm-svn: 45860
-
- Jan 10, 2008
Chris Lattner authored
llvm-svn: 45838
-
Chris Lattner authored
x86 backend where instructions were not marked maystore/mayload, and perf issues where instructions were not marked neverHasSideEffects. It would be really nice if we could write patterns for copy instructions. I have audited all the x86 instructions down to MOVDQAmr. The flags on others and on other targets are probably not right in all cases, but no clients currently use this info that are enabled by default. llvm-svn: 45829
-
Chris Lattner authored
llvm-svn: 45826
-
Chris Lattner authored
inferred from the instr patterns. llvm-svn: 45824
-
Chris Lattner authored
llvm-svn: 45821
-
- Jan 07, 2008
Chris Lattner authored
llvm-svn: 45668
-
Chris Lattner authored
llvm-svn: 45667
-
- Jan 05, 2008
Chris Lattner authored
llvm-svn: 45618
-
Evan Cheng authored
llvm-svn: 45605
-
- Dec 29, 2007
Chris Lattner authored
llvm-svn: 45418
-
- Dec 22, 2007
Evan Cheng authored
llvm-svn: 45307
-
- Dec 18, 2007
Bill Wendling authored
based what flag to set on whether it was already marked as "isRematerializable". If there was a further check to determine if it's "really" rematerializable, then I marked it as "mayHaveSideEffects" and created a check in the X86 back-end similar to the remat one. llvm-svn: 45132
-
- Dec 14, 2007
Evan Cheng authored
llvm-svn: 45037
-
Dan Gohman authored
llvm-svn: 45030
-
Evan Cheng authored
Fix ctlz and cttz. The llvm definition requires them to return the number of bits in the src type when the value is zero. llvm-svn: 45029
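A portable C model of the required semantics, where a zero input yields the bit width of the source type (32 for uint32_t):

    #include <stdint.h>

    static unsigned ctlz32(uint32_t x) {
        unsigned n = 0;
        for (uint32_t m = 1u << 31; m != 0 && (x & m) == 0; m >>= 1)
            ++n;
        return n;   /* ctlz32(0) == 32 */
    }

    static unsigned cttz32(uint32_t x) {
        unsigned n = 0;
        for (uint32_t m = 1u; m != 0 && (x & m) == 0; m <<= 1)
            ++n;
        return n;   /* cttz32(0) == 32 */
    }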
-
Evan Cheng authored
llvm-svn: 45024
-
- Dec 13, 2007
Evan Cheng authored
llvm-svn: 44970
-
Evan Cheng authored
Implicit def instructions, e.g. X86::IMPLICIT_DEF_GR32, are always re-materializable and they should not be spilled. llvm-svn: 44960
-
- Nov 13, 2007
Bill Wendling authored
llvm-svn: 44045
-
Bill Wendling authored
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If not, then there is the potential for the stack to be changed while the stack's being used by another instruction (like a call). This can only result in tears... llvm-svn: 44037
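A small C sketch of the situation being guarded against, assuming alloca.h is available; the callee name consume is illustrative:

    #include <alloca.h>
    #include <string.h>
    #include <stdio.h>

    static void consume(void *p, unsigned n) {
        printf("%u bytes at %p\n", n, p);
    }

    void f(unsigned n) {
        /* The alloca lowers to a dynamic_stackalloc; per the commit above, it
           must be bracketed by CALLSEQ_START/CALLSEQ_END so the allocation
           cannot interleave with the stack adjustments made for calls such as
           consume() below. */
        char *buf = alloca(n);
        memset(buf, 0, n);
        consume(buf, n);
    }

    int main(void) {
        f(64);
        return 0;
    }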
-
- Nov 12, 2007
Owen Anderson authored
Target maintainers: please check that the instructions for your target are correctly marked. llvm-svn: 44012
-
- Oct 19, 2007
Evan Cheng authored
Turn a store folding instruction into a load folding instruction. e.g.

    xorl    %edi, %eax
    movl    %eax, -32(%ebp)
    movl    -36(%ebp), %eax
    orl     %eax, -32(%ebp)
=>
    xorl    %edi, %eax
    orl     -36(%ebp), %eax
    mov     %eax, -32(%ebp)

This enables the unfolding optimization for a subsequent instruction which will also eliminate the newly introduced store instruction.

llvm-svn: 43192
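A loose source-level sketch of the kind of stack-slot read-modify-write involved (the commit itself only shows assembly; names are illustrative):

    int f(int x, int y, int z) {
        volatile int slot = x ^ y;   /* xorl, then a store to the stack slot */
        slot |= z;                   /* formerly an orl with a memory destination */
        return slot;
    }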
-