  1. Jun 10, 2013
      X86: Stop LEA64_32r doing unspeakable things to its arguments. · 6833e3fd
      Tim Northover authored
      Previously LEA64_32r went through virtually the entire backend thinking it was
      using 32-bit registers until its blissful illusions were cruelly snatched away
      by MCInstLower and 64-bit equivalents were substituted at the last minute.
      
      This patch makes it behave normally, and take 64-bit registers as sources all
      the way through. Previous uses (for 32-bit arithmetic) are accommodated via
      SUBREG_TO_REG instructions which make the types and classes agree properly.
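A minimal Machine-IR sketch of the new shape (register names and the exact operand order are illustrative, not quoted from the patch):

```
; Before: LEA64_32r consumed 32-bit vregs and MCInstLower substituted
; 64-bit registers at emission time.
; After (sketch): the 32-bit source is widened explicitly first.
%base64:gr64 = SUBREG_TO_REG 0, %base32:gr32, %subreg.sub_32bit
; LEA64_32r memory operands: base, scale, index, displacement, segment
%dst:gr32 = LEA64_32r %base64, 1, $noreg, 4, $noreg
```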
      
      llvm-svn: 183693
  2. Jun 01, 2013
  3. May 10, 2013
      [ms-inline asm] Fix a crasher when we fail on a direct match. · c8569cba
      Chad Rosier authored
      The issue was that the MatchingInlineAsm and VariantID args to the
      MatchInstructionImpl function weren't being set properly.  Specifically, when
      parsing Intel syntax, the parser thought it was parsing inline assembly in the
      AT&T dialect; that will never be the case.
      
      The crash occurred when the emitter tried to emit the instruction but the
      operands weren't set.  When parsing inline assembly we set only the opcode,
      which is used to look up the instruction descriptor, not the operands.
      rdar://13854391 and PR15945
      
      Also, this commit reverts r176036.  Now that we're correctly parsing the Intel
      syntax, the pushad/popad fix no longer matches properly.  I've reimplemented
      that fix using a MnemonicAlias.
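A MnemonicAlias for this kind of fix looks roughly like the following TableGen sketch (the exact alias spellings used in the commit are an assumption):

```
// X86InstrInfo.td-style sketch: map the Intel-only mnemonics onto the
// instructions that already exist for the same opcodes.
def : MnemonicAlias<"pushad", "pusha">;
def : MnemonicAlias<"popad",  "popa">;
```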
      
      llvm-svn: 181620
  4. Apr 19, 2013
  5. Mar 29, 2013
  6. Mar 26, 2013
  7. Mar 21, 2013
  8. Mar 19, 2013
  9. Feb 25, 2013
  10. Feb 14, 2013
  11. Feb 01, 2013
      Two changes relevant to LEA and x32: · 8114a7a6
      David Sehr authored
      1) allows the use of RIP-relative addressing in 32-bit LEA instructions under
         x86-64 (ILP32 and LP64)
      2) separates the size of address registers in 64-bit LEA instructions from
         ILP32/LP64 control.
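In AT&T syntax the two cases look roughly like this (symbol and register choices are illustrative):

```
# (1) RIP-relative addressing in a 32-bit LEA under x86-64 (ILP32 or LP64):
leal  sym(%rip), %eax
# (2) a 64-bit LEA whose address registers stay 64-bit regardless of
#     whether the target data model is ILP32 (x32) or LP64:
leaq  (%rdi,%rsi,4), %rax
```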
      
      llvm-svn: 174208
  12. Jan 07, 2013
  13. Jan 02, 2013
  14. Dec 27, 2012
  15. Dec 26, 2012
  16. Dec 17, 2012
  17. Nov 14, 2012
  18. Nov 08, 2012
      Add support of RTM from TSX extension · 73cffddb
      Michael Liao authored
      - Add RTM code generation support through three X86 intrinsics:
        xbegin()/xend() to start/end a transaction region, and xabort() to abort a
        transaction region
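In LLVM IR these map onto intrinsics along the following lines (a sketch; the in-tree names and the convention that xbegin returns -1 when the transaction actually starts are my reading, not quoted from the patch):

```
declare i32  @llvm.x86.xbegin()
declare void @llvm.x86.xend()
declare void @llvm.x86.xabort(i8)   ; immediate abort code

define i32 @txn_incr(i32* %p) {
entry:
  %status  = call i32 @llvm.x86.xbegin()
  %started = icmp eq i32 %status, -1
  br i1 %started, label %txn, label %fallback
txn:                                ; transactional fast path
  %v  = load i32, i32* %p
  %v1 = add i32 %v, 1
  store i32 %v1, i32* %p
  call void @llvm.x86.xend()
  ret i32 0
fallback:                           ; abort status propagated to caller
  ret i32 %status
}
```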
      
      llvm-svn: 167573
  19. Oct 16, 2012
Add __builtin_setjmp/_longjmp support in X86 backend · 97bf363a
      Michael Liao authored
      - Besides their use in SjLj exception handling, __builtin_setjmp/__longjmp
        are also used as a lightweight replacement for setjmp/longjmp when
        implementing continuations, user-level threading, and the like. The support
        added in this patch ONLY addresses this usage and is NOT intended to support
        SjLj exception handling, as zero-cost DWARF exception handling is used by
        default on X86.
      
      llvm-svn: 165989
  20. Oct 09, 2012
  21. Sep 26, 2012
  22. Sep 21, 2012
  23. Sep 13, 2012
  24. Sep 11, 2012
  25. Aug 30, 2012
      Introduce 'UseSSEx' to force SSE legacy encoding · bbd10792
      Michael Liao authored
      - Add 'UseSSEx' to prevent SSE legacy insns from being selected when AVX is
        enabled.

        Because inter-mixing SSE and AVX instructions carries a penalty, we need to
        prevent SSE legacy insns from being generated except where explicitly
        requested through some intrinsics. For patterns supported by both SSE and
        AVX, we have so far forced the AVX insn to be tried first, relying on
        AddedComplexity or position in the td file. That approach is error-prone
        and accidentally introduces bugs.
      
        'UseSSEx' is disabled when AVX is turned on. For SSE insns inherited
        by AVX, we need this predicate to force VEX encoding or SSE legacy
        encoding only.
      
        For insns not inherited by AVX, we still use the previous predicates,
        i.e. 'HasSSEx'. So far, these insns fall into the following
        categories:
        * SSE insns with MMX operands
        * SSE insns with GPR/MEM operands only (xFENCE, PREFETCH, CLFLUSH,
          CRC, etc.)
        * SSE4A insns.
        * MMX insns.
        * x87 insns added by SSE.
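In TableGen terms the predicate pair looks roughly like this (a sketch modeled on X86InstrInfo.td; the exact feature-test spellings are an assumption):

```
// 'Use' form: selects the SSE legacy encoding only when AVX is off ...
def UseSSE2 : Predicate<"Subtarget->hasSSE2() && !Subtarget->hasAVX()">;
// ... while insns not inherited by AVX keep the plain 'Has' predicate.
def HasSSE2 : Predicate<"Subtarget->hasSSE2()">;
```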
      
      2 test cases are modified:
      
       - test/CodeGen/X86/fast-isel-x86-64.ll
         AVX code generation differs from the SSE version. 'vcvtsi2sdq' cannot be
         selected by fast-isel due to its complicated pattern, so fast-isel falls
         back to materializing it from the constant pool.
      
       - test/CodeGen/X86/widen_load-1.ll
         AVX code generation differs from the SSE version after fixing SSE/AVX
         inter-mixing. Exec-domain fixing prefers 'vmovapd' over 'vmovaps'.
      
      llvm-svn: 162919
  26. Aug 27, 2012
  27. Aug 24, 2012
  28. Jul 18, 2012
  29. Jul 12, 2012
  30. Jun 29, 2012
      X86: add more GATHER intrinsics in LLVM · 98a5bf24
      Manman Ren authored
      Corrected type for index of llvm.x86.avx2.gather.d.pd.256
        from 256-bit to 128-bit.
      Corrected types for src|dst|mask of llvm.x86.avx2.gather.q.ps.256
        from 256-bit to 128-bit.
      
      Support the following intrinsics:
        llvm.x86.avx2.gather.d.q, llvm.x86.avx2.gather.q.q
        llvm.x86.avx2.gather.d.q.256, llvm.x86.avx2.gather.q.q.256
        llvm.x86.avx2.gather.d.d, llvm.x86.avx2.gather.q.d
        llvm.x86.avx2.gather.d.d.256, llvm.x86.avx2.gather.q.d.256
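Each gather intrinsic takes the usual five gather operands; a sketch of one declaration follows (the operand order src, base pointer, index vector, mask, scale is my reading, not quoted from the patch):

```
; sketch: 32-bit-indexed gather of four i32 elements
declare <4 x i32> @llvm.x86.avx2.gather.d.d(<4 x i32> %src, i8* %base,
                                            <4 x i32> %index,
                                            <4 x i32> %mask, i8 %scale)
```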
      
      llvm-svn: 159402
  31. Jun 26, 2012
      X86: add GATHER intrinsics (AVX2) in LLVM · a0982041
      Manman Ren authored
      Support the following intrinsics:
      llvm.x86.avx2.gather.d.pd, llvm.x86.avx2.gather.q.pd
      llvm.x86.avx2.gather.d.pd.256, llvm.x86.avx2.gather.q.pd.256
      llvm.x86.avx2.gather.d.ps, llvm.x86.avx2.gather.q.ps
      llvm.x86.avx2.gather.d.ps.256, llvm.x86.avx2.gather.q.ps.256
      
      Modified Disassembler to handle VSIB addressing mode.
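VSIB is the addressing form in which the index register is a vector; in AT&T syntax a gather using it looks roughly like this (register choices are illustrative):

```
# base in %rdi, four 32-bit indices in %xmm1, per-element mask in %xmm2
vgatherdps %xmm2, (%rdi,%xmm1,4), %xmm0
```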
      
      llvm-svn: 159221