  3. Mar 19, 2013
    • [ms-inline asm] Move the immediate asm rewrite into the target-specific logic as a QOI cleanup. · f3c04f6a
      Chad Rosier authored
      No functional change. Tests already in place.
      rdar://13456414
      
      llvm-svn: 177446
    • Annotate X86InstrCompiler.td with SchedRW lists. · 9bd6b8bd
      Jakob Stoklund Olesen authored
      Add a new WriteZero SchedWrite type for the common dependency-breaking
      instructions that clear a register.
      
      llvm-svn: 177442
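      A minimal sketch of the mechanism (hedged: the SchedWrite declaration
      belongs in X86Schedule.td, and the annotated instruction here is only
      illustrative):

        // Declare a SchedWrite type for dependency-breaking zero idioms.
        def WriteZero : SchedWrite;

        // Annotating a definition in X86InstrCompiler.td then looks like:
        //   let SchedRW = [WriteZero] in
        //   def V_SET0 : ...;  // e.g. a pxor/xorps zero idiom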
    • [ms-inline asm] Create a helper function, CreateMemForInlineAsm, that creates an X86Operand but also performs a Sema lookup and adds the sizing directive when appropriate. · 7ca135b2
      Chad Rosier authored
      Use this when parsing a bracketed statement. This is necessary for correct
      instruction matching as well. Test case coming on the clang side.
      rdar://13455408
      
      llvm-svn: 177439
    • Add missing mayLoad flag to LHAUX8 and LWAUX. · 01dd4c1a
      Ulrich Weigand authored
      All pre-increment load patterns need to set the mayLoad flag explicitly
      (since they provide no DAG pattern from which it could be inferred).

      The flag was missing for LHAUX8 and LWAUX; this patch adds it.
      
      llvm-svn: 177431
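      A hedged sketch of the fix (simplified from PPCInstr64Bit.td; the exact
      class arguments may differ):

        // With an empty pattern list, TableGen cannot infer mayLoad,
        // so the flag is set explicitly on the definition.
        let mayLoad = 1 in
        def LHAUX8 : XForm_1<31, 375, (outs g8rc:$rD, ptr_rc:$ea_result),
                             (ins memrr:$addr), "lhaux $rD, $addr",
                             LdStLHAU, []>;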
    • Rewrite LHAU8 pattern to use standard memory operand. · f8030096
      Ulrich Weigand authored
      As opposed to the pre-increment store patterns, the pre-increment
      load patterns were already using standard memory operands, with
      the sole exception of LHAU8.
      
      As there's no real reason why LHAU8 should be different here,
      this patch simply rewrites the pattern to also use a memri
      operand, just like all the other patterns.
      
      llvm-svn: 177430
    • Rewrite pre-increment store patterns to use standard memory operands. · d850167a
      Ulrich Weigand authored
      Currently, pre-increment store patterns are written to use two separate
      operands to represent address base and displacement:
      
        stwu $rS, $ptroff($ptrreg)
      
      This causes problems when implementing the assembler parser, so this
      commit changes the patterns to use standard (complex) memory operands
      like in all other memory access instruction patterns:
      
        stwu $rS, $dst
      
      To still match those instructions against the appropriate pre_store
      SelectionDAG nodes, the patch uses the new feature that allows a Pat
      to match multiple DAG operands against a single (complex) instruction
      operand.
      
      Approved by Hal Finkel.
      
      llvm-svn: 177429
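      A hedged sketch of the resulting pattern shape (simplified; see
      PPCInstrInfo.td for the real definitions):

        // The instruction now takes a single complex memri operand ($dst);
        // the Pat matches the pre_store node's separate offset and base
        // operands against that one operand's sub-operands.
        def STWU : DForm_1<37, (outs ptr_rc:$ea_res), (ins gprc:$rS, memri:$dst),
                           "stwu $rS, $dst", LdStStoreUpd, []>,
                   RegConstraint<"$dst.reg = $ea_res">;
        def : Pat<(pre_store i32:$rS, iPTR:$ptrreg, iaddroff:$ptroff),
                  (STWU $rS, iaddroff:$ptroff, $ptrreg)>;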
    • Fix sub-operand size mismatch in tocentry operands. · fd24544f
      Ulrich Weigand authored
      The tocentry operand class refers to 64-bit values (it is only used in
      64-bit mode, where iPTR is a 64-bit type), but its sole sub-operand is
      designated as a 32-bit type.  This causes a mismatch to be detected at
      compile time with the TableGen patch I'll check in shortly.
      
      To fix this, this commit changes the suboperand to a 64-bit type as well.
      
      llvm-svn: 177427
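      The change itself, sketched (from PPCInstr64Bit.td; surrounding fields
      omitted):

        def tocentry : Operand<iPTR> {
          let MIOperandInfo = (ops i64imm:$imm);  // previously i32imm:$imm
        }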
    • Remove an invalid and unnecessary Pat pattern from the X86 backend: · 80d9ad39
      Ulrich Weigand authored
        def : Pat<(load (i64 (X86Wrapper tglobaltlsaddr :$dst))),
                  (MOV64rm tglobaltlsaddr :$dst)>;
      
      This pattern is invalid because the MOV64rm instruction expects a
      source operand of type "i64mem", which is a subclass of X86MemOperand
      and thus actually consists of five MI operands, but the Pat provides
      only a single MI operand ("tglobaltlsaddr" matches an SDnode of
      type ISD::TargetGlobalTLSAddress and provides a single output).
      
      Thus, if the pattern were ever matched, subsequent uses of the MOV64rm
      instruction pattern would access uninitialized memory.  In addition,
      with the TableGen patch I'm about to check in, this would actually be
      reported as a build-time error.
      
      Fortunately, the pattern in fact never matches, for at least two
      independent reasons.
      
      First, the code generator actually never generates a pattern of the
      form (load (X86Wrapper (tglobaltlsaddr))).  For most combinations of
      TLS and code models, (tglobaltlsaddr) represents just an offset that
      needs to be added to some base register, so it is never directly
      dereferenced.  The only exception is the initial-exec model, where
      (tglobaltlsaddr) refers to the (pc-relative) address of a GOT slot,
      which *is* in fact directly dereferenced: but in that case, the
      X86WrapperRIP node is used, not X86Wrapper, so the Pat doesn't match.
      
      Second, even if a pattern along those lines *were* ever generated, we
      should not need an extra Pat pattern to match it.  Instead, the
      original MOV64rm instruction pattern ought to match directly, since
      it uses an "addr" operand, which is implemented via the SelectAddr
      C++ routine; this routine is supposed to accept the full range of
      input DAGs that may be implemented by a single mov instruction,
      including those cases involving ISD::TargetGlobalTLSAddress (and
      actually does so e.g. in the initial-exec case as above).
      
      To avoid build breaks (due to the above-mentioned error) after the
      TableGen patch is checked in, I'm removing this Pat here.
      
      llvm-svn: 177426
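      For reference, a rough sketch (not verbatim) of why an i64mem operand
      consists of five MI operands:

        // X86 memory operands decompose into base register, scale, index
        // register, displacement, and segment sub-operands.
        def i64mem_sketch : Operand<i64> {
          let MIOperandInfo = (ops ptr_rc, i8imm, ptr_rc_nosp, i32imm, i8imm);
        }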
    • Prepare to make r0 an allocatable register on PPC · 638a9fa4
      Hal Finkel authored
      Currently the PPC r0 register is unconditionally reserved. There are two reasons
      for this:
      
       1. r0 is treated specially (as the constant 0) by certain instructions, and so
          cannot be used with those instructions as a regular register.
      
       2. r0 is used as a temporary register in the CR-register spilling process
          (where, under some circumstances, we require two GPRs).
      
      This change addresses the first reason by introducing a restricted register
      class (without r0) for use by those instructions that treat r0 specially. These
      register classes have a new pseudo-register, ZERO, which represents the r0-as-0
      use. This has the side benefit of making the existing target code simpler (and
      easier to understand), and will make it clear to the register allocator that
      uses of r0 as 0 don't conflict with real uses of the r0 register.
      
      Once the CR spilling code is improved, we'll be able to allocate r0.
      
      Adding these extra register classes, for some reason unclear to me, causes
      requests to the target to copy 32-bit registers to 64-bit registers. The
      resulting code seems correct (and causes no test-suite failures), and the new
      test case covers this new kind of asymmetric copy.
      
      As r0 is still reserved, no functionality change intended.
      
      llvm-svn: 177423
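      A hedged sketch of the register-class change (names follow the
      description above; details simplified from PPCRegisterInfo.td):

        // Pseudo-register modeling the r0-as-constant-0 reading.
        def ZERO : GPR<0, "0">;

        // Restricted class without R0, for instructions that treat r0 specially.
        def GPRC_NOR0 : RegisterClass<"PPC", [i32], 32, (add (sub GPRC, R0), ZERO)>;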
    • Optimize sext <4 x i8> and <4 x i16> to <4 x i64>. · 0f1bc60d
      Nadav Rotem authored
      Patch by Ahmad, Muhammad T <muhammad.t.ahmad@intel.com>
      
      llvm-svn: 177421
    • Annotate X86InstrExtension.td with SchedRW lists. · af39940b
      Jakob Stoklund Olesen authored
      llvm-svn: 177418
    • Annotate a lot of X86InstrInfo.td with SchedRW lists. · caf3d89f
      Jakob Stoklund Olesen authored
      llvm-svn: 177417
    • [ms-inline asm] Move the size directive asm rewrite into the target-specific logic as a QOI cleanup. · 120eefd1
      Chad Rosier authored
      rdar://13445327
      
      llvm-svn: 177413
    • Cleanup PPC64 unaligned i64 load/store · 66814863
      Hal Finkel authored
      Remove an accidentally-added instruction definition and add a comment in the
      test case. This is in response to a post-commit review by Bill Schmidt.
      
      No functionality change intended.
      
      llvm-svn: 177404
    • Improve long vector sext/zext lowering on ARM · 227eb6fc
      Renato Golin authored
      The ARM backend currently has poor codegen for long sext/zext
      operations, such as v8i8 -> v8i32. This patch addresses this
      by performing a custom expansion in ARMISelLowering. It also
      adds/changes the cost of such lowering in ARMTTI.
      
      This partially addresses PR14867.
      
      Patch by Pete Couperus
      
      llvm-svn: 177380
    • Don't reserve R31 on PPC64 unless the frame pointer is needed · d9e10d51
      Hal Finkel authored
      llvm-svn: 177379
    • Fix a sign-extension bug in PPCCTRLoops · fc9aad64
      Hal Finkel authored
      Don't sign extend the immediate value from the OR instruction in
      an LIS/OR pair.
      
      llvm-svn: 177361
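      (Illustrative arithmetic, not from the commit: in a pair like lis 1 /
      ori 0x8000, the intended count is 0x10000 | 0x8000 = 0x18000; since the
      hardware zero-extends ORI's immediate, sign-extending it to 0xFFFF8000
      during the analysis would corrupt the computed count.)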
    • [ms-inline asm] Avoid emitting a redundant sizing directive if we've already parsed one. · 2707d534
      Chad Rosier authored
      Test case coming shortly.
      rdar://13446980
      
      llvm-svn: 177347
    • Fix PPC unaligned 64-bit loads and stores · b09680b0
      Hal Finkel authored
      PPC64 supports unaligned loads and stores of 64-bit values, but
      in order to use the r+i forms, the offset must be a multiple of 4.
      Unfortunately, this cannot always be determined by examining the
      immediate itself because it might be available only via a TOC entry.
      
      In order to get around this issue, we additionally predicate the
      selection of the r+i form on the alignment of the load or store
      (forcing it to be at least 4 in order to select the r+i form).
      
      llvm-svn: 177338
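      One way to express such a predicate, as a hedged sketch (the actual
      commit may implement the check elsewhere, e.g. during address selection):

        // Only match loads whose alignment permits the r+i (DS) form.
        def aligned4load : PatFrag<(ops node:$ptr), (load node:$ptr), [{
          return cast<LoadSDNode>(N)->getAlignment() >= 4;
        }]>;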
  4. Mar 18, 2013
    • ARM cost model: Make some vector integer to float casts cheaper · ae0052f1
      Arnold Schwaighofer authored
      The default logic marks them as too expensive.
      
      For example, before this patch we estimated:
        cost of 16 for instruction:   %r = uitofp <4 x i16> %v0 to <4 x float>
      
      While this translates to:
        vmovl.u16 q8, d16
        vcvt.f32.u32  q8, q8
      
      All other costs are left to the values assigned by the fallback logic.  These
      costs are mostly reasonable in the sense that they get progressively more
      expensive as the instruction sequences emitted get longer.
      
      radar://13445992
      
      llvm-svn: 177334
    • ARM cost model: Correct cost for some cheap float to integer conversions · 6c9c3a8b
      Arnold Schwaighofer authored
      Fix cost of some "cheap" cast instructions. Before this patch we used to
      estimate for example:
        cost of 16 for instruction:   %r = fptoui <4 x float> %v0 to <4 x i16>
      
      While we would emit:
        vcvt.s32.f32  q8, q8
        vmovn.i32 d16, q8
        vuzp.8  d16, d17
      
      All other costs are left to the values assigned by the fallback logic.  These
      costs are mostly reasonable in the sense that they get progressively more
      expensive as the instruction sequences emitted get longer.
      
      radar://13434072
      
      llvm-svn: 177333
    • Add SchedRW annotations to most of X86InstrSSE.td. · a5158c8f
      Jakob Stoklund Olesen authored
      We hitch a ride with the existing OpndItins class that was used to add
      instruction itinerary classes in the many multiclasses in this file.
      
      Use the link provided by the X86FoldableSchedWrite.Folded to find the
      right SchedWrite for folded loads.
      
      llvm-svn: 177326
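      A sketch of the link being used (simplified from X86Schedule.td):

        // A foldable SchedWrite pairs the register-form write with the
        // variant to use when a load is folded into the instruction.
        class X86FoldableSchedWrite : SchedWrite {
          SchedWrite Folded;
        }
        // Multiclasses can then use e.g. WriteFAdd for the register form
        // and WriteFAdd.Folded for the load-folded form.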
    • Annotate X86 arithmetic instructions with SchedRW lists. · e2289b78
      Jakob Stoklund Olesen authored
      This new-style scheduling information is going to replace the
      instruction itineraries.
      
      This also serves as a test case for Andy's fix in r177317.
      
      llvm-svn: 177323
    • Fix 80-col. violations in PPCCTRLoops · e8f1cf47
      Hal Finkel authored
      llvm-svn: 177296
    • Fix large count and negative constant count handling in PPCCTRLoops · 21f2a43a
      Hal Finkel authored
      This commit fixes an assert that would occur on loops with large constant counts
      (like looping for ((uint32_t) -1) iterations on PPC64). The existing code did
      not handle counts that it computed to be negative (asserting instead), but
      these can be created with valid inputs.
      
      This bug was discovered by bugpoint while I was attempting to isolate a
      completely different problem.
      
      Also, in writing test cases for the negative-count problem, I discovered
      that the ori/lis handling was broken: a typo caused the logic that was
      supposed to detect these pairs and extract the iteration count to always
      fail. This has now also been corrected (and is covered by one of the new
      test cases).
      
      llvm-svn: 177295
    • Cleanup initial-value constants in PPCCTRLoops · 12337e4e
      Hal Finkel authored
      Because the initial-value constants had not been added to the list of
      instructions considered for DCE, the resulting code contained redundant
      constant-materialization instructions.
      
      llvm-svn: 177294