  1. Aug 21, 2013
  2. Aug 20, 2013
    • SLPVectorizer: Fix invalid iterator errors · e1f3ab69
      Arnold Schwaighofer authored
      Update the iterator when the SLP vectorizer changes the instructions in the
      basic block, by restarting the traversal of the basic block.
      
      Patch by Yi Jiang!
      
      Fixes PR16899.
      
      llvm-svn: 188832
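The restart-on-mutation pattern this fix applies is not LLVM-specific; a minimal stand-alone C++ sketch of the same idea, using a std::list of ints in place of a basic block of instructions (the pair-folding "transform" is invented purely for illustration):

```cpp
#include <cassert>
#include <iterator>
#include <list>

// Stand-in transform: fold two equal adjacent elements into their sum.
// It mutates the list, so iterators held by the caller may become stale.
static bool tryFoldPair(std::list<int> &bb, std::list<int>::iterator it) {
    auto next = std::next(it);
    if (next == bb.end() || *it != *next)
        return false;
    *next += *it;   // fold the pair into the second element
    bb.erase(it);   // invalidates 'it' (but not 'next', for a std::list)
    return true;
}

// Traverse the "block"; whenever a transform fires, restart the traversal
// from the beginning instead of advancing a possibly invalid iterator.
static std::list<int> foldAllPairs(std::list<int> bb) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (auto it = bb.begin(); it != bb.end(); ++it) {
            if (tryFoldPair(bb, it)) {
                changed = true;
                break;  // restart rather than keep using stale iterators
            }
        }
    }
    return bb;
}
```

The fix above does the analogous thing at the BasicBlock level: once vectorization changes the instruction stream, the walk is restarted instead of continuing from an invalidated iterator.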
    • Teach ConstantFolding about pointer address spaces · 7a960a84
      Matt Arsenault authored
      llvm-svn: 188831
    • [mips] Resolve register classes dynamically using ptr_rc to reduce the number of · 6781fc16
      Akira Hatanaka authored
      load/store instructions defined. Previously, we were defining load/store
      instructions for each pointer size (32 and 64-bit), but now we need just one
      definition.
      
      llvm-svn: 188830
    • Add an option which permits the user to specify, using a bitmask, that various · d8f33625
      Reed Kotler authored
      functions be compiled as mips32, without having to add attributes. This
      is useful in certain situations where you don't want to have to edit the
      function attributes in the source. For now it's an option used only by
      compiler developers when debugging the mips16 port.
      
      llvm-svn: 188826
    • [mips] Guard micromips instructions with predicate InMicroMips. Also, fix · a43b56d9
      Akira Hatanaka authored
      assembler predicate HasStdEnc so that it is false when the target is micromips.
      
      llvm-svn: 188824
    • ARM: Fix fast-isel copy/paste-o. · 71a78f96
      Jim Grosbach authored
      Update testcase to be more careful about checking register
      values. While regexes are general goodness for these sorts of
      testcases, in this example, the registers are constrained by
      the calling convention, so we can and should check their
      explicit values.
      
      rdar://14779513
      
      llvm-svn: 188819
    • Fix style issues in AsmParser.cpp · 9bad0d33
      Vladimir Medic authored
      llvm-svn: 188798
    • AVX-512: Added more patterns for VMOVSS, VMOVSD, VMOVD, VMOVQ · 540d5825
      Elena Demikhovsky authored
      llvm-svn: 188786
    • [mips][msa] Removed fcge, fcgt, fsge, fsgt · 4260527f
      Daniel Sanders authored
      These instructions were present in a draft spec but were removed before
      publication.
      
      llvm-svn: 188782
    • [SystemZ] Update README · 2bf7b8cc
      Richard Sandiford authored
      We now use MVST, CLST and SRST for the obvious cases.
      
      llvm-svn: 188781
    • [SystemZ] Use SRST to optimize memchr · 6f6d5516
      Richard Sandiford authored
      SystemZTargetLowering::emitStringWrapper() previously loaded the character
      into R0 before the loop and made R0 live on entry.  I'd forgotten that
      allocatable registers weren't allowed to be live across blocks at this stage,
      and it confused LiveVariables enough to cause a miscompilation of f3 in
      memchr-02.ll.
      
      This patch instead loads R0 in the loop and leaves LICM to hoist it
      after RA.  This is actually what I'd tried originally, but I went for
      the manual optimisation after noticing that R0 often wasn't being hoisted.
      This bug forced me to go back and look at why, now fixed as r188774.
      
      We should also try to optimize null checks so that they test the CC result
      of the SRST directly.  The select between null and the SRST GPR result could
      then usually be deleted as dead.
      
      llvm-svn: 188779
    • memcmp is not a valid way to compare structs with padding in them. · 5a712501
      Benjamin Kramer authored
      llvm-svn: 188778
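The hazard behind this one-line commit is general C/C++ behavior: member assignments do not write padding bytes, so two structs that are equal member by member can still differ under memcmp. A small illustration (the struct and field names are invented for the example):

```cpp
#include <cassert>
#include <cstring>

struct Entry {   // with 4-byte int alignment: 1-byte tag, 3 padding bytes, 4-byte value
    char tag;
    int value;
};

// Correct comparison: member by member, ignoring padding bytes.
static bool entryEquals(const Entry &a, const Entry &b) {
    return a.tag == b.tag && a.value == b.value;
}

// Returns true when two member-wise-equal Entries still differ under
// memcmp because their padding bytes were filled differently.
static bool paddingFoolsMemcmp() {
    Entry a, b;
    std::memset(&a, 0x00, sizeof a);  // padding bytes of a <- 0x00
    std::memset(&b, 0xFF, sizeof b);  // padding bytes of b <- 0xFF
    a.tag = b.tag = 'x';              // member writes leave padding alone
    a.value = b.value = 42;
    return entryEquals(a, b) && std::memcmp(&a, &b, sizeof a) != 0;
}
```

On an ABI where Entry happens to have no padding the two comparisons would agree; the commit's point is that memcmp equality can only be relied on for padding-free (or explicitly zeroed) layouts.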
    • [mips][msa] Added insve · f2a0f1d1
      Daniel Sanders authored
      llvm-svn: 188777
    • Fix overly pessimistic shortcut in post-RA MachineLICM · 96aa93d5
      Richard Sandiford authored
      Post-RA LICM keeps three sets of registers: PhysRegDefs, PhysRegClobbers
      and TermRegs.  When it sees a definition of R it adds all aliases of R
      to the corresponding set, so that when it needs to test for membership
      it only needs to test a single register, rather than worrying about
      aliases there too.  E.g. the final candidate loop just has:
      
          unsigned Def = Candidates[i].Def;
          if (!PhysRegClobbers.test(Def) && ...) {
      
      to test whether register Def is multiply defined.
      
      However, there was also a shortcut in ProcessMI to make sure we didn't
      add candidates if we already knew that they would fail the final test.
      This shortcut was more pessimistic than the final one because it
      checked whether _any alias_ of the defined register was multiply defined.
      This is too conservative for targets that define register pairs.
      E.g. on z, R0 and R1 are sometimes used as a pair, so there is a
      128-bit register that aliases both R0 and R1.  If a loop used
      R0 and R1 independently, and the definition of R0 came first,
      we would be able to hoist the R0 assignment (because that used
      the final test quoted above) but not the R1 assignment (because
      that meant we had two definitions of the paired R0/R1 register
      and would fail the shortcut in ProcessMI).
      
      This patch just uses the same check for the ProcessMI shortcut as
      we use in the final candidate loop.
      
      llvm-svn: 188774
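The difference between the exact test and the old shortcut can be modelled in a few lines: record every definition on a register and all of its aliases, then compare "is this exact register multiply defined?" against "is any alias of it multiply defined?". A stand-alone sketch with an invented four-register file plus two pair registers, mirroring the R0/R1 example above (register names and the alias table are illustrative, not a real target description):

```cpp
#include <bitset>
#include <cassert>
#include <vector>

// Toy register file: R0..R3, plus pair registers P0 (= R0:R1), P1 (= R2:R3).
enum Reg { R0, R1, R2, R3, P0, P1, NumRegs };

// Aliases[r] lists r itself plus every register that overlaps it.
static const std::vector<std::vector<Reg>> Aliases = {
    {R0, P0}, {R1, P0}, {R2, P1}, {R3, P1},  // R0..R3
    {P0, R0, R1}, {P1, R2, R3},              // P0, P1
};

struct DefTracker {
    std::bitset<NumRegs> Defs;       // registers defined at least once
    std::bitset<NumRegs> MultiDefs;  // registers defined more than once
    void recordDef(Reg r) {
        for (Reg a : Aliases[r]) {   // defining r also defines its aliases
            if (Defs.test(a))
                MultiDefs.set(a);
            Defs.set(a);
        }
    }
    // The exact test used in the final candidate loop.
    bool multiplyDefined(Reg r) const { return MultiDefs.test(r); }
    // The old ProcessMI shortcut: pessimistic for paired registers.
    bool anyAliasMultiplyDefined(Reg r) const {
        for (Reg a : Aliases[r])
            if (MultiDefs.test(a))
                return true;
        return false;
    }
};
```

After a loop that defines R0 once and R1 once, the exact test accepts both, while the alias-based shortcut rejects them because the shared pair register P0 has been "defined" twice; the patch makes ProcessMI ask the exact question.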
    • ARM: implement some simple f64 materializations. · f79c3a5a
      Tim Northover authored
      Previously we used a const-pool load for virtually all 64-bit floating-point values.
      Actually, we can get quite a few common values (including 0.0, 1.0) via "vmov"
      instructions of one stripe or another.
      
      llvm-svn: 188773
    • [stackprotector] Small cleanup. · dc985ef0
      Michael Gottesman authored
      llvm-svn: 188772
    • [stackprotector] Small Bit of computation hoisting. · 76c44be1
      Michael Gottesman authored
      llvm-svn: 188771
    • [stackprotector] Added significantly longer comment to FindPotentialTailCall to make clear its relationship to llvm::isInTailCallPosition. · 1977d15e
      Michael Gottesman authored
      
      llvm-svn: 188770
    • Removed trailing whitespace. · 62c5d714
      Michael Gottesman authored
      llvm-svn: 188769
    • [stackprotector] Removed stale TODO. · 56e246b1
      Michael Gottesman authored
      llvm-svn: 188768
    • [mips][msa] Added and.v, bmnz.v, bmz.v, bsel.v, nor.v, or.v, xor.v · 869bdad9
      Daniel Sanders authored
      llvm-svn: 188767
    • [stackprotector] Refactor out the end of isInTailCallPosition into the function returnTypeIsEligibleForTailCall. · ce0e4c26
      Michael Gottesman authored
      
      This allows me to use returnTypeIsEligibleForTailCall in the stack protector pass.
      
      rdar://13935163
      
      llvm-svn: 188765
    • Remove unused variables that crept in. · f7e1203d
      Michael Gottesman authored
      llvm-svn: 188761
    • Teach selectiondag how to handle the stackprotectorcheck intrinsic. · b27f0f1f
      Michael Gottesman authored
      Previously, generation of stack protectors was done exclusively in the
      pre-SelectionDAG Codegen LLVM IR Pass "Stack Protector". This necessitated
      splitting basic blocks at the IR level to create the success/failure basic
      blocks in the tail of the basic block in question. As a result of this,
      calls that would have qualified for the sibling call optimization were no
      longer eligible for optimization since said calls were no longer right in
      the "tail position" (i.e. the immediate predecessor of a ReturnInst).
      
      Then it was noticed that since the sibling call optimization causes the
      callee to reuse the caller's stack, if we could delay the generation of
      the stack protector check until later in CodeGen after the sibling call
      decision was made, we get both the tail call optimization and the stack
      protector check!
      
      A few goals in solving this problem were:
      
        1. Preserve the architecture independence of stack protector generation.
      
        2. Preserve the normal IR level stack protector check for platforms like
           OpenBSD for which we support platform specific stack protector
           generation.
      
      The main problem that guided the present solution is that one cannot
      solve this problem in an architecture-independent manner at the IR level
      only. This is because:
      
        1. The decision on whether or not to perform a sibling call on certain
           platforms (for instance i386) requires lower level information
           related to available registers that cannot be known at the IR level.
      
        2. Even if the previous point were not true, the decision on whether to
           perform a tail call is done in LowerCallTo in SelectionDAG which
           occurs after the Stack Protector Pass. As a result, one would need to
           put the relevant callinst into the stack protector check success
           basic block (where the return inst is placed) and then move it back
           later at SelectionDAG/MI time before the stack protector check if the
           tail call optimization failed. The MI level option was nixed
           immediately since it would require platform specific pattern
           matching. The SelectionDAG level option was nixed because
           SelectionDAG only processes one IR level basic block at a time
           implying one could not create a DAG Combine to move the callinst.
      
      To get around this problem a few things were realized:
      
        1. While one cannot handle multiple IR level basic blocks at the
           SelectionDAG Level, one can generate multiple machine basic blocks
           for one IR level basic block. This is how we handle bit tests and
           switches.
      
        2. At the MI level, tail calls are represented via a special return
           MIInst called "tcreturn". Thus if we know the basic block in which we
           wish to insert the stack protector check, we get the correct behavior
           by always inserting the stack protector check right before the return
           statement. This is a "magical transformation" since no matter where
           the stack protector check intrinsic is, we always insert the stack
           protector check code at the end of the BB.
      
      Given the aforementioned constraints, the following solution was devised:
      
        1. On platforms that do not support SelectionDAG stack protector check
           generation, allow for the normal IR level stack protector check
           generation to continue.
      
        2. On platforms that do support SelectionDAG stack protector check
           generation:
      
          a. Use the IR level stack protector pass to decide if a stack
             protector is required/which BB we insert the stack protector check
             in by reusing the logic already therein. If we wish to generate a
             stack protector check in a basic block, we place a special IR
             intrinsic called llvm.stackprotectorcheck right before the BB's
             returninst or if there is a callinst that could potentially be
             sibling call optimized, before the call inst.
      
          b. Then when a BB with said intrinsic is processed, we codegen the BB
             normally via SelectBasicBlock. In said process, when we visit the
             stack protector check, we do not actually emit anything into the
             BB. Instead, we just initialize the stack protector descriptor
             class (which involves stashing information/creating the success
             mbb and the failure mbb if we have not created one for this
             function yet) and export the guard variable that we are going to
             compare.
      
          c. After we finish selecting the basic block, in FinishBasicBlock if
             the StackProtectorDescriptor attached to the SelectionDAGBuilder is
             initialized, we first find a splice point in the parent basic block
             before the terminator and then splice the terminator of said basic
             block into the success basic block. Then we code-gen a new tail for
             the parent basic block consisting of the two loads, the comparison,
             and finally two branches to the success/failure basic blocks. We
             conclude by code-gening the failure basic block if we have not
             code-gened it already (all stack protector checks we generate in
             the same function use the same failure basic block).
      
      llvm-svn: 188755
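The code-gened tail described in step (c) is, conceptually, just two loads, a comparison, and branches to the success/failure basic blocks. A hedged C++ sketch of that control flow (the guard value, the frame slot, and the failure handler here are stand-ins for exposition, not LLVM's or the runtime's actual interface):

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for the process-wide guard value the runtime would provide.
static const std::uint64_t StackGuard = 0x595e9fbd94fda766ULL;
// The real failure block would call a __stack_chk_fail-style handler.
static bool FailureBlockTaken = false;

// Tail of the parent basic block: load the global guard, load the copy
// stashed in the frame at function entry, compare, and branch.
static int epilogueWithCheck(std::uint64_t frameSlot, int retval) {
    if (frameSlot != StackGuard) {   // the two loads + the comparison
        FailureBlockTaken = true;    // failure basic block (one per function)
        return -1;
    }
    return retval;                   // success basic block: the original return
}
```

Because the check is inserted right before the return at this late stage, a call that was sibling-call optimized has already been turned into a tail call and never reaches this epilogue, which is exactly what the delayed generation buys.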
    • Fix formatting. No functional change. · 7a8cf010
      Craig Topper authored
      llvm-svn: 188746
    • Add AVX-512 and related features to the CPUID detection code. · e13a066c
      Craig Topper authored
      llvm-svn: 188745
    • Move AVX and non-AVX replication inside a couple multiclasses to avoid repeating each instruction for both individually. · fd2b3892
      Craig Topper authored
      
      llvm-svn: 188743