  1. Jun 29, 2010
    • use ArgOperand APIs · e73d64c2
      Gabor Greif authored
      llvm-svn: 107132
    • Remove an unused and a pointless variable. · 78ad27ca
      Duncan Sands authored
      llvm-svn: 107131
    • Remove pointless and unused variables. · 67bfa9d1
      Duncan Sands authored
      llvm-svn: 107130
    • encode operand initializations (at fixed index) · eec74583
      Gabor Greif authored
      in terms of Op<> and ArgOffset. This works for
      values of {0, 1} for ArgOffset.
      Please note that ArgOffset will become 0 soon and
      will go away eventually.

      llvm-svn: 107129
    • Remove a pointless variable. · 67aa21d7
      Duncan Sands authored
      llvm-svn: 107128
    • Remove initialized but otherwise unused variables. · 6d28e73a
      Duncan Sands authored
      llvm-svn: 107127
    • Remove variables that are written but not read. · b69a3e27
      Duncan Sands authored
      llvm-svn: 107126
    • Benjamin Kramer
    • Jump through some silly hoops to make GCC accept that a function may not always · b1adb88d
      Chandler Carruth authored
      be called.

      llvm-svn: 107124
    • Change X86_64ABIInfo to have ASTContext and TargetData ivars to · 22a931e3
      Chris Lattner authored
      Chris Lattner authored
      avoid passing ASTContext down through all the methods it has.
      
      When classifying an argument, or argument piece, as INTEGER, check
      to see if we have a pointer at exactly the same offset in the 
      preferred type.  If so, use that pointer type instead of i64.  This
      allows us to compile a function taking a StringRef into something
      like this:
      
      define i8* @foo(i64 %D.coerce0, i8* %D.coerce1) nounwind ssp {
      entry:
        %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=4]
        %0 = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
        store i64 %D.coerce0, i64* %0
        %1 = getelementptr %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
        store i8* %D.coerce1, i8** %1
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
        %tmp3 = load i8** %tmp2                         ; <i8*> [#uses=1]
        %add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1 ; <i8*> [#uses=1]
        ret i8* %add.ptr
      }
      
      instead of this:
      
      define i8* @foo(i64 %D.coerce0, i64 %D.coerce1) nounwind ssp {
      entry:
        %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=3]
        %0 = insertvalue %0 undef, i64 %D.coerce0, 0    ; <%0> [#uses=1]
        %1 = insertvalue %0 %0, i64 %D.coerce1, 1       ; <%0> [#uses=1]
        %2 = bitcast %struct.DeclGroup* %D to %0*       ; <%0*> [#uses=1]
        store %0 %1, %0* %2, align 1
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
        %tmp3 = load i8** %tmp2                         ; <i8*> [#uses=1]
        %add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1 ; <i8*> [#uses=1]
        ret i8* %add.ptr
      }
      
      This implements rdar://7375902 - [codegen quality] clang x86-64 ABI lowering code punishing StringRef
      
      llvm-svn: 107123
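      The C++ source that yields this IR is not shown here, but it evidently matches the DeclGroup test case quoted in commit 3dd716c3 further down; roughly (field names taken from that commit, so treat this as a sketch rather than the exact test):

      struct DeclGroup {
        long NumDecls;   // classified INTEGER -> stays the i64 %D.coerce0 piece
        char *Y;         // a pointer at offset 8 -> now passed as the i8* %D.coerce1 piece
      };

      char *foo(DeclGroup D) {
        return D.NumDecls + D.Y;   // integer + pointer, the add.ptr in the IR above
      }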
    • PR7503: uxtb16 is not available for ARMv7-M. Patch by Brian G. Lucas. · b59dd8f1
      Evan Cheng authored
      llvm-svn: 107122
    • Change if-cvt options to something that is actually usable. · 0c30739c
      Evan Cheng authored
      llvm-svn: 107121
    • Minix doesn't support dylibs, PR7294 · de310d5d
      Chris Lattner authored
      llvm-svn: 107120
    • When processing loops for scheduling latencies (used for live outs on loop · 907673c4
      Jim Grosbach authored
      back-edges), make sure not to include dbg_value instructions in the count.
      Closing in on the end of rdar://7797940

      llvm-svn: 107119
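      A hedged sketch of the kind of guard this implies (the helper and loop are illustrative, not the actual scheduler code): dbg_value instructions are skipped so that building with -g cannot change the latency count.

      #include "llvm/CodeGen/MachineBasicBlock.h"
      #include "llvm/CodeGen/MachineInstr.h"

      // Count the "real" instructions in a block, ignoring debug values.
      static unsigned countRealInstructions(const llvm::MachineBasicBlock &MBB) {
        unsigned Count = 0;
        for (const llvm::MachineInstr &MI : MBB)
          if (!MI.isDebugValue())
            ++Count;
        return Count;
      }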
    • A little code cleanup to not create a script bridging object just to feed · d0c8a0f0
      Greg Clayton authored
      the private object back to another internal function.

      llvm-svn: 107118
    • Just as it's not safe to blindly transfer the nsw bit from an add · 90db61d6
      Dan Gohman authored
      instruction to an add scev, it's not safe to blindly transfer the
      inbounds flag from a gep instruction to an nsw on the scev for the
      gep.

      llvm-svn: 107117
    • Bruno Cardoso Lopes · de736a64
    • plumb preferred types down into X86_64ABIInfo::classifyArgumentType, · 399d22ac
      Chris Lattner authored
      no functionality change.

      llvm-svn: 107115
    • When no memoperands are present, assume unaligned, volatile. · c1eccbc4
      Jakob Stoklund Olesen authored
      llvm-svn: 107114
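      A minimal sketch of the conservative rule being described (the helper name is illustrative, not from the patch): with no MachineMemOperands attached there is nothing to prove safety with, so the access is treated as volatile and unaligned.

      #include "llvm/CodeGen/MachineInstr.h"
      #include "llvm/CodeGen/MachineMemOperand.h"

      // Assume the worst when a load/store carries no memory operand info.
      static bool mayBeVolatile(const llvm::MachineInstr &MI) {
        if (MI.memoperands_empty())
          return true;                          // no info: assume volatile
        for (const llvm::MachineMemOperand *MMO : MI.memoperands())
          if (MMO->isVolatile())
            return true;
        return false;
      }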
    • Strip resulting binaries. · 9a925bec
      Bill Wendling authored
      llvm-svn: 107112
    • Pass the LLVM IR version of argument types down into computeInfo. · 1d7c9f7f
      Chris Lattner authored
      It is somewhat annoying to do this at this level, but it avoids
      having ABIInfo depend on CodeGenTypes for a hint.

      Nothing is using this yet, so no functionality change.

      llvm-svn: 107111
    • Reapply my if-conversion cleanup from svn r106939 with fixes. · 1e5da550
      Bob Wilson authored
      Bob Wilson authored
      There are 2 changes relative to the previous version of the patch:
      
      1) For the "simple" if-conversion case, there's no need to worry about
      RemoveExtraEdges not handling an unanalyzable branch.  Predicated terminators
      are ignored in this context, so RemoveExtraEdges does the right thing.
      This might break someday if we ever treat indirect branches (BRIND) as
      predicable, but for now, I just removed this part of the patch, because
      in the case where we do not add an unconditional branch, we rely on keeping
      the fall-through edge to CvtBBI (which is empty after this transformation).
      
      The change relative to the previous patch is:
      
      @@ -1036,10 +1036,6 @@
           IterIfcvt = false;
         }
       
      -  // RemoveExtraEdges won't work if the block has an unanalyzable branch,
      -  // which is typically the case for IfConvertSimple, so explicitly remove
      -  // CvtBBI as a successor.
      -  BBI.BB->removeSuccessor(CvtBBI->BB);
         RemoveExtraEdges(BBI);
       
         // Update block info. BB can be iteratively if-converted.
      
      
      2) My patch exposed a bug in the code for merging the tail of a "diamond",
      which had previously never been exercised.  The code was simply checking that
      the tail had a single predecessor, but there was a case in
      MultiSource/Benchmarks/VersaBench/dbms where that single predecessor was
      neither edge of the diamond.  I added the following change to check for
      that:
      
      @@ -1276,7 +1276,18 @@
         // tail, add a unconditional branch to it.
         if (TailBB) {
           BBInfo TailBBI = BBAnalysis[TailBB->getNumber()];
      -    if (TailBB->pred_size() == 1 && !TailBBI.HasFallThrough) {
      +    bool CanMergeTail = !TailBBI.HasFallThrough;
      +    // There may still be a fall-through edge from BBI1 or BBI2 to TailBB;
      +    // check if there are any other predecessors besides those.
      +    unsigned NumPreds = TailBB->pred_size();
      +    if (NumPreds > 1)
      +      CanMergeTail = false;
      +    else if (NumPreds == 1 && CanMergeTail) {
      +      MachineBasicBlock::pred_iterator PI = TailBB->pred_begin();
      +      if (*PI != BBI1->BB && *PI != BBI2->BB)
      +        CanMergeTail = false;
      +    }
      +    if (CanMergeTail) {
             MergeBlocks(BBI, TailBBI);
             TailBBI.IsDone = true;
           } else {
      
      With these fixes, I was able to run all the SingleSource and MultiSource
      tests successfully.
      
      llvm-svn: 107110
    • Add an Interprocedural form of BasicAliasAnalysis, which aims to · 0824affe
      Dan Gohman authored
      properly handle instructions and arguments defined in different
      functions, or across recursive function iterations.

      llvm-svn: 107109
    • Described the missing AVX forms of SSE2 convert instructions · d6a091a4
      Bruno Cardoso Lopes authored
      llvm-svn: 107108
    • Fix Thumb encoding of VMOV (scalar to ARM core register). The encoding is · 3d12ff79
      Bob Wilson authored
      the same as ARM except that the condition code field is always set to ARMCC::AL.

      llvm-svn: 107107
    • Prefer llvm_unreachable(...) to assert(false && ...). This is important as · 8337ba63
      Chandler Carruth authored
      without it we might exit a non-void function without returning.

      llvm-svn: 107106
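      A minimal sketch of the pattern being preferred (the enum and function are illustrative, not from the patch): assert(false && ...) compiles away under NDEBUG and leaves a path that falls off the end of a non-void function, while llvm_unreachable keeps both build modes well-formed.

      #include "llvm/Support/ErrorHandling.h"

      enum Color { Red, Green, Blue };

      static const char *getName(Color C) {
        switch (C) {
        case Red:   return "red";
        case Green: return "green";
        case Blue:  return "blue";
        }
        // Not assert(false && "unknown color"): that vanishes in release
        // builds and the compiler then warns about a missing return.
        llvm_unreachable("unknown color");
      }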
    • add IR names to coerced arguments. · 9e748e9d
      Chris Lattner authored
      llvm-svn: 107105
    • make the argument passing stuff in the FCA case smarter still, by · 15ec361b
      Chris Lattner authored
      Chris Lattner authored
      avoiding making the FCA at all when the types exactly line up.  For
      example, before we made:
      
      %struct.DeclGroup = type { i64, i64 }
      
      define i64 @_Z3foo9DeclGroup(i64, i64) nounwind {
      entry:
        %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=3]
        %2 = insertvalue %struct.DeclGroup undef, i64 %0, 0 ; <%struct.DeclGroup> [#uses=1]
        %3 = insertvalue %struct.DeclGroup %2, i64 %1, 1 ; <%struct.DeclGroup> [#uses=1]
        store %struct.DeclGroup %3, %struct.DeclGroup* %D
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i64*> [#uses=1]
        %tmp3 = load i64* %tmp2                         ; <i64> [#uses=1]
        %add = add nsw i64 %tmp1, %tmp3                 ; <i64> [#uses=1]
        ret i64 %add
      }
      
      ... which has the pointless insertvalue (which fast isel hates); now we
      make:
      
      %struct.DeclGroup = type { i64, i64 }
      
      define i64 @_Z3foo9DeclGroup(i64, i64) nounwind {
      entry:
        %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=4]
        %2 = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
        store i64 %0, i64* %2
        %3 = getelementptr %struct.DeclGroup* %D, i32 0, i32 1 ; <i64*> [#uses=1]
        store i64 %1, i64* %3
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i64*> [#uses=1]
        %tmp3 = load i64* %tmp2                         ; <i64> [#uses=1]
        %add = add nsw i64 %tmp1, %tmp3                 ; <i64> [#uses=1]
        ret i64 %add
      }
      
      This only kicks in when x86-64 abi lowering decides it likes us.
      
      llvm-svn: 107104
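      A hedged guess at the C++ source behind this IR (the commit does not show it; it mirrors the DeclGroup test from commit 3dd716c3 below with both fields as long, so the coerced i64 pieces line up exactly with the field types):

      struct DeclGroup {
        long NumDecls;
        long X;
      };

      long foo(DeclGroup D) {
        return D.NumDecls + D.X;   // the add nsw i64 in the IR above
      }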
    • The comment string does not match for all targets. PowerPC uses ;. · 1575e9f5
      Devang Patel authored
      llvm-svn: 107103
    • A few prettifications. Also renamed TraverseInitializer to · 1b28a429
      Craig Silverstein authored
      TraverseConstructorInitializer, to be a bit clearer.

      llvm-svn: 107102
    • Per Doug's suggestion, move check for invalid SourceLocation into · 54140270
      Ted Kremenek authored
      cxloc::translateSourceLocation() (thus causing all clients of this
      function to have the same behavior).

      llvm-svn: 107101
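      A hedged sketch of the idea only; the real cxloc::translateSourceLocation lives in the libclang sources and its signature may differ, but centralizing the check means every caller treats an invalid location the same way.

      #include "clang-c/Index.h"
      #include "clang/Basic/LangOptions.h"
      #include "clang/Basic/SourceLocation.h"
      #include "clang/Basic/SourceManager.h"

      static CXSourceLocation translateSourceLocation(const clang::SourceManager &SM,
                                                      const clang::LangOptions &LangOpts,
                                                      clang::SourceLocation Loc) {
        // One shared early-out: no client has to remember to check validity.
        if (Loc.isInvalid())
          return clang_getNullLocation();
        CXSourceLocation Result = { { &SM, &LangOpts }, Loc.getRawEncoding() };
        return Result;
      }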
    • Fixed debug map in executable + DWARF in .o debugging on Mac OS X. · 8d38ac45
      Greg Clayton authored
      Added the ability to dump any file in the global module cache using any of
      the "image dump" commands. This allows us to dump the .o files that are used
      with DWARF + .o since they don't belong to the target list for the current
      target.

      llvm-svn: 107100
    • Change CGCall to handle the "coerce" case where the coerce-to type · 3dd716c3
      Chris Lattner authored
      Chris Lattner authored
      is an FCA to pass each of the elements as individual scalars.  This
      produces code fast isel is less likely to reject and is easier on
      the optimizers.
      
      For example, before we would compile:
      struct DeclGroup { long NumDecls; char * Y; };
      char * foo(DeclGroup D) {
        return D.NumDecls+D.Y;
      }
      
      to:
      %struct.DeclGroup = type { i64, i64 }
      
      define i64 @_Z3foo9DeclGroup(%struct.DeclGroup) nounwind {
      entry:
        %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=3]
        store %struct.DeclGroup %0, %struct.DeclGroup* %D, align 1
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i64*> [#uses=1]
        %tmp3 = load i64* %tmp2                         ; <i64> [#uses=1]
        %add = add nsw i64 %tmp1, %tmp3                 ; <i64> [#uses=1]
        ret i64 %add
      }
      
      Now we get:
      
      %0 = type { i64, i64 }
      %struct.DeclGroup = type { i64, i8* }
      
      define i8* @_Z3foo9DeclGroup(i64, i64) nounwind {
      entry:
        %D = alloca %struct.DeclGroup, align 8          ; <%struct.DeclGroup*> [#uses=3]
        %2 = insertvalue %0 undef, i64 %0, 0            ; <%0> [#uses=1]
        %3 = insertvalue %0 %2, i64 %1, 1               ; <%0> [#uses=1]
        %4 = bitcast %struct.DeclGroup* %D to %0*       ; <%0*> [#uses=1]
        store %0 %3, %0* %4, align 1
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp                          ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
        %tmp3 = load i8** %tmp2                         ; <i8*> [#uses=1]
        %add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1 ; <i8*> [#uses=1]
        ret i8* %add.ptr
      }
      
      Elimination of the FCA inside the function is still-to-come.
      
      llvm-svn: 107099
    • Fix up ClassTemplateSpecializationDecl: For implicit instantiations · a37aa88e
      Craig Silverstein authored
      Craig Silverstein authored
      ("set<int> x;"), we don't want to recurse at all, since the
      instantiated class isn't written in the source code anywhere.  (Note
      the instantiated *type* -- set<int> -- is written, and will still get a
      callback of TemplateSpecializationType).  For explicit instantiations
      ("template set<int>;"), we do need a callback, since this is the only
      callback that's made for this instantiation.  We use
      getTypeAsWritten() to distinguish.
      
      We will still need to figure out how to handle template
      specializations, which probably are still not quite correct.
      
      Reviewed by chandlerc
      
      llvm-svn: 107098
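      A hedged sketch of the distinction described above, in RecursiveASTVisitor style (the visitor class and the exact spelling of the base call are illustrative, not taken from the patch):

      #include "clang/AST/DeclTemplate.h"
      #include "clang/AST/RecursiveASTVisitor.h"

      class IndexVisitor : public clang::RecursiveASTVisitor<IndexVisitor> {
      public:
        bool TraverseClassTemplateSpecializationDecl(
            clang::ClassTemplateSpecializationDecl *D) {
          // Implicit instantiation ("set<int> x;"): nothing about the
          // specialization is written in the source, so don't recurse into it.
          if (!D->getTypeAsWritten())
            return true;
          // Explicit instantiation ("template set<int>;"): this is the only
          // callback it gets, so traverse it normally.
          return RecursiveASTVisitor<IndexVisitor>::
              TraverseClassTemplateSpecializationDecl(D);
        }
      };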
    • Unlike other targets, ARM now uses BUILD_VECTORs post-legalization so they · 269a89fd
      Bob Wilson authored
      can't be changed arbitrarily by the DAGCombiner without checking if it is
      running after legalization.

      llvm-svn: 107097
    • make the trivial forms of CreateCoerced{Load|Store} trivial. · d200eda4
      Chris Lattner authored
      llvm-svn: 107091
    • Refix XTARGET. Previous attempt matches on powerpc-apple-darwin, · 764b056c
      Dale Johannesen authored
      although I don't see why.

      llvm-svn: 107090
    • Attempt to fix XTARGET. · 65cd5ba7
      Dale Johannesen authored
      llvm-svn: 107088
    • Modify the way sub-statements are stored and retrieved from PCH. · d0795b2d
      Argyrios Kyrtzidis authored
      Argyrios Kyrtzidis authored
      Before this commit, sub-stmts were stored as encountered and when they were placed in the Stmts stack we had to know what index
      each stmt operand has. This complicated supporting variable sub-stmts and sub-stmts that were contained in TypeSourceInfos, e.g.
      
      x = sizeof(int[1]);
      
      would crash PCH.
      
      Now, sub-stmts are stored in reverse order, from last to first, so that when reading them we just
      pop the last stmt from the stack to get the next sub-stmt. This greatly simplifies the way stmts are written and read (just use
      PCHWriter::AddStmt and PCHReader::ReadStmt accordingly) and allows variable stmt operands and TypeSourceInfo exprs.
      
      llvm-svn: 107087
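      A self-contained sketch of the scheme (not the actual PCHWriter/PCHReader code; Stmt and Record here are stand-ins): a node's operands are emitted last-to-first ahead of the node itself, so the reader rebuilds each node by popping operands, first to last, off a statement stack.

      #include <cassert>
      #include <memory>
      #include <string>
      #include <vector>

      struct Stmt {
        std::string Kind;                          // e.g. "add", "lit:3"
        std::vector<std::unique_ptr<Stmt>> Ops;    // variable number of operands
      };

      struct Record { std::string Kind; unsigned NumOps; };

      // Writer: emit operands in reverse order, then the node's own record.
      static void write(const Stmt &S, std::vector<Record> &Out) {
        for (auto It = S.Ops.rbegin(), E = S.Ops.rend(); It != E; ++It)
          write(**It, Out);
        Out.push_back({S.Kind, static_cast<unsigned>(S.Ops.size())});
      }

      // Reader: every finished statement is pushed onto a stack; a node with N
      // operands just pops N times and gets them back in source order.
      static std::unique_ptr<Stmt> read(const std::vector<Record> &In) {
        std::vector<std::unique_ptr<Stmt>> Stack;
        for (const Record &R : In) {
          auto S = std::make_unique<Stmt>();
          S->Kind = R.Kind;
          for (unsigned i = 0; i != R.NumOps; ++i) {
            S->Ops.push_back(std::move(Stack.back()));  // next operand = top of stack
            Stack.pop_back();
          }
          Stack.push_back(std::move(S));
        }
        assert(Stack.size() == 1 && "stream should decode to a single statement");
        return std::move(Stack.back());
      }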
    • Make the ARMCodeEmitter identify Thumb functions via ARMFunctionInfo instead · 4469a892
      Bob Wilson authored
      of the Subtarget.

      llvm-svn: 107086