  1. Nov 18, 2016
    • Timer: Track name and description. · 9f15a79e
      Matthias Braun authored
      The previously used "names" are really descriptions (they use multiple
      words and contain spaces). Use short, programming-language-identifier-like
      strings for the "names", which should be used when exporting to
      machine-parseable formats.
      
      Also removed an unused TimerGroup from Hexagon.
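
      A minimal sketch of the resulting API, assuming the post-change llvm::Timer
      and llvm::TimerGroup constructors that take a short name plus a description
      (the group and timer names below are illustrative):

      ```
      #include "llvm/Support/Timer.h"

      using namespace llvm;

      void timedWork() {
        // First argument: short, identifier-like name for machine-parseable output;
        // second argument: human-readable description for reports.
        static TimerGroup CodeGenTimers("codegen", "Code Generation Timers");
        Timer ISelTimer("isel", "Instruction Selection", CodeGenTimers);
        TimeRegion R(ISelTimer); // starts the timer here, stops it at end of scope
        // ... work being measured ...
      }
      ```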
      
      Differential Revision: https://reviews.llvm.org/D25583
      
      llvm-svn: 287369
  2. Oct 11, 2016
  3. Oct 08, 2016
    • swifterror: Don't compute swifterror vregs during instruction selection · 3f256581
      Arnold Schwaighofer authored
      The code used llvm basic block predecessors to decide where to insert phi
      nodes. Instruction selection can and will liberally insert new machine basic
      block predecessors. There is no guaranteed one-to-one mapping between
      predecessor llvm basic blocks and machine basic blocks.
      
      Therefore the current approach does not work: it assumes we can mark a
      predecessor machine basic block as needing a copy, and it needs to know the
      set of all predecessor machine basic blocks to decide when to insert phis.
      
      Instead of computing the swifterror vregs as we select instructions, propagate
      them at the end of instruction selection when the MBB CFG is complete.
      
      When an instruction needs a swifterror vreg and we don't know the value yet,
      generate a new vreg, remember this "upward exposed" use, and reconcile it
      at the end of instruction selection.
      
      This will only happen if the target supports promoting swifterror parameters to
      registers and the swifterror attribute is used.
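
      A self-contained sketch of the deferred-reconciliation pattern described
      above; the types and helper names are simplified stand-ins, not the actual
      FunctionLoweringInfo/SelectionDAGISel identifiers:

      ```
      #include <map>
      #include <utility>
      #include <vector>

      // Simplified stand-ins for machine basic blocks and swifterror values.
      using BlockID = int;
      using ValueID = int;
      using VReg = unsigned;

      struct SwiftErrorState {
        std::map<std::pair<BlockID, ValueID>, VReg> CurrentVReg; // known defs per block
        std::vector<std::pair<std::pair<BlockID, ValueID>, VReg>> UpwardExposedUses;
        VReg NextVReg = 1;

        // During selection: hand out the known vreg, or create a placeholder and
        // remember the "upward exposed" use for later reconciliation.
        VReg getOrCreateVReg(BlockID BB, ValueID Err) {
          auto It = CurrentVReg.find({BB, Err});
          if (It != CurrentVReg.end())
            return It->second;
          VReg Placeholder = NextVReg++;
          UpwardExposedUses.push_back({{BB, Err}, Placeholder});
          return Placeholder;
        }

        // After selection, when the machine CFG is complete: give every placeholder
        // a definition, e.g. by inserting PHIs/copies from the block's final set of
        // predecessors (left abstract here).
        void reconcile() {
          for (auto &Use : UpwardExposedUses) {
            // insert a PHI or copy for Use once all machine predecessors are known
            (void)Use;
          }
          UpwardExposedUses.clear();
        }
      };
      ```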
      
      rdar://28300923
      
      llvm-svn: 283617
  4. Sep 14, 2016
  5. Sep 09, 2016
  6. Aug 27, 2016
  7. Aug 23, 2016
    • Fix some more asserts after r279466. · 036b94da
      Pete Cooper authored
      That commit added a new version of Intrinsic::getName which should only
      be called when the intrinsic has no overloaded types.  There are several
      debugging paths, such as SDNode::dump, which print the name of the
      intrinsic but don't have the overloaded types.  These paths should be ok
      to just print the name instead of crashing.
      
      The fix here is ultimately to just add a 'None' second argument, as that
      calls the overload-capable getName. This is less efficient, but these are
      debugging paths anyway, and not perf critical.
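
      A small sketch of the workaround, assuming the post-r279466 pair of
      Intrinsic::getName overloads (the helper name nameForDump is illustrative):

      ```
      #include "llvm/ADT/None.h"
      #include "llvm/IR/Intrinsics.h"

      #include <string>

      // Passing 'None' for the type list selects the overload-capable getName,
      // which tolerates overloaded intrinsics; the single-argument form asserts
      // when the intrinsic has overloaded types.
      std::string nameForDump(llvm::Intrinsic::ID IID) {
        return llvm::Intrinsic::getName(IID, llvm::None);
      }
      ```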
      
      Thanks to Björn Pettersson for pointing out that there were more crashes.
      
      llvm-svn: 279528
  8. Aug 12, 2016
  9. Jul 28, 2016
  10. Jul 18, 2016
    • [inlineasm] Propagate operand constraints to the backend · d32a2d30
      Simon Dardis authored
      When SelectionDAGISel transforms a node representing an inline asm
      block, memory constraint information is not preserved. This can cause
      constraints to be broken when a memory offset is of the form:
      
      offset + frame index
      
      when the frame is resolved.
      
      By propagating the constraints all the way to the backend, targets can
      enforce memory operands of inline assembly to conform to their constraints.
      
      For MIPSR6, some instructions such as ll/sc had their offsets reduced from
      16 bits to 9 bits. This becomes problematic when using inline assembly
      to perform atomic operations, as an offset can be generated that is too big
      to encode in the instruction.
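
      An illustrative example (not from the patch) of the kind of inline asm this
      affects, assuming the GCC/Clang MIPS 'ZC' constraint, which requires an
      address suitable for the ll/sc addressing mode:

      ```
      // Load-linked through inline asm. With the constraint propagated to the
      // backend, the resolved "offset + frame index" address can be checked
      // against the 9-bit offset that MIPSR6 ll encodes; without it, an
      // out-of-range offset could be silently produced.
      int loadLinked(volatile int *p) {
        int v;
        __asm__ __volatile__("ll %0, %1" : "=r"(v) : "ZC"(*p) : "memory");
        return v;
      }
      ```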
      
      Reviewers: dsanders, vkalintris
      
      Differential Revision: https://reviews.llvm.org/D21615
      
      llvm-svn: 275786
  11. Jul 08, 2016
  12. Jul 07, 2016
  13. Jul 01, 2016
    • CodeGen: Use MachineInstr& in TargetLowering, NFC · e4f5e4f4
      Duncan P. N. Exon Smith authored
      This is a mechanical change to make TargetLowering API take MachineInstr&
      (instead of MachineInstr*), since the argument is expected to be a valid
      MachineInstr.  In one case, changed a parameter from MachineInstr* to
      MachineBasicBlock::iterator, since it was used as an insertion point.
      
      As a side effect, this removes a bunch of MachineInstr* to
      MachineBasicBlock::iterator implicit conversions, a necessary step
      toward fixing PR26753.
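
      A representative before/after of the mechanical change, using
      EmitInstrWithCustomInserter as the example hook (a sketch; other
      TargetLowering entry points were updated analogously):

      ```
      #include "llvm/CodeGen/MachineBasicBlock.h"
      #include "llvm/CodeGen/MachineInstr.h"

      using namespace llvm;

      class ExampleTargetLowering {
      public:
        // Before this change (sketch):
        //   MachineBasicBlock *
        //   EmitInstrWithCustomInserter(MachineInstr *MI, MachineBasicBlock *MBB) const;

        // After: the MachineInstr is passed by reference, making the "never null"
        // contract explicit and avoiding implicit MachineInstr* ->
        // MachineBasicBlock::iterator conversions at call sites.
        MachineBasicBlock *
        EmitInstrWithCustomInserter(MachineInstr &MI, MachineBasicBlock *MBB) const;
      };
      ```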
      
      llvm-svn: 274287
  14. Jun 30, 2016
  15. Jun 12, 2016
    • Pass DebugLoc and SDLoc by const ref. · bdc4956b
      Benjamin Kramer authored
      This used to be free, but copying and moving DebugLocs became expensive
      after the metadata rewrite. Passing by reference eliminates a ton of
      track/untrack operations. No functionality change intended.
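
      A minimal illustration of the calling-convention change, using a small
      helper around SelectionDAG::getNode (emitAdd is illustrative; many
      SelectionDAG and target functions were updated the same way):

      ```
      #include "llvm/CodeGen/SelectionDAG.h"

      using namespace llvm;

      // Before (sketch): taking SDLoc by value copies any DebugLoc it carries,
      // and after the metadata rewrite every DebugLoc copy must be tracked and
      // untracked.
      //   SDValue emitAdd(SelectionDAG &DAG, SDLoc DL, SDValue A, SDValue B);

      // After: const reference, so no DebugLoc copies and no track/untrack traffic.
      SDValue emitAdd(SelectionDAG &DAG, const SDLoc &DL, SDValue A, SDValue B) {
        return DAG.getNode(ISD::ADD, DL, A.getValueType(), A, B);
      }
      ```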
      
      llvm-svn: 272512
  16. Jun 07, 2016
    • [stack-protection] Add support for MSVC buffer security check · 22bfa832
      Etienne Bergeron authored
      Summary:
      This patch adds support for the MSVC buffer security check implementation.
      
      The buffer security check is turned on with the '/GS' compiler switch.
        * https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
        * To be added to clang here: http://reviews.llvm.org/D20347
      
      Some overview of buffer security check feature and implementation:
        * https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
        * http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
        * http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html
      
      
      For the following example:
      ```
      int example(int offset, int index) {
        char buffer[10];
        memset(buffer, 0xCC, index);
        return buffer[index];
      }
      ```
      
      The MSVC compiler adds these instructions to perform the stack integrity check:
      ```
              push        ebp  
              mov         ebp,esp  
              sub         esp,50h  
        [1]   mov         eax,dword ptr [__security_cookie (01068024h)]  
        [2]   xor         eax,ebp  
        [3]   mov         dword ptr [ebp-4],eax  
              push        ebx  
              push        esi  
              push        edi  
              mov         eax,dword ptr [index]  
              push        eax  
              push        0CCh  
              lea         ecx,[buffer]  
              push        ecx  
              call        _memset (010610B9h)  
              add         esp,0Ch  
              mov         eax,dword ptr [index]  
              movsx       eax,byte ptr buffer[eax]  
              pop         edi  
              pop         esi  
              pop         ebx  
        [4]   mov         ecx,dword ptr [ebp-4]  
        [5]   xor         ecx,ebp  
        [6]   call        @__security_check_cookie@4 (01061276h)  
              mov         esp,ebp  
              pop         ebp  
              ret  
      ```
      
      The instrumentation above:
        * [1] loads the global security canary,
        * [3] stores the locally computed ([2]) canary to the guard slot,
        * [4] loads the guard slot and [5] re-computes the global canary,
        * [6] validates the resulting canary with '__security_check_cookie' and performs error handling.
      
      Overview of the current stack-protection implementation:
        * lib/CodeGen/StackProtector.cpp
          * There is a default stack-protection implementation applied on intermediate representation.
          * The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
          * An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
          * Basic blocks are added to every instrumented function to receive the code for stack guard validation and error handling.
          * Guard manipulation and comparison are added directly to the intermediate representation.
      
        * lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
        * lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
          * There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
            * see long comment above 'class StackProtectorDescriptor' declaration.
          * The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (Note: getIRStackGuard MUST return nullptr.)
            * 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
          * The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.
      
        * include/llvm/Target/TargetLowering.h
          * Contains a function to retrieve the default Guard 'Value'; should be overridden by each target to select which implementation is used and to provide the Guard 'Value'.
      
        * lib/Target/X86/X86ISelLowering.cpp
          * Contains the x86 specialisation; provides the Guard 'Value' used by the SelectionDAG algorithm.
      
      Function-based Instrumentation:
        * MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
        * To support function-based instrumentation, this patch is
          * adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
            * If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the epilogue.
          * modifying SelectionDAGISel.cpp to avoid producing the basic blocks used for inline instrumentation,
          * generating the function-based instrumentation during the ISEL pass (SelectionDAGBuilder.cpp),
          * for FastISel (not SelectionDAG), using the fallback, which relies on the same function-based instrumentation implemented over the intermediate representation (StackProtector.cpp).
        (A sketch of the relevant target hooks follows this list.)
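
      A hedged sketch of how a target could wire up the hooks mentioned above
      (getIRStackGuard, getSDagStackGuard, and the function-based check); the
      class name and bodies are illustrative, not the actual X86 implementation:

      ```
      #include "llvm/IR/IRBuilder.h"
      #include "llvm/IR/Module.h"
      #include "llvm/Target/TargetLowering.h"

      using namespace llvm;

      // Hypothetical target that opts into the SelectionDAG-based,
      // function-checked scheme described above.
      class MyTargetLowering : public TargetLowering {
      public:
        using TargetLowering::TargetLowering;

        // Must return null so the SDAG path (getSDagStackGuard) is used.
        Value *getIRStackGuard(IRBuilder<> &IRB) const override { return nullptr; }

        // Global holding the security cookie (MSVC's __security_cookie).
        Value *getSDagStackGuard(const Module &M) const override {
          return M.getGlobalVariable("__security_cookie");
        }

        // Function-based check called before returns instead of inlining the
        // comparison (MSVC's __security_check_cookie).
        Value *getSSPStackGuardCheck(const Module &M) const override {
          return M.getFunction("__security_check_cookie");
        }
      };
      ```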
      
      Modifications
        * adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
        * adding support for function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)
      
      Results
      
        * IR generated instrumentation:
      ```
      clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
      ```
      
      ```
      *** Final LLVM Code input to ISel ***
      
      ; Function Attrs: nounwind sspstrong
      define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
      entry:
        %StackGuardSlot = alloca i8*                                                  <<<-- Allocated guard slot
        %0 = call i8* @llvm.stackguard()                                              <<<-- Loading Stack Guard value
        call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot)                  <<<-- Prologue intrinsic call (store to Guard slot)
        %index.addr = alloca i32, align 4
        %offset.addr = alloca i32, align 4
        %buffer = alloca [10 x i8], align 1
        store i32 %index, i32* %index.addr, align 4
        store i32 %offset, i32* %offset.addr, align 4
        %arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
        %1 = load i32, i32* %index.addr, align 4
        call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
        %2 = load i32, i32* %index.addr, align 4
        %arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
        %3 = load i8, i8* %arrayidx, align 1
        %conv = sext i8 %3 to i32
        %4 = load volatile i8*, i8** %StackGuardSlot                                  <<<-- Loading Guard slot
        call void @__security_check_cookie(i8* %4)                                    <<<-- Epilogue function-based check
        ret i32 %conv
      }
      ```
      
        * SelectionDAG generated instrumentation:
      
      ```
      clang-cl /GS test.cc /O1 /c /FA
      ```
      
      ```
      "?example@@YAHHH@Z":                    # @"\01?example@@YAHHH@Z"
      # BB#0:                                 # %entry
              pushl   %esi
              subl    $16, %esp
              movl    ___security_cookie, %eax                                        <<<-- Loading Stack Guard value
              movl    28(%esp), %esi
              movl    %eax, 12(%esp)                                                  <<<-- Store to Guard slot
              leal    2(%esp), %eax
              pushl   %esi
              pushl   $204
              pushl   %eax
              calll   _memset
              addl    $12, %esp
              movsbl  2(%esp,%esi), %esi
              movl    12(%esp), %ecx                                                  <<<-- Loading Guard slot
              calll   @__security_check_cookie@4                                      <<<-- Epilogue function-based check
              movl    %esi, %eax
              addl    $16, %esp
              popl    %esi
              retl
      ```
      
      Reviewers: kcc, pcc, eugenis, rnk
      
      Subscribers: majnemer, llvm-commits, hans, thakis, rnk
      
      Differential Revision: http://reviews.llvm.org/D20346
      
      llvm-svn: 272053
  17. Jun 03, 2016
  18. Jun 01, 2016
  19. May 11, 2016
    • SDAG: Have SelectNodeTo replace uses if it CSE's instead of morphing a node · b3534c49
      Justin Bogner authored
      It's awkward to force callers of SelectNodeTo to figure out whether
      the node was morphed or CSE'd. Update uses here instead of requiring
      callers to (sometimes) do it.
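
      A small sketch of the caller-side effect, assuming a typical backend call to
      SelectNodeTo inside Select (the function and parameter names are illustrative):

      ```
      #include "llvm/CodeGen/SelectionDAG.h"

      using namespace llvm;

      // The result of SelectNodeTo no longer has to be compared against N to
      // decide whether to call ReplaceUses: uses of N are updated in both the
      // "morphed in place" and the "CSE'd to an existing node" cases.
      void selectExample(SelectionDAG &CurDAG, SDNode *N, unsigned MachineOpc,
                         SDVTList VTs, ArrayRef<SDValue> Ops) {
        SDNode *New = CurDAG.SelectNodeTo(N, MachineOpc, VTs, Ops);
        (void)New; // may or may not equal N; users of N already follow it either way
      }
      ```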
      
      llvm-svn: 269235
    • SDAG: Make SelectCodeCommon return void · 1df01f0e
      Justin Bogner authored
      This means SelectCode unconditionally returns nullptr now. I'll follow
      up with a change to make that return void as well, but it seems best
      to keep that one very mechanical.
      
      This is part of the work to have Select return void instead of an
      SDNode *, which is in turn part of llvm.org/pr26808.
      
      llvm-svn: 269136
  20. May 06, 2016
    • SDAG: Don't leave dangling dead nodes after SelectCodeCommon · c45c9600
      Justin Bogner authored
      Relying on the caller to clean up after we've replaced all uses of a
      node won't work when we've migrated to the `void Select(...)` API.
      
      llvm-svn: 268774
    • SDAG: Rename Select->SelectImpl and repurpose Select as returning void · b0126997
      Justin Bogner authored
      This is a step towards removing the rampant undefined behaviour in
      SelectionDAG, which is a part of llvm.org/PR26808.
      
      We rename SelectionDAGISel::Select to SelectImpl and update targets to
      match, and then change Select to return void and consolidate the
      sketchy behaviour we're trying to get away from there.
      
      Next, we'll update backends to implement `void Select(...)` instead of
      SelectImpl and eventually drop the base Select implementation.
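
      An illustrative sketch of the target-side shape this series moves toward,
      assuming the post-migration SelectionDAGISel interface (MyTargetDAGToDAGISel
      is a hypothetical backend class):

      ```
      // After the migration, Select returns void: the backend either replaces N
      // explicitly (ReplaceNode/SelectNodeTo) or falls through to the generated
      // matcher, instead of returning a possibly-morphed SDNode*.
      void MyTargetDAGToDAGISel::Select(SDNode *N) {
        if (N->isMachineOpcode()) {
          N->setNodeId(-1);
          return; // already selected
        }

        // ... target-specific cases that call ReplaceNode(N, ...) and return ...

        SelectCode(N); // generated matcher; also returns void after this series
      }
      ```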
      
      llvm-svn: 268693
    • SDAG: Remove OPC_MarkGlueResults and associated logic. NFC · 465886ec
      Justin Bogner authored
      This opcode never happens in practice, and yet the logic we have in
      place to handle it would be undefined behaviour if we ever executed
      it. Remove it rather than trying to refactor code that's never
      reached.
      
      llvm-svn: 268692
  21. May 03, 2016
  22. May 02, 2016
  23. Apr 29, 2016
  24. Apr 15, 2016
  25. Apr 08, 2016
    • [SSP] Remove llvm.stackprotectorcheck. · 00127564
      Tim Shen authored
      This is a cleanup patch for SSP support in LLVM. There is no functional change.
      llvm.stackprotectorcheck is not needed, because SelectionDAG isn't
      actually lowering it in SelectBasicBlock; rather, it adds check code in
      FinishBasicBlock, ignoring the position where the intrinsic is inserted
      (See FindSplitPointForStackProtector()).
      
      llvm-svn: 265851
  26. Apr 05, 2016
    • Swift Calling Convention: swifterror target-independent change. · e221a870
      Manman Ren authored
      At IR level, the swifterror argument is an input argument with type
      ErrorObject**. For targets that support swifterror, we want to optimize it
      to behave as an inout value with type ErrorObject*; it will be passed in a
      fixed physical register.
      
      The main idea is to track the virtual registers for each swifterror value. We
      define swifterror values as AllocaInsts with the swifterror attribute or
      function arguments with the swifterror attribute.
      
      In SelectionDAGISel.cpp, we set up swifterror values (SwiftErrorVals) before
      handling the basic blocks.
      
      When iterating over all basic blocks in RPO, before actually visiting the basic
      block, we call mergeIncomingSwiftErrors to merge incoming swifterror values when
      there are multiple predecessors or to simply propagate them. There, we create a
      virtual register for each swifterror value in the entry block. For predecessors
      that are not yet visited, we create virtual registers to hold the swifterror
      values at the end of the predecessor. The assignments are saved in
      SwiftErrorWorklist and will be materialized at the end of visiting the basic
      block.
      
      When visiting a load from a swifterror value, we copy from the current virtual
      register assignment. When visiting a store to a swifterror value, we create a
      virtual register to hold the swifterror value and update SwiftErrorMap to
      track the current virtual register assignment.
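
      A condensed sketch of the bookkeeping this describes, using the names from
      the message (SwiftErrorVals, SwiftErrorMap, SwiftErrorWorklist); the exact
      container types in FunctionLoweringInfo may differ:

      ```
      #include "llvm/ADT/DenseMap.h"
      #include "llvm/ADT/SmallVector.h"

      #include <utility>

      namespace llvm {
      class MachineBasicBlock;
      class Value;
      } // namespace llvm

      using namespace llvm;

      // Swifterror values found in the function (allocas/arguments marked
      // 'swifterror'), set up before visiting the basic blocks.
      SmallVector<const Value *, 1> SwiftErrorVals;

      // Current virtual register holding each swifterror value in each block;
      // loads read this, stores to a swifterror value update it.
      DenseMap<std::pair<const MachineBasicBlock *, const Value *>, unsigned>
          SwiftErrorMap;

      // Register assignments for not-yet-visited predecessors, materialized at
      // the end of visiting the basic block.
      SmallVector<std::pair<const MachineBasicBlock *, unsigned>, 4>
          SwiftErrorWorklist;
      ```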
      
      Differential Revision: http://reviews.llvm.org/D18108
      
      llvm-svn: 265433
  27. Mar 25, 2016
  28. Mar 19, 2016
  29. Mar 07, 2016
  30. Feb 02, 2016
  31. Jan 31, 2016
  32. Jan 30, 2016
  33. Jan 23, 2016