  2. Aug 29, 2016
      [Coroutines] Part 9: Add cleanup subfunction. · dce9b026
      Gor Nishanov authored
      Summary:
      [Coroutines] Part 9: Add cleanup subfunction.
      
      This patch completes coroutine heap allocation elision. Now the heap-elision example from docs/Coroutines.rst compiles and produces the expected result (see test/Transforms/Coroutines/ex3.ll).
      
      Intrinsic Changes:
      * coro.free gets a token parameter tying it to coro.id, so that all coro.free calls associated with a particular coroutine can be discovered reliably.
      * coro.id gets an extra parameter that points back to the coroutine function. This makes it possible to check whether a coro.id describes the enclosing function or belongs to a different function that was inlined later.
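      A sketch of the resulting IR, based on the description above (the exact operand types and values are assumptions; see docs/Coroutines.rst for the precise signatures):
      ```
      ; Sketch: coro.id carries a pointer back to the enclosing coroutine @f,
      ; and coro.free consumes the token, tying every free to its coroutine.
      %id = call token @llvm.coro.id(i32 0, i8* null, i8* bitcast (void ()* @f to i8*), i8* null)
      ...
      %mem = call i8* @llvm.coro.free(token %id, i8* %hdl)
      ```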
      
      CoroSplit now creates three subfunctions:
      1. f$resume - resume logic
      2. f$destroy - cleanup logic followed by deallocation code
      3. f$cleanup - just the cleanup code
      
      During devirtualization, the CoroElide pass replaces coro.destroy with either f$destroy or f$cleanup, depending on whether heap elision is performed.
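      Illustratively (the direct-call form and naming are assumptions; the split subfunctions follow the f$... scheme above):
      ```
      ; Sketch: CoroElide rewrites the lowered coro.destroy call.
      ;
      ; No elision - frame is on the heap, run cleanup + deallocation:
      call fastcc void @f$destroy(i8* %hdl)
      ;
      ; Elision performed - frame is an alloca in the caller, skip deallocation:
      call fastcc void @f$cleanup(i8* %hdl)
      ```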
      
      Other fixes, improvements:
      * Fixed a bug in Shape::buildFrame that failed to create coro.save properly when a coroutine has more than one suspend point.
      
      * Switched to a variable-width suspend-index field (no longer limited to 32 bits; the index field can be as small as i1 or as large as i<whatever-size_t-is>).
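      For example, a coroutine with only two suspend states could use an i1 index (a sketch; the frame layout shown is an assumption):
      ```
      ; Sketch, assuming the index lives at field 2 of the frame struct.
      %index.addr = getelementptr inbounds %f.frame, %f.frame* %frame, i32 0, i32 2
      store i1 1, i1* %index.addr  ; switch to the second suspend state
      ```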
      
      Reviewers: majnemer
      
      Subscribers: llvm-commits, mehdi_amini
      
      Differential Revision: https://reviews.llvm.org/D23844
      
      llvm-svn: 279971
  3. Aug 12, 2016
      [Coroutines]: Part6b: Add coro.id intrinsic. · 0f303acc
      Gor Nishanov authored
      Summary:
      1. Make the coroutine representation more robust against optimizations that may duplicate instructions by introducing a coro.id intrinsic that returns a token, which is then fed into coro.alloc and coro.begin. Because coro.id returns a token, it will not get duplicated and can be used as a reliable indicator of coroutine identity when a particular coroutine call gets inlined.
      2. Move the last three arguments of coro.begin into coro.id, since they need to be shared if coro.begin gets duplicated.
      3. Updated docs, tests, and code to support the new intrinsic.
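      A sketch of the resulting IR shape, using the signatures from the current docs/Coroutines.rst (which may differ slightly from this patch's intermediate form):
      ```
      ; Sketch: the token returned by coro.id threads through the other
      ; coroutine intrinsics and, being a token, cannot be duplicated.
      %id = call token @llvm.coro.id(i32 0, i8* null, i8* null, i8* null)
      %need.alloc = call i1 @llvm.coro.alloc(token %id)
      ...
      %hdl = call i8* @llvm.coro.begin(token %id, i8* %mem)
      ```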
      
      Reviewers: mehdi_amini, majnemer
      
      Subscribers: mehdi_amini, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D23412
      
      llvm-svn: 278481
  4. Aug 10, 2016
      [Coroutines] Part 6: Elide dynamic allocation of a coroutine frame when possible · b2a9c025
      Gor Nishanov authored
      Summary:
      A particular coroutine usage pattern, where a coroutine is created, manipulated, and
      destroyed by the same calling function, is common for coroutines implementing the
      RAII idiom and is suitable for the allocation elision optimization, which avoids
      dynamic allocation by storing the coroutine frame as a static `alloca` in its
      caller.
      
      The coro.alloc and coro.free intrinsics are used to indicate which code needs to be
      suppressed when dynamic allocation elision happens:
      ```
      entry:
        %elide = call i8* @llvm.coro.alloc()
        %need.dyn.alloc = icmp ne i8* %elide, null
        br i1 %need.dyn.alloc, label %coro.begin, label %dyn.alloc
      dyn.alloc:
        %alloc = call i8* @CustomAlloc(i32 4)
        br label %coro.begin
      coro.begin:
        %phi = phi i8* [ %elide, %entry ], [ %alloc, %dyn.alloc ]
        %hdl = call i8* @llvm.coro.begin(i8* %phi, i32 0, i8* null,
                                i8* bitcast ([2 x void (%f.frame*)*]* @f.resumers to i8*))
      ```
      and
      ```
        %mem = call i8* @llvm.coro.free(i8* %hdl)
        %need.dyn.free = icmp ne i8* %mem, null
        br i1 %need.dyn.free, label %dyn.free, label %if.end
      dyn.free:
        call void @CustomFree(i8* %mem)
        br label %if.end
      if.end:
        ...
      ```
      
      If heap allocation elision is performed, we replace coro.alloc with a static alloca in the caller's frame and coro.free with a null constant.
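      After elision, the entry block from the example above could reduce to something like the following (a sketch; the %f.frame naming and exact folding are assumptions):
      ```
      ; Sketch of the entry block after heap elision: coro.alloc became a
      ; static alloca, so the dynamic-allocation branch folds away.
      entry:
        %frame = alloca %f.frame
        %phi = bitcast %f.frame* %frame to i8*
        %hdl = call i8* @llvm.coro.begin(i8* %phi, i32 0, i8* null,
                                i8* bitcast ([2 x void (%f.frame*)*]* @f.resumers to i8*))
      ```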
      
      Also, if there are any tail calls referencing the coroutine frame, we need to remove the tail call attribute, since the coroutine frame now lives on the stack.
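      Concretely (a sketch; @use and the value names are hypothetical):
      ```
      ; A tail call that takes the coroutine frame must lose its 'tail'
      ; marker once the frame becomes a caller-stack alloca, because tail
      ; calls must not reference the caller's stack.
      ; before:  %r = tail call i32 @use(i8* %frame.ptr)
      ; after:   %r = call i32 @use(i8* %frame.ptr)
      ```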
      
      Documentation and an overview are available here: http://llvm.org/docs/Coroutines.html.
      
      Upstreaming sequence (rough plan):
      1. Add documentation. (https://reviews.llvm.org/D22603)
      2. Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
      3. Add empty coroutine passes. (https://reviews.llvm.org/D22847)
      4. Add coroutine devirtualization + tests.
         ab) Lower coro.resume and coro.destroy (https://reviews.llvm.org/D22998)
         c) Do devirtualization (https://reviews.llvm.org/D23229)
      5. Add CGSCC restart trigger + tests. (https://reviews.llvm.org/D23234)
      6. Add coroutine heap elision + tests.  <= we are here
      7. Add the rest of the logic (split into more patches)
      
      Reviewers: mehdi_amini, majnemer
      
      Subscribers: mehdi_amini, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D23245
      
      llvm-svn: 278242