  1. Oct 02, 2019
  2. Oct 01, 2019
  3. Sep 30, 2019
    • [msan] Intercept __getrlimit. · 72131161
      Evgeniy Stepanov authored
      Summary:
      This interceptor is useful on its own, but the main purpose of this
      change is to intercept libpthread initialization on linux/glibc in
      order to run __msan_init before any .preinit_array constructors.
      
      We used to trigger on pthread_initialize_minimal -> getrlimit(), but
      that call was changed to __getrlimit at some point.
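
      The mechanics can be sketched outside the sanitizer runtime. This is an
      illustrative stand-in, not MSan's actual interceptor machinery, and
      `mark_initialized` is a hypothetical placeholder for `__msan_unpoison`:

      ```cpp
      // Hypothetical sketch of the interception pattern: resolve the real
      // getrlimit via dlsym(RTLD_NEXT, ...), forward the call, then tell the
      // tool that the output buffer is initialized. mark_initialized stands in
      // for __msan_unpoison; the real runtime uses its own INTERCEPTOR macros.
      #include <dlfcn.h>
      #include <sys/resource.h>
      #include <cstddef>

      static void mark_initialized(void *p, size_t n) {
        (void)p;
        (void)n;  // no-op placeholder without the MSan runtime
      }

      extern "C" int my_getrlimit(int resource, struct rlimit *rlim) {
        using fn_t = int (*)(int, struct rlimit *);
        static fn_t real = reinterpret_cast<fn_t>(dlsym(RTLD_NEXT, "getrlimit"));
        int res = real(resource, rlim);
        if (res == 0)
          mark_initialized(rlim, sizeof *rlim);  // struct is now fully defined
        return res;
      }
      ```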
      
      Reviewers: vitalybuka, pcc
      
      Subscribers: jfb, #sanitizers, llvm-commits
      
      Tags: #sanitizers, #llvm
      
      Differential Revision: https://reviews.llvm.org/D68168
      
      llvm-svn: 373239
  4. Sep 28, 2019
  5. Sep 27, 2019
    • hwasan: Compatibility fixes for short granules. · c336557f
      Peter Collingbourne authored
      We can't use short granules with stack instrumentation when targeting older
      API levels because the rest of the system won't understand the short granule
      tags stored in shadow memory.
      
      Moreover, we need to be able to let old binaries (which won't understand
      short granule tags) run on a new system that supports short granule
      tags. Such binaries will call the __hwasan_tag_mismatch function when their
      outlined checks fail. We can compensate for the binary's lack of support
      for short granules by implementing the short granule part of the check in
      the __hwasan_tag_mismatch function. Unfortunately we can't do anything about
      inline checks, but I don't believe that we can generate these by default on
      aarch64, nor did we do so when the ABI was fixed.
      
      A new function, __hwasan_tag_mismatch_v2, is introduced that lets code
      targeting the new runtime avoid redoing the short granule check. Because tag
      mismatches are rare this isn't important from a performance perspective; the
      main benefit is that it introduces a symbol dependency that prevents binaries
      targeting the new runtime from running on older (i.e. incompatible) runtimes.
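
      The short-granule part of the check can be sketched as follows. This is
      an illustration of the logic described above, not the actual
      __hwasan_tag_mismatch implementation:

      ```cpp
      // Illustrative sketch (not the real hwasan runtime): a shadow value in
      // 1..15 means only that many bytes of the 16-byte granule are
      // addressable, and the granule's real tag lives in its last byte.
      #include <cstdint>

      constexpr unsigned kGranuleSize = 16;

      // Returns true if an access at `addr` with pointer tag `ptr_tag` is
      // valid, given the granule's shadow byte and a pointer to its last byte.
      bool tag_check(uint8_t ptr_tag, uint8_t shadow, uintptr_t addr,
                     const uint8_t *granule_last_byte) {
        if (ptr_tag == shadow)
          return true;                           // fast path: full-granule match
        if (shadow >= 1 && shadow < kGranuleSize) {
          if (addr % kGranuleSize >= shadow)
            return false;                        // access past the valid bytes
          return ptr_tag == *granule_last_byte;  // real tag in the last byte
        }
        return false;                            // genuine tag mismatch
      }
      ```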
      
      Differential Revision: https://reviews.llvm.org/D68059
      
      llvm-svn: 373035
  6. Sep 26, 2019
  7. Sep 24, 2019
  8. Sep 22, 2019
  9. Sep 21, 2019
    • Add __lsan::ScopedInterceptorDisabler for strerror(3) · 1b583894
      Kamil Rytarowski authored
      Summary:
      strerror(3) on NetBSD internally uses TSD with a destructor that is never
      fired for exit(3). It is correctly called in pthread_exit(3) scenarios.
      
      This is a case when a leak on exit(3) is expected, unavoidable and harmless.
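
      The scoped-disabler idea can be sketched as a small RAII guard. This is
      an illustration, not LSan's actual class; the counter stands in for the
      runtime's real disable mechanism:

      ```cpp
      // Illustrative RAII sketch: while an instance is alive, allocations are
      // excluded from leak checking, which is how the expected strerror(3)
      // TSD allocation can be suppressed.
      static thread_local int disable_depth = 0;  // > 0 means tracking is off

      struct ScopedDisabler {
        ScopedDisabler()  { ++disable_depth; }
        ~ScopedDisabler() { --disable_depth; }
      };

      bool leak_tracking_enabled() { return disable_depth == 0; }
      ```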
      
      Reviewers: joerg, vitalybuka, dvyukov, mgorny
      
      Reviewed By: vitalybuka
      
      Subscribers: dmgreen, kristof.beyls, jfb, llvm-commits, #sanitizers
      
      Tags: #sanitizers, #llvm
      
      Differential Revision: https://reviews.llvm.org/D67337
      
      llvm-svn: 372461
    • Stop tracking atexit/__cxa_atexit/pthread_atfork allocations in LSan/NetBSD · 88270475
      Kamil Rytarowski authored
      Summary:
      The atexit(3) and __cxa_atexit() calls allocate memory internally and free it
      on exit, after executing all callbacks. This causes false positives, as
      DoLeakCheck() is called from the atexit handler. In the LSan/ASan tests there
      are strict checks that trigger false positives here.
      
      Intercept all atexit(3) and __cxa_atexit() calls and disable LSan when calling the
      real functions.
      
      Stop tracking allocations in pthread_atfork(3) functions, as internal
      allocations are performed there that are not freed by the time the
      StopTheWorld() code runs. This avoids false positives.
      
      The same changes have to be replicated in the ASan and LSan runtimes.
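
      The wrapping pattern can be sketched like this. The name
      `intercepted_atexit` and the counter are illustrative only; the real
      runtime installs the wrapper as an interceptor and uses LSan's own
      disable mechanism:

      ```cpp
      // Hypothetical sketch: keep leak tracking disabled while the real libc
      // function performs its internal allocation, so that allocation is not
      // reported at exit.
      #include <cstdlib>

      static int lsan_disable_count = 0;  // > 0 means allocations are ignored

      int intercepted_atexit(void (*fn)(void)) {
        ++lsan_disable_count;       // libc will allocate internally; ignore it
        int res = std::atexit(fn);  // call the real function
        --lsan_disable_count;
        return res;
      }
      ```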
      
      Non-NetBSD OSs are not tested and this code is restricted to NetBSD only.
      
      Reviewers: dvyukov, joerg, mgorny, vitalybuka, eugenis
      
      Reviewed By: vitalybuka
      
      Subscribers: jfb, llvm-commits, #sanitizers
      
      Tags: #sanitizers, #llvm
      
      Differential Revision: https://reviews.llvm.org/D67331
      
      llvm-svn: 372459
  10. Sep 19, 2019
    • [lsan] Fix deadlock in dl_iterate_phdr. · f1b6bd40
      Evgeniy Stepanov authored
      Summary:
      Do not grab the allocator lock before calling dl_iterate_phdr. This may
      cause a lock order inversion with (valid) user code that uses malloc
      inside a dl_iterate_phdr callback.
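
      The fixed locking discipline can be sketched with two mutexes. Both are
      stand-ins for the real LSan allocator lock and the dynamic loader's
      internal lock, not actual runtime code:

      ```cpp
      // Minimal sketch: the iteration takes the loader lock (as
      // dl_iterate_phdr does) WITHOUT the allocator lock held, so a callback
      // that enters the allocator cannot invert the lock order and deadlock.
      #include <mutex>

      static std::mutex loader_mutex;     // stands in for the loader's lock
      static std::mutex allocator_mutex;  // stands in for the allocator lock
      static bool callback_ran = false;

      void iterate_modules(void (*cb)()) {
        std::lock_guard<std::mutex> loader(loader_mutex);  // dl_iterate_phdr
        cb();  // user callback runs under the loader lock only
      }

      void callback_that_allocates() {
        std::lock_guard<std::mutex> alloc(allocator_mutex);  // malloc path
        callback_ran = true;
      }
      ```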
      
      Reviewers: vitalybuka, hctim
      
      Subscribers: jfb, #sanitizers, llvm-commits
      
      Tags: #sanitizers, #llvm
      
      Differential Revision: https://reviews.llvm.org/D67738
      
      llvm-svn: 372348
  11. Sep 18, 2019
  12. Sep 17, 2019
  13. Sep 16, 2019
  14. Sep 12, 2019
  15. Sep 11, 2019
    • Vitaly Buka
    • [scudo][standalone] Android related improvements · 161cca26
      Kostya Kortchinsky authored
      Summary:
      This changes a few things to improve the memory footprint and performance
      on Android, and fixes a test compilation error:
      - add `stdlib.h` to `wrappers_c_test.cc` to address
        https://bugs.llvm.org/show_bug.cgi?id=42810
      - change the Android size class maps, based on benchmarks, to improve
        performance and lower the Svelte memory footprint. Also change the
        32-bit region size for said configuration
      - change the `reallocate` logic to reallocate in place for sizes larger
        than the original chunk size, when they still fit in the same block.
        This addresses patterns from `memory_replay` dumps like the following:
      ```
      202: realloc 0xb48fd000 0xb4930650 12352
      202: realloc 0xb48fd000 0xb48fd000 12420
      202: realloc 0xb48fd000 0xb48fd000 12492
      202: realloc 0xb48fd000 0xb48fd000 12564
      202: realloc 0xb48fd000 0xb48fd000 12636
      202: realloc 0xb48fd000 0xb48fd000 12708
      202: realloc 0xb48fd000 0xb48fd000 12780
      202: realloc 0xb48fd000 0xb48fd000 12852
      202: realloc 0xb48fd000 0xb48fd000 12924
      202: realloc 0xb48fd000 0xb48fd000 12996
      202: realloc 0xb48fd000 0xb48fd000 13068
      202: realloc 0xb48fd000 0xb48fd000 13140
      202: realloc 0xb48fd000 0xb48fd000 13212
      202: realloc 0xb48fd000 0xb48fd000 13284
      202: realloc 0xb48fd000 0xb48fd000 13356
      202: realloc 0xb48fd000 0xb48fd000 13428
      202: realloc 0xb48fd000 0xb48fd000 13500
      202: realloc 0xb48fd000 0xb48fd000 13572
      202: realloc 0xb48fd000 0xb48fd000 13644
      202: realloc 0xb48fd000 0xb48fd000 13716
      202: realloc 0xb48fd000 0xb48fd000 13788
      ...
      ```
        In this situation we were deallocating the old chunk and allocating a
        new one for every single call, but now we can keep the same chunk (we
        just update the header), which saves some heap operations.
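
      The decision rule can be sketched as follows. This is an illustration of
      the described behavior, not Scudo's code; `Chunk` and its fields are
      hypothetical:

      ```cpp
      // Illustrative sketch of the in-place reallocate rule: when the
      // requested size still fits in the block backing the chunk, only the
      // size recorded in the header changes; the pointer stays the same.
      #include <cstddef>

      struct Chunk {
        size_t block_size;  // capacity of the backing block
        size_t used_size;   // size recorded in the chunk header
      };

      // Returns true if the resize was absorbed in place.
      bool reallocate_in_place(Chunk &c, size_t new_size) {
        if (new_size <= c.block_size) {
          c.used_size = new_size;  // header update only; no copy, no free
          return true;
        }
        return false;              // caller must allocate + copy + deallocate
      }
      ```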
      
      Reviewers: hctim, morehouse, vitalybuka, eugenis, cferris, rengolin
      
      Reviewed By: morehouse
      
      Subscribers: srhines, delcypher, #sanitizers, llvm-commits
      
      Tags: #llvm, #sanitizers
      
      Differential Revision: https://reviews.llvm.org/D67293
      
      llvm-svn: 371628
    • [libFuzzer] Make -merge=1 reuse coverage information from the control file. · f054067f
      Max Moroz authored
      Summary:
      This change allows corpus merging to be performed in two steps. This is
      useful when the user wants to address the following two points simultaneously:
      
      1) Get trustworthy incremental stats for the coverage and corpus size changes
          when adding new corpus units.
      2) Make sure the shorter units will be preferred when two or more units give the
          same unique signal (equivalent to the `REDUCE` logic).
      
      This solution was brainstormed together with @kcc, hopefully it looks good to
      the other people too. The proposed use case scenario:
      
      1) We have a `fuzz_target` binary and `existing_corpus` directory.
      2) We do fuzzing and write new units into the `new_corpus` directory.
      3) We want to merge the new corpus into the existing corpus and satisfy the
          points mentioned above.
      4) We create an empty directory `merged_corpus` and run the first merge step:
      
          `
          ./fuzz_target -merge=1 -merge_control_file=MCF ./merged_corpus ./existing_corpus
          `
      
          this provides the initial stats for `existing_corpus`, e.g. from the output:
      
          `
          MERGE-OUTER: 3 new files with 11 new features added; 11 new coverage edges
          `
      
      5) We recreate `merged_corpus` directory and run the second merge step:
      
          `
          ./fuzz_target -merge=1 -merge_control_file=MCF ./merged_corpus ./existing_corpus ./new_corpus
          `
      
          this provides the final stats for the merged corpus, e.g. from the output:
      
          `
          MERGE-OUTER: 6 new files with 14 new features added; 14 new coverage edges
          `
      
      Alternative solutions to this approach are:
      
      A) Store precise coverage information for every unit (not only unique signal).
      B) Execute the same two steps without reusing the control file.
      
      Either of these would be suboptimal as it would impose an extra disk or CPU load
      respectively, which is bad given the quadratic complexity in the worst case.
      
      Tested on Linux, Mac, Windows.
      
      Reviewers: morehouse, metzman, hctim, kcc
      
      Reviewed By: morehouse
      
      Subscribers: JDevlieghere, delcypher, mgrang, #sanitizers, llvm-commits, kcc
      
      Tags: #llvm, #sanitizers
      
      Differential Revision: https://reviews.llvm.org/D66107
      
      llvm-svn: 371620
    • Revert "clang-misexpect: Profile Guided Validation of Performance Annotations in LLVM" · 57256af3
      Dmitri Gribenko authored
      This reverts commit r371584. It introduced a dependency from compiler-rt
      to llvm/include/ADT, which is problematic for multiple reasons.
      
      One is that it is a novel dependency edge, which needs cross-compilation
      machinery for llvm/include/ADT (yes, it is true that right now
      compiler-rt includes only header-only libraries; however, if we allow
      compiler-rt to depend on anything from ADT, other libraries will
      eventually get used).
      
      Secondly, depending on ADT from compiler-rt exposes ADT symbols from
      compiler-rt, which would cause ODR violations when Clang is built with
      the profile library.
      
      llvm-svn: 371598
    • clang-misexpect: Profile Guided Validation of Performance Annotations in LLVM · 394a8ed8
      Petr Hosek authored
      This patch contains the basic functionality for reporting potentially
      incorrect usage of __builtin_expect() by comparing the developer's
      annotation against a collected PGO profile. A more detailed proposal and
      discussion appears on the CFE-dev mailing list
      (http://lists.llvm.org/pipermail/cfe-dev/2019-July/062971.html), and a
      prototype of the initial frontend changes appears in D65300.
      
      We revised the work in D65300 by moving the misexpect check into the
      LLVM backend, and adding support for IR and sampling based profiles, in
      addition to frontend instrumentation.
      
      We add new misexpect metadata tags to those instructions directly
      influenced by the llvm.expect intrinsic (branch, switch, and select)
      when lowering the intrinsics. The misexpect metadata contains
      information about the expected target of the intrinsic so that we can
      check against the correct PGO counter when emitting diagnostics, and the
      compiler's values for the LikelyBranchWeight and UnlikelyBranchWeight.
      We use these branch weight values to determine when to emit the
      diagnostic to the user.
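
      A hedged example of the kind of annotation being validated: the
      developer marks the error path unlikely, and a PGO profile showing the
      branch is in fact taken most of the time would trigger the diagnostic.
      The function itself is illustrative, not from the patch:

      ```cpp
      // The developer's claim: x < 0 is the rare case. misexpect compares
      // this claim against the profiled branch counts.
      int clamp_positive(int x) {
        if (__builtin_expect(x < 0, 0))  // annotated as the unlikely path
          return 0;
        return x;
      }
      ```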
      
      A future patch should address the comment at the top of
      LowerExpectIntrinsic.cpp to hoist the LikelyBranchWeight and
      UnlikelyBranchWeight values into a shared space that can be accessed
      outside of the LowerExpectIntrinsic pass. Once that is done, the
      misexpect metadata can be updated to be smaller.
      
      In the long term, it is possible to reconstruct portions of the
      misexpect metadata from the existing profile data. However, we have
      avoided this to keep the code simple, and because some kind of metadata
      tag will be required to identify which branch/switch/select instructions
      are influenced by the use of llvm.expect.
      
      Patch By: paulkirth
      Differential Revision: https://reviews.llvm.org/D66324
      
      llvm-svn: 371584
  16. Sep 09, 2019
  17. Sep 08, 2019
  18. Sep 05, 2019