  1. Jul 16, 2019
    • Fix parameter name comments using clang-tidy. NFC. · 49a3ad21
      Rui Ueyama authored
      This patch applies clang-tidy's bugprone-argument-comment check
      to the LLVM, clang and lld source trees. Here is how I created
      this patch:
      
      $ git clone https://github.com/llvm/llvm-project.git
      $ cd llvm-project
      $ mkdir build
      $ cd build
      $ cmake -GNinja -DCMAKE_BUILD_TYPE=Debug \
          -DLLVM_ENABLE_PROJECTS='clang;lld;clang-tools-extra' \
          -DCMAKE_EXPORT_COMPILE_COMMANDS=On -DLLVM_ENABLE_LLD=On \
          -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ../llvm
      $ ninja
      $ parallel clang-tidy -checks='-*,bugprone-argument-comment' \
          -config='{CheckOptions: [{key: StrictMode, value: 1}]}' -fix \
          ::: ../llvm/lib/**/*.{cpp,h} ../clang/lib/**/*.{cpp,h} ../lld/**/*.{cpp,h}
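
      For illustration, this is the kind of mismatched parameter-name
      comment the check rewrites (a hypothetical example, not code from
      this patch):

        void resize(int width, int height);

        void caller(void) {
          /* Flagged: the comments name the wrong parameters
             (StrictMode also catches abbreviated names). */
          resize(/*w=*/640, /*h=*/480);
          /* After clang-tidy -fix: */
          resize(/*width=*/640, /*height=*/480);
        }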
      
      llvm-svn: 366177
  2. Jul 12, 2019
    • [SystemZ] Add support for new cpu architecture - arch13 · b98bf60e
      Ulrich Weigand authored
      This patch series adds support for the next-generation arch13
      CPU architecture to the SystemZ backend.
      
      This includes:
      - Basic support for the new processor and its features.
      - Support for low-level builtins mapped to new LLVM intrinsics.
      - New high-level intrinsics in vecintrin.h.
      - Indicate support by defining __VEC__ == 10303.
      
      Note: No currently available Z system supports the arch13
      architecture.  Once new systems become available, the
      official system name will be added as a supported -march name.
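
      As a sketch of how code might test for the new level (a
      hypothetical example keying off the macro value above):

        #include <stdio.h>

        int main(void) {
        #if defined(__VEC__) && __VEC__ >= 10303
          /* arch13: the new vecintrin.h intrinsics may be used. */
          printf("arch13 vector support available\n");
        #else
          printf("arch13 vector support not available\n");
        #endif
          return 0;
        }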
      
      llvm-svn: 365933
  3. Jul 09, 2019
    • [BPF] Preserve debuginfo array/union/struct type/access index · 048493f8
      Yonghong Song authored
      For background on the BPF CO-RE project, please refer to
        http://vger.kernel.org/bpfconf2019.html

      In summary, BPF CO-RE intends to compile BPF programs that are
      adjustable to struct/union layout changes, so the same program
      can run on multiple kernels, with adjustment before loading
      based on the native kernel's structures.
      
      In order to do this, we need to keep track of the debuginfo types
      of each GEP (getelementptr) instruction's base and result, so we
      can adjust the access on the host based on kernel BTF info.
      Capturing such information as an IR optimization is hard, as
      various optimizations may have tweaked the GEPs; also, once a
      union is replaced by a structure, it is impossible to track the
      field index for union member accesses.
      
      Three intrinsic functions, preserve_{array,union,struct}_access_index,
      are introduced:
        addr = preserve_array_access_index(base, index, dimension)
        addr = preserve_union_access_index(base, di_index)
        addr = preserve_struct_access_index(base, gep_index, di_index)
      here,
        base: the base pointer for the array/union/struct access.
        index: the last access index for array, the same for IR/DebugInfo layout.
        dimension: the array dimension.
        gep_index: the access index based on IR layout.
        di_index: the access index based on user/debuginfo types.
      
      If these intrinsics are used blindly, i.e., all GEPs are
      transformed into these intrinsics and later reduced back to GEPs,
      we have seen up to 7% more instructions generated. To avoid such
      an overhead, a clang builtin is proposed:
        base = __builtin_preserve_access_index(base)
      such that the user wraps the to-be-relocated GEPs in this builtin,
      and the preserve_*_access_index intrinsics apply only to those
      GEPs. Such a builtin prevents performance degradation if people
      do not use CO-RE, even for programs which use bpf_probe_read().
      
      For example, given the following program:
        $ cat test.c
        struct sk_buff {
           int i;
           int b1:1;
           int b2:2;
           union {
             struct {
               int o1;
               int o2;
             } o;
             struct {
               char flags;
               char dev_id;
             } dev;
             int netid;
           } u[10];
        };
      
        static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
            = (void *) 4;
      
        #define _(x) (__builtin_preserve_access_index(x))
      
        int bpf_prog(struct sk_buff *ctx) {
          char dev_id;
          bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
          return dev_id;
        }
        $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
          test.c >& log
      
      The generated IR looks like the following:
        ...
        define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
          %2 = alloca %struct.sk_buff*, align 8
          %3 = alloca i8, align 1
          store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
          call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
          call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
          call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
          %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
          %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
          %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
               %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
          %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
               [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
          %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
               %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
          %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
          %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
               %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
          %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
          %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
          %13 = sext i8 %12 to i32, !dbg !54
          call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
          ret i32 %13, !dbg !57
        }
      
        !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
        !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
        !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)
      
      Note that the @llvm.preserve.{struct,union}.access.index calls have
      llvm.preserve.access.index metadata attached to provide struct/union
      debuginfo type information.
      
      For &ctx->u[5].dev.dev_id,
        . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
        . The "%7 = ..." represents array subscript "5".
        . The "%8 = ..." represents union member "dev" with index 1 for DI layout.
        . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.
      
      Basically, by recursively traversing the use-def chain of the 3rd
      argument of bpf_probe_read() and examining all
      preserve_*_access_index calls, the debuginfo struct/union/array
      access indices can be recovered.
      
      The intrinsics also contain enough information to regenerate code
      for the IR layout. For the array and struct intrinsics, the proper
      GEP can be constructed. For the union intrinsic, replacing all uses
      of "addr" with "base" should be enough.
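
      In user-level terms (an illustrative sketch reusing the sk_buff
      definition above, not code from this patch), the wrapped and
      unwrapped accesses compute the same address; the wrapped form just
      records the debuginfo indices for relocation:

        #define _(x) (__builtin_preserve_access_index(x))

        /* Both return the same address for a given ctx; the second
           additionally emits preserve_*_access_index calls so the
           offset can be relocated against kernel BTF. */
        static const void *addr_plain(struct sk_buff *ctx) {
          return &ctx->u[5].dev.dev_id;     /* ordinary GEP chain */
        }
        static const void *addr_reloc(struct sk_buff *ctx) {
          return _(&ctx->u[5].dev.dev_id);  /* relocatable access */
        }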
      
      Signed-off-by: Yonghong Song <yhs@fb.com>
      
      Differential Revision: https://reviews.llvm.org/D61809
      
      llvm-svn: 365438
    • Revert "[BPF] Preserve debuginfo array/union/struct type/access index" · e085b40e
      Yonghong Song authored
      This reverts commit r365435.
      
      Forgot to add the Differential Revision link. Will add it to the
      commit message and resubmit.
      
      llvm-svn: 365436
    • [BPF] Preserve debuginfo array/union/struct type/access index · f21eeafc
      Yonghong Song authored
      For background on the BPF CO-RE project, please refer to
        http://vger.kernel.org/bpfconf2019.html

      In summary, BPF CO-RE intends to compile BPF programs that are
      adjustable to struct/union layout changes, so the same program
      can run on multiple kernels, with adjustment before loading
      based on the native kernel's structures.
      
      In order to do this, we need to keep track of the debuginfo types
      of each GEP (getelementptr) instruction's base and result, so we
      can adjust the access on the host based on kernel BTF info.
      Capturing such information as an IR optimization is hard, as
      various optimizations may have tweaked the GEPs; also, once a
      union is replaced by a structure, it is impossible to track the
      field index for union member accesses.
      
      Three intrinsic functions, preserve_{array,union,struct}_access_index,
      are introduced:
        addr = preserve_array_access_index(base, index, dimension)
        addr = preserve_union_access_index(base, di_index)
        addr = preserve_struct_access_index(base, gep_index, di_index)
      here,
        base: the base pointer for the array/union/struct access.
        index: the last access index for array, the same for IR/DebugInfo layout.
        dimension: the array dimension.
        gep_index: the access index based on IR layout.
        di_index: the access index based on user/debuginfo types.
      
      If these intrinsics are used blindly, i.e., all GEPs are
      transformed into these intrinsics and later reduced back to GEPs,
      we have seen up to 7% more instructions generated. To avoid such
      an overhead, a clang builtin is proposed:
        base = __builtin_preserve_access_index(base)
      such that the user wraps the to-be-relocated GEPs in this builtin,
      and the preserve_*_access_index intrinsics apply only to those
      GEPs. Such a builtin prevents performance degradation if people
      do not use CO-RE, even for programs which use bpf_probe_read().
      
      For example, given the following program:
        $ cat test.c
        struct sk_buff {
           int i;
           int b1:1;
           int b2:2;
           union {
             struct {
               int o1;
               int o2;
             } o;
             struct {
               char flags;
               char dev_id;
             } dev;
             int netid;
           } u[10];
        };
      
        static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
            = (void *) 4;
      
        #define _(x) (__builtin_preserve_access_index(x))
      
        int bpf_prog(struct sk_buff *ctx) {
          char dev_id;
          bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
          return dev_id;
        }
        $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
          test.c >& log
      
      The generated IR looks like the following:
        ...
        define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
          %2 = alloca %struct.sk_buff*, align 8
          %3 = alloca i8, align 1
          store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
          call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
          call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
          call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
          %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
          %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
          %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
               %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
          %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
               [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
          %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
               %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
          %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
          %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
               %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
          %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
          %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
          %13 = sext i8 %12 to i32, !dbg !54
          call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
          ret i32 %13, !dbg !57
        }
      
        !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
        !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
        !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)
      
      Note that the @llvm.preserve.{struct,union}.access.index calls have
      llvm.preserve.access.index metadata attached to provide struct/union
      debuginfo type information.
      
      For &ctx->u[5].dev.dev_id,
        . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
        . The "%7 = ..." represents array subscript "5".
        . The "%8 = ..." represents union member "dev" with index 1 for DI layout.
        . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.
      
      Basically, by recursively traversing the use-def chain of the 3rd
      argument of bpf_probe_read() and examining all
      preserve_*_access_index calls, the debuginfo struct/union/array
      access indices can be recovered.
      
      The intrinsics also contain enough information to regenerate code
      for the IR layout. For the array and struct intrinsics, the proper
      GEP can be constructed. For the union intrinsic, replacing all uses
      of "addr" with "base" should be enough.
      
      Signed-off-by: Yonghong Song <yhs@fb.com>
      llvm-svn: 365435
    • [ObjC] Add a -Wtautological-compare warning for BOOL · fa591c37
      Erik Pilkington authored
      On macOS, BOOL is a typedef for signed char, but it should never hold a value
      that isn't 1 or 0. Any code that expects a different value in its BOOL should
      be fixed.
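
      A sketch of the kind of comparison this warns on (a hypothetical
      example, with BOOL spelled out as its underlying typedef):

        typedef signed char BOOL;  /* as on macOS */

        int check(BOOL b) {
          /* Flagged: b should only ever hold 0 or 1, so comparing it
             against 2 is tautologically false. */
          return b == 2;
        }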
      
      rdar://51954400
      
      Differential revision: https://reviews.llvm.org/D63856
      
      llvm-svn: 365408
  4. Jun 15, 2019
    • [clang] perform semantic checking in constant context · 0bb4d46b
      Gauthier Harnisch authored
      Summary:
      Since the addition of __builtin_is_constant_evaluated, the result of an expression can change based on whether it is evaluated in a constant context. A lot of semantic checking performs evaluations without specifying the context, which can lead to wrong diagnostics. For example:
      ```
      constexpr int i0 = (long long)__builtin_is_constant_evaluated() * (1ll << 33); //#1
      constexpr int i1 = (long long)!__builtin_is_constant_evaluated() * (1ll << 33); //#2
      ```
      Before the patch, #2 was diagnosed incorrectly and #1 wasn't diagnosed. After the patch, #1 is diagnosed as it should be and #2 isn't.
      
      Changes:
       - add a flag to Sema to pass in constant-context mode.
       - in SemaChecking.cpp, calls to Expr::Evaluate* are now done in a constant context when they should be.
       - in SemaChecking.cpp, diagnostics for UB are not checked for in a constant context, because an error will be emitted by the constant evaluator.
       - in SemaChecking.cpp, diagnostics for constructs that cannot appear in a constant context are not checked for in a constant context.
       - in SemaChecking.cpp, diagnostics on constant expressions are always emitted, because constant expressions are always evaluated.
       - semantic checking for initialization of constexpr variables is now done in a constant context.
       - adapt tests that depended on warning changes.
       - add tests.
      
      Reviewers: rsmith
      
      Reviewed By: rsmith
      
      Subscribers: cfe-commits
      
      Tags: #clang
      
      Differential Revision: https://reviews.llvm.org/D62009
      
      llvm-svn: 363488
    • [X86] Add checks that immediate for reducesd/ss fits in 8-bits. · 9967a6c6
      Craig Topper authored
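      A sketch of the effect (a hypothetical example; requires an
      AVX512DQ target):

        #include <immintrin.h>

        __m128d reduce_it(__m128d a, __m128d b) {
          /* OK: the immediate fits in 8 bits. A call such as
             _mm_reduce_sd(a, b, 256) is now rejected in Sema instead
             of surfacing as a backend error. */
          return _mm_reduce_sd(a, b, 1);
        }
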
      llvm-svn: 363472
  5. Apr 27, 2019
    • Reinstate r359059, reverted in r359361, with a fix to properly prevent · 31cfb311
      Richard Smith authored
      us from emitting the operand of __builtin_constant_p if it has side-effects.
      
      Original commit message:
      
      Fix interactions between __builtin_constant_p and constexpr to match
      current trunk GCC.
      
      GCC permits information from outside the operand of
      __builtin_constant_p (but in the same constant evaluation context) to be
      used within that operand; clang now does so too. A few other minor
      deviations from GCC's behavior showed up in my testing and are also
      fixed (matching GCC):
       * Clang now supports nullptr_t as the argument type for
         __builtin_constant_p
       * Clang now returns true from __builtin_constant_p if called with a
         null pointer
       * Clang now returns true from __builtin_constant_p if called with an
         integer cast to pointer type
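
      A sketch of the last two behaviors (hypothetical examples, written
      in C):

        int examples(void) {
          /* Both now evaluate to true, matching GCC: */
          int null_ok = __builtin_constant_p((void *)0);  /* 1 */
          int cast_ok = __builtin_constant_p((int *)42);  /* 1 */
          return null_ok && cast_ok;
        }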
      
      llvm-svn: 359367
    • Revert Fix interactions between __builtin_constant_p and constexpr to match current trunk GCC. · 1dbd42ab
      Jorge Gorbe Moya authored
      This reverts r359059 (git commit 0b098754)
      
      llvm-svn: 359361
  6. Apr 24, 2019
    • Fix interactions between __builtin_constant_p and constexpr to match · 0b098754
      Richard Smith authored
      current trunk GCC.
      
      GCC permits information from outside the operand of
      __builtin_constant_p (but in the same constant evaluation context) to be
      used within that operand; clang now does so too. A few other minor
      deviations from GCC's behavior showed up in my testing and are also
      fixed (matching GCC):
       * Clang now supports nullptr_t as the argument type for
         __builtin_constant_p
       * Clang now returns true from __builtin_constant_p if called with a
         null pointer
       * Clang now returns true from __builtin_constant_p if called with an
         integer cast to pointer type
      
      llvm-svn: 359059
  7. Apr 08, 2019
    • [X86] Add some fp to integer conversion intrinsics to... · 1b62c758
      Craig Topper authored
      [X86] Add some fp to integer conversion intrinsics to Sema::CheckX86BuiltinRoundingOrSAE so their rounding controls will be checked.
      
      If we don't check this in the frontend, we'll get an isel error in the backend later, which is far less friendly to users.
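
      A sketch of a call that is now validated in Sema (a hypothetical
      example; requires an AVX512F target):

        #include <immintrin.h>

        int to_int(__m128 a) {
          /* Valid rounding/SAE immediate; an invalid immediate is now
             diagnosed in the frontend instead of failing in isel. */
          return _mm_cvt_roundss_si32(a, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
        }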
      
      llvm-svn: 357924
  8. Mar 18, 2019
    • [Sema] Add some compile time _FORTIFY_SOURCE diagnostics · b6e16ea0
      Erik Pilkington authored
      These diagnose overflowing calls to a subset of the fortifiable
      functions. Some functions, like sprintf or strcpy, aren't supported
      right now, but we should probably support them in the future. We
      previously supported this kind of functionality with
      -Wbuiltin-memcpy-chk-size, but that diagnostic doesn't work with
      _FORTIFY implementations that use wrapper functions. Also, unlike
      that diagnostic, we emit these warnings regardless of whether
      _FORTIFY_SOURCE is actually enabled, which is nice for programs that
      don't enable the runtime checks.
      
      Why not just use diagnose_if, like Bionic does? We can get better
      diagnostics in the compiler (e.g., mention the sizes), and we have
      the potential to diagnose sprintf and strcpy, which is impossible
      with diagnose_if (at least, in languages that don't support C++14
      constexpr). This approach also saves standard libraries from having
      to add diagnose_if.
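
      A sketch of the kind of call these diagnostics catch at compile
      time (a hypothetical example):

        #include <string.h>

        void f(void) {
          char buf[4];
          /* Warned: copying 8 bytes into a 4-byte buffer always
             overflows, whether or not _FORTIFY_SOURCE is enabled. */
          memcpy(buf, "toolong", 8);
        }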
      
      rdar://48006655
      
      Differential revision: https://reviews.llvm.org/D58797
      
      llvm-svn: 356397
  9. Jan 30, 2019
    • Add a new builtin: __builtin_dynamic_object_size · 9c3b588d
      Erik Pilkington authored
      This builtin has the same interface as __builtin_object_size, but has the
      potential to be evaluated dynamically. It is meant to be used as a
      drop-in replacement for libraries that use __builtin_object_size when
      a dynamic checking mode is enabled. For instance,
      __builtin_object_size fails to provide any extra checking in the
      following function:
      
        void f(size_t alloc) {
          char* p = malloc(alloc);
          strcpy(p, "foobar"); // expands to __builtin___strcpy_chk(p, "foobar", __builtin_object_size(p, 0))
        }
      
      This is an overflow if alloc < 7, but because LLVM can't fold the
      object size intrinsic statically, it folds __builtin_object_size to
      -1. With __builtin_dynamic_object_size, alloc is passed through to
      __builtin___strcpy_chk.
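
      A sketch of using the new builtin directly (a hypothetical
      example):

        #include <stdlib.h>

        size_t object_size(size_t alloc) {
          char *p = malloc(alloc);
          /* __builtin_object_size(p, 0) folds to -1 here, but the
             dynamic variant can evaluate to 'alloc' at run time. */
          size_t n = __builtin_dynamic_object_size(p, 0);
          free(p);
          return n;
        }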
      
      rdar://32212419
      
      Differential revision: https://reviews.llvm.org/D56760
      
      llvm-svn: 352665
  10. Jan 29, 2019
    • OpenCL: Use length modifier for warning on vector printf arguments · 58fc8082
      Matt Arsenault authored
      Re-enable format string warnings on printf.
      
      The warnings are still incomplete. Apparently it is undefined to
      use a vector specifier without a length modifier, which is not
      currently warned on. Additionally, type warnings appear not to be
      working with the hh modifier, and we aren't warning on all of the
      special restrictions from C99 printf.
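
      A sketch of the rule in OpenCL C (a hypothetical example; the spec
      requires a length modifier whenever a vector specifier is used,
      e.g. hl for int vectors):

        kernel void k(void) {
          int4 v = (int4)(1, 2, 3, 4);
          printf("%v4hld\n", v);  /* OK: vector specifier plus length modifier */
          printf("%v4d\n", v);    /* undefined: vector specifier without one   */
        }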
      
      llvm-svn: 352540