  1. Sep 27, 2016
      Output optimization remarks in YAML · a62b7e1a
      Adam Nemet authored
      (Re-committed after moving the template specialization under the yaml
      namespace.  GCC was complaining about this.)
      
      This allows various presentations of this data using an external tool.
      This was first recommended here[1].
      
      As an example, consider this module:
      
        1 int foo();
        2 int bar();
        3
        4 int baz() {
        5   return foo() + bar();
        6 }
      
      The inliner generates these missed-optimization remarks today (the
      hotness information is pulled from PGO):
      
        remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
        remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)
      
      Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:
      
        --- !Missed
        Pass:            inline
        Name:            NotInlined
        DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 10 }
        Function:        baz
        Hotness:         30
        Args:
          - Callee: foo
          - String:  will not be inlined into
          - Caller: baz
        ...
        --- !Missed
        Pass:            inline
        Name:            NotInlined
        DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 18 }
        Function:        baz
        Hotness:         30
        Args:
          - Callee: bar
          - String:  will not be inlined into
          - Caller: baz
        ...
      
      This is a summary of the high-level decisions:
      
      * There is a new streaming interface to emit optimization remarks.
      E.g. for the inliner remark above:
      
         ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
                      DEBUG_TYPE, "NotInlined", &I)
                  << NV("Callee", Callee) << " will not be inlined into "
                  << NV("Caller", CS.getCaller()) << setIsVerbose());
      
      NV stands for named value and allows the YAML client to process a remark
      using its name (NotInlined) and the named arguments (Callee and Caller)
      without parsing the text of the message.
      
      Subsequent patches will update ORE users to use the new streaming API.
      
      * I am using YAML I/O for writing the YAML file.  YAML I/O requires you
      to specify reading and writing at once, but reading is highly non-trivial
      for some of the more complex LLVM types.  Since it's not clear that we
      (ever) want to use LLVM to parse this YAML file, the code supports
      writing only and asserts that it is never used for reading.
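
      As a concrete illustration of the YAML I/O mechanism, here is a minimal
      sketch (RemarkRecord and emitRemark are hypothetical names invented for
      this example; the patch itself maps the DiagnosticInfoOptimizationBase
      hierarchy rather than a plain struct).  Note the MappingTraits
      specialization living inside the llvm::yaml namespace, per the GCC issue
      mentioned in the re-commit note:

        #include "llvm/Support/YAMLTraits.h"
        #include "llvm/Support/raw_ostream.h"
        #include <string>

        // Hypothetical record type standing in for one remark document.
        struct RemarkRecord {
          std::string Pass;
          std::string Name;
          std::string Function;
          unsigned Hotness;
        };

        namespace llvm {
        namespace yaml {
        // The specialization must be inside llvm::yaml, not at global scope.
        template <> struct MappingTraits<RemarkRecord> {
          static void mapping(IO &io, RemarkRecord &R) {
            io.mapRequired("Pass", R.Pass);
            io.mapRequired("Name", R.Name);
            io.mapRequired("Function", R.Function);
            io.mapOptional("Hotness", R.Hotness);
          }
        };
        } // end namespace yaml
        } // end namespace llvm

        // Writes one "--- ... ..." YAML document to the given stream.
        void emitRemark(llvm::raw_ostream &OS, RemarkRecord &R) {
          llvm::yaml::Output Out(OS);
          Out << R;
        }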
      
      On the other hand, I did verify experimentally that the class hierarchy
      starting at DiagnosticInfoOptimizationBase can be mapped back from the
      YAML generated here (see D24479).
      
      * The YAML stream is stored in the LLVM context.
      
      * In the example, we could probably be more specific about the IR value
      used, i.e. print "Function" rather than "Value".
      
      * As before, hotness is computed in the analysis pass instead of
      DiagnosticInfo.  This avoids the layering problem since BFI is in
      Analysis while DiagnosticInfo is in IR.
      
      [1] https://reviews.llvm.org/D19678#419445
      
      Differential Revision: https://reviews.llvm.org/D24587
      
      llvm-svn: 282539
      Revert "Output optimization remarks in YAML" · cc2a3fa8
      Adam Nemet authored
      This reverts commit r282499.
      
      The GCC bots are failing.
      
      llvm-svn: 282503
      Output optimization remarks in YAML · 92e928c1
      Adam Nemet authored
      This allows various presentations of this data using an external tool.
      This was first recommended here[1].
      
      As an example, consider this module:
      
        1 int foo();
        2 int bar();
        3
        4 int baz() {
        5   return foo() + bar();
        6 }
      
      The inliner generates these missed-optimization remarks today (the
      hotness information is pulled from PGO):
      
        remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
        remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)
      
      Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:
      
        --- !Missed
        Pass:            inline
        Name:            NotInlined
        DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 10 }
        Function:        baz
        Hotness:         30
        Args:
          - Callee: foo
          - String:  will not be inlined into
          - Caller: baz
        ...
        --- !Missed
        Pass:            inline
        Name:            NotInlined
        DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 18 }
        Function:        baz
        Hotness:         30
        Args:
          - Callee: bar
          - String:  will not be inlined into
          - Caller: baz
        ...
      
      This is a summary of the high-level decisions:
      
      * There is a new streaming interface to emit optimization remarks.
      E.g. for the inliner remark above:
      
         ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
                      DEBUG_TYPE, "NotInlined", &I)
                  << NV("Callee", Callee) << " will not be inlined into "
                  << NV("Caller", CS.getCaller()) << setIsVerbose());
      
      NV stands for named value and allows the YAML client to process a remark
      using its name (NotInlined) and the named arguments (Callee and Caller)
      without parsing the text of the message.
      
      Subsequent patches will update ORE users to use the new streaming API.
      
      * I am using YAML I/O for writing the YAML file.  YAML I/O requires you
      to specify reading and writing at once, but reading is highly non-trivial
      for some of the more complex LLVM types.  Since it's not clear that we
      (ever) want to use LLVM to parse this YAML file, the code supports
      writing only and asserts that it is never used for reading.
      
      On the other hand, I did verify experimentally that the class hierarchy
      starting at DiagnosticInfoOptimizationBase can be mapped back from the
      YAML generated here (see D24479).
      
      * The YAML stream is stored in the LLVM context.
      
      * In the example, we could probably be more specific about the IR value
      used, i.e. print "Function" rather than "Value".
      
      * As before, hotness is computed in the analysis pass instead of
      DiagnosticInfo.  This avoids the layering problem since BFI is in
      Analysis while DiagnosticInfo is in IR.
      
      [1] https://reviews.llvm.org/D19678#419445
      
      Differential Revision: https://reviews.llvm.org/D24587
      
      llvm-svn: 282499
  2. Aug 26, 2016
      [Inliner] Report when inlining fails because callee's def is unavailable · cef33141
      Adam Nemet authored
      Summary:
      This is obviously an interesting case because it may motivate code
      restructuring or LTO.
      
      Reporting this requires instantiating ORE in the loop where the call
      sites are first gathered.  I've checked the compile-time
      overhead *with* -Rpass-with-hotness: the worst slow-down was 6% in
      mcf, quickly tailing off after that.  As before, there is no overhead
      without -Rpass-with-hotness.
      
      Because this could be a pretty noisy diagnostic, it is currently
      qualified as 'verbose'.  As of this patch, 'verbose' diagnostics are
      only emitted with -Rpass-with-hotness, i.e. when the output is expected
      to be filtered.
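
      As an illustration only: at the time of this commit ORE still used its
      older emission methods, but the idea can be recast in the streaming
      style shown under r282539 above.  The remark name "NoDefinition" and
      the message text below are assumptions, not the patch's exact strings:

        // Sketch only: the remark is qualified as 'verbose', so it is
        // dropped unless -Rpass-with-hotness is in effect.
        if (Callee->isDeclaration())
          ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
                       DEBUG_TYPE, "NoDefinition", &I)
                   << NV("Callee", Callee)
                   << " will not be inlined into "
                   << NV("Caller", CS.getCaller())
                   << " because its definition is unavailable"
                   << setIsVerbose());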
      
      Reviewers: eraman, chandlerc, davidxl, hfinkel
      
      Subscribers: tejohnson, Prazek, davide, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D23415
      
      llvm-svn: 279860
  3. Aug 17, 2016
      [Inliner] Add a flag to disable manual alloca merging in the Inliner. · f702d8ec
      Chandler Carruth authored
      This is off for now while testing can take place to make sure that in
      fact we do sufficient stack coloring to fully obviate the manual alloca
      array merging.
      
      Some context on why we should be using stack coloring rather than
      merging allocas in this way:
      
      LLVM relies very heavily on analyzing pointers as coming from different
      allocas in order to make aliasing decisions. These are some of the most
      powerful aliasing signals available in LLVM. So merging allocas is an
      extremely destructive operation on the LLVM IR -- it takes away highly
      valuable and hard to reconstruct information.
      
      As a consequence, inlined functions which happen to have array allocas
      that this pattern matches will fail to be properly interleaved unless
      SROA manages to hoist everything to an SSA register. Instead, the
      inliner will have added an unnecessary dependence that one inlined
      function execute after the other because they will have been rewritten
      to refer to the same memory.
      
      All that said, folks will reasonably want some time to experiment here
      and make sure there are no significant regressions. A flag should give
      us an easy knob to test.
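
      Such a knob is conventionally a cl::opt; a minimal sketch of what it
      might look like (the flag name and description here are assumptions,
      not a quote of the patch):

        #include "llvm/Support/CommandLine.h"

        // Hypothetical spelling of the escape hatch described above.
        static llvm::cl::opt<bool> DisableInlinedAllocaMerging(
            "disable-inlined-alloca-merging",
            llvm::cl::desc("Disable the manual merging of array allocas "
                           "during inlining and rely on stack coloring "
                           "instead"),
            llvm::cl::init(false), llvm::cl::Hidden);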
      
      For more context, see the thread here:
      http://lists.llvm.org/pipermail/llvm-dev/2016-July/103277.html
      http://lists.llvm.org/pipermail/llvm-dev/2016-August/103285.html
      
      Differential Revision: https://reviews.llvm.org/D23052
      
      llvm-svn: 278892
  4. Aug 10, 2016
      Changed sign of LastCallToStaticBonus · d89875ca
      Piotr Padlewski authored
      Summary:
      I think it is much better this way.
      When I first saw the line:
        Cost += InlineConstants::LastCallToStaticBonus;
      I thought it was a bug, because everywhere else the cost is
      reduced with -=.
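
      In other words (a paraphrase of the change, not a quote of the diff;
      the 15000 magnitude is this constant's historical value, shown for
      illustration):

        // Minimal before/after paraphrase of the sign flip.
        namespace InlineConstants {
          // const int LastCallToStaticBonus = -15000; // before: negative, added
          const int LastCallToStaticBonus = 15000;     // after: positive, subtracted
        }

        void applyBonus(int &Cost) {
          // Old: Cost += InlineConstants::LastCallToStaticBonus;
          Cost -= InlineConstants::LastCallToStaticBonus;
        }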
      
      Reviewers: eraman, tejohnson, mehdi_amini
      
      Subscribers: llvm-commits, mehdi_amini
      
      Differential Revision: https://reviews.llvm.org/D23222
      
      llvm-svn: 278290
      [Inliner,OptDiag] Add hotness attribute to opt diagnostics · 896c09bd
      Adam Nemet authored
      Summary:
      The inliner not being a function pass requires the work-around of
      generating the OptimizationRemarkEmitter and in turn BFI on demand.
      This will go away after the new PM is ready.
      
      BFI is only computed inside ORE if the user has requested hotness
      information for optimization diagnostics (-pass-remarks-with-hotness at
      the 'opt' level).  Thus there is no additional overhead without the
      flag.
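
      A minimal sketch of the on-demand pattern (hedged: the header and
      constructor are as of roughly this period and may differ from the
      patch's exact code):

        #include "llvm/Analysis/OptimizationDiagnosticInfo.h"
        #include "llvm/IR/Function.h"
        #include <vector>

        using namespace llvm;

        // The legacy inliner is a CGSCC pass, so it cannot get a
        // per-function ORE from the pass manager; instead it constructs
        // one per caller.  The Function* constructor computes BFI itself,
        // and only when hotness diagnostics were requested, so runs
        // without the flag pay nothing.
        void visitSCC(const std::vector<Function *> &SCCFunctions) {
          for (Function *Caller : SCCFunctions) {
            OptimizationRemarkEmitter ORE(Caller);
            // ... gather call sites in *Caller and report through ORE ...
          }
        }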
      
      Reviewers: hfinkel, davidxl, eraman
      
      Subscribers: llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D22694
      
      llvm-svn: 278185
  5. Jul 29, 2016
      Added ThinLTO inlining statistics · 84abc74f
      Piotr Padlewski authored
      Summary:
      (Doc comment copied from the ImportedFunctionsInliningStatistics class.)
      \brief Calculate and dump ThinLTO-specific inliner stats.
      The main statistics are:
      (1) Number of inlined imported functions,
      (2) Number of imported functions inlined into the importing module
      (indirect),
      (3) Number of non-imported functions inlined into the importing module
      (indirect).
      The difference between the first and the second is that the first stat
      counts all performed inlines of imported functions, while the second
      counts only the functions that have eventually been inlined into a
      function in the importing module (by a chain of inlines).  Because LLVM
      uses a bottom-up inliner, it is possible to e.g. import functions `A`
      and `B`, then inline `B` into `A`, after which `A` might be too big to
      be inlined into some other function that calls it.  The statistic is
      computed by building a graph whose nodes are functions and whose edges
      are performed inlines, and then marking the edges reachable from
      non-imported functions; a sketch of this walk follows below.

      If `Verbose` is set to true, it also dumps statistics for each inlined
      function, sorted by the greatest inline count:
      - number of performed inlines
      - number of performed inlines into the importing module
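
      A rough sketch of that graph walk (heavily simplified; the real
      implementation is the ImportedFunctionsInliningStatistics class, whose
      node type and traversal details differ):

        #include <string>
        #include <unordered_map>
        #include <vector>

        // One node per function; an edge A -> B records "B was inlined
        // into A".
        struct InlineGraphNode {
          std::vector<InlineGraphNode *> InlinedCallees;
          bool Imported = false;
          bool InlinedIntoImportingModule = false;
        };

        // Mark everything reachable along inline edges from a given node.
        static void markReachable(InlineGraphNode &Node) {
          for (InlineGraphNode *Callee : Node.InlinedCallees) {
            if (!Callee->InlinedIntoImportingModule) {
              Callee->InlinedIntoImportingModule = true;
              markReachable(*Callee);
            }
          }
        }

        // Stats (2) and (3) above: functions inlined (directly or via a
        // chain of inlines) into the importing module are exactly those
        // reachable from some non-imported function.
        void computeIndirectStats(
            std::unordered_map<std::string, InlineGraphNode> &Graph) {
          for (auto &Entry : Graph)
            if (!Entry.second.Imported)
              markReachable(Entry.second);
        }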
      
      Reviewers: eraman, tejohnson, mehdi_amini
      
      Subscribers: mehdi_amini, llvm-commits
      
      Differential Revision: https://reviews.llvm.org/D22491
      
      llvm-svn: 277089
  6. Apr 21, 2016
      Initial implementation of optimization bisect support. · f0f27929
      Andrew Kaylor authored
      This patch implements an optimization bisect feature, which will allow optimizations to be selectively disabled at compile time in order to track down test failures that are caused by incorrect optimizations.
      
      The bisection is enabled using a new command line option (-opt-bisect-limit).  Individual passes that may be skipped call the OptBisect object (via an LLVMContext) to see if they should be skipped based on the bisect limit.  A finer level of control (disabling individual transformations) can be managed through an additional OptBisect method, but this is not yet used.
      
      The skip checking in this implementation is based on (and replaces) the skipOptnoneFunction check.  Where that check was being called, a new call has been inserted in its place which checks the bisect limit and the optnone attribute.  A new function call has been added for module and SCC passes that behaves in a similar way.
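
      In practice the check sits at the top of each pass's run method; an
      illustrative sketch (the pass itself is hypothetical):

        #include "llvm/IR/Function.h"
        #include "llvm/Pass.h"

        using namespace llvm;

        namespace {
        struct MyTransform : public FunctionPass {
          static char ID;
          MyTransform() : FunctionPass(ID) {}

          bool runOnFunction(Function &F) override {
            // Replaces the old skipOptnoneFunction check: consults both
            // the optnone attribute and the -opt-bisect-limit counter held
            // by the OptBisect object reachable through the LLVMContext.
            if (skipFunction(F))
              return false;
            // ... perform the transformation ...
            return true;
          }
        };
        } // end anonymous namespace

        char MyTransform::ID = 0;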
      
      Differential Revision: http://reviews.llvm.org/D19172
      
      llvm-svn: 267022
  7. Apr 18, 2016
      [NFC] Header cleanup · b550cb17
      Mehdi Amini authored
      Removed some unused headers, replaced some headers with forward class declarations.
      
      Found using simple scripts like this one:
      clear && ack --cpp -l '#include "llvm/ADT/IndexedMap.h"' | xargs grep -L 'IndexedMap[<]' | xargs grep -n --color=auto 'IndexedMap'
      
      Patch by Eugene Kosov <claprix@yandex.ru>
      
      Differential Revision: http://reviews.llvm.org/D19219
      
      From: Mehdi Amini <mehdi.amini@apple.com>
      llvm-svn: 266595
  8. Mar 03, 2016
      Infrastructure for PGO enhancements in inliner · 3035719c
      Easwaran Raman authored
      This patch provides the following infrastructure for PGO enhancements
      in the inliner:

      - Enable the use of block-level profile information in the inliner
      - Incrementally update block frequency information during inlining
      - Update the function entry counts of callees when they get inlined
        into callers (see the sketch below)
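
      A hedged sketch of the third item, the entry-count update (the helper
      name is invented and the API reflects this era's Optional-returning
      getEntryCount; the real update is threaded through the inliner):

        #include "llvm/IR/Function.h"

        using namespace llvm;

        // When a call site that executed CallSiteCount times (per the
        // profile) is inlined, those executions no longer enter the
        // out-of-line callee, so its entry count shrinks accordingly.
        void updateCalleeEntryCount(Function &Callee, uint64_t CallSiteCount) {
          if (auto EntryCount = Callee.getEntryCount()) {
            uint64_t Remaining =
                *EntryCount > CallSiteCount ? *EntryCount - CallSiteCount : 0;
            Callee.setEntryCount(Remaining);
          }
        }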
      
      Differential Revision: http://reviews.llvm.org/D16381
      
      llvm-svn: 262636
  9. Mar 02, 2016
      [AA] Hoist the logic to reformulate various AA queries in terms of other · 12884f7f
      Chandler Carruth authored
      parts of the AA interface out of the base class of every single AA
      result object.
      
      Because this logic reformulates the query in terms of some other aspect
      of the API, it would easily cause O(n^2) query patterns in alias
      analysis. These could in turn be magnified further based on the number
      of call arguments, and then further based on the number of AA queries
      made for a particular call. This ended up causing problems for Rust that
      were actually noticeable enough to get a bug (PR26564) and probably other
      places as well.
      
      When originally re-working the AA infrastructure, the desire was to
      regularize the pattern of refinement without losing any generality.
      While I think it was successful, that is clearly proving to be too
      costly. And the cost is needless: we gain no actual improvement from the
      generality of letting a direct query to TBAA re-use some other alias
      analysis's refinement logic for one of the other APIs, or some such. In
      short, this is entirely wasted work.
      
      To the extent possible, delegation to other API surfaces should be done
      at the aggregation layer so that we can avoid re-walking the
      aggregation. In fact, this significantly simplifies the logic as we no
      longer need to smuggle the aggregation layer into each alias analysis
      (or the TargetLibraryInfo into each alias analysis just so we can form
      argument memory locations!).
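
      The shape of "delegation at the aggregation layer" is easiest to see in
      a simplified sketch (the types below are abridged stand-ins for LLVM's,
      for illustration only, not the patch's literal code):

        #include <vector>

        enum AliasResult { NoAlias, MayAlias, PartialAlias, MustAlias };
        struct MemoryLocation {};
        struct AAResultConcept {
          virtual ~AAResultConcept() = default;
          virtual AliasResult alias(const MemoryLocation &,
                                    const MemoryLocation &) = 0;
        };

        // The aggregation layer: each registered alias analysis gets one
        // shot at the query; the first definitive answer wins, otherwise
        // MayAlias.  No individual AA re-enters this chain, so a query
        // walks the results once instead of the O(n^2) pattern above.
        struct AAResults {
          std::vector<AAResultConcept *> AAs;
          AliasResult alias(const MemoryLocation &LocA,
                            const MemoryLocation &LocB) {
            for (AAResultConcept *AA : AAs) {
              AliasResult R = AA->alias(LocA, LocB);
              if (R != MayAlias)
                return R;
            }
            return MayAlias;
          }
        };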
      
      However, we also have some delegation logic inside of BasicAA and some
      of it even makes sense. When the delegation logic is baking in specific
      knowledge of aliasing properties of the LLVM IR, as opposed to simply
      reformulating the query to utilize a different alias analysis interface
      entry point, it makes a lot of sense to restrict that logic to
      a different layer such as BasicAA. So one aspect of the delegation that
      was in every AA base class is that when we don't have operand bundles,
      we re-use function AA results as a fallback for callsite alias results.
      This relies on the IR properties of calls and functions w.r.t. aliasing,
      and so seems a better fit to BasicAA. I've lifted the logic up to that
      point where it seems to be a natural fit. This still does a bit of
      redundant work (we query function attributes twice, once via the
      callsite and once via the function AA query) but it is *exactly* twice
      here, no more.
      
      The end result is that all of the delegation logic is hoisted out of the
      base class and into either the aggregation layer when it is a pure
      retargeting to a different API surface, or into BasicAA when it relies
      on the IR's aliasing properties. This should fix the quadratic query
      pattern reported in PR26564, although I don't have a stand-alone test
      case to reproduce it.
      
      It also seems like general goodness. Now the numerous AAs that don't need
      target library info don't carry it around and depend on it. I think
      I can even rip out the general access to the aggregation layer and only
      expose that in BasicAA as it is the only place where we re-query in that
      manner.
      
      However, this is a non-trivial change to the AA infrastructure so I want
      to get some additional eyes on this before it lands. Sadly, it can't
      wait long because we should really cherry pick this into 3.8 if we're
      going to go this route.
      
      Differential Revision: http://reviews.llvm.org/D17329
      
      llvm-svn: 262490
  10. Feb 09, 2016
      Add an "addUsedAAAnalyses" helper function · 1c481f50
      Sanjoy Das authored
      Summary:
      Passes that call `getAnalysisIfAvailable<T>` also need to call
      `addUsedIfAvailable<T>` in `getAnalysisUsage` to indicate to the
      legacy pass manager that it uses `T`.  This contract was being
      violated by passes that used `createLegacyPMAAResults`.  This change
      fixes this by exposing a helper in AliasAnalysis.h,
      `addUsedAAAnalyses`, that is complementary to createLegacyPMAAResults
      and does the right thing when called from `getAnalysisUsage`.
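
      A minimal sketch of the contract ("MyPass" is hypothetical): any legacy
      pass that builds its AA stack via createLegacyPMAAResults should pair
      it with the new helper in getAnalysisUsage.

        #include "llvm/Analysis/AliasAnalysis.h"
        #include "llvm/IR/Function.h"
        #include "llvm/Pass.h"

        using namespace llvm;

        namespace {
        struct MyPass : public FunctionPass {
          static char ID;
          MyPass() : FunctionPass(ID) {}

          void getAnalysisUsage(AnalysisUsage &AU) const override {
            // Declare used-if-available every AA pass that
            // createLegacyPMAAResults may later query.
            addUsedAAAnalyses(AU);
          }

          bool runOnFunction(Function &F) override {
            // ... build AA via createLegacyPMAAResults(*this, F, BAR)
            // and use it ...
            return false;
          }
        };
        } // end anonymous namespace

        char MyPass::ID = 0;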
      
      Reviewers: chandlerc
      
      Subscribers: mcrosier, llvm-commits
      
      Differential Revision: http://reviews.llvm.org/D17010
      
      llvm-svn: 260183
  11. Dec 23, 2015
      Provide a way to specify inliner's attribute compatibility and merging. · 1cb242eb
      Akira Hatanaka authored
      This reapplies r256277 with two changes:
      
      - In emitFnAttrCompatCheck, change FuncName's type to std::string to fix
        a use-after-free bug.
      - Remove an unnecessary install-local target in lib/IR/Makefile. 
      
      Original commit message for r252949:
      
      Provide a way to specify inliner's attribute compatibility and merging
      rules using table-gen. NFC.
      
      This commit adds new classes CompatRule and MergeRule to Attributes.td,
      which are used to generate code to check attribute compatibility and
      merge attributes of the caller and callee.
      
      rdar://problem/19836465
      
      llvm-svn: 256304
  12. Nov 13, 2015
      Revert r252990. · 5af7ace4
      Akira Hatanaka authored
      Some of the buildbots are still failing.
      
      llvm-svn: 252999
      Provide a way to specify inliner's attribute compatibility and merging. · c7dfb76f
      Akira Hatanaka authored
      This reapplies r252949. I've changed the type of FuncName to be
      std::string instead of StringRef in emitFnAttrCompatCheck.
      
      Original commit message for r252949:
      
      Provide a way to specify inliner's attribute compatibility and merging
      rules using table-gen. NFC.
      
      This commit adds new classes CompatRule and MergeRule to Attributes.td,
      which are used to generate code to check attribute compatibility and
      merge attributes of the caller and callee.
      
      rdar://problem/19836465
      
      llvm-svn: 252990
  13. Sep 29, 2015
      Move dbg.declare intrinsics when merging and replacing allocas. · d8b86f7c
      Evgeniy Stepanov authored
      Place new and updated dbg.declare calls immediately after the
      corresponding alloca.
      
      Current code in replaceDbgDeclareForAlloca puts the new dbg.declare
      at the end of the basic block. LLVM codegen has problems emitting
      debug info in a situation when dbg.declare appears after all uses of
      the variable. This usually kinda works for inlining and ASan (two
      users of this function) but not for SafeStack (see the pending change
      in http://reviews.llvm.org/D13178).
      
      llvm-svn: 248769
  14. Sep 09, 2015
      [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible · 7b560d40
      Chandler Carruth authored
      with the new pass manager, and no longer relying on analysis groups.
      
      This builds essentially a ground-up new AA infrastructure stack for
      LLVM. The core ideas are the same that are used throughout the new pass
      manager: type erased polymorphism and direct composition. The design is
      as follows:
      
      - FunctionAAResults is a type-erasing alias analysis results aggregation
        interface to walk a single query across a range of results from
        different alias analyses. Currently this is function-specific as we
        always assume that aliasing queries are *within* a function.
      
      - AAResultBase is a CRTP utility providing stub implementations of
        various parts of the alias analysis result concept, notably in several
        cases in terms of other more general parts of the interface. This can
        be used to implement only a narrow part of the interface rather than
        the entire interface. This isn't really ideal; this logic should be
        hoisted into FunctionAAResults as currently it will cause
        a significant amount of redundant work, but it faithfully models the
        behavior of the prior infrastructure.
      
      - All the alias analysis passes are ported to be wrapper passes for the
        legacy PM and new-style analysis passes for the new PM with a shared
        result object. In some cases (most notably CFL), this is an extremely
        naive approach that we should revisit when we can specialize for the
        new pass manager.
      
      - BasicAA has been restructured to reflect that it is much more
        fundamentally a function analysis because it uses dominator trees and
        loop info that need to be constructed for each function.
      
      All of the references to getting alias analysis results have been
      updated to use the new aggregation interface. All the preservation and
      other pass management code has been updated accordingly.
      
      The way the FunctionAAResultsWrapperPass works is to detect the
      available alias analyses when run, and add them to the results object.
      This means that we should be able to continue to respect when various
      passes are added to the pipeline, for example adding CFL or adding TBAA
      passes should just cause their results to be available and to get folded
      into this. The exception to this rule is BasicAA which really needs to
      be a function pass due to using dominator trees and loop info. As
      a consequence, the FunctionAAResultsWrapperPass directly depends on
      BasicAA and always includes it in the aggregation.
      
      This has significant implications for preserving analyses. Generally,
      most passes shouldn't bother preserving FunctionAAResultsWrapperPass
      because rebuilding the results just updates the set of known AA passes.
      The exception to this rule are LoopPass instances which need to preserve
      all the function analyses that the loop pass manager will end up
      needing. This means preserving both BasicAAWrapperPass and the
      aggregating FunctionAAResultsWrapperPass.
      
      Now, when preserving an alias analysis, you do so by directly preserving
      that analysis. This is only necessary for non-immutable-pass-provided
      alias analyses though, and there are only three of interest: BasicAA,
      GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is
      preserved when needed because it (like DominatorTree and LoopInfo) is
      marked as a CFG-only pass. I've expanded GlobalsAA into the preserved
      set everywhere we previously were preserving all of AliasAnalysis, and
      I've added SCEVAA in the intersection of that with where we preserve
      SCEV itself.
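
      Concretely, the preservation described above might be spelled like this
      in a loop pass that keeps the CFG and SCEV intact (a hedged sketch; the
      pass is hypothetical and the exact preserved set depends on the pass):

        #include "llvm/Analysis/AliasAnalysis.h"
        #include "llvm/Analysis/BasicAliasAnalysis.h"
        #include "llvm/Analysis/GlobalsModRef.h"
        #include "llvm/Analysis/LoopPass.h"
        #include "llvm/Analysis/ScalarEvolutionAliasAnalysis.h"

        using namespace llvm;

        namespace {
        struct MyLoopTransform : public LoopPass {
          static char ID;
          MyLoopTransform() : LoopPass(ID) {}

          void getAnalysisUsage(AnalysisUsage &AU) const override {
            // Non-immutable AA results must be preserved explicitly.
            AU.addPreserved<BasicAAWrapperPass>();   // CFG-only, like DomTree
            AU.addPreserved<GlobalsAAWrapperPass>(); // formerly GlobalsModRef
            AU.addPreserved<SCEVAAWrapperPass>();    // only where SCEV survives
            // Loop passes must also preserve the aggregation pass itself
            // (called FunctionAAResultsWrapperPass in this message; it
            // landed in-tree as AAResultsWrapperPass).
            AU.addPreserved<AAResultsWrapperPass>();
          }

          bool runOnLoop(Loop *L, LPPassManager &LPM) override {
            return false;
          }
        };
        } // end anonymous namespace

        char MyLoopTransform::ID = 0;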
      
      One significant challenge to all of this is that the CGSCC passes were
      actually using the alias analysis implementations by taking advantage of
      a pretty amazing set of loopholes in the old pass manager's analysis
      management code which allowed analysis groups to slide through in many
      cases. Moving away from analysis groups makes this problem much more
      obvious. To fix it, I've leveraged the flexibility the design of the new
      PM components provides to just directly construct the relevant alias
      analyses for the relevant functions in the IPO passes that need them.
      This is a bit hacky, but should go away with the new pass manager, and
      is already in many ways cleaner than the prior state.
      
      Another significant challenge is that various facilities of the old
      alias analysis infrastructure just don't fit any more. The most
      significant of these is the alias analysis 'counter' pass. That pass
      relied on the ability to snoop on AA queries at different points in the
      analysis group chain. Instead, I'm planning to build printing
      functionality directly into the aggregation layer. I've not included
      that in this patch merely to keep it smaller.
      
      Note that all of this needs a nearly complete rewrite of the AA
      documentation. I'm planning to do that, but I'd like to make sure the
      new design settles, and to flesh out a bit more of what it looks like in
      the new pass manager first.
      
      Differential Revision: http://reviews.llvm.org/D12080
      
      llvm-svn: 247167
  15. Aug 04, 2015
      wrap OptSize and MinSize attributes for easier and consistent access (NFCI) · 924879ad
      Sanjay Patel authored
      Create wrapper methods in the Function class for the OptimizeForSize and MinSize
      attributes. We want to hide the logic of "or'ing" them together when optimizing
      just for size (-Os).
      
      Currently, we are not consistent about this and rely on a front-end to always set
      OptimizeForSize (-Os) if MinSize (-Oz) is on. Thus, there are 18 FIXME changes here
      that should be added as follow-on patches with regression tests.
      
      This patch is NFC-intended: it just replaces existing direct accesses of the attributes
      by the equivalent wrapper call.
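
      A minimal sketch of what such wrappers look like, written as free
      functions for self-containment (the patch adds them as Function
      members; the names here follow the later in-tree spellings
      optForSize/optForMinSize, which may differ from the patch's exact
      choices):

        #include "llvm/IR/Function.h"

        bool optForMinSize(const llvm::Function &F) {
          return F.hasFnAttribute(llvm::Attribute::MinSize);
        }

        bool optForSize(const llvm::Function &F) {
          // Optimizing for minimum size (-Oz) implies optimizing for size
          // (-Os); this is exactly the "or" the wrapper is meant to hide.
          return F.hasFnAttribute(llvm::Attribute::OptimizeForSize) ||
                 optForMinSize(F);
        }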
      
      Differential Revision: http://reviews.llvm.org/D11734
      
      llvm-svn: 243994