1. 14 Nov, 2016 1 commit
    • This CL enables precise source positions for all V8 compilers. It merges... · c3a6ca68
      tebbi authored
      This CL enables precise source positions for all V8 compilers. It merges compiler::SourcePosition and internal::SourcePosition into a single class used throughout the codebase. The new internal::SourcePosition instances store an id identifying an inlined function in addition to a script offset.
      SourcePosition::InliningId() refers to the new table DeoptimizationInputData::InliningPositions(), which provides the following data for every inlining id:
       - The inlined SharedFunctionInfo as an offset into DeoptimizationInfo::LiteralArray
       - The SourcePosition of the inlining. Recursively, this yields the full inlining stack.
      Before the Code object is created, the same information can be found in CompilationInfo::inlined_functions().
      
      If SourcePosition::InliningId() is SourcePosition::kNotInlined, it refers to the outer (non-inlined) function.
      So every SourcePosition has full information about its inlining stack, as long as the corresponding Code object is known. The internal representation of a source position is a positive 64-bit integer.
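
      As a rough illustration of the idea (hypothetical names and layout, not the actual V8 classes), a source position carrying a script offset plus an inlining id, together with the per-Code table of inlining positions, is enough to reconstruct the full inlining stack:

      ```cpp
      #include <utility>
      #include <vector>

      class SourcePosition {
       public:
        static constexpr int kNotInlined = -1;
        SourcePosition(int script_offset, int inlining_id = kNotInlined)
            : script_offset_(script_offset), inlining_id_(inlining_id) {}
        int ScriptOffset() const { return script_offset_; }
        int InliningId() const { return inlining_id_; }

       private:
        // In the real implementation both fields are bit-packed into a single
        // 64-bit integer; plain members keep this sketch short.
        int script_offset_;
        int inlining_id_;
      };

      struct InliningPosition {
        int inlined_function_literal_id;  // index of the inlined SharedFunctionInfo
                                          // in the deopt literal array
        SourcePosition position;          // where in the caller the inlining happened
      };

      // Walk the per-Code inlining table (cf. DeoptimizationInputData::InliningPositions())
      // to recover the full inlining stack for a position, innermost frame first.
      std::vector<std::pair<SourcePosition, int>> InliningStack(
          SourcePosition pos, const std::vector<InliningPosition>& inlining_positions) {
        std::vector<std::pair<SourcePosition, int>> stack;
        while (pos.InliningId() != SourcePosition::kNotInlined) {
          const InliningPosition& entry = inlining_positions[pos.InliningId()];
          stack.emplace_back(pos, entry.inlined_function_literal_id);
          pos = entry.position;  // continue with the position of the inlining call site
        }
        stack.emplace_back(pos, -1);  // outermost, non-inlined function
        return stack;
      }
      ```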
      
      All compilers now create appropriate source positions for inlined functions. In the case of Turbofan, this required using AstGraphBuilderWithPositions for inlined functions too, so this class has been moved to a header file.
      
      At the moment, the additional information in source positions is only used in --trace-deopt and --code-comments. The profiler still needs to be updated: at the moment it gets the correct script offsets from the deopt info, but the wrong script id from the reconstructed deopt stack, which can lead to wrong output. This should be resolved by making the profiler use the new inlining information for deopts.
      
      I activated the inlined deoptimization tests in test-cpu-profiler.cc for Turbofan, changing them to a case where the deopt stack and the inlining position agree. Other cases are currently still broken.
      
      The following additional changes were necessary:
       - The source position table (internal::SourcePositionTableBuilder etc.) now supports 64-bit source positions. Encoding a source position in a single 64-bit integer, combined with the difference encoding in the source position table, results in very little overhead for the inlining id, since only 12% of the source positions in Octane change their inlining id (see the sketch after this list).
       - The class HPositionInfo was effectively dead code and is now removed.
       - SourcePosition has new printing and information facilities, including computing a full inlining stack.
       - I had to rename compiler/source-position.{h,cc} to compiler/compiler-source-position-table.{h,cc} to avoid clashes with the new src/source-position.cc file.
       - I wrote the new wrapper PodArray for ByteArray. It is a template that works with any POD type. This is used in DeoptimizationInputData::InliningPositions().
       - I removed HInlinedFunctionInfo and HGraph::inlined_function_infos, because they were only used for the now obsolete Crankshaft inlining ids.
       - Crankshaft managed a list of inlined functions in Lithium: LChunk::inlined_functions. This is a structure analogous to CompilationInfo::inlined_functions, so I removed LChunk::inlined_functions and made Crankshaft use CompilationInfo::inlined_functions instead, because this was necessary to register the offsets into the literal array in a uniform way. This is a safe change because LChunk::inlined_functions has no other uses, and the functions in CompilationInfo::inlined_functions have a strictly longer lifespan, being created earlier (already in Hydrogen).
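
      To see why an unchanged inlining id is nearly free under difference encoding, here is a minimal sketch (hypothetical packing and encoding, not the actual SourcePositionTableBuilder): when the inlining id of consecutive positions is identical, the delta of the packed 64-bit value is just the small script-offset delta, which the variable-length encoding stores in very few bytes.

      ```cpp
      #include <cstdint>
      #include <vector>

      // Pack a script offset and an inlining id into one 64-bit value
      // (inlining id in the high bits for this sketch).
      uint64_t Pack(int script_offset, int inlining_id) {
        return (static_cast<uint64_t>(static_cast<uint16_t>(inlining_id)) << 32) |
               static_cast<uint32_t>(script_offset);
      }

      // Zig-zag varint encoding of a signed delta: small deltas (positive or
      // negative) take a single byte.
      void EncodeDelta(int64_t delta, std::vector<uint8_t>* out) {
        uint64_t zigzag =
            (static_cast<uint64_t>(delta) << 1) ^ static_cast<uint64_t>(delta >> 63);
        do {
          uint8_t byte = zigzag & 0x7f;
          zigzag >>= 7;
          out->push_back(byte | (zigzag ? 0x80 : 0));
        } while (zigzag != 0);
      }

      // Append one table entry as the difference against the previous position.
      // If the inlining id did not change, the high bits cancel out and the
      // entry is as small as it was with plain 32-bit script offsets.
      void AddPosition(uint64_t position, uint64_t* last, std::vector<uint8_t>* table) {
        EncodeDelta(static_cast<int64_t>(position) - static_cast<int64_t>(*last), table);
        *last = position;
      }
      ```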
      
      BUG=v8:5432
      
      Review-Url: https://codereview.chromium.org/2451853002
      Cr-Commit-Position: refs/heads/master@{#40975}
  2. 14 Sep, 2016 1 commit
    • [turbofan] Call frequencies for JSCallFunction and JSCallConstruct. · 0b8a6945
      bmeurer authored
      Extract the call counts from the type feedback vector during graph
      building (either via the AstGraphBuilder or the BytecodeGraphBuilder),
      and put them onto the JSCallFunction and JSCallConstruct operators,
      so that they work even across inlining through .apply and .call (which
      was previously hacked by creating a temporary type feedback vector
      for those).
      
      The next logical step will be to make those call counts into real
      relative call frequencies (also during graph building), so that we
      can make inlining decisions that make sense for the function being
      optimized (where absolute values are misleading).
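
      As a sketch of that normalization (hypothetical helper, not the real TurboFan code), a raw call count only becomes meaningful once it is divided by how often the function being optimized was itself invoked:

      ```cpp
      // Relative call frequency: how often the call site runs per invocation
      // of the function that is currently being optimized.
      float RelativeCallFrequency(int call_count, int invocation_count) {
        if (invocation_count == 0) return 0.0f;
        return static_cast<float>(call_count) / static_cast<float>(invocation_count);
      }

      // Example: a site called 500 times inside a function invoked 10000 times
      // (frequency 0.05) is a much weaker inlining candidate than a site called
      // 500 times inside a function invoked 600 times (frequency ~0.83), even
      // though the absolute counts are identical.
      ```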
      
      R=jarin@chromium.org
      BUG=v8:5267,v8:5372
      
      Review-Url: https://codereview.chromium.org/2330883002
      Cr-Commit-Position: refs/heads/master@{#39400}
  3. 09 Sep, 2016 1 commit
    • [turbofan] Initial support for polymorphic inlining. · 7d4ab7d4
      bmeurer authored
      For call sites where the target is not a known constant, but potentially
      a list of known constants (i.e. a Phi with all HeapConstant inputs), we
      still record the call site as a potential candidate for inlining.
      If the heuristic picks that candidate for inlining, we expand the
      call site into a dispatched call site and invoke the actual
      inlining logic for all the nested call sites.
      
      Like Crankshaft, we currently allow up to 4 targets for polymorphic inlining,
      although we might want to refine that later.
      
      This approach is different from what Crankshaft does in
      that we don't duplicate the evaluation of the parameters per polymorphic
      case. Instead we first perform the load of the target (which usually
      dispatches based on the receiver map), then we evaluate all the
      parameters, and then we dispatch again based on the known targets. This
      might generate better or worse code compared to what Crankshaft does,
      and for the cases where we generate worse code (i.e. because we have
      only trivial parameters or no parameters at all), we might want to
      investigate optimizing away the double dispatch in the
      future.
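
      The control-flow shape of such a dispatched call site can be modelled in plain C++ as follows (a sketch only; the real transformation operates on TurboFan graph nodes, and KnownA/KnownB are hypothetical stand-ins for the recorded targets):

      ```cpp
      #include <cstdio>

      using Target = void (*)(int);

      void KnownA(int x) { std::printf("A(%d)\n", x); }  // stands for inlined body 0
      void KnownB(int x) { std::printf("B(%d)\n", x); }  // stands for inlined body 1

      // First the target is loaded (usually a dispatch on the receiver map),
      // then the arguments are evaluated exactly once, and only then do we
      // dispatch a second time over the known targets (the "double dispatch").
      void DispatchedCallSite(Target target, int arg) {
        if (target == &KnownA) {
          KnownA(arg);        // inlined body of target 0
        } else if (target == &KnownB) {
          KnownB(arg);        // inlined body of target 1
        } else {
          target(arg);        // generic fallback call for an unexpected target
        }
      }
      ```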
      
      R=mvstanton@chromium.org
      BUG=v8:5267,v8:5365
      
      Review-Url: https://codereview.chromium.org/2325943002
      Cr-Commit-Position: refs/heads/master@{#39302}
  4. 09 Nov, 2015 1 commit
  5. 03 Nov, 2015 1 commit
    • [turbofan] Use sorted set in JSInliningHeuristic. · 2a4336d9
      mstarzinger authored
      This changes the inlining candidates to be stored in a sorted set of
      unique entries instead of a vector. This avoids the final sorting
      operation by amortizing its cost across insertions, and duplicate
      entries are not created in the first place; such duplicates caused
      crashes when candidates were processed.
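
      A minimal sketch of the data structure change (std::set stands in here for V8's zone containers; names are illustrative): an ordered set of unique entries de-duplicates and keeps the candidates sorted as they are inserted, so no final sort is needed.

      ```cpp
      #include <set>

      struct Candidate {
        int node_id;     // identifies the call site
        int call_count;  // used to rank candidates
      };

      struct CandidateCompare {
        bool operator()(const Candidate& a, const Candidate& b) const {
          // Better-ranked candidates first; fall back to the node id so that
          // two distinct call sites never compare equal (only true duplicates
          // collapse into one entry).
          if (a.call_count != b.call_count) return a.call_count > b.call_count;
          return a.node_id < b.node_id;
        }
      };

      using Candidates = std::set<Candidate, CandidateCompare>;

      void AddCandidate(Candidates* candidates, const Candidate& c) {
        candidates->insert(c);  // inserting the same call site twice is a no-op
      }
      ```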
      
      R=bmeurer@chromium.org
      BUG=chromium:549113
      LOG=n
      
      Review URL: https://codereview.chromium.org/1430553003
      
      Cr-Commit-Position: refs/heads/master@{#31742}
  6. 29 Oct, 2015 1 commit
  7. 14 Oct, 2015 1 commit
  8. 07 Oct, 2015 1 commit