1. 14 Feb, 2019 1 commit
  2. 27 Dec, 2018 1 commit
  3. 23 Jul, 2018 1 commit
  4. 14 Apr, 2018 1 commit
• [ubsan] Change Address typedef to uintptr_t · 2459046c
      Jakob Kummerow authored
      The "Address" type is V8's general-purpose type for manipulating memory
      addresses. Per the C++ spec, pointer arithmetic and pointer comparisons
      are undefined behavior except within the same array; since we generally
      don't operate within a C++ array, our general-purpose type shouldn't be
      a pointer type.
      
      Bug: v8:3770
      Cq-Include-Trybots: luci.chromium.try:linux_chromium_rel_ng;master.tryserver.blink:linux_trusty_blink_rel
      Change-Id: Ib96016c24a0f18bcdba916dabd83e3f24a1b5779
      Reviewed-on: https://chromium-review.googlesource.com/988657
      Commit-Queue: Jakob Kummerow <jkummerow@chromium.org>
Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#52601}
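A minimal sketch of the idea behind this change, assuming a simplified definition (the RoundDown/Contains helpers are hypothetical, not V8's API): arithmetic and range checks on an unsigned integer type are fully defined, whereas the same operations on raw pointers into different objects are undefined behavior.

```cpp
#include <cstddef>
#include <cstdint>

// Simplified stand-in for V8's Address type after this change: an
// unsigned integer the size of a pointer, so "address arithmetic" is
// ordinary modular integer math, not C++ pointer arithmetic.
using Address = uintptr_t;

// Hypothetical helpers for illustration. Subtracting or comparing two
// unrelated integers is well-defined; doing the same with raw pointers
// that do not point into one array is UB per the C++ spec.
inline Address RoundDown(Address addr, Address alignment) {
  return addr & ~(alignment - 1);  // alignment must be a power of two
}

inline bool Contains(Address start, size_t size, Address addr) {
  return addr - start < size;  // wraparound on uintptr_t is defined
}
```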
  5. 27 Feb, 2018 1 commit
  6. 24 Jan, 2018 1 commit
  7. 23 Jan, 2018 1 commit
• [ignition] Single-switch generator bytecode · c869d40d
      Leszek Swirski authored
Currently, yields and awaits inside loops compile to bytecode which
switches to the top of the loop header, and switches again once inside
the loop. This is to make loops reducible.
      
This CL replaces that switching logic with a single switch bytecode that
      directly jumps to the bytecode being resumed. Among other things, this
      allows us to no longer maintain the generator state after the switch at
      the top of the function, and avoid having to track loop suspend counts.
      
      TurboFan still needs to have reducible loops, so we now insert loop
      header switches during bytecode graph building, for suspends that are
      discovered to be inside loops during bytecode analysis. We do, however,
      do some environment magic across loop headers since we know that we will
      continue switching if and only if we reached that loop header via a
      generator resume. This allows us to generate fewer phis and tighten
      liveness.
      
      Change-Id: Id2720ce1d6955be9a48178322cc209b3a4b8d385
      Reviewed-on: https://chromium-review.googlesource.com/866734
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#50804}
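As a rough illustration of the dispatch shape described above, here is a runnable toy resumable function in C++ (the names and state layout are invented; real Ignition emits bytecode, not C++): a single switch at the top jumps directly to the suspend point, even when that point sits inside a loop.

```cpp
#include <cstdio>

// Toy resumable counter. The case label inside the loop body lets one
// dispatch resume mid-loop, mirroring the single-switch bytecode; no
// second switch at the loop header is needed.
struct Generator {
  int resume_point = 0;  // which suspend point to resume; 0 = fresh start
  int i = 0;             // loop variable survives across suspends
};

int Next(Generator* g) {  // returns next value, -1 when exhausted
  switch (g->resume_point) {
    case 0:
      for (g->i = 0; g->i < 3; g->i++) {
        g->resume_point = 1;
        return g->i * 10;  // suspend point 1, inside the loop
    case 1:;               // resume lands here, mid-loop
      }
      g->resume_point = 2;
      return 999;          // suspend point 2, after the loop
    case 2:
      return -1;
  }
  return -1;
}

int main() {
  Generator g;
  for (int v = Next(&g); v != -1; v = Next(&g)) std::printf("%d\n", v);
}
```

TurboFan still needs reducible loops, which is why the commit re-inserts loop-header switches during bytecode graph building rather than in the bytecode itself.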
  8. 22 Jan, 2018 1 commit
  9. 21 Nov, 2017 1 commit
  10. 07 Nov, 2017 1 commit
  11. 20 Oct, 2017 1 commit
  12. 08 Sep, 2017 1 commit
  13. 27 Jun, 2017 1 commit
  14. 23 Jun, 2017 1 commit
  15. 06 Jun, 2017 1 commit
  16. 22 May, 2017 1 commit
  17. 16 May, 2017 1 commit
  18. 15 May, 2017 1 commit
  19. 12 Apr, 2017 1 commit
  20. 11 Apr, 2017 2 commits
  21. 03 Apr, 2017 1 commit
• [Interpreter] Optimize code of the form 'if (x === undefined)'. · f4f58e31
      rmcilroy authored
      Translates code of the form 'if (x === undefined)' into the JumpIfUndefined
      bytecode, and similarly for comparisons with null. Also adds bytecodes for
      JumpIfNotUndefined / Null.
      
Moves the peephole optimization for CompareUndefined out of the peephole
optimizer and into the BytecodeGenerator, with the side effect of enabling
it for comparisons with undefined on either side of the compare operation.
      
      BUG=v8:6107
      
      Review-Url: https://codereview.chromium.org/2793923002
      Cr-Commit-Position: refs/heads/master@{#44341}
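A sketch of the pattern match this moves into the BytecodeGenerator, using a made-up mini-AST (none of these types or fields are V8's): recognize `x === undefined` with the literal on either side and return the interesting operand, so the caller can emit JumpIfUndefined (or JumpIfNotUndefined for the negated branch) instead of a generic compare plus conditional jump.

```cpp
// Hypothetical mini-AST, just enough to show the match.
enum class Token { kEqStrict, kEq, kLt };

struct Expression {
  bool is_undefined_literal = false;
  bool is_null_literal = false;
};

struct CompareOperation {
  Token op;
  Expression* left;
  Expression* right;
};

// If the compare is `expr === undefined` with the literal on either
// side, return the non-literal operand; the null case is analogous.
Expression* MatchUndefinedCompare(const CompareOperation& cmp) {
  if (cmp.op != Token::kEqStrict) return nullptr;
  if (cmp.right->is_undefined_literal) return cmp.left;
  if (cmp.left->is_undefined_literal) return cmp.right;
  return nullptr;
}
```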
  22. 24 Jan, 2017 1 commit
  23. 09 Jan, 2017 1 commit
  24. 15 Dec, 2016 1 commit
  25. 07 Dec, 2016 1 commit
• [ignition] desugar GetIterator() via bytecode rather than via AST · b5f146a0
      caitp authored
      Introduces:
      - a new AST node representing the GetIterator() algorithm in the specification, to be used by ForOfStatement, YieldExpression (in the case of delegating yield*), and the future `for-await-of` loop proposed in http://tc39.github.io/proposal-async-iteration/#sec-async-iterator-value-unwrap-functions.
- a new opcode (JumpIfJSReceiver), which is useful for `if Type(object) is not Object` checks, which are common throughout the specification. This check is easily eliminated by TurboFan.
      
      The AST node is desugared specially in bytecode, rather than manually when building the AST. The benefit of this is that desugaring in the BytecodeGenerator is much simpler and easier to understand than desugaring the AST.
      
This also reduces parse time very slightly, and allows us to use LoadIC rather than KeyedLoadIC, which seems to have better baseline performance. This results in a ~20% improvement in test/js-perf-test/Iterators micro-benchmarks, which I believe owes to the use of the slightly faster LoadIC as opposed to the KeyedLoadIC in the baseline case. Both produce identical optimized code via TurboFan when the type check can be eliminated, and the load can be replaced with a constant value.
      
      BUG=v8:4280
      R=bmeurer@chromium.org, rmcilroy@chromium.org, adamk@chromium.org, neis@chromium.org, jarin@chromium.org
      TBR=rossberg@chromium.org
      
      Review-Url: https://codereview.chromium.org/2557593004
      Cr-Commit-Position: refs/heads/master@{#41555}
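A behavioral sketch of the spec's GetIterator() steps that the new AST node lowers to bytecode, with toy C++ types standing in for V8's object model; the JumpIfJSReceiver opcode corresponds to branching past the throw below when the result is an Object.

```cpp
#include <stdexcept>

struct JSValue {
  bool is_object = false;  // stand-in for `Type(v) is Object`
};
using IteratorMethod = JSValue (*)(const JSValue& receiver);

// GetIterator(obj), simplified: the method itself would be loaded from
// obj[Symbol.iterator]; per the commit, the bytecode does that load
// with a (named) LoadIC rather than a KeyedLoadIC.
JSValue GetIterator(const JSValue& obj, IteratorMethod method) {
  JSValue iterator = method(obj);  // Call(method, obj)
  if (!iterator.is_object) {       // JumpIfJSReceiver skips this throw
    throw std::runtime_error("iterator result is not an object");
  }
  return iterator;
}
```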
  26. 14 Nov, 2016 1 commit
• This CL enables precise source positions for all V8 compilers. It merges... · c3a6ca68
      tebbi authored
This CL enables precise source positions for all V8 compilers. It merges compiler::SourcePosition and internal::SourcePosition into a single class used throughout the codebase. The new internal::SourcePosition instances store an id identifying an inlined function in addition to a script offset.
SourcePosition::InliningId() refers to the new table DeoptimizationInputData::InliningPositions(), which provides the following data for every inlining id:
       - The inlined SharedFunctionInfo as an offset into DeoptimizationInfo::LiteralArray
       - The SourcePosition of the inlining. Recursively, this yields the full inlining stack.
      Before the Code object is created, the same information can be found in CompilationInfo::inlined_functions().
      
      If SourcePosition::InliningId() is SourcePosition::kNotInlined, it refers to the outer (non-inlined) function.
So every SourcePosition has full information about its inlining stack, as long as the corresponding Code object is known. The internal representation of a source position is a positive 64-bit integer.
      
All compilers now create appropriate source positions for inlined functions. In the case of TurboFan, this required using AstGraphBuilderWithPositions for inlined functions too, so this class is now moved to a header file.
      
At the moment, the additional information in source positions is only used in --trace-deopt and --code-comments. The profiler needs to be updated; at the moment it gets the correct script offsets from the deopt info, but the wrong script id from the reconstructed deopt stack, which can lead to wrong outputs. This should be resolved by making the profiler use the new inlining information for deopts.
      
I activated the inlined deoptimization tests in test-cpu-profiler.cc for TurboFan, changing them to a case where the deopt stack and the inlining position agree. It is currently still broken for other cases.
      
      The following additional changes were necessary:
- The source position table (internal::SourcePositionTableBuilder etc.) now supports 64-bit source positions. Encoding source positions in a single 64-bit int together with the difference encoding in the source position table results in very little overhead for the inlining id, since only 12% of the source positions in Octane have a changed inlining id.
       - The class HPositionInfo was effectively dead code and is now removed.
       - SourcePosition has new printing and information facilities, including computing a full inlining stack.
       - I had to rename compiler/source-position.{h,cc} to compiler/compiler-source-position-table.{h,cc} to avoid clashes with the new src/source-position.cc file.
- I wrote the new wrapper PodArray for ByteArray. It is a template working with any POD type. This is used in DeoptimizationInputData::InliningPositions().
       - I removed HInlinedFunctionInfo and HGraph::inlined_function_infos, because they were only used for the now obsolete Crankshaft inlining ids.
       - Crankshaft managed a list of inlined functions in Lithium: LChunk::inlined_functions. This is an analog structure to CompilationInfo::inlined_functions. So I removed LChunk::inlined_functions and made Crankshaft use CompilationInfo::inlined_functions instead, because this was necessary to register the offsets into the literal array in a uniform way. This is a safe change because LChunk::inlined_functions has no other uses and the functions in CompilationInfo::inlined_functions have a strictly longer lifespan, being created earlier (in Hydrogen already).
      
      BUG=v8:5432
      
      Review-Url: https://codereview.chromium.org/2451853002
      Cr-Commit-Position: refs/heads/master@{#40975}
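A bit-packing sketch of one plausible encoding (field widths and layout are assumptions, not V8's actual representation): the script offset and the inlining id share one 64-bit value, with kNotInlined marking positions in the outer, non-inlined function.

```cpp
#include <cstdint>

class SourcePosition {
 public:
  static constexpr int kNotInlined = -1;

  explicit SourcePosition(int script_offset, int inlining_id = kNotInlined)
      : value_((uint64_t{static_cast<uint32_t>(inlining_id)} << 32) |
               static_cast<uint32_t>(script_offset)) {}

  int ScriptOffset() const { return static_cast<int32_t>(value_); }
  int InliningId() const { return static_cast<int32_t>(value_ >> 32); }
  bool IsInlined() const { return InliningId() != kNotInlined; }

 private:
  uint64_t value_;  // one integer, so the table's difference encoding
                    // adds little overhead when the inlining id rarely
                    // changes between consecutive positions
};
```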
  27. 11 Nov, 2016 1 commit
  28. 07 Oct, 2016 1 commit
  29. 22 Sep, 2016 3 commits
• Fix Endian issue in interpreter in EmitBytecode · 3b1691be
      jyan authored
      R=rmcilroy@chromium.org, mythria@chromium.org, leszeks@chromium.org
      BUG=
      
      Review-Url: https://codereview.chromium.org/2362453003
      Cr-Commit-Position: refs/heads/master@{#39644}
• [Interpreter] Optimize BytecodeArrayBuilder and BytecodeArrayWriter. · e5ac75c6
      rmcilroy authored
      This CL optimizes the code in BytecodeArrayBuilder and
      BytecodeArrayWriter by making the following main changes:
      
- Move operand scale calculation out of BytecodeArrayWriter to the
BytecodeNode constructor, where the decision on which operands are
scalable can generally be made statically by the compiler.
       - Move the maximum register calculation out of BytecodeArrayWriter
      and into BytecodeRegisterOptimizer (which is the only place outside
      BytecodeGenerator which updates which registers are used). This
      avoids the BytecodeArrayWriter needing to know the operand types
      of a node as it writes it.
       - Modify EmitBytecodes to use individual push_backs rather than
      building a buffer and calling insert, since this turns out to be faster.
- Initialize BytecodeArrayWriter's bytecode vector by reserving 512
bytes.
       - Make common functions in Bytecodes constexpr so that they
      can be statically calculated by the compiler.
       - Move common functions and constructors in Bytecodes and
      BytecodeNode to the header so that they can be inlined.
       - Change large static switch statements in Bytecodes to const array
      lookups, and move to the header to allow inlining.
      
      I also took the opportunity to remove a number of unused helper
      functions, and rework some others for consistency.
      
This reduces the percentage of time spent making BytecodeArrays in
CodeLoad from ~15% to ~11% according to perf. The CodeLoad score
increases by around 2%.
      
      BUG=v8:4280
      
      Committed: https://crrev.com/b11a8b4d41bf09d6b3d6cf214fe3fb61faf01a64
      Review-Url: https://codereview.chromium.org/2351763002
      Cr-Original-Commit-Position: refs/heads/master@{#39599}
      Cr-Commit-Position: refs/heads/master@{#39637}
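Two of the listed changes, sketched with made-up bytecodes and operand counts (not V8's real tables): a constexpr array lookup the compiler can fold where a large switch previously stood, and a writer that reserves its buffer once and appends with push_back instead of building a side buffer and calling insert.

```cpp
#include <cstdint>
#include <vector>

enum class Bytecode : uint8_t { kLdaZero, kAdd, kReturn };

// Const array lookup in place of a switch; values are illustrative.
constexpr int kOperandCounts[] = {0, 1, 0};

constexpr int NumberOfOperands(Bytecode bytecode) {
  return kOperandCounts[static_cast<int>(bytecode)];
}
static_assert(NumberOfOperands(Bytecode::kAdd) == 1,
              "foldable at compile time");

class BytecodeArrayWriter {
 public:
  BytecodeArrayWriter() { bytecodes_.reserve(512); }  // 512 bytes up front

  void EmitBytecode(Bytecode bytecode, const uint8_t* operands) {
    bytecodes_.push_back(static_cast<uint8_t>(bytecode));
    for (int i = 0; i < NumberOfOperands(bytecode); ++i) {
      bytecodes_.push_back(operands[i]);  // individual push_backs
    }
  }

 private:
  std::vector<uint8_t> bytecodes_;
};
```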
• Revert of [Interpreter] Optimize BytecodeArrayBuilder and BytecodeArrayWriter.... · 5d693348
      hablich authored
      Revert of [Interpreter] Optimize BytecodeArrayBuilder and BytecodeArrayWriter. (patchset #6 id:200001 of https://codereview.chromium.org/2351763002/ )
      
      Reason for revert:
      Prime suspect for roll blocker: https://codereview.chromium.org/2362503002/
      
      Original issue's description:
      > [Interpreter] Optimize BytecodeArrayBuilder and BytecodeArrayWriter.
      >
      > This CL optimizes the code in BytecodeArrayBuilder and
      > BytecodeArrayWriter by making the following main changes:
      >
>  - Move operand scale calculation out of BytecodeArrayWriter to the
> BytecodeNode constructor, where the decision on which operands are
> scalable can generally be made statically by the compiler.
      >  - Move the maximum register calculation out of BytecodeArrayWriter
      > and into BytecodeRegisterOptimizer (which is the only place outside
      > BytecodeGenerator which updates which registers are used). This
      > avoids the BytecodeArrayWriter needing to know the operand types
      > of a node as it writes it.
      >  - Modify EmitBytecodes to use individual push_backs rather than
      > building a buffer and calling insert, since this turns out to be faster.
>  - Initialize BytecodeArrayWriter's bytecode vector by reserving 512
> bytes.
      >  - Make common functions in Bytecodes constexpr so that they
      > can be statically calculated by the compiler.
      >  - Move common functions and constructors in Bytecodes and
      > BytecodeNode to the header so that they can be inlined.
      >  - Change large static switch statements in Bytecodes to const array
      > lookups, and move to the header to allow inlining.
      >
      > I also took the opportunity to remove a number of unused helper
      > functions, and rework some others for consistency.
      >
> This reduces the percentage of time spent making BytecodeArrays in
> CodeLoad from ~15% to ~11% according to perf. The CodeLoad score
> increases by around 2%.
      >
      > BUG=v8:4280
      >
      > Committed: https://crrev.com/b11a8b4d41bf09d6b3d6cf214fe3fb61faf01a64
      > Cr-Commit-Position: refs/heads/master@{#39599}
      
      TBR=mythria@chromium.org,leszeks@chromium.org,rmcilroy@chromium.org
# Skipping CQ checks because original CL landed less than 1 day ago.
      NOPRESUBMIT=true
      NOTREECHECKS=true
      NOTRY=true
      BUG=v8:4280
      
      Review-Url: https://codereview.chromium.org/2360193003
      Cr-Commit-Position: refs/heads/master@{#39612}
  30. 21 Sep, 2016 1 commit
• [Interpreter] Optimize BytecodeArrayBuilder and BytecodeArrayWriter. · b11a8b4d
      rmcilroy authored
      This CL optimizes the code in BytecodeArrayBuilder and
      BytecodeArrayWriter by making the following main changes:
      
- Move operand scale calculation out of BytecodeArrayWriter to the
BytecodeNode constructor, where the decision on which operands are
scalable can generally be made statically by the compiler.
       - Move the maximum register calculation out of BytecodeArrayWriter
      and into BytecodeRegisterOptimizer (which is the only place outside
      BytecodeGenerator which updates which registers are used). This
      avoids the BytecodeArrayWriter needing to know the operand types
      of a node as it writes it.
       - Modify EmitBytecodes to use individual push_backs rather than
      building a buffer and calling insert, since this turns out to be faster.
- Initialize BytecodeArrayWriter's bytecode vector by reserving 512
bytes.
       - Make common functions in Bytecodes constexpr so that they
      can be statically calculated by the compiler.
       - Move common functions and constructors in Bytecodes and
      BytecodeNode to the header so that they can be inlined.
       - Change large static switch statements in Bytecodes to const array
      lookups, and move to the header to allow inlining.
      
      I also took the opportunity to remove a number of unused helper
      functions, and rework some others for consistency.
      
This reduces the percentage of time spent making BytecodeArrays in
CodeLoad from ~15% to ~11% according to perf. The CodeLoad score
increases by around 2%.
      
      BUG=v8:4280
      
      Review-Url: https://codereview.chromium.org/2351763002
      Cr-Commit-Position: refs/heads/master@{#39599}
  31. 13 Sep, 2016 1 commit
• [interpreter] Merge {OsrPoll} with {Jump} bytecode. · c9864173
      mstarzinger authored
      This introduces a new {JumpLoop} bytecode to combine the OSR polling
      mechanism modeled by {OsrPoll} with the actual {Jump} performing the
backwards branch. This reduces the overall bytecode size and also avoids one
      additional dispatch. It also makes sure that OSR polling is only done
      within real loops.
      
      R=rmcilroy@chromium.org
      BUG=v8:4764
      
      Review-Url: https://codereview.chromium.org/2331033002
      Cr-Commit-Position: refs/heads/master@{#39384}
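A conceptual sketch with a toy interpreter state (not Ignition's machinery): one JumpLoop handler both polls, here by decrementing a budget, and takes the backward branch, where the old scheme paid a separate dispatch for OsrPoll before every backward Jump.

```cpp
#include <cstddef>
#include <cstdint>

struct InterpreterState {
  const uint8_t* bytecodes;
  size_t pc;              // offset of the current bytecode
  int osr_budget = 1000;  // illustrative poll counter
};

// Fused handler: poll + jump in a single dispatch. Since JumpLoop is
// only emitted for backward branches, polling happens only in real loops.
void HandleJumpLoop(InterpreterState* s) {
  uint8_t backward_offset = s->bytecodes[s->pc + 1];  // operand: loop start
  if (--s->osr_budget <= 0) {
    // Hot loop detected: hand off to the OSR machinery (elided here).
  }
  s->pc -= backward_offset;  // the Jump half of the fused bytecode
}
```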
  32. 18 Aug, 2016 1 commit
  33. 17 Aug, 2016 1 commit
• Avoid accessing Isolate in source position logging. · b8b4a443
      rmcilroy authored
      Now that all backends use the source position builder to record source
      positions, simplify the code line logging events to take a source
      position table on code creation. This means that the source position
      table builder no longer needs to access the isolate until the table is
      generated. This is required for off-thread bytecode generation.
      
      BUG=v8:5203
      
      Review-Url: https://codereview.chromium.org/2248673002
      Cr-Commit-Position: refs/heads/master@{#38676}
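A sketch of the shape this change gives the builder, with stand-in types (not V8's classes): recording positions touches only plain C++ memory, so it can run off-thread; the Isolate is needed only at the end, when the finished table is materialized and passed to code creation.

```cpp
#include <cstddef>
#include <vector>

struct Isolate;    // heap access, main thread only
struct ByteArray;  // stand-in for the on-heap encoded table

class SourcePositionTableBuilder {
 public:
  void AddPosition(size_t code_offset, int source_position) {
    entries_.push_back({code_offset, source_position});  // no Isolate here
  }

  // Only this final step may touch the V8 heap (definition elided).
  ByteArray* ToSourcePositionTable(Isolate* isolate);

 private:
  struct Entry { size_t code_offset; int source_position; };
  std::vector<Entry> entries_;
};
```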
  34. 10 Aug, 2016 2 commits
  35. 15 Jul, 2016 1 commit
  36. 14 Jul, 2016 1 commit