1. 26 Apr, 2018 1 commit
  2. 24 Apr, 2018 1 commit
  3. 19 Apr, 2018 1 commit
  4. 18 Apr, 2018 1 commit
  5. 15 Mar, 2018 1 commit
    • [turbofan] Teach TurboFan about the TypedArray constructor. · 0875778f
      Benedikt Meurer authored
      This introduces a new JSCreateTypedArray operator, backed by a dedicated
      CreateTypedArray builtin, and adds support for lowering new TypedArray
      calls to this operator. This way we avoid the overhead of going through
      the generic construct stub machinery for hot code. This not only
      recovers the performance regression on the typed array constructor
      benchmarks, but even improves slightly beyond what we had in 6.6.
      
      We might in the future try to fully inline the TypedArray constructor
      into optimized code for certain cases.
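
      A hypothetical example of the kind of hot code that benefits (the
      function and sizes are illustrative, not taken from the CL):

        // With JSCreateTypedArray, this allocation no longer has to go
        // through the generic construct stub in optimized code.
        function makeBuffer(n) {
          return new Float64Array(n);
        }
        for (let i = 0; i < 1e5; ++i) makeBuffer(64);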
      
      Bug: chromium:820726, v8:7503, v8:7518
      Change-Id: Ied465924d5695db576d533792f1db68456b9b5ea
      Reviewed-on: https://chromium-review.googlesource.com/959010
      Commit-Queue: Benedikt Meurer <bmeurer@chromium.org>
      Reviewed-by: Peter Marshall <petermarshall@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#51973}
      0875778f
  6. 20 Feb, 2018 1 commit
    • [turbofan] Optimize promise resolution. · be6d1292
      Benedikt Meurer authored
      This CL introduces new operators JSFulfillPromise and JSPromiseResolve,
      corresponding to the specification operations with the same names, and
      lowers calls to the Promise.resolve() builtin to JSPromiseResolve.
      
      We also optimize JSPromiseResolve and JSResolvePromise further based on
      information found about the value/resolution in the graph. This applies
      to both Promise.resolve() builtin calls and implicit resolve operations
      in async functions and async generators.
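
      A hedged illustration of the implicit case (the async function below
      is representative only, not code from the CL):

        // The implicit resolution of the returned value at the end of an
        // async function goes through the same resolve path that this CL
        // optimizes for Promise.resolve().
        async function load(i) {
          return { i };
        }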
      
      On a very simple microbenchmark like
      
        console.time('resolve');
        for (let i = 0; i < 1e8; ++i) Promise.resolve({i});
        console.timeEnd('resolve');
      
      this CL reduces the execution time from around 3049ms to around 947ms,
      which is a pretty significant 3x improvement. On the wikipedia benchmark
      we observe an improvement of around 2% with this CL.
      
      Bug: v8:7253
      Change-Id: Ic69086cdc1b724f35dbe83305795539c562ab817
      Reviewed-on: https://chromium-review.googlesource.com/913488
      Reviewed-by: Benedikt Meurer <bmeurer@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Commit-Queue: Benedikt Meurer <bmeurer@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#51387}
      be6d1292
  7. 13 Feb, 2018 1 commit
    • [builtins] Refactor the promise resolution and rejection logic. · c0412961
      Benedikt Meurer authored
      This introduces dedicated builtins
      
        - FulfillPromise,
        - RejectPromise, and
        - ResolvePromise,
      
      which perform the corresponding operations from the language
      specification, and removes the redundant entry points and the
      excessive inlining of these operations into other builtins. We
      also add the same logic on the C++ side, so that we don't need
      to go into JavaScript land when resolving/rejecting from the
      API.
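
      A rough sketch, in JavaScript terms, of how the three operations map
      onto the familiar executor API (illustrative only, not code from the CL):

        const value = 42, reason = new Error('nope');
        new Promise((resolve, reject) => {
          resolve(value);   // ResolvePromise: may unwrap thenables via a "then" lookup
          reject(reason);   // RejectPromise: a no-op here, the promise is already resolved
        });
        // FulfillPromise is the final step once a plain (non-thenable)
        // value has been determined for the promise.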
      
      The C++ side has a complete implementation, including full support
      for the debugger and the current PromiseHook machinery. This is to
      avoid constantly crossing the boundary for those cases, and to also
      simplify the CSA side (and soon the TurboFan side), where we only
      do the fast-path and bail out to the runtime for the general handling.
      
      On top of this we introduce %_RejectPromise and %_ResolvePromise,
      which are entry points used by the bytecode and parser desugarings
      for async functions, and also used by the V8 Extras API. Thanks to
      this we can uniformly optimize these in TurboFan, where we have
      corresponding operators JSRejectPromise and JSResolvePromise, which
      currently just call into the builtins, but can be further optimized in
      the medium term, e.g. to skip the "then" lookup for JSResolvePromise
      when we know something about the resolution.
      
      In TurboFan we can also already inline the default PromiseCapability
      [[Reject]] and [[Resolve]] functions, although this is not yet as
      effective as it could be, until we have inlining support for the Promise
      constructor (being worked on by petermarshall@ right now) and/or
      SFI-based CALL_IC feedback.
      
      Overall this change is meant as a refactoring without significant
      performance impact anywhere; it seems to improve performance of
      simple async functions a bit, but otherwise is neutral.
      
      Bug: v8:7253
      Change-Id: Id0b979f9b2843560e38cd8df4b02627dad4b6d8c
      Reviewed-on: https://chromium-review.googlesource.com/911632
      Reviewed-by: Sathya Gunasekaran <gsathya@chromium.org>
      Reviewed-by: Benedikt Meurer <bmeurer@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Commit-Queue: Benedikt Meurer <bmeurer@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#51260}
      c0412961
  8. 29 Nov, 2017 1 commit
    • No longer desugar the exponentiation (**) operator. · b97567a9
      Georg Neis authored
      Prior to this change, the exponentiation operator was rewritten by the
      parser to a call of the Math.pow builtin. However, Math.pow does not
      accept BigInt arguments, while the exponentiation operator must accept
      them.
      
      This CL
      - removes the parser's special treatment of ** and **=, treating them
        like any other binary op instead.
      - adds a TFC builtin Exponentiate that does the right thing for
        all inputs.
      - adds interpreter bytecodes Exp and ExpSmi whose handlers call the
        Exponentiate builtin. For simplicity, they currently always collect
        kAny feedback.
      - adds a TurboFan operator JSExponentiate with a typed-lowering to
        the existing NumberPow and a generic-lowering to the Exponentiate
        builtin. There is currently no speculative lowering.
      
      Note that exponentiation for BigInts is actually not implemented yet,
      so we can't yet test it.
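
      An illustrative sketch of why the Math.pow desugaring cannot work for
      BigInts (the BigInt lines are hypothetical, since BigInt exponentiation
      is not implemented yet):

        2 ** 10;      // 1024, same result as Math.pow(2, 10)
        // Math.pow(2n, 3n) would throw a TypeError (ToNumber rejects BigInts),
        // which is why 2n ** 3n must go through the Exponentiate builtin instead.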
      
      Bug: v8:6791
      Change-Id: Id90914c9c3fce310ce01e715c09eaa9f294f4f8a
      Reviewed-on: https://chromium-review.googlesource.com/785694
      Reviewed-by: Jakob Kummerow <jkummerow@chromium.org>
      Reviewed-by: Sathya Gunasekaran <gsathya@chromium.org>
      Reviewed-by: Yang Guo <yangguo@chromium.org>
      Reviewed-by: Mythri Alle <mythria@chromium.org>
      Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
      Commit-Queue: Georg Neis <neis@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#49696}
      b97567a9
  9. 28 Nov, 2017 1 commit
  10. 23 Nov, 2017 1 commit
  11. 21 Nov, 2017 3 commits
  12. 01 Sep, 2017 1 commit
    • [turbofan] Optimize fast enum cache driven for..in. · f1ec44e2
      Benedikt Meurer authored
      This CL adds support to optimize for..in in fast enum-cache mode to the
      same degree that it was optimized in Crankshaft, without adding the same
      deoptimization loop that Crankshaft had with missing enum cache indices.
      That means code like
      
        for (var k in o) {
          var v = o[k];
          // ...
        }
      
      and code like
      
        for (var k in o) {
          if (Object.prototype.hasOwnProperty.call(o, k)) {
            var v = o[k];
            // ...
          }
        }
      
      which follows the https://eslint.org/docs/rules/guard-for-in linter
      rule, can now utilize the enum cache indices if o has only fast
      properties on the receiver, which speeds up the access o[k]
      significantly and reduces the pollution of the global megamorphic
      stub cache.
      
      For example the micro-benchmark in the tracking bug v8:6702 now runs
      faster than ever before:
      
       forIn: 1516 ms.
       forInHasOwnProperty: 1674 ms.
       forInHasOwnPropertySafe: 1595 ms.
       forInSum: 2051 ms.
       forInSumSafe: 2215 ms.
      
      Compared to numbers from V8 5.8, which is the last version running
      with Crankshaft:
      
       forIn: 1641 ms.
       forInHasOwnProperty: 1719 ms.
       forInHasOwnPropertySafe: 1802 ms.
       forInSum: 2226 ms.
       forInSumSafe: 2409 ms.
      
      and V8 6.0, which is the current stable version with TurboFan:
      
       forIn: 1713 ms.
       forInHasOwnProperty: 5417 ms.
       forInHasOwnPropertySafe: 5324 ms.
       forInSum: 7556 ms.
       forInSumSafe: 11067 ms.
      
      It also improves the throughput on the string-fasta benchmark by
      around 7-10%, and there seems to be a ~5% improvement on the
      Speedometer/React benchmark locally.
      
      For this to work, the ForInPrepare bytecode was split into
      ForInEnumerate and ForInPrepare, which is very similar to how it was
      handled in Fullcodegen initially. In TurboFan we introduce a new
      operator LoadFieldByIndex that does the dynamic property load.
      
      This also removes the CheckMapValue operator again in favor of
      just using LoadField, ReferenceEqual and CheckIf, which work
      automatically with the EscapeAnalysis and the
      BranchConditionElimination.
      
      Bug: v8:6702
      Change-Id: I91235413eea478ba77ace7bd14bb2f62e155dd9a
      Reviewed-on: https://chromium-review.googlesource.com/645949
      Commit-Queue: Benedikt Meurer <bmeurer@chromium.org>
      Reviewed-by: Yang Guo <yangguo@chromium.org>
      Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#47768}
      f1ec44e2
  13. 19 Jul, 2017 1 commit
  14. 21 Jun, 2017 2 commits
    • [TurboFan] Enable typed lowering of JSStringConcat to ConsString allocation. · 69a645d3
      Ross McIlroy authored
      Adds typed lowering of JSStringConcat to ConsString allocation if the
      following conditions hold:
       - All concatenations will result in a ConsString of >= ConsString::kMinLength.
       - No concatenation will result in an empty string in the RHS unless there is
         a sequential string in the LHS.

      This also means JSStringConcat needs an eager checkpoint, since it can
      deopt if it has to throw a RangeError while the string length protector
      is still valid.
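
      A hypothetical example of the pattern this targets (the function and
      string lengths are chosen for illustration, not taken from the CL):

        // In optimized code the concatenation below can allocate a ConsString
        // directly instead of going through the generic string concatenation
        // path, provided the resulting strings are long enough.
        function greet(first, last) {
          return 'Hello there, ' + first + ' ' + last + '!';
        }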
      
      BUG=v8:6243
      
      Change-Id: I01ca79f884df467c10f2c032c72d51b5199c1a3c
      Reviewed-on: https://chromium-review.googlesource.com/526636
      Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
      Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#46093}
      69a645d3
    • [turbofan] Introduce new JSConstructWithArrayLike operator. · 21701297
      bmeurer authored
      Add a new JSConstructWithArrayLike operator that is backed by the
      ConstructWithArrayLike builtin (similar to what was done before
      for the JSCallWithArrayLike operator), and use that operator to
      optimize Reflect.construct inlining in TurboFan. This is handled
      uniformly with JSConstructWithSpread in the JSCallReducer.
      
      Also add missing test coverage for Reflect.construct in optimized
      code, especially for some interesting corner cases.
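
      A hedged example of the call pattern the new operator covers (the class
      and arguments are illustrative only):

        // Reflect.construct with an array-like argument list is what gets
        // lowered to JSConstructWithArrayLike in optimized code.
        function Point(x, y) { this.x = x; this.y = y; }
        const p = Reflect.construct(Point, [1, 2]);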
      
      R=petermarshall@chromium.org
      BUG=v8:4587,v8:5269
      
      Review-Url: https://codereview.chromium.org/2949813002
      Cr-Commit-Position: refs/heads/master@{#46087}
      21701297
  15. 20 Jun, 2017 1 commit
    • [turbofan] Introduce new JSCallWithArrayLike operator. · 767ce788
      bmeurer authored
      Add a new JSCallWithArrayLike operator that is backed by the
      CallWithArrayLike builtin, and use that operator for both
      Function.prototype.apply and Reflect.apply inlining. Also unify
      the handling of JSCallWithArrayLike and JSCallWithSpread in
      the JSCallReducer to reduce the copy&paste overhead.
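
      A small illustrative example of the two call patterns that now share
      the JSCallWithArrayLike lowering (values chosen for illustration):

        const args = [1, 2, 3];
        const a = Math.max.apply(null, args);          // Function.prototype.apply
        const b = Reflect.apply(Math.max, null, args); // Reflect.apply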
      
      Drive-by-fix: Add a lot of test coverage for Reflect.apply and
      Function.prototype.apply in optimized code, especially for some
      corner cases, which was missing so far.
      
      BUG=v8:4587,v8:5269
      R=petermarshall@chromium.org
      
      Review-Url: https://codereview.chromium.org/2950773002
      Cr-Commit-Position: refs/heads/master@{#46041}
      767ce788
  16. 13 Jun, 2017 1 commit
    • [builtins] Properly optimize Object.prototype.isPrototypeOf. · b11c557d
      bmeurer authored
      Port the baseline implementation of Object.prototype.isPrototypeOf to
      the CodeStubAssembler, sharing the existing prototype chain lookup logic
      with the instanceof / OrdinaryHasInstance implementation. Based on that,
      do the same in TurboFan, introducing a new JSHasInPrototypeChain
      operator, which encapsulates the central prototype chain walk logic.
      
      This speeds up Object.prototype.isPrototypeOf by more than a factor of
      four, so that the code
      
        A.prototype.isPrototypeOf(a)
      
      is now performance-wise on par with
      
        a instanceof A
      
      for the case where A is a regular constructor function and a is an
      instance of A.
      
      Since instanceof does more than just the fundamental prototype chain
      lookup, it was discovered in Node core that O.p.isPrototypeOf would
      be a more appropriate alternative for certain sanity checks, since
      it's less vulnerable to monkey-patching. In addition, the Object
      builtin would also avoid the performance-cliff associated with
      instanceof (due to the Symbol.hasInstance hook), as for example hit
      by https://github.com/nodejs/node/pull/13403#issuecomment-305915874.
      The main blocker was the missing performance of isPrototypeOf, since
      it was still a JS builtin backed by a runtime call.
      
      This CL also adds more test coverage for the
      Object.prototype.isPrototypeOf builtin, especially when called from
      optimized code.
      
      CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_chromium_rel_ng
      BUG=v8:5269,v8:5989,v8:6483
      R=jgruber@chromium.org
      
      Review-Url: https://codereview.chromium.org/2934893002
      Cr-Commit-Position: refs/heads/master@{#45925}
      b11c557d
  17. 07 Jun, 2017 1 commit
  18. 18 May, 2017 1 commit
    • [turbofan] Avoid allocating rest parameters for spread calls. · bfa319e5
      bmeurer authored
      We already had an optimization to turn Function.prototype.apply with
      an arguments object, i.e.
      
        function foo() { return bar.apply(this, arguments); }
      
      into a special operator JSCallForwardVarargs, which avoids the
      allocation and deconstruction of the arguments object, but just passes
      along the incoming parameters. We can do the same for rest parameters
      and spread calls/constructs, i.e.
      
        class A extends B {
          constructor(...args) { super(...args); }
        }
      
      or
      
        function foo(...args) { return bar(1, 2, 3, ...args); }
      
      where we basically pass along the parameters (plus maybe additional
      statically known parameters).
      
      For this, we introduce a new JSConstructForwardVarargs operator and
      generalize the CallForwardVarargs builtins that are backing this.
      
      BUG=v8:6407,v8:6278,v8:6344
      R=jarin@chromium.org
      
      Review-Url: https://codereview.chromium.org/2890023004
      Cr-Commit-Position: refs/heads/master@{#45388}
      bfa319e5
  19. 08 May, 2017 1 commit
  20. 07 Mar, 2017 1 commit
  21. 03 Mar, 2017 1 commit
  22. 17 Feb, 2017 1 commit
  23. 02 Feb, 2017 1 commit
  24. 01 Feb, 2017 2 commits
  25. 27 Jan, 2017 1 commit
    • [liveedit] reimplement frame restarting. · 3f47c63d
      yangguo authored
      Previously, when restarting a frame, we would rewrite all frames
      between the debugger activation and the frame to restart in order
      to squash them, replace the return address with that of a builtin
      that leaves the rewritten frame, and then restart the function by
      calling it.
      
      We now simply remember the frame to drop to, and upon returning
      from the debugger, we check whether to drop the frame, load the
      new FP, and restart the function.
      
      R=jgruber@chromium.org, mstarzinger@chromium.org
      BUG=v8:5587
      
      Review-Url: https://codereview.chromium.org/2636913002
      Cr-Commit-Position: refs/heads/master@{#42725}
      3f47c63d
  26. 26 Jan, 2017 1 commit
    • [turbofan] Introduce JSCallForwardVarargs operator. · 69747e26
      bmeurer authored
      We turn a JSCallFunction node for
      
        f.apply(receiver, arguments)
      
      into a JSCallForwardVarargs node when arguments refers to the
      arguments of the outermost optimized code object (i.e. not to an
      inlined arguments object), the apply method refers to
      Function.prototype.apply, and there's no other user of arguments
      except in frame states.
      
      We also replace the arguments node in the graph with a marker for
      the Deoptimizer similar to Crankshaft to make sure we don't materialize
      unused arguments just for the sake of deoptimization. We plan to replace
      this with a saner EscapeAnalysis based solution soon.
      
      R=jarin@chromium.org
      BUG=v8:5267,v8:5726
      
      Review-Url: https://codereview.chromium.org/2655233002
      Cr-Commit-Position: refs/heads/master@{#42680}
      69747e26
  27. 23 Jan, 2017 1 commit
  28. 19 Dec, 2016 1 commit
  29. 12 Dec, 2016 1 commit
  30. 18 Nov, 2016 2 commits
  31. 03 Aug, 2016 2 commits
  32. 01 Aug, 2016 1 commit
  33. 11 Jul, 2016 1 commit
  34. 08 Jul, 2016 1 commit
    • [turbofan] Remove eager frame state from divisions. · b1cbb983
      mstarzinger authored
      This removes the frame state input representing the before-state from
      nodes having the {JSDivide} or the {JSModulus} operator. Lowering that
      inserts number conversions of the inputs has to be disabled when
      deoptimization is enabled, because the frame state layout is no longer
      known.
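
      A hedged illustration (the functions below are hypothetical; the CL
      itself only changes the compiler's frame state bookkeeping):

        // For x / y and x % y, TurboFan's lowering inserts number conversions
        // of the inputs; without the eager frame state before those
        // conversions, that lowering is disabled when deoptimization is enabled.
        function div(x, y) { return x / y; }
        function mod(x, y) { return x % y; }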
      
      R=jarin@chromium.org
      BUG=v8:5021
      
      Review-Url: https://codereview.chromium.org/2121153003
      Cr-Commit-Position: refs/heads/master@{#37608}
      b1cbb983