    [builtins] Refactor promises to reduce GC overhead. · 8e7737cb
    Benedikt Meurer authored
    This implements the ideas outlined in the section "Microtask queue"
    of the exploration document "Promise and async/await performance" (at
    https://goo.gl/WHRar2), except that the microtask queue stays a linear
    FixedArray for now, to avoid running into trouble with the parallel
    scavenger. This already saves a significant number of
    allocations, thereby reducing the GC frequency quite a bit.
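
The linear-array queue described above can be sketched roughly as follows. This is a minimal illustration, assuming a flat, growable array of task pointers drained front-to-back; the names `Microtask` and `MicrotaskQueue` match the message, but the layout is illustrative, not V8's actual heap objects.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Base class for everything that can sit on the microtask queue.
struct Microtask {
  virtual ~Microtask() = default;
  virtual void Run() = 0;
};

class MicrotaskQueue {
 public:
  void Enqueue(Microtask* task) { tasks_.push_back(task); }

  // Drain in FIFO order; tasks enqueued while running are also executed,
  // because the loop re-checks the (possibly grown) size each iteration.
  void RunAll() {
    for (size_t i = 0; i < tasks_.size(); ++i) tasks_[i]->Run();
    tasks_.clear();
  }

 private:
  std::vector<Microtask*> tasks_;  // the "linear FixedArray" of the message
};
```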
    
    All items on the microtask queue are now proper structs that subclass
    Microtask, i.e. we also wrap JSFunction and MicrotaskCallback jobs
    into structs. We also consistently remember the context for every
    microtask (except for MicrotaskCallback where we don't have a
    context), and execute it later in exactly that context (as required
    by the spec anyway for the Promise-related jobs). Particularly
    interesting is the PromiseReactionJobTask and its subclasses, since
    they are designed to have the same size as the PromiseReaction. When
    we resolve a JSPromise we just take the existing PromiseReaction
    instances and morph them into PromiseFulfillReactionJobTask or
    PromiseRejectReactionJobTask (depending on whether you "Fulfill" or
    "Reject"). That way the JSPromise class is now only 6 words instead
    of 10 words.
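
The in-place morph can be sketched like this. The field layouts below are hypothetical (same slot count, illustrative field order), not V8's actual definitions; the point is only that equal sizes let resolution rewrite the existing object instead of allocating a job task.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: a pending reaction recorded by then()/catch().
struct PromiseReaction {
  void* next;                   // linked list of pending reactions
  void* reject_handler;
  void* fulfill_handler;
  void* promise_or_capability;  // JSPromise or PromiseCapability
};

// Sketch: the job task the reaction morphs into on fulfillment.
struct PromiseFulfillReactionJobTask {
  void* context;   // overwrites the 'next' slot
  void* argument;  // overwrites the 'reject_handler' slot
  void* handler;   // aliases the 'fulfill_handler' slot
  void* promise_or_capability;
};

static_assert(sizeof(PromiseReaction) ==
                  sizeof(PromiseFulfillReactionJobTask),
              "in-place morphing requires identical size");

// Resolving the promise rewrites the existing object in place:
// no new allocation for the job task.
PromiseFulfillReactionJobTask* MorphToFulfillTask(PromiseReaction* reaction,
                                                  void* context,
                                                  void* argument) {
  auto* task = reinterpret_cast<PromiseFulfillReactionJobTask*>(reaction);
  task->context = context;    // the 'next' link is no longer needed
  task->argument = argument;  // the resolution value
  // 'handler' and 'promise_or_capability' stay where they are.
  return task;
}
```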
    
    Also the PromiseReaction and the reaction tasks can either carry a
    JSPromise (for the fast native case) or a PromiseCapability (for the
    generic case), which means we don't always pay the overhead of having
    to also remember the "deferred resolve" and "deferred reject" handlers
    that are only relevant for the generic case anyway.
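
The either-or slot can be thought of as a variant. This is a sketch of the idea only; V8 actually stores a tagged heap pointer in one field, not a `std::variant`, and the types here are placeholders.

```cpp
#include <cassert>
#include <variant>

struct JSPromise {};  // fast native path: just the result promise

// Generic path: additionally carries the deferred handlers.
struct PromiseCapability {
  JSPromise* promise;
  void* deferred_resolve;
  void* deferred_reject;
};

// One slot, two possible payloads; the fast path pays only for the pointer.
using PromiseOrCapability = std::variant<JSPromise*, PromiseCapability*>;

bool IsFastPath(const PromiseOrCapability& slot) {
  return std::holds_alternative<JSPromise*>(slot);
}
```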
    
    It also fixes a spec violation where we called "then" before we actually
    enqueued the PromiseResolveThenableJob, which is observably wrong.
    Calling it later has the advantage that it should now be fairly
    straightforward to completely avoid it for native Promise
    instances.
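
The corrected ordering can be illustrated with a toy queue. This is a sketch of the observable behavior only, not V8 code: the job is enqueued first, and "then" is invoked only once the microtask actually runs.

```cpp
#include <functional>
#include <queue>
#include <string>
#include <vector>

// Hypothetical harness showing the order of operations when a promise
// is resolved with a thenable.
std::vector<std::string> ResolveWithThenable() {
  std::vector<std::string> log;
  std::queue<std::function<void()>> microtasks;

  // Step 1: enqueue the PromiseResolveThenableJob; "then" is NOT called yet.
  microtasks.push([&log] { log.push_back("thenable.then called"); });
  log.push_back("job enqueued");

  // Step 2: later, the microtask queue is drained, and only now does
  // the job invoke the thenable's "then" method.
  while (!microtasks.empty()) {
    microtasks.front()();
    microtasks.pop();
  }
  return log;
}
```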
    
    This seems to save around 10-20% on the various Promise benchmarks and
    micro-benchmarks. We expect to gain even more as we're now able to
    inline various operations into TurboFan-optimized code easily.
    
    Bug: v8:7253
    Cq-Include-Trybots: master.tryserver.chromium.linux:linux_chromium_rel_ng
    Change-Id: I893d24ca5bb046974b4f5826a8f6dd22f1210b6a
    Reviewed-on: https://chromium-review.googlesource.com/892819
    Commit-Queue: Benedikt Meurer <bmeurer@chromium.org>
    Reviewed-by: Sathya Gunasekaran <gsathya@chromium.org>
    Reviewed-by: Benedikt Meurer <bmeurer@chromium.org>
    Cr-Commit-Position: refs/heads/master@{#50980}