1. 30 Nov, 2021 1 commit
  2. 03 Nov, 2021 2 commits
    • Revert "Reland "[torque] Don't generate k(?:Start|End)Of\w+FieldsOffset constants"" · 6a3dc05f
      Nico Hartmann authored
      This reverts commit a3480b55.
      
      Reason for revert: https://ci.chromium.org/ui/p/v8/builders/ci/V8%20Linux64%20-%20debug%20-%20header%20includes/22234/overview
      
      Original change's description:
      > Reland "[torque] Don't generate k(?:Start|End)Of\w+FieldsOffset constants"
      >
      > This is a reland of 7366f6e2
      >
      > The test that failed after the initial commit was just flaky and has
      > been fixed; see https://bugs.chromium.org/p/v8/issues/detail?id=12341
      >
      > Original change's description:
      > > [torque] Don't generate k(?:Start|End)Of\w+FieldsOffset constants
      > >
      > > Torque currently generates constants like kStartOfWeakFieldsOffset and
      > > kEndOfStrongFieldsOffset, which can be used when writing custom
      > > BodyDescriptors. However, these offsets have some potentially confusing
      > > behaviors:
      > >
      > > * They don't take inheritance into account and describe only the fields
      > >   defined by the current class itself, so there might be (for example)
      > >   strong fields before kStartOfStrongFieldsOffset if they were defined
      > >   by a superclass.
      > > * kStartOfWeakFieldsOffset points to the first field defined in Torque
      > >   using the keyword `weak`, which indicates fields with *custom*
      > >   weakness semantics (those that should be visited with
      > >   IterateCustomWeakPointers), not those that may contain standard weak
      > >   pointers (visited with IterateMaybeWeakPointers). (As a follow-up, I'd
      > >   like to also rename `weak` to `@customWeak`.)
      > >
      > > Given that these constants have very low usage and somewhat bizarre
      > > semantics, I propose that we remove them. This change does so, and
      > > updates the existing usages to either define the required constants
      > > directly in C++ or not use them. I know that defining these constants in
      > > C++ is more brittle, but I think that brittle and clear is better than
      > > automatic and incomprehensible.
      > >
      > > Bug: v8:7793
      > > Change-Id: I87f8c85ccae4027f61ac73d4e7e4e2820e92003b
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3199731
      > > Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      > > Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      > > Commit-Queue: Seth Brenith <seth.brenith@microsoft.com>
      > > Cr-Commit-Position: refs/heads/main@{#77411}
      >
      > Bug: v8:7793
      > Change-Id: Iefdd4014ce4b85b48c19ead79a0316774a5ecd45
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3258082
      > Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      > Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      > Reviewed-by: Michael Lippautz <mlippautz@chromium.org>
      > Commit-Queue: Seth Brenith <seth.brenith@microsoft.com>
      > Cr-Commit-Position: refs/heads/main@{#77688}
      
      Bug: v8:7793
      Change-Id: I7b9667268901b7aef85a95832d40860056e61050
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3259656
      Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      Owners-Override: Nico Hartmann <nicohartmann@chromium.org>
      Commit-Queue: Nico Hartmann <nicohartmann@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#77689}
    • Reland "[torque] Don't generate k(?:Start|End)Of\w+FieldsOffset constants" · a3480b55
      Seth Brenith authored
      This is a reland of 7366f6e2
      
      The test that failed after the initial commit was just flaky and has
      been fixed; see https://bugs.chromium.org/p/v8/issues/detail?id=12341
      
      Original change's description:
      > [torque] Don't generate k(?:Start|End)Of\w+FieldsOffset constants
      >
      > Torque currently generates constants like kStartOfWeakFieldsOffset and
      > kEndOfStrongFieldsOffset, which can be used when writing custom
      > BodyDescriptors. However, these offsets have some potentially confusing
      > behaviors:
      >
      > * They don't take inheritance into account and describe only the fields
      >   defined by the current class itself, so there might be (for example)
      >   strong fields before kStartOfStrongFieldsOffset if they were defined
      >   by a superclass.
      > * kStartOfWeakFieldsOffset points to the first field defined in Torque
      >   using the keyword `weak`, which indicates fields with *custom*
      >   weakness semantics (those that should be visited with
      >   IterateCustomWeakPointers), not those that may contain standard weak
      >   pointers (visited with IterateMaybeWeakPointers). (As a follow-up, I'd
      >   like to also rename `weak` to `@customWeak`.)
      >
      > Given that these constants have very low usage and somewhat bizarre
      > semantics, I propose that we remove them. This change does so, and
      > updates the existing usages to either define the required constants
      > directly in C++ or not use them. I know that defining these constants in
      > C++ is more brittle, but I think that brittle and clear is better than
      > automatic and incomprehensible.
      >
      > Bug: v8:7793
      > Change-Id: I87f8c85ccae4027f61ac73d4e7e4e2820e92003b
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3199731
      > Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      > Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      > Commit-Queue: Seth Brenith <seth.brenith@microsoft.com>
      > Cr-Commit-Position: refs/heads/main@{#77411}
      
      Bug: v8:7793
      Change-Id: Iefdd4014ce4b85b48c19ead79a0316774a5ecd45
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3258082
      Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      Reviewed-by: Michael Lippautz <mlippautz@chromium.org>
      Commit-Queue: Seth Brenith <seth.brenith@microsoft.com>
      Cr-Commit-Position: refs/heads/main@{#77688}
  3. 15 Oct, 2021 2 commits
    • Revert "[torque] Don't generate k(?:Start|End)Of\w+FieldsOffset constants" · 6025b260
      Leszek Swirski authored
      This reverts commit 7366f6e2.
      
      Reason for revert: Speculative revert for cctest/test-debug-helper/GetObjectProperties failures
      https://logs.chromium.org/logs/v8/buildbucket/cr-buildbucket/8833300564873660401/+/u/Check/GetObjectProperties
      
      Original change's description:
      > [torque] Don't generate k(?:Start|End)Of\w+FieldsOffset constants
      >
      > Torque currently generates constants like kStartOfWeakFieldsOffset and
      > kEndOfStrongFieldsOffset, which can be used when writing custom
      > BodyDescriptors. However, these offsets have some potentially confusing
      > behaviors:
      >
      > * They don't take inheritance into account and describe only the fields
      >   defined by the current class itself, so there might be (for example)
      >   strong fields before kStartOfStrongFieldsOffset if they were defined
      >   by a superclass.
      > * kStartOfWeakFieldsOffset points to the first field defined in Torque
      >   using the keyword `weak`, which indicates fields with *custom*
      >   weakness semantics (those that should be visited with
      >   IterateCustomWeakPointers), not those that may contain standard weak
      >   pointers (visited with IterateMaybeWeakPointers). (As a follow-up, I'd
      >   like to also rename `weak` to `@customWeak`.)
      >
      > Given that these constants have very low usage and somewhat bizarre
      > semantics, I propose that we remove them. This change does so, and
      > updates the existing usages to either define the required constants
      > directly in C++ or not use them. I know that defining these constants in
      > C++ is more brittle, but I think that brittle and clear is better than
      > automatic and incomprehensible.
      >
      > Bug: v8:7793
      > Change-Id: I87f8c85ccae4027f61ac73d4e7e4e2820e92003b
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3199731
      > Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      > Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      > Commit-Queue: Seth Brenith <seth.brenith@microsoft.com>
      > Cr-Commit-Position: refs/heads/main@{#77411}
      
      Bug: v8:7793
      Change-Id: Ia12b5d773db35739283ca8871d3dd6922413cc82
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3226783
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Bot-Commit: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Owners-Override: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#77415}
    • [torque] Don't generate k(?:Start|End)Of\w+FieldsOffset constants · 7366f6e2
      Seth Brenith authored
      Torque currently generates constants like kStartOfWeakFieldsOffset and
      kEndOfStrongFieldsOffset, which can be used when writing custom
      BodyDescriptors. However, these offsets have some potentially confusing
      behaviors:
      
      * They don't take inheritance into account and describe only the fields
        defined by the current class itself, so there might be (for example)
        strong fields before kStartOfStrongFieldsOffset if they were defined
        by a superclass.
      * kStartOfWeakFieldsOffset points to the first field defined in Torque
        using the keyword `weak`, which indicates fields with *custom*
        weakness semantics (those that should be visited with
        IterateCustomWeakPointers), not those that may contain standard weak
        pointers (visited with IterateMaybeWeakPointers). (As a follow-up, I'd
        like to also rename `weak` to `@customWeak`.)
      
      Given that these constants have very low usage and somewhat bizarre
      semantics, I propose that we remove them. This change does so, and
      updates the existing usages to either define the required constants
      directly in C++ or not use them. I know that defining these constants in
      C++ is more brittle, but I think that brittle and clear is better than
      automatic and incomprehensible.
      
      Bug: v8:7793
      Change-Id: I87f8c85ccae4027f61ac73d4e7e4e2820e92003b
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3199731
      Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      Commit-Queue: Seth Brenith <seth.brenith@microsoft.com>
      Cr-Commit-Position: refs/heads/main@{#77411}
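      To illustrate the "define the constants directly in C++" approach this
      change advocates, here is a minimal sketch of a hand-written body
      descriptor; the types, field offsets, and visitor below are illustrative
      stand-ins rather than V8's actual code:

          // Stand-ins for V8's tagged-field machinery (illustrative only).
          constexpr int kTaggedSize = 8;
          constexpr int kFooOffset = 8;                         // first strong field
          constexpr int kBarOffset = kFooOffset + kTaggedSize;  // last strong field

          // A hand-written body descriptor that spells out its field bounds
          // directly, instead of relying on the Torque-generated
          // kStartOf*/kEndOf*FieldsOffset constants (which ignored
          // superclass fields and had confusing weakness semantics).
          struct ExampleBodyDescriptor {
            static constexpr int kStartOfStrongFields = kFooOffset;
            static constexpr int kEndOfStrongFields = kBarOffset + kTaggedSize;

            template <typename Visitor>
            static void IterateBody(void* obj, Visitor* v) {
              v->VisitPointers(obj, kStartOfStrongFields, kEndOfStrongFields);
            }
          };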
  4. 12 Oct, 2021 1 commit
  5. 08 Oct, 2021 1 commit
  6. 28 Sep, 2021 1 commit
    • [heap] Don't age bytecode when GCing for devtools snapshot · 4b532343
      Seth Brenith authored
      When preparing to take a heap snapshot for the devtools, V8 uses
      CollectAllAvailableGarbage, which runs 2 to 7 rounds of garbage
      collection, depending on whether weak callbacks indicate that further
      rounds might be beneficial. Depending on how many rounds of GC run,
      varying amounts of bytecode and baseline code may be flushed, leading to
      inconsistent behavior and underreporting the amount of memory used by
      bytecode and baseline code. In this change, I propose that bytecode
      should not increase in age during these collections, so that the
      resulting snapshot is a better indication of actual memory usage.
      
      Change-Id: I644be37833f85bb58e2e2fad5da62949cbdc9bef
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3182885
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Commit-Queue: Seth Brenith <seth.brenith@microsoft.com>
      Cr-Commit-Position: refs/heads/main@{#77122}
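      A minimal sketch of the idea, with stand-in types (V8's real
      GarbageCollectionReason and BytecodeArray differ in detail):

          // Skip bumping the bytecode age when the collection was requested
          // for a devtools heap snapshot, so the 2 to 7 snapshot GC rounds
          // don't flush bytecode or baseline code.
          enum class GCReason { kHeapProfiler, kOther };

          struct Bytecode {
            int age = 0;
          };

          void MaybeAgeBytecode(Bytecode& bytecode, GCReason reason) {
            if (reason == GCReason::kHeapProfiler) return;  // snapshot GC: no aging
            bytecode.age += 1;
          }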
  7. 27 Sep, 2021 1 commit
  8. 06 Sep, 2021 1 commit
  9. 19 Aug, 2021 1 commit
  10. 04 Aug, 2021 1 commit
  11. 29 Jul, 2021 1 commit
    • [sparkplug] Introduce flush_baseline_code flag · 64556d13
      Mythri A authored
      Introduce a flush_baseline_code flag to control if baseline code is
      flushed or not. Currently flush_baseline_code implies flush_bytecode
      as well. So if flush_baseline_code is enabled both bytecode and baseline
      code are flushed. If the flag is disabled we only flush bytecode and
      not baseline code.
      
      In a follow-up CL we will add support for controlling baseline and
      bytecode flushing independently, i.e. flushing only bytecode, only
      baseline code, or both.
      
      Bug: v8:11947
      Change-Id: I5a90ed38469de64ed1d736d1eaaeabc2985f0783
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3059684
      Commit-Queue: Mythri Alle <mythria@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Omer Katz <omerkatz@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#76003}
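      The implication between the two flags can be sketched like this (the
      helper names are illustrative, not the CL's actual code):

          // Stand-ins for V8's --flush-bytecode / --flush-baseline-code flags.
          static bool FLAG_flush_bytecode = true;
          static bool FLAG_flush_baseline_code = true;

          // flush_baseline_code implies flush_bytecode: with the flag on,
          // both are flushed; with it off, only bytecode flushing remains.
          bool ShouldFlushBaselineCode() { return FLAG_flush_baseline_code; }

          bool ShouldFlushBytecode() {
            return FLAG_flush_bytecode || FLAG_flush_baseline_code;
          }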
  12. 20 Jul, 2021 2 commits
    • [ext-code-space][heap] Implement custom marking of CodeObjectSlots · 69b1e0ec
      Igor Sheludko authored
      ... which will update both the CodeObjectSlot contents and the cached
      value of the code entry point when the pointed-to Code object is
      evacuated.
      This is done by introducing an OLD_TO_CODE remembered set which is
      populated with the recorded slots containing pointers to Code objects.
      CodeDataContainer is the only kind of holder that can contain Code
      pointers, so having a CodeObjectSlot is enough to compute the holder
      CodeDataContainer object and update the cached code entry point there.
      
      This CL fixes the data race in the previous implementation, which
      updated the code entry point during Code object migration.
      
      Bug: v8:11880
      Change-Id: I44aa46af4bad7eb4eaa922b6876d5f2f836e0791
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3035084
      Commit-Queue: Igor Sheludko <ishell@chromium.org>
      Reviewed-by: Camillo Bruni <cbruni@chromium.org>
      Reviewed-by: Michael Lippautz <mlippautz@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#75826}
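      A simplified sketch of the slot-update step described above, using
      stand-in types (the real CL works on V8's CodeObjectSlot,
      CodeDataContainer, and the new OLD_TO_CODE remembered set):

          #include <cstdint>

          struct Code {
            uintptr_t instruction_start;
          };

          struct CodeDataContainer {
            Code* code;                  // the recorded CodeObjectSlot
            uintptr_t code_entry_point;  // cached entry point
          };

          // When an evacuated Code object is found via an OLD_TO_CODE slot,
          // rewrite the slot and refresh the holder's cached entry point.
          // Since CodeDataContainer is the only holder of Code pointers, the
          // holder is always recoverable from the slot.
          void UpdateCodeObjectSlot(CodeDataContainer* holder, Code* moved) {
            holder->code = moved;
            holder->code_entry_point = moved->instruction_start;
          }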
    • Reland "[sparkplug] Support bytecode / baseline code flushing with sparkplug" · 3ae733f9
      Mythri A authored
      This is a reland of ea55438a. Relanding
      after a fix landed here:
      https://chromium-review.googlesource.com/c/v8/v8/+/3030711. The failures
      were caused by baseline code being flushed during deoptimization, after
      we had already chosen which entry builtin (InterpreterEnterAt* /
      BaselineEnterAt*) to use. BaselineEnterAt* builtins expect baseline
      code, but it could be flushed before we execute the builtin. The fix
      is to defer the decision.
      
      Original change's description:
      > [sparkplug] Support bytecode / baseline code flushing with sparkplug
      >
      > Currently with sparkplug we don't flush bytecode / baseline code of
      > functions that were tiered up to sparkplug. This CL adds the support to
      > flush baseline code / bytecode of functions that have baseline code too.
      > This CL:
      > 1. Updates the BodyDescriptor of JSFunction to treat the Code field of
      > JSFunction as a custom weak pointer where the code is treated as weak if
      > the bytecode corresponding to this function is old.
      > 2. Updates GC to handle the functions that had a weak code object during
      > the atomic phase of GC.
      > 3. Updates the check for old bytecode to also consider when there is
      > baseline code on the function.
      >
      > This CL doesn't change any heuristics for flushing. The baseline code
      > will be flushed at the same time as bytecode.
      >
      > Change-Id: I6b51e06ebadb917b9f4b0f43f2afebd7f64cd26a
      > Bug: v8:11947
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2992715
      > Commit-Queue: Mythri Alle <mythria@chromium.org>
      > Reviewed-by: Andreas Haas <ahaas@chromium.org>
      > Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      > Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#75674}
      
      Bug: v8:11947
      Change-Id: I63dce4cd9f6271c54049cc09f95d12e2795f15d1
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3035774
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Reviewed-by: Andreas Haas <ahaas@chromium.org>
      Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
      Commit-Queue: Mythri Alle <mythria@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#75810}
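      The "defer the decision" fix can be pictured with stand-in types
      (illustrative only; the real code chooses between the
      InterpreterEnterAt* / BaselineEnterAt* builtins):

          enum class Builtin { kInterpreterEnter, kBaselineEnter };

          struct Function {
            bool has_baseline_code;
          };

          // Choose the entry builtin at the last possible moment: baseline
          // code may be flushed between the start of deoptimization and
          // actually entering the function, so an early cached choice could
          // go stale.
          Builtin EntryBuiltinFor(const Function& f) {
            return f.has_baseline_code ? Builtin::kBaselineEnter
                                       : Builtin::kInterpreterEnter;
          }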
  13. 12 Jul, 2021 2 commits
    • Revert "[sparkplug] Support bytecode / baseline code flushing with sparkplug" · a079f057
      Mythri Alle authored
      This reverts commit ea55438a.
      
      Reason for revert: Likely culprit for these failures: https://ci.chromium.org/ui/p/v8/builders/ci/V8%20NumFuzz/15494/overview
      
      Original change's description:
      > [sparkplug] Support bytecode / baseline code flushing with sparkplug
      >
      > Currently with sparkplug we don't flush bytecode / baseline code of
      > functions that were tiered up to sparkplug. This CL adds the support to
      > flush baseline code / bytecode of functions that have baseline code too.
      > This CL:
      > 1. Updates the BodyDescriptor of JSFunction to treat the Code field of
      > JSFunction as a custom weak pointer where the code is treated as weak if
      > the bytecode corresponding to this function is old.
      > 2. Updates GC to handle the functions that had a weak code object during
      > the atomic phase of GC.
      > 3. Updates the check for old bytecode to also consider when there is
      > baseline code on the function.
      >
      > This CL doesn't change any heuristics for flushing. The baseline code
      > will be flushed at the same time as bytecode.
      >
      > Change-Id: I6b51e06ebadb917b9f4b0f43f2afebd7f64cd26a
      > Bug: v8:11947
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2992715
      > Commit-Queue: Mythri Alle <mythria@chromium.org>
      > Reviewed-by: Andreas Haas <ahaas@chromium.org>
      > Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      > Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#75674}
      
      Bug: v8:11947
      Change-Id: I50535b9a6c6fc39eceb4f6c0e0c84c55bb92f30a
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3017811
      Reviewed-by: Mythri Alle <mythria@chromium.org>
      Commit-Queue: Mythri Alle <mythria@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#75679}
    • [sparkplug] Support bytecode / baseline code flushing with sparkplug · ea55438a
      Mythri A authored
      Currently with sparkplug we don't flush bytecode / baseline code of
      functions that were tiered up to sparkplug. This CL adds the support to
      flush baseline code / bytecode of functions that have baseline code too.
      This CL:
      1. Updates the BodyDescriptor of JSFunction to treat the Code field of
      JSFunction as a custom weak pointer where the code is treated as weak if
      the bytecode corresponding to this function is old.
      2. Updates GC to handle the functions that had a weak code object during
      the atomic phase of GC.
      3. Updates the check for old bytecode to also consider when there is
      baseline code on the function.
      
      This CL doesn't change any heuristics for flushing. The baseline code
      will be flushed at the same time as bytecode.
      
      Change-Id: I6b51e06ebadb917b9f4b0f43f2afebd7f64cd26a
      Bug: v8:11947
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2992715
      Commit-Queue: Mythri Alle <mythria@chromium.org>
      Reviewed-by: Andreas Haas <ahaas@chromium.org>
      Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#75674}
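      Point 3 above, the extended check for old bytecode, can be sketched
      with stand-in types (illustrative, not V8's SharedFunctionInfo API):

          struct Bytecode {
            bool is_old;
          };

          struct SharedInfo {
            Bytecode* bytecode;  // null if the function never had bytecode
          };

          // Whether the function currently runs bytecode or baseline code,
          // the same age heuristic on the underlying bytecode decides
          // flushing; baseline code is flushed together with its bytecode.
          bool ShouldFlushCode(const SharedInfo& shared) {
            if (shared.bytecode == nullptr) return false;
            return shared.bytecode->is_old;
          }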
  14. 29 Jun, 2021 1 commit
  15. 02 Jun, 2021 1 commit
  16. 11 May, 2021 1 commit
  17. 28 Apr, 2021 1 commit
  18. 12 Apr, 2021 1 commit
    • Allowing map word to be used for other state in GC header. · 5e0b94c4
      Wenyu Zhao authored
      This CL adds features to pack/unpack map words.
      
      Currently V8 cannot store extra metadata in object headers, because V8
      objects do not have a proper header but only a map pointer at the start
      of the object. To store per-object metadata like marking data, a side
      table is required as the per-object metadata storage.
      
      This CL enables V8 to use the unused higher bits of a 64-bit map word
      as per-object metadata storage. Map pointer stores come with an extra
      step to encode the metadata into the pointer (we call it "map
      packing"), and map pointer loads strip the metadata bits again (we
      call it "map unpacking").
      
      Since the map word is no longer a valid pointer after packing, we also
      change the tag of the packed map word to make it look like a Smi. This
      helps various GC and barrier code to correctly skip it instead of
      blindly dereferencing an invalid pointer.
      
      A ninja flag `v8_enable_map_packing` is provided to turn this
      map-packing feature on and off. It is disabled by default.
      
      * Only works on x64 platform, with `v8_enable_pointer_compression`
        set to `false`
      
      Bug: v8:11624
      Change-Id: Ia2bdf79553945e5fc0b0874c87803d2cc733e073
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2247561
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#73915}
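      The pack/unpack steps amount to plain bit manipulation. A sketch with
      made-up shift, mask, and tag values (the real encoding differs):

          #include <cstdint>

          // Pack per-object metadata into the unused high bits of a 64-bit
          // map word, and clear the heap-object tag bit so the packed word
          // reads as a Smi and gets skipped by pointer-visiting code.
          constexpr uint64_t kMetaShift = 48;
          constexpr uint64_t kPtrMask = (uint64_t{1} << kMetaShift) - 1;
          constexpr uint64_t kHeapObjectTag = 1;

          uint64_t PackMapWord(uint64_t map_ptr, uint64_t meta) {
            return ((meta << kMetaShift) | (map_ptr & kPtrMask)) &
                   ~kHeapObjectTag;
          }

          uint64_t UnpackMapWord(uint64_t packed) {
            // Drop the metadata bits and restore the heap-object tag.
            return (packed & kPtrMask) | kHeapObjectTag;
          }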
  19. 03 Dec, 2020 1 commit
    • Reland "[heap] Remove SWEEPING phase in incremental marking" · dec7565f
      Dominik Inführ authored
      This is a reland of 2afb00c0
      
      Original change's description:
      > [heap] Remove SWEEPING phase in incremental marking
      >
      > The SWEEPING phase in incremental marking was used to finish sweeping
      > of the last GC cycle concurrently before starting incremental marking.
      > This avoids potentially long pauses when starting incremental marking.
      > However this shouldn't be necessary in most cases where sweeping is
      > already finished when starting the next cycle. The implementation also
      > didn't cleanly separate the GC cycles.
      >
      > In case the sweeping phase is necessary for pause times, we can
      > introduce a "CompleteSweep" phase which runs right before starting
      > incremental marking.
      >
      > Change-Id: Iaff8c06d5691e584894f57941f181d0424051eec
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2567707
      > Commit-Queue: Dominik Inführ <dinfuehr@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#71555}
      
      Change-Id: I173bdeaf342d4c0590453f7d9eeb8ab5cfddc73c
      Bug: v8:11220, v8:11221
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2571111
      Commit-Queue: Dominik Inführ <dinfuehr@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#71592}
  20. 24 Nov, 2020 1 commit
  21. 07 Oct, 2020 1 commit
    • Reland^4 "[serializer] Allocate during deserialization" · 3c508b38
      Leszek Swirski authored
      This relands commit 3f4e9bbe,
      which was a reland of c4a062a9
      which was a reland of 28a30c57
      which was a reland of 5d7a29c9
      
      The change had an issue that embedders implementing heap tracing (e.g.
      Unified Heap with Blink) could be passed an uninitialized pointer if
      marking happened during deserialization of an object containing such a
      pointer. Because of the 0xdeadbed0 uninitialized filler value, these
      embedders would then receive the value 0xdeadbed0deadbed0 as the
      'pointer', and crash on dereference.
      
      There is, however, already special handling for null pointers in heap
      tracing, precisely to deal with not-yet-initialized values. So we can
      make the uninitialized Smi filler be 0x00000000, which gives such
      embedder fields a nullptr representation and lets them follow the
      normal uninitialized-value bailouts.
      
      In addition, it relands the following dependent changes, which are
      relanded unchanged and are follow-up performance improvements.
      Relanding them in the same change should allow for cleaner reverts
      should they be needed.
      
      This relands commit 76ad3ab5
      [identity-map] Change resize heuristic
      
      This relands commit 77cc96aa
      [identity-map] Cache the calculated Hash
      
      This relands commit bee5b996
      [serializer] Remove Deserializer::Initialize
      
      This relands commit c8f73f22
      [serializer] Cache instance type in PostProcessNewObject
      
      This relands commit 4e7c99ab
      [identity-map] Remove double-lookups in IdentityMap
      
      Original change's description:
      > Reland^3 "[serializer] Allocate during deserialization"
      >
      > This is a reland of c4a062a9
      > which was a reland of 28a30c57
      > which was a reland of 5d7a29c9
      >
      > Fixes TSAN errors from non-atomic writes in the deserializer. Now all
      > writes are (relaxed) atomic.
      >
      > Original change's description:
      > > Reland^2 "[serializer] Allocate during deserialization"
      > >
      > > This is a reland of 28a30c57
      > > which was a reland of 5d7a29c9
      > >
      > > The crashes were from calling RegisterDeserializerFinished on a null
      > > Isolate pointer, for a deserializer that was never initialised
      > > (specifically, ReadOnlyDeserializer when ROHeap is shared).
      > >
      > > Original change's description:
      > > > Reland "[serializer] Allocate during deserialization"
      > > >
      > > > This is a reland of 5d7a29c9
      > > >
      > > > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > > > to not check the new space addresses until it's known that this is a new
      > > > space allocation. This fixes an UBSan failure during read-only space
      > > > deserialization, which happens before the new space is initialized.
      > > >
      > > > It also fixes some issues discovered by --stress-snapshot, around
      > > > serializing ThinStrings (which are now elided as part of serialization),
      > > > handle counts (I bumped the maximum handle count in that check), and
      > > > clearing map transitions (the map backpointer field needed a Smi
      > > > uninitialized value check).
      > > >
      > > > Original change's description:
      > > > > [serializer] Allocate during deserialization
      > > > >
      > > > > This patch removes the concept of reservations and a specialized
      > > > > deserializer allocator, and instead makes the deserializer allocate
      > > > > directly with the Heap's Allocate method.
      > > > >
      > > > > The major consequence of this is that the GC can now run during
      > > > > deserialization, which means that:
      > > > >
      > > > >   a) Deserialized objects are visible to the GC, and
      > > > >   b) Objects that the deserializer/deserialized objects point to can
      > > > >      move.
      > > > >
      > > > > Point a) is mostly not a problem due to previous work in making
      > > > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > > > size before any subsequent allocation/safepoint. We now additionally
      > > > > have to initialize the allocated space with a valid tagged value -- this
      > > > > is a magic Smi value to keep "uninitialized" checks simple.
      > > > >
      > > > > Point b) is solved by Handlifying the deserializer. This involves
      > > > > changing any vectors of objects into vectors of Handles, and any object
      > > > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > > > the object's address is no longer a stable hash).
      > > > >
      > > > > Back-references can no longer be direct chunk offsets, so instead the
      > > > > deserializer stores a Handle to each deserialized object, and the
      > > > > backreference is an index into this handle array. This encoding could
      > > > > be optimized in the future with e.g. a second pass over the serialized
      > > > > array which emits a different bytecode for objects that are and aren't
      > > > > back-referenced.
      > > > >
      > > > > Additionally, the slot-walk over objects to initialize them can no
      > > > > longer use absolute slot offsets, as again an object may move and its
      > > > > slot address would become invalid. Now, slots are walked as relative
      > > > > offsets to a Handle to the object, or as absolute slots for the case of
      > > > > root pointers. A concept of "slot accessor" is introduced to share the
      > > > > code between these two modes, and writing the slot (including write
      > > > > barriers) is abstracted into this accessor.
      > > > >
      > > > > Finally, the Code body walk is modified to deserialize all objects
      > > > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > > > during a RelocInfo walk.
      > > > >
      > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > > > size rather than byte size -- the size is expected to be tagged-aligned
      > > > > anyway, so now we get an extra few bits in the size encoding.
      > > > >
      > > > > Bug: chromium:1075999
      > > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > > Cr-Commit-Position: refs/heads/master@{#70229}
      
      Bug: chromium:1075999
      Change-Id: Ib514a4ef16bd02bfb60d046ecbf8fae1ead64a98
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2452689
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70366}
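      The handlified back-reference scheme quoted above can be sketched as
      follows; the types are illustrative stand-ins for the deserializer's
      actual bookkeeping:

          #include <vector>

          struct HeapObject {};
          using Handle = HeapObject*;  // a real Handle is a GC-updated indirection

          // Back-references become indices into a handle list instead of raw
          // chunk offsets: every deserialized object is held in a handle, so
          // the GC can move objects mid-deserialization without breaking
          // references.
          class BackRefs {
           public:
            int Register(Handle obj) {
              refs_.push_back(obj);
              return static_cast<int>(refs_.size()) - 1;
            }
            Handle Resolve(int index) const { return refs_[index]; }

           private:
            std::vector<Handle> refs_;
          };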
  22. 06 Oct, 2020 1 commit
    • Revert "[heap] Convert WeakObjects to heap::base::Worklist" · a282d2e9
      Ulan Degenbaev authored
      This reverts commit 969cdfe6.
      
      Reason for revert: speculative revert for crbug.com/1135472
      
      Original change's description:
      > [heap] Convert WeakObjects to heap::base::Worklist
      >
      > This splits WeakObjects into explicit global and local worklists.
      > The latter are defined in WeakObjects::Local and are thread-local.
      >
      > The main thread local worklist is stored in
      > MarkCompactCollector::local_weak_objects and exists during marking
      > similar to local_marking_worklists. Concurrent markers create their
      > own local worklists that are published at the end.
      >
      > Change-Id: I093fdc580b4609ce83455b860b90a5099085beac
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440607
      > Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
      > Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70317}
      
      TBR=ulan@chromium.org,dinfuehr@chromium.org
      
      Change-Id: I3fa3bfdcf3c359f46a3b56c19fb4e486883cde9d
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2452749
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70344}
  23. 05 Oct, 2020 2 commits
    • Revert "Reland^3 "[serializer] Allocate during deserialization"" · a10ec2be
      Adam Klein authored
      This reverts commit 3f4e9bbe, along
      with the following dependent changes (reverted to make this a clean revert):
      76ad3ab5 [identity-map] Change resize heuristic
      77cc96aa [identity-map] Cache the calculated Hash
      bee5b996 [serializer] Remove Deserializer::Initialize
      c8f73f22 [serializer] Cache instance type in PostProcessNewObject
      4e7c99ab [identity-map] Remove double-lookups in IdentityMap
      
      Reason for revert: major crash spike on Canary (https://crbug.com/1135027)
      
      Original change's description:
      > Reland^3 "[serializer] Allocate during deserialization"
      >
      > This is a reland of c4a062a9
      > which was a reland of 28a30c57
      > which was a reland of 5d7a29c9
      >
      > Fixes TSAN errors from non-atomic writes in the deserializer. Now all
      > writes are (relaxed) atomic.
      >
      > Original change's description:
      > > Reland^2 "[serializer] Allocate during deserialization"
      > >
      > > This is a reland of 28a30c57
      > > which was a reland of 5d7a29c9
      > >
      > > The crashes were from calling RegisterDeserializerFinished on a null
      > > Isolate pointer, for a deserializer that was never initialised
      > > (specifically, ReadOnlyDeserializer when ROHeap is shared).
      > >
      > > Original change's description:
      > > > Reland "[serializer] Allocate during deserialization"
      > > >
      > > > This is a reland of 5d7a29c9
      > > >
      > > > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > > > to not check the new space addresses until it's known that this is a new
      > > > space allocation. This fixes an UBSan failure during read-only space
      > > > deserialization, which happens before the new space is initialized.
      > > >
      > > > It also fixes some issues discovered by --stress-snapshot, around
      > > > serializing ThinStrings (which are now elided as part of serialization),
      > > > handle counts (I bumped the maximum handle count in that check), and
      > > > clearing map transitions (the map backpointer field needed a Smi
      > > > uninitialized value check).
      > > >
      > > > Original change's description:
      > > > > [serializer] Allocate during deserialization
      > > > >
      > > > > This patch removes the concept of reservations and a specialized
      > > > > deserializer allocator, and instead makes the deserializer allocate
      > > > > directly with the Heap's Allocate method.
      > > > >
      > > > > The major consequence of this is that the GC can now run during
      > > > > deserialization, which means that:
      > > > >
      > > > >   a) Deserialized objects are visible to the GC, and
      > > > >   b) Objects that the deserializer/deserialized objects point to can
      > > > >      move.
      > > > >
      > > > > Point a) is mostly not a problem due to previous work in making
      > > > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > > > size before any subsequent allocation/safepoint. We now additionally
      > > > > have to initialize the allocated space with a valid tagged value -- this
      > > > > is a magic Smi value to keep "uninitialized" checks simple.
      > > > >
      > > > > Point b) is solved by Handlifying the deserializer. This involves
      > > > > changing any vectors of objects into vectors of Handles, and any object
      > > > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > > > the object's address is no longer a stable hash).
      > > > >
      > > > > Back-references can no longer be direct chunk offsets, so instead the
      > > > > deserializer stores a Handle to each deserialized object, and the
      > > > > backreference is an index into this handle array. This encoding could
      > > > > be optimized in the future with e.g. a second pass over the serialized
      > > > > array which emits a different bytecode for objects that are and aren't
      > > > > back-referenced.
      > > > >
      > > > > Additionally, the slot-walk over objects to initialize them can no
      > > > > longer use absolute slot offsets, as again an object may move and its
      > > > > slot address would become invalid. Now, slots are walked as relative
      > > > > offsets to a Handle to the object, or as absolute slots for the case of
      > > > > root pointers. A concept of "slot accessor" is introduced to share the
      > > > > code between these two modes, and writing the slot (including write
      > > > > barriers) is abstracted into this accessor.
      > > > >
      > > > > Finally, the Code body walk is modified to deserialize all objects
      > > > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > > > during a RelocInfo walk.
      > > > >
      > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > > > size rather than byte size -- the size is expected to be tagged-aligned
      > > > > anyway, so now we get an extra few bits in the size encoding.
      > > > >
      > > > > Bug: chromium:1075999
      > > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > > Cr-Commit-Position: refs/heads/master@{#70229}
      > > >
      > > > Bug: chromium:1075999
      > > > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > Cr-Commit-Position: refs/heads/master@{#70267}
      > >
      > > Tbr: jgruber@chromium.org,ulan@chromium.org
      > > Bug: chromium:1075999
      > > Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70279}
      >
      > Tbr: jgruber@chromium.org,ulan@chromium.org
      > Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng,v8_linux64_tsan_no_cm_rel_ng,v8_linux64_tsan_isolates_rel_ng
      > Bug: chromium:1075999
      > Change-Id: I0b9b11644aebc4cc8b07c62a0f765b24e4d73d89
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2445872
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70288}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org,dinfuehr@chromium.org
      
      Bug: chromium:1075999, chromium:1135027
      Change-Id: I5d0d9e49c0302d94ff7291834f5f18e7a0839eb7
      Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng,v8_linux64_tsan_no_cm_rel_ng,v8_linux64_tsan_isolates_rel_ng
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2451030
      Reviewed-by: Adam Klein <adamk@chromium.org>
      Commit-Queue: Adam Klein <adamk@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70328}
    • [heap] Convert WeakObjects to heap::base::Worklist · 969cdfe6
      Ulan Degenbaev authored
      This splits WeakObjects into explicit global and local worklists.
      The latter are defined in WeakObjects::Local and are thread-local.
      
      The main thread local worklist is stored in
      MarkCompactCollector::local_weak_objects and exists during marking
      similar to local_marking_worklists. Concurrent markers create their
      own local worklists that are published at the end.
      
      Change-Id: I093fdc580b4609ce83455b860b90a5099085beac
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440607
      Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70317}
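      The global/local worklist split can be sketched generically (a
      simplified stand-in for heap::base::Worklist, not its real interface):

          #include <mutex>
          #include <utility>
          #include <vector>

          template <typename T>
          class Worklist {
           public:
            // Thread-local view: the main thread and each concurrent marker
            // own one, and publish their items to the global list at the end.
            class Local {
             public:
              explicit Local(Worklist* global) : global_(global) {}
              void Push(T item) { items_.push_back(std::move(item)); }
              void Publish() {
                global_->Merge(std::move(items_));
                items_.clear();
              }

             private:
              Worklist* global_;
              std::vector<T> items_;
            };

            void Merge(std::vector<T> items) {
              std::lock_guard<std::mutex> lock(mutex_);
              for (auto& item : items) global_items_.push_back(std::move(item));
            }

           private:
            std::mutex mutex_;
            std::vector<T> global_items_;
          };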
  24. 02 Oct, 2020 3 commits
    • Reland^3 "[serializer] Allocate during deserialization" · 3f4e9bbe
      Leszek Swirski authored
      This is a reland of c4a062a9
      which was a reland of 28a30c57
      which was a reland of 5d7a29c9
      
      Fixes TSAN errors from non-atomic writes in the deserializer. Now all
      writes are (relaxed) atomic.
      
      Original change's description:
      > Reland^2 "[serializer] Allocate during deserialization"
      >
      > This is a reland of 28a30c57
      > which was a reland of 5d7a29c9
      >
      > The crashes were from calling RegisterDeserializerFinished on a null
      > Isolate pointer, for a deserializer that was never initialised
      > (specifically, ReadOnlyDeserializer when ROHeap is shared).
      >
      > Original change's description:
      > > Reland "[serializer] Allocate during deserialization"
      > >
      > > This is a reland of 5d7a29c9
      > >
      > > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > > to not check the new space addresses until it's known that this is a new
      > > space allocation. This fixes an UBSan failure during read-only space
      > > deserialization, which happens before the new space is initialized.
      > >
      > > It also fixes some issues discovered by --stress-snapshot, around
      > > serializing ThinStrings (which are now elided as part of serialization),
      > > handle counts (I bumped the maximum handle count in that check), and
      > > clearing map transitions (the map backpointer field needed a Smi
      > > uninitialized value check).
      > >
      > > Original change's description:
      > > > [serializer] Allocate during deserialization
      > > >
      > > > This patch removes the concept of reservations and a specialized
      > > > deserializer allocator, and instead makes the deserializer allocate
      > > > directly with the Heap's Allocate method.
      > > >
      > > > The major consequence of this is that the GC can now run during
      > > > deserialization, which means that:
      > > >
      > > >   a) Deserialized objects are visible to the GC, and
      > > >   b) Objects that the deserializer/deserialized objects point to can
      > > >      move.
      > > >
      > > > Point a) is mostly not a problem due to previous work in making
      > > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > > size before any subsequent allocation/safepoint. We now additionally
      > > > have to initialize the allocated space with a valid tagged value -- this
      > > > is a magic Smi value to keep "uninitialized" checks simple.
      > > >
      > > > Point b) is solved by Handlifying the deserializer. This involves
      > > > changing any vectors of objects into vectors of Handles, and any object
      > > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > > the object's address is no longer a stable hash).
      > > >
      > > > Back-references can no longer be direct chunk offsets, so instead the
      > > > deserializer stores a Handle to each deserialized object, and the
      > > > backreference is an index into this handle array. This encoding could
      > > > be optimized in the future with e.g. a second pass over the serialized
      > > > array which emits a different bytecode for objects that are and aren't
      > > > back-referenced.
      > > >
      > > > Additionally, the slot-walk over objects to initialize them can no
      > > > longer use absolute slot offsets, as again an object may move and its
      > > > slot address would become invalid. Now, slots are walked as relative
      > > > offsets to a Handle to the object, or as absolute slots for the case of
      > > > root pointers. A concept of "slot accessor" is introduced to share the
      > > > code between these two modes, and writing the slot (including write
      > > > barriers) is abstracted into this accessor.
      > > >
      > > > Finally, the Code body walk is modified to deserialize all objects
      > > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > > during a RelocInfo walk.
      > > >
      > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > > size rather than byte size -- the size is expected to be tagged-aligned
      > > > anyway, so now we get an extra few bits in the size encoding.
      > > >
      > > > Bug: chromium:1075999
      > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > Cr-Commit-Position: refs/heads/master@{#70229}
      > >
      > > Bug: chromium:1075999
      > > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70267}
      >
      > Tbr: jgruber@chromium.org,ulan@chromium.org
      > Bug: chromium:1075999
      > Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70279}
      
      Tbr: jgruber@chromium.org,ulan@chromium.org
      Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng,v8_linux64_tsan_no_cm_rel_ng,v8_linux64_tsan_isolates_rel_ng
      Bug: chromium:1075999
      Change-Id: I0b9b11644aebc4cc8b07c62a0f765b24e4d73d89
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2445872
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70288}
    • Revert "Reland^2 "[serializer] Allocate during deserialization"" · a81da102
      Clemens Backes authored
      This reverts commit c4a062a9.
      
      Reason for revert: TSan issues: https://ci.chromium.org/p/v8/builders/ci/V8%20Linux64%20TSAN/33504
      
      Original change's description:
      > Reland^2 "[serializer] Allocate during deserialization"
      >
      > This is a reland of 28a30c57
      > which was a reland of 5d7a29c9
      >
      > The crashes were from calling RegisterDeserializerFinished on a null
      > Isolate pointer, for a deserializer that was never initialised
      > (specifically, ReadOnlyDeserializer when ROHeap is shared).
      >
      > Original change's description:
      > > Reland "[serializer] Allocate during deserialization"
      > >
      > > This is a reland of 5d7a29c9
      > >
      > > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > > to not check the new space addresses until it's known that this is a new
      > > space allocation. This fixes an UBSan failure during read-only space
      > > deserialization, which happens before the new space is initialized.
      > >
      > > It also fixes some issues discovered by --stress-snapshot, around
      > > serializing ThinStrings (which are now elided as part of serialization),
      > > handle counts (I bumped the maximum handle count in that check), and
      > > clearing map transitions (the map backpointer field needed a Smi
      > > uninitialized value check).
      > >
      > > Original change's description:
      > > > [serializer] Allocate during deserialization
      > > >
      > > > This patch removes the concept of reservations and a specialized
      > > > deserializer allocator, and instead makes the deserializer allocate
      > > > directly with the Heap's Allocate method.
      > > >
      > > > The major consequence of this is that the GC can now run during
      > > > deserialization, which means that:
      > > >
      > > >   a) Deserialized objects are visible to the GC, and
      > > >   b) Objects that the deserializer/deserialized objects point to can
      > > >      move.
      > > >
      > > > Point a) is mostly not a problem due to previous work in making
      > > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > > size before any subsequent allocation/safepoint. We now additionally
      > > > have to initialize the allocated space with a valid tagged value -- this
      > > > is a magic Smi value to keep "uninitialized" checks simple.
      > > >
      > > > Point b) is solved by Handlifying the deserializer. This involves
      > > > changing any vectors of objects into vectors of Handles, and any object
      > > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > > the object's address is no longer a stable hash).
      > > >
      > > > Back-references can no longer be direct chunk offsets, so instead the
      > > > deserializer stores a Handle to each deserialized object, and the
      > > > backreference is an index into this handle array. This encoding could
      > > > be optimized in the future with e.g. a second pass over the serialized
      > > > array which emits a different bytecode for objects that are and aren't
      > > > back-referenced.
      > > >
      > > > Additionally, the slot-walk over objects to initialize them can no
      > > > longer use absolute slot offsets, as again an object may move and its
      > > > slot address would become invalid. Now, slots are walked as relative
      > > > offsets to a Handle to the object, or as absolute slots for the case of
      > > > root pointers. A concept of "slot accessor" is introduced to share the
      > > > code between these two modes, and writing the slot (including write
      > > > barriers) is abstracted into this accessor.
      > > >
      > > > Finally, the Code body walk is modified to deserialize all objects
      > > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > > during a RelocInfo walk.
      > > >
      > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > > size rather than byte size -- the size is expected to be tagged-aligned
      > > > anyway, so now we get an extra few bits in the size encoding.
      > > >
      > > > Bug: chromium:1075999
      > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > Cr-Commit-Position: refs/heads/master@{#70229}
      > >
      > > Bug: chromium:1075999
      > > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70267}
      >
      > Tbr: jgruber@chromium.org,ulan@chromium.org
      > Bug: chromium:1075999
      > Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70279}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org
      
      Change-Id: Ib2f01db4cd9b55639d6a4af971bda865edb45e84
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: chromium:1075999
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2445250
      Reviewed-by: Clemens Backes <clemensb@chromium.org>
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70280}
    • Reland^2 "[serializer] Allocate during deserialization" · c4a062a9
      Leszek Swirski authored
      This is a reland of 28a30c57
      which was a reland of 5d7a29c9
      
      The crashes were from calling RegisterDeserializerFinished on a null
      Isolate pointer, for a deserializer that was never initialised
      (specifically, ReadOnlyDeserializer when ROHeap is shared).
      
      Original change's description:
      > Reland "[serializer] Allocate during deserialization"
      >
      > This is a reland of 5d7a29c9
      >
      > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > to not check the new space addresses until it's known that this is a new
      > space allocation. This fixes an UBSan failure during read-only space
      > deserialization, which happens before the new space is initialized.
      >
      > It also fixes some issues discovered by --stress-snapshot, around
      > serializing ThinStrings (which are now elided as part of serialization),
      > handle counts (I bumped the maximum handle count in that check), and
      > clearing map transitions (the map backpointer field needed a Smi
      > uninitialized value check).
      >
      > Original change's description:
      > > [serializer] Allocate during deserialization
      > >
      > > This patch removes the concept of reservations and a specialized
      > > deserializer allocator, and instead makes the deserializer allocate
      > > directly with the Heap's Allocate method.
      > >
      > > The major consequence of this is that the GC can now run during
      > > deserialization, which means that:
      > >
      > >   a) Deserialized objects are visible to the GC, and
      > >   b) Objects that the deserializer/deserialized objects point to can
      > >      move.
      > >
      > > Point a) is mostly not a problem due to previous work in making
      > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > size before any subsequent allocation/safepoint. We now additionally
      > > have to initialize the allocated space with a valid tagged value -- this
      > > is a magic Smi value to keep "uninitialized" checks simple.
      > >
      > > Point b) is solved by Handlifying the deserializer. This involves
      > > changing any vectors of objects into vectors of Handles, and any object
      > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > the object's address is no longer a stable hash).
      > >
      > > Back-references can no longer be direct chunk offsets, so instead the
      > > deserializer stores a Handle to each deserialized object, and the
      > > backreference is an index into this handle array. This encoding could
      > > be optimized in the future with e.g. a second pass over the serialized
      > > array which emits a different bytecode for objects that are and aren't
      > > back-referenced.
      > >
      > > Additionally, the slot-walk over objects to initialize them can no
      > > longer use absolute slot offsets, as again an object may move and its
      > > slot address would become invalid. Now, slots are walked as relative
      > > offsets to a Handle to the object, or as absolute slots for the case of
      > > root pointers. A concept of "slot accessor" is introduced to share the
      > > code between these two modes, and writing the slot (including write
      > > barriers) is abstracted into this accessor.
      > >
      > > Finally, the Code body walk is modified to deserialize all objects
      > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > during a RelocInfo walk.
      > >
      > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > size rather than byte size -- the size is expected to be tagged-aligned
      > > anyway, so now we get an extra few bits in the size encoding.
      > >
      > > Bug: chromium:1075999
      > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70229}
      >
      > Bug: chromium:1075999
      > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70267}
      
      Tbr: jgruber@chromium.org,ulan@chromium.org
      Bug: chromium:1075999
      Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70279}
  25. 01 Oct, 2020 2 commits
    • Revert "Reland "[serializer] Allocate during deserialization"" · c7c0e790
      Zhi An Ng authored
      This reverts commit 28a30c57.
      
      Reason for revert: Broke Test262 https://ci.chromium.org/p/v8/builders/ci/V8%20Linux%20-%20shared/38638?
      
      Original change's description:
      > Reland "[serializer] Allocate during deserialization"
      >
      > This is a reland of 5d7a29c9
      >
      > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > to not check the new space addresses until it's known that this is a new
      > space allocation. This fixes an UBSan failure during read-only space
      > deserialization, which happens before the new space is initialized.
      >
      > It also fixes some issues discovered by --stress-snapshot, around
      > serializing ThinStrings (which are now elided as part of serialization),
      > handle counts (I bumped the maximum handle count in that check), and
      > clearing map transitions (the map backpointer field needed a Smi
      > uninitialized value check).
      >
      > Original change's description:
      > > [serializer] Allocate during deserialization
      > >
      > > This patch removes the concept of reservations and a specialized
      > > deserializer allocator, and instead makes the deserializer allocate
      > > directly with the Heap's Allocate method.
      > >
      > > The major consequence of this is that the GC can now run during
      > > deserialization, which means that:
      > >
      > >   a) Deserialized objects are visible to the GC, and
      > >   b) Objects that the deserializer/deserialized objects point to can
      > >      move.
      > >
      > > Point a) is mostly not a problem due to previous work in making
      > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > size before any subsequent allocation/safepoint. We now additionally
      > > have to initialize the allocated space with a valid tagged value -- this
      > > is a magic Smi value to keep "uninitialized" checks simple.
      > >
      > > Point b) is solved by Handlifying the deserializer. This involves
      > > changing any vectors of objects into vectors of Handles, and any object
      > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > the object's address is no longer a stable hash).
      > >
      > > Back-references can no longer be direct chunk offsets, so instead the
      > > deserializer stores a Handle to each deserialized object, and the
      > > backreference is an index into this handle array. This encoding could
      > > be optimized in the future with e.g. a second pass over the serialized
      > > array which emits a different bytecode for objects that are and aren't
      > > back-referenced.
      > >
      > > Additionally, the slot-walk over objects to initialize them can no
      > > longer use absolute slot offsets, as again an object may move and its
      > > slot address would become invalid. Now, slots are walked as relative
      > > offsets to a Handle to the object, or as absolute slots for the case of
      > > root pointers. A concept of "slot accessor" is introduced to share the
      > > code between these two modes, and writing the slot (including write
      > > barriers) is abstracted into this accessor.
      > >
      > > Finally, the Code body walk is modified to deserialize all objects
      > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > during a RelocInfo walk.
      > >
      > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > size rather than byte size -- the size is expected to be tagged-aligned
      > > anyway, so now we get an extra few bits in the size encoding.
      > >
      > > Bug: chromium:1075999
      > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70229}
      >
      > Bug: chromium:1075999
      > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70267}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org
      
      Change-Id: Ieed68332ef6a7ad36db061e3f48be0f28673d7a2
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: chromium:1075999
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2441608
      Reviewed-by: Zhi An Ng <zhin@chromium.org>
      Commit-Queue: Zhi An Ng <zhin@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70268}
    • Reland "[serializer] Allocate during deserialization" · 28a30c57
      Leszek Swirski authored
      This is a reland of 5d7a29c9
      
      This reland shuffles around the order of checks in Heap::AllocateRawWith
      to not check the new space addresses until it's known that this is a new
      space allocation. This fixes an UBSan failure during read-only space
      deserialization, which happens before the new space is initialized.
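
      In spirit, the reordering looks like the following standalone sketch
      (the types, fields, and AllocateRawSlow fallback are simplified
      assumptions, not V8's real Heap):

        #include <cstddef>
        #include <cstdint>

        enum class AllocationType { kReadOnly, kYoung, kOld };

        struct NewSpace {
          uintptr_t top = 0;
          uintptr_t end = 0;
        };

        struct Heap {
          NewSpace* new_space = nullptr;  // still null during read-only setup

          uintptr_t AllocateRawWith(size_t size, AllocationType type) {
            // The fix: test the allocation type before touching any
            // new-space state. Reading new-space bounds unconditionally
            // was the UBSan failure, because read-only deserialization
            // runs before the new space exists.
            if (type == AllocationType::kYoung) {
              if (new_space->top + size <= new_space->end) {
                uintptr_t result = new_space->top;
                new_space->top += size;
                return result;
              }
              return 0;  // the real heap would GC and retry here
            }
            return AllocateRawSlow(size, type);
          }

          uintptr_t AllocateRawSlow(size_t, AllocationType) { return 0; }
        };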
      
      It also fixes some issues discovered by --stress-snapshot, around
      serializing ThinStrings (which are now elided as part of serialization),
      handle counts (I bumped the maximum handle count in that check), and
      clearing map transitions (the map backpointer field needed a Smi
      uninitialized value check).
      
      Original change's description:
      > [serializer] Allocate during deserialization
      >
      > This patch removes the concept of reservations and a specialized
      > deserializer allocator, and instead makes the deserializer allocate
      > directly with the Heap's Allocate method.
      >
      > The major consequence of this is that the GC can now run during
      > deserialization, which means that:
      >
      >   a) Deserialized objects are visible to the GC, and
      >   b) Objects that the deserializer/deserialized objects point to can
      >      move.
      >
      > Point a) is mostly not a problem due to previous work in making
      > deserialized objects "GC valid", i.e. making sure that they have a valid
      > size before any subsequent allocation/safepoint. We now additionally
      > have to initialize the allocated space with a valid tagged value -- this
      > is a magic Smi value to keep "uninitialized" checks simple.
      >
      > Point b) is solved by Handlifying the deserializer. This involves
      > changing any vectors of objects into vectors of Handles, and any object
      > keyed map into an IdentityMap (we can't use Handles as keys because
      > the object's address is no longer a stable hash).
      >
      > Back-references can no longer be direct chunk offsets, so instead the
      > deserializer stores a Handle to each deserialized object, and the
      > backreference is an index into this handle array. This encoding could
      > be optimized in the future with e.g. a second pass over the serialized
      > array which emits a different bytecode for objects that are and aren't
      > back-referenced.
      >
      > Additionally, the slot-walk over objects to initialize them can no
      > longer use absolute slot offsets, as again an object may move and its
      > slot address would become invalid. Now, slots are walked as relative
      > offsets to a Handle to the object, or as absolute slots for the case of
      > root pointers. A concept of "slot accessor" is introduced to share the
      > code between these two modes, and writing the slot (including write
      > barriers) is abstracted into this accessor.
      >
      > Finally, the Code body walk is modified to deserialize all objects
      > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > during a RelocInfo walk.
      >
      > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > size rather than byte size -- the size is expected to be tagged-aligned
      > anyway, so now we get an extra few bits in the size encoding.
      >
      > Bug: chromium:1075999
      > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70229}
      
      Bug: chromium:1075999
      Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70267}
  26. 30 Sep, 2020 2 commits
    • Revert "[serializer] Allocate during deserialization" · 74f3665c
      Leszek Swirski authored
      This reverts commit 5d7a29c9.
      
      Reason for revert: UBSan -- https://ci.chromium.org/p/v8/builders/ci/V8%20Linux64%20UBSan/13100
      
      Original change's description:
      > [serializer] Allocate during deserialization
      >
      > This patch removes the concept of reservations and a specialized
      > deserializer allocator, and instead makes the deserializer allocate
      > directly with the Heap's Allocate method.
      >
      > The major consequence of this is that the GC can now run during
      > deserialization, which means that:
      >
      >   a) Deserialized objects are visible to the GC, and
      >   b) Objects that the deserializer/deserialized objects point to can
      >      move.
      >
      > Point a) is mostly not a problem due to previous work in making
      > deserialized objects "GC valid", i.e. making sure that they have a valid
      > size before any subsequent allocation/safepoint. We now additionally
      > have to initialize the allocated space with a valid tagged value -- this
      > is a magic Smi value to keep "uninitialized" checks simple.
      >
      > Point b) is solved by Handlifying the deserializer. This involves
      > changing any vectors of objects into vectors of Handles, and any object
      > keyed map into an IdentityMap (we can't use Handles as keys because
      > the object's address is no longer a stable hash).
      >
      > Back-references can no longer be direct chunk offsets, so instead the
      > deserializer stores a Handle to each deserialized object, and the
      > backreference is an index into this handle array. This encoding could
      > be optimized in the future with e.g. a second pass over the serialized
      > array which emits a different bytecode for objects that are and aren't
      > back-referenced.
      >
      > Additionally, the slot-walk over objects to initialize them can no
      > longer use absolute slot offsets, as again an object may move and its
      > slot address would become invalid. Now, slots are walked as relative
      > offsets to a Handle to the object, or as absolute slots for the case of
      > root pointers. A concept of "slot accessor" is introduced to share the
      > code between these two modes, and writing the slot (including write
      > barriers) is abstracted into this accessor.
      >
      > Finally, the Code body walk is modified to deserialize all objects
      > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > during a RelocInfo walk.
      >
      > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > size rather than byte size -- the size is expected to be tagged-aligned
      > anyway, so now we get an extra few bits in the size encoding.
      >
      > Bug: chromium:1075999
      > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70229}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org
      
      Change-Id: I2bd792a24861e8f54897e51522769b50f8f814e2
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: chromium:1075999
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440827
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70231}
    • [serializer] Allocate during deserialization · 5d7a29c9
      Leszek Swirski authored
      This patch removes the concept of reservations and a specialized
      deserializer allocator, and instead makes the deserializer allocate
      directly with the Heap's Allocate method.
      
      The major consequence of this is that the GC can now run during
      deserialization, which means that:
      
        a) Deserialized objects are visible to the GC, and
        b) Objects that the deserializer/deserialized objects point to can
           move.
      
      Point a) is mostly not a problem due to previous work in making
      deserialized objects "GC valid", i.e. making sure that they have a valid
      size before any subsequent allocation/safepoint. We now additionally
      have to initialize the allocated space with a valid tagged value -- this
      is a magic Smi value to keep "uninitialized" checks simple.
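
      For illustration, a standalone sketch of that initialization; the
      tagging scheme and the marker value are made up, not V8's actual
      constants:

        #include <cstddef>
        #include <cstdint>

        using Tagged_t = uintptr_t;

        // Hypothetical Smi tagging: low bit 0 means "this is a Smi".
        constexpr Tagged_t SmiTag(intptr_t value) {
          return static_cast<Tagged_t>(value) << 1;
        }
        constexpr Tagged_t kUninitializedMarker = SmiTag(0x5eed);

        void InitializeAllocation(Tagged_t* slots, size_t slot_count) {
          // Every slot now parses as a valid tagged value (a Smi), so a
          // GC that walks the object before its fields are deserialized
          // finds nothing it would try to follow as a pointer.
          for (size_t i = 0; i < slot_count; ++i) {
            slots[i] = kUninitializedMarker;
          }
        }

        bool IsUninitialized(Tagged_t value) {
          return value == kUninitializedMarker;
        }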
      
      Point b) is solved by Handlifying the deserializer. This involves
      changing any vectors of objects into vectors of Handles, and any object
      keyed map into an IdentityMap (we can't use Handles as keys because
      the object's address is no longer a stable hash).
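
      A standalone model of why the keys change: a Handle is a pointer to
      a slot whose contents the GC rewrites on moves, so hashing the
      object's address is unstable. This sketch keys on a stable identity
      hash; V8's real IdentityMap instead cooperates with the GC directly,
      so treat the types here as illustrative only:

        #include <cstdint>
        #include <unordered_map>

        struct HeapObject {
          uint64_t identity_hash;  // stays the same when the object moves
        };
        using Handle = HeapObject**;  // slot the GC updates on moves

        class IdentityMap {
         public:
          void Insert(Handle handle, int value) {
            map_[(*handle)->identity_hash] = value;
          }
          int* Find(Handle handle) {
            auto it = map_.find((*handle)->identity_hash);
            return it == map_.end() ? nullptr : &it->second;
          }

         private:
          std::unordered_map<uint64_t, int> map_;
        };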
      
      Back-references can no longer be direct chunk offsets, so instead the
      deserializer stores a Handle to each deserialized object, and the
      backreference is an index into this handle array. This encoding could
      be optimized in the future with e.g. a second pass over the serialized
      array which emits a different bytecode for objects that are and aren't
      back-referenced.
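
      Sketched with simplified types (the names here are illustrative, not
      the real deserializer API):

        #include <vector>

        struct HeapObject;
        using Handle = HeapObject**;

        class Deserializer {
         public:
          // Each newly deserialized object is appended; its index is the
          // payload written for later back-reference bytecodes.
          int RegisterObject(Handle handle) {
            back_refs_.push_back(handle);
            return static_cast<int>(back_refs_.size()) - 1;
          }

          // Resolving goes through the handle, so the result is correct
          // even if the object has moved since it was deserialized.
          Handle ResolveBackRef(int index) { return back_refs_[index]; }

         private:
          std::vector<Handle> back_refs_;
        };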
      
      Additionally, the slot-walk over objects to initialize them can no
      longer use absolute slot offsets, as again an object may move and its
      slot address would become invalid. Now, slots are walked as relative
      offsets to a Handle to the object, or as absolute slots for the case of
      root pointers. A concept of "slot accessor" is introduced to share the
      code between these two modes, and writing the slot (including write
      barriers) is abstracted into this accessor.
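
      A minimal sketch of the accessor idea, with simplified stand-in
      types (Tagged_t, Handle, and the barrier are assumptions):

        #include <cstdint>

        struct HeapObject;
        using Tagged_t = uintptr_t;
        using Handle = HeapObject**;  // slot the GC updates on moves

        void WriteBarrier(HeapObject* host, Tagged_t value) {
          // stand-in for the generational/marking write barriers
        }

        class SlotAccessor {
         public:
          static SlotAccessor ForHandle(Handle handle, int offset) {
            return SlotAccessor(handle, nullptr, offset);
          }
          static SlotAccessor ForRootSlot(Tagged_t* root_slot) {
            return SlotAccessor(nullptr, root_slot, 0);
          }

          void Write(Tagged_t value) {
            if (handle_ != nullptr) {
              // Re-derive the slot address through the handle on every
              // write: the object may have moved since construction.
              Tagged_t* slot = reinterpret_cast<Tagged_t*>(
                  reinterpret_cast<char*>(*handle_) + offset_);
              *slot = value;
              WriteBarrier(*handle_, value);
            } else {
              *root_slot_ = value;  // roots are absolute, off-heap slots
            }
          }

         private:
          SlotAccessor(Handle handle, Tagged_t* root_slot, int offset)
              : handle_(handle), root_slot_(root_slot), offset_(offset) {}

          Handle handle_;
          Tagged_t* root_slot_;
          int offset_;
        };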
      
      Finally, the Code body walk is modified to deserialize all objects
      referred to by RelocInfos before doing the RelocInfo walk itself. This
      is because RelocInfoIterator uses raw pointers, so we cannot allocate
      during a RelocInfo walk.
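
      The two-phase structure, as a standalone sketch (RelocEntry and the
      two helpers are hypothetical stand-ins):

        #include <vector>

        struct RelocEntry {
          int back_ref_index;  // the object this relocation refers to
        };

        void EnsureDeserialized(int back_ref_index);   // may allocate/GC
        void PatchCodeSlot(const RelocEntry& entry);   // must not allocate

        void DeserializeCodeBody(const std::vector<RelocEntry>& entries) {
          // Phase 1: materialize every referred-to object while
          // allocation (and therefore GC) is still permitted.
          for (const RelocEntry& e : entries) {
            EnsureDeserialized(e.back_ref_index);
          }
          // Phase 2: the RelocInfo walk holds raw pointers into the code
          // object, so nothing here may allocate; phase 1 guarantees it
          // doesn't need to.
          for (const RelocEntry& e : entries) {
            PatchCodeSlot(e);
          }
        }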
      
      As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      size rather than byte size -- the size is expected to be tagged-aligned
      anyway, so now we get an extra few bits in the size encoding.
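
      Numerically, for a build with 8-byte tagged words (an assumption;
      the constant and function names are illustrative):

        #include <cassert>
        #include <cstddef>

        constexpr size_t kTaggedSize = 8;
        constexpr size_t kTaggedSizeLog2 = 3;

        size_t EncodeVariableRawDataSize(size_t size_in_bytes) {
          assert(size_in_bytes % kTaggedSize == 0);  // tagged-aligned
          return size_in_bytes >> kTaggedSizeLog2;   // 3 extra bits
        }

        size_t DecodeVariableRawDataSize(size_t encoded) {
          return encoded << kTaggedSizeLog2;
        }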
      
      Bug: chromium:1075999
      Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70229}
  27. 24 Sep, 2020 1 commit
  28. 11 Aug, 2020 1 commit
    • [heap] Split marking worklist into global worklist and local worklists · 28133adc
      Ulan Degenbaev authored
      This is the first step in refactoring Worklist to allow an arbitrary
      number of local worklists with private segments (see the sketch after
      this list):
      - Introduce MarkingWorklistImpl<> which will eventually replace
        (and will be renamed to) Worklist.
      - MarkingWorklistImpl<> owns the global pool of segments but does not
        keep track of private segments.
      - MarkingWorklistImpl<>::Local owns private segments and can be
        constructed dynamically on background threads.
      - Rename the existing MarkingWorklistsHolder to MarkingWorklists.
      - Rename the existing MarkingWorklists to MarkingWorklists::Local.
      - Rename the existing marking_worklists_holder to marking_worklists.
      - Rename the existing marking_worklists to local_marking_worklists.
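
      A standalone sketch of that split (the segment size, names, and
      locking here are simplified assumptions, not the real
      MarkingWorklistImpl<>): the shared object owns the global pool of
      segments, while each Local owns private segments and only touches
      the pool to publish or steal.

        #include <cstddef>
        #include <mutex>
        #include <utility>
        #include <vector>

        struct HeapObject;
        using Segment = std::vector<HeapObject*>;

        class MarkingWorklist {
         public:
          class Local {
           public:
            explicit Local(MarkingWorklist* shared) : shared_(shared) {}

            void Push(HeapObject* object) {
              push_segment_.push_back(object);
              if (push_segment_.size() >= kSegmentCapacity) Publish();
            }

            bool Pop(HeapObject** object) {
              if (pop_segment_.empty()) {
                if (!push_segment_.empty()) {
                  pop_segment_.swap(push_segment_);  // drain own work first
                } else if (!shared_->StealSegment(&pop_segment_)) {
                  return false;  // nothing left that this Local can see
                }
              }
              *object = pop_segment_.back();
              pop_segment_.pop_back();
              return true;
            }

            // Hand the private segment to the shared pool.
            void Publish() {
              if (push_segment_.empty()) return;
              shared_->PublishSegment(std::move(push_segment_));
              push_segment_.clear();
            }

           private:
            static constexpr size_t kSegmentCapacity = 64;
            MarkingWorklist* shared_;
            Segment push_segment_;  // private segments: no locking needed
            Segment pop_segment_;
          };

          void PublishSegment(Segment segment) {
            std::lock_guard<std::mutex> guard(mutex_);
            pool_.push_back(std::move(segment));
          }

          bool StealSegment(Segment* out) {
            std::lock_guard<std::mutex> guard(mutex_);
            if (pool_.empty()) return false;
            *out = std::move(pool_.back());
            pool_.pop_back();
            return true;
          }

         private:
          std::mutex mutex_;  // only the shared pool is locked
          std::vector<Segment> pool_;
        };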
      
      Design doc: https://bit.ly/2XMtjLi
      Bug: v8:10315
      Change-Id: I9da34883ad34f4572fccd40c51e51eaf50c617bc
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2343330
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#69330}
  29. 11 May, 2020 1 commit
  30. 30 Apr, 2020 2 commits
    • Revert "Reland^4 "[runtime] Amortize descriptor array growing for fast-mode prototypes"" · accf95fc
      Deepti Gandluri authored
      This reverts commit fd2548f3.
      
      Reason for revert: Breaks telemetry benchmark, blocks deps roll.
      https://ci.chromium.org/p/chromium/builders/try/linux-rel/373686?
      https://chromium-swarm.appspot.com/task?id=4be57eb0279bbb10
      
      Original change's description:
      > Reland^4 "[runtime] Amortize descriptor array growing for fast-mode prototypes"
      > 
      > This CL:
      >  - stops tracking transitions for fast maps that are known to be
      >    detached
      >  - reuses descriptor arrays when transitioning detached maps to
      >    avoid O(n^2) performance and garbage creation
      > 
      > Fix2 in reland: constructor_or_backpointer can be a smi since it
      > can also hold a user-provided function.prototype
      > Fix in reland: check whether the map of the back pointer is the
      > metamap rather than reading the map of the
      > constructor-or-backpointer slot. If the slot contains a
      > constructor, it's possible that the object transitions while the
      > concurrent marker is reading the map (from which it's reading the
      > instance type); and it's possible that the transitioned map isn't
      > set up yet fully when we read the instance type. An acquire load
      > for the constructor-or-backpointer map would also fix it by
      > serializing stores, but is more expensive. Checking the metamap is
      > faster.
      > 
      > Original commit message:
      > > This avoids an O(n^2) algorithm that creates an equal amount of garbage.
      > > Even though the actual final descriptor array might be a little bigger,
      > > it reduces peak memory usage by allocating less.
      > 
      > Change-Id: Id99dc76a369057e5c4d76a31163605cb38a66867
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2172080
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Commit-Queue: Toon Verwaest <verwaest@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#67501}
      
      TBR=ulan@chromium.org,verwaest@chromium.org
      
      Change-Id: If305b5410ca37e04e9ec0ce50e9b494f5c4cd4dc
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2174767
      Reviewed-by: Deepti Gandluri <gdeepti@chromium.org>
      Commit-Queue: Deepti Gandluri <gdeepti@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#67510}
    • Reland^4 "[runtime] Amortize descriptor array growing for fast-mode prototypes" · fd2548f3
      Toon Verwaest authored
      This CL:
       - stops tracking transitions for fast maps that are known to be
         detached
       - reuses descriptor arrays when transitioning detached maps to
         avoid O(n^2) performance and garbage creation
      
      Fix2 in reland: constructor_or_backpointer can be a smi since it can
      also hold a user-provided function.prototype
      Fix in reland: check whether the map of the back pointer is the
      metamap rather than reading the map of the constructor-or-backpointer
      slot. If the slot contains a constructor, it's possible that the
      object transitions while the concurrent marker is reading the map
      (from which it's reading the instance type); and it's possible that
      the transitioned map isn't set up yet fully when we read the instance
      type. An acquire load for the constructor-or-backpointer map would
      also fix it by serializing stores, but is more expensive. Checking
      the metamap is faster.
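
      The metamap check, as a tiny standalone sketch (types simplified;
      SlotHoldsMap is a hypothetical helper name):

        struct Map;

        struct HeapObject {
          Map* map;  // the first word of every heap object
        };

        // The meta map (the map of all maps) is immortal and immovable,
        // so this comparison is safe on the concurrent marker even while
        // the object held in the slot is still being transitioned.
        inline bool SlotHoldsMap(const HeapObject* slot_value,
                                 const Map* meta_map) {
          return slot_value->map == meta_map;
        }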
      
      Original commit message:
      > This avoids an O(n^2) algorithm that creates an equal amount of garbage.
      > Even though the actual final descriptor array might be a little bigger,
      > it reduces peak memory usage by allocating less.
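
      The amortization in the quoted message boils down to geometric
      growth; a hypothetical sketch (the growth factor is an assumption):

        #include <algorithm>
        #include <cstddef>

        // Growing by ~1.5x keeps total copying across n appended
        // descriptors linear in n instead of quadratic, at the price of
        // a final array that may be a little bigger than required.
        inline size_t GrownDescriptorCapacity(size_t capacity,
                                              size_t needed) {
          return std::max(needed, capacity + capacity / 2 + 1);
        }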
      
      Change-Id: Id99dc76a369057e5c4d76a31163605cb38a66867
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2172080
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Commit-Queue: Toon Verwaest <verwaest@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#67501}