1. 02 Feb, 2021 1 commit
    • [arm64/sim] Add a 'sim' gdb command · 1f72df06
      Leszek Swirski authored
      Extract out the command processing from Simulator::Debug(), and expose
      it to gdb as a new 'sim' command. Example usage:
      
          (gdb) sim p x15
          (gdb) sim stack
      
      The sim command executes that one command and then returns to gdb.
      
      For a list of all commands, you can call
      
          (gdb) sim help
      
      Note that sim won't resume simulator execution until gdb continues
      execution; for example, `sim next` sets a breakpoint on the next
      instruction and returns to gdb. The user then has to continue execution
      in gdb, at which point the simulator will break, and can then re-enter
      gdb with the gdb command. In practice this looks like:
      
          (gdb) sim next
          (gdb) continue
          ...
          sim> gdb
          (gdb) ...
      
      Change-Id: I678e71e2642d8427950b5f7ed65890ceae69e18d
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2664448
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Dan Elphick <delphick@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#72479}
  2. 29 Jan, 2021 1 commit
  3. 20 Jan, 2021 1 commit
    • [compiler] Rename type BailoutId to BytecodeOffset · 727d22be
      Jakob Gruber authored
      This reflects the actual contents of the type, which is an offset into
      the bytecode (or certain marker values). Historically, in the days of
      full codegen (FCG), the bailout id referred to node ids; this is why
      certain tracing output still calls the bailout id 'node id' and 'ast id'.
      These spots will be fixed in a follow-up CL.
      
      This change is mechanical:
      
       git grep -l BailoutId | while read f; do \
        sed -i 's/BailoutId/BytecodeOffset/g' $f; done
      
      With a manual component of updating the DeoptimizationData method
      name from 'BytecodeOffset' to 'GetBytecodeOffset'.
      
      Bug: v8:11332
      Change-Id: I956b947a480bf52263159c0eb1e895360bcbe6d2
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2639754
      Commit-Queue: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      Reviewed-by: Mythri Alle <mythria@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#72189}
  4. 26 Nov, 2020 1 commit
  5. 24 Nov, 2020 2 commits
  6. 17 Nov, 2020 1 commit
  7. 11 Nov, 2020 1 commit
  8. 10 Nov, 2020 1 commit
  9. 06 Nov, 2020 1 commit
  10. 03 Nov, 2020 1 commit
  11. 02 Nov, 2020 1 commit
  12. 20 Oct, 2020 1 commit
  13. 19 Oct, 2020 1 commit
  14. 14 Oct, 2020 1 commit
  15. 13 Oct, 2020 1 commit
    • Implement Min and Max using std::min and std::max · c90ff8bd
      Ng Zhi An authored
      The existing implementation gives different results from std::min and
      std::max for certain floating-point values. This patch makes it behave
      the same, so it is less surprising.
      
      A quick look at some usages of Min and Max shows they are all on
      integral types, so this doesn't change any behavior.
      
      Min and Max have been in the code base since the initial import, and
      it's not clear why they were needed, since they should simply be
      std::min/std::max. With C++14, std::min and std::max are constexpr, so
      the replacement is also fine in that respect.
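
      For illustration, a minimal sketch of how such a helper can disagree
      with std::min for signed zeros and NaN (OldMin below is a hypothetical
      stand-in for the removed utility, not V8's exact code):

          #include <algorithm>
          #include <cmath>
          #include <cstdio>

          // Hypothetical stand-in for the removed helper (assumed to be a
          // plain a < b ternary).
          template <typename T>
          T OldMin(T a, T b) { return a < b ? a : b; }

          int main() {
            // Signed zeros compare equal, so each implementation falls back
            // to a different operand and the results differ.
            std::printf("%g vs %g\n", OldMin(+0.0, -0.0), std::min(+0.0, -0.0));
            // prints "-0 vs 0"

            // A NaN argument makes the comparison false, so again different
            // operands are returned.
            double nan = std::nan("");
            std::printf("%g vs %g\n", OldMin(1.0, nan), std::min(1.0, nan));
            // prints "nan vs 1"
            return 0;
          }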
      
      Change-Id: If8ec53bedff3ef336aa21b082f1a16ce716b8f87
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2464146
      Reviewed-by: Clemens Backes <clemensb@chromium.org>
      Commit-Queue: Zhi An Ng <zhin@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70494}
  16. 07 Oct, 2020 3 commits
    • [mac] Set MAP_JIT only when necessary · 4e077ff0
      Jakob Kummerow authored
      This is a "minimal" change to achieve the required goal: seeing that
      there is only one place where we need to indicate that memory should
      be reserved with MAP_JIT, we can add a value to the Permissions enum
      instead of adding a second, orthogonal parameter.
      That way we avoid changing public API functions, which makes this CL
      easier to undo once we have platform-independent w^x in Wasm.
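
      A rough sketch of the shape of that change (the names below, such as
      PagePermission and kNoAccessWillJitLater, are assumptions for
      illustration rather than V8's actual API):

          #include <cstddef>

          // Illustrative permission enum: one extra value carries the "will
          // hold JIT code" intent through the existing reservation signature,
          // instead of adding a second, orthogonal parameter.
          enum class PagePermission {
            kNoAccess,
            kReadWrite,
            kReadWriteExecute,
            // Reserve with no access for now, but request MAP_JIT (on macOS)
            // so the region can later hold writable/executable JIT memory.
            kNoAccessWillJitLater,
          };

          void* ReservePages(size_t size, PagePermission permission) {
            const bool request_map_jit =
                permission == PagePermission::kNoAccessWillJitLater;
            // A real implementation would add MAP_JIT to the mmap() flags on
            // macOS when request_map_jit is true, and ignore it elsewhere.
            (void)size;
            (void)request_map_jit;
            return nullptr;  // placeholder
          }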
      
      Bug: chromium:1117591
      Change-Id: I6333d69ab29d5900c689f08dcc892a5f1c1159b8
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2435365
      Commit-Queue: Jakob Kummerow <jkummerow@chromium.org>
      Reviewed-by: Michael Lippautz <mlippautz@chromium.org>
      Reviewed-by: Clemens Backes <clemensb@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70379}
    • [TurboProp] Add support for deferred block spills in fast reg alloc · 4a601911
      Ross McIlroy authored
      Adds support for avoiding spills in non-deferred blocks by instead
      restricting the spill ranges to deferred blocks if the virtual
      register is only spilled in deferred blocks.
      
      It does this by tracking registers that reach the exit point of deferred
      blocks and spilling them pre-emptively in the deferred block while
      treating them as committed from the point of view of the non-deferred
      blocks. We also now track whether virtual registers need to be spilled
      at their SSA definition point (where they are output by an instruction),
      or can instead be spilled at the entry to deferred blocks for use as
      spill slots within those deferred blocks. In both cases, the tracking
      of these deferred spills is kept as a pending operation until the
      allocator confirms that adding these spills will avoid spills in the
      non-deferred pathways, to avoid adding unnecessary extra spills in
      deferred blocks.
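
      Purely as an illustration of this "pending deferred spill" bookkeeping
      (the types below are made up for the sketch and do not mirror
      TurboProp's actual allocator classes):

          #include <vector>

          // A spill we would like to emit inside a deferred block, but only
          // if doing so actually removes a spill on the non-deferred path.
          struct PendingSpill {
            int virtual_register;
            int deferred_block;  // block where the spill instruction would go
          };

          class DeferredSpillTracker {
           public:
            void AddPending(int vreg, int deferred_block) {
              pending_.push_back({vreg, deferred_block});
            }

            // Called once the allocator has confirmed whether these spills
            // avoid a spill on the non-deferred pathway; only then do they
            // become real spill instructions, otherwise they are dropped.
            template <typename EmitSpill>
            void Commit(bool avoids_non_deferred_spill, EmitSpill emit) {
              if (avoids_non_deferred_spill) {
                for (const PendingSpill& spill : pending_) {
                  emit(spill.virtual_register, spill.deferred_block);
                }
              }
              pending_.clear();
            }

           private:
            std::vector<PendingSpill> pending_;
          };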
      
      BUG=v8:9684
      
      Change-Id: Ib151e795567f0e4e7f95538415a8cc117d235b64
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440603
      Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
      Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70374}
    • Reland^4 "[serializer] Allocate during deserialization" · 3c508b38
      Leszek Swirski authored
      This relands commit 3f4e9bbe,
      which was a reland of c4a062a9,
      which was a reland of 28a30c57,
      which was a reland of 5d7a29c9.
      
      The change had an issue that embedders implementing heap tracing (e.g.
      Unified Heap with Blink) could be passed an uninitialized pointer if
      marking happened during deserialization of an object containing such a
      pointer. Because of the 0xdeadbed0 uninitialized filler value, these
      embedders would then receive the value 0xdeadbed0deadbed0 as the
      'pointer', and crash on dereference.
      
      There is, however, already special handling for null pointers in heap
      tracing, which also deals with not-yet-initialized values. So, we can
      make the uninitialized Smi filler 0x00000000, which gives such embedder
      fields a nullptr representation and makes them follow the normal
      uninitialized-value bailouts.
      
      In addition, it relands the following dependent changes, which are
      relanded unchanged and are follow-up performance improvements.
      Relanding them in the same change should allow for cleaner reverts
      should they be needed.
      
      This relands commit 76ad3ab5
      [identity-map] Change resize heuristic
      
      This relands commit 77cc96aa
      [identity-map] Cache the calculated Hash
      
      This relands commit bee5b996
      [serializer] Remove Deserializer::Initialize
      
      This relands commit c8f73f22
      [serializer] Cache instance type in PostProcessNewObject
      
      This relands commit 4e7c99ab
      [identity-map] Remove double-lookups in IdentityMap
      
      Original change's description:
      > Reland^3 "[serializer] Allocate during deserialization"
      >
      > This is a reland of c4a062a9
      > which was a reland of 28a30c57
      > which was a reland of 5d7a29c9
      >
      > Fixes TSAN errors from non-atomic writes in the deserializer. Now all
      > writes are (relaxed) atomic.
      >
      > Original change's description:
      > > Reland^2 "[serializer] Allocate during deserialization"
      > >
      > > This is a reland of 28a30c57
      > > which was a reland of 5d7a29c9
      > >
      > > The crashes were from calling RegisterDeserializerFinished on a null
      > > Isolate pointer, for a deserializer that was never initialised
      > > (specifically, ReadOnlyDeserializer when ROHeap is shared).
      > >
      > > Original change's description:
      > > > Reland "[serializer] Allocate during deserialization"
      > > >
      > > > This is a reland of 5d7a29c9
      > > >
      > > > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > > > to not check the new space addresses until it's known that this is a new
      > > > space allocation. This fixes an UBSan failure during read-only space
      > > > deserialization, which happens before the new space is initialized.
      > > >
      > > > It also fixes some issues discovered by --stress-snapshot, around
      > > > serializing ThinStrings (which are now elided as part of serialization),
      > > > handle counts (I bumped the maximum handle count in that check), and
      > > > clearing map transitions (the map backpointer field needed a Smi
      > > > uninitialized value check).
      > > >
      > > > Original change's description:
      > > > > [serializer] Allocate during deserialization
      > > > >
      > > > > This patch removes the concept of reservations and a specialized
      > > > > deserializer allocator, and instead makes the deserializer allocate
      > > > > directly with the Heap's Allocate method.
      > > > >
      > > > > The major consequence of this is that the GC can now run during
      > > > > deserialization, which means that:
      > > > >
      > > > >   a) Deserialized objects are visible to the GC, and
      > > > >   b) Objects that the deserializer/deserialized objects point to can
      > > > >      move.
      > > > >
      > > > > Point a) is mostly not a problem due to previous work in making
      > > > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > > > size before any subsequent allocation/safepoint. We now additionally
      > > > > have to initialize the allocated space with a valid tagged value -- this
      > > > > is a magic Smi value to keep "uninitialized" checks simple.
      > > > >
      > > > > Point b) is solved by Handlifying the deserializer. This involves
      > > > > changing any vectors of objects into vectors of Handles, and any object
      > > > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > > > the object's address is no longer a stable hash).
      > > > >
      > > > > Back-references can no longer be direct chunk offsets, so instead the
      > > > > deserializer stores a Handle to each deserialized object, and the
      > > > > backreference is an index into this handle array. This encoding could
      > > > > be optimized in the future with e.g. a second pass over the serialized
      > > > > array which emits a different bytecode for objects that are and aren't
      > > > > back-referenced.
      > > > >
      > > > > Additionally, the slot-walk over objects to initialize them can no
      > > > > longer use absolute slot offsets, as again an object may move and its
      > > > > slot address would become invalid. Now, slots are walked as relative
      > > > > offsets to a Handle to the object, or as absolute slots for the case of
      > > > > root pointers. A concept of "slot accessor" is introduced to share the
      > > > > code between these two modes, and writing the slot (including write
      > > > > barriers) is abstracted into this accessor.
      > > > >
      > > > > Finally, the Code body walk is modified to deserialize all objects
      > > > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > > > during a RelocInfo walk.
      > > > >
      > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > > > size rather than byte size -- the size is expected to be tagged-aligned
      > > > > anyway, so now we get an extra few bits in the size encoding.
      > > > >
      > > > > Bug: chromium:1075999
      > > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > > Cr-Commit-Position: refs/heads/master@{#70229}
      
      Bug: chromium:1075999
      Change-Id: Ib514a4ef16bd02bfb60d046ecbf8fae1ead64a98
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2452689
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70366}
  17. 06 Oct, 2020 1 commit
  18. 05 Oct, 2020 4 commits
    • Revert "Reland^3 "[serializer] Allocate during deserialization"" · a10ec2be
      Adam Klein authored
      This reverts commit 3f4e9bbe, along
      with the following dependent changes (reverted to make this a clean revert):
      76ad3ab5 [identity-map] Change resize heuristic
      77cc96aa [identity-map] Cache the calculated Hash
      bee5b996 [serializer] Remove Deserializer::Initialize
      c8f73f22 [serializer] Cache instance type in PostProcessNewObject
      4e7c99ab [identity-map] Remove double-lookups in IdentityMap
      
      Reason for revert: major crash spike on Canary (https://crbug.com/1135027)
      
      Original change's description:
      > Reland^3 "[serializer] Allocate during deserialization"
      >
      > This is a reland of c4a062a9
      > which was a reland of 28a30c57
      > which was a reland of 5d7a29c9
      >
      > Fixes TSAN errors from non-atomic writes in the deserializer. Now all
      > writes are (relaxed) atomic.
      >
      > Original change's description:
      > > Reland^2 "[serializer] Allocate during deserialization"
      > >
      > > This is a reland of 28a30c57
      > > which was a reland of 5d7a29c9
      > >
      > > The crashes were from calling RegisterDeserializerFinished on a null
      > > Isolate pointer, for a deserializer that was never initialised
      > > (specifically, ReadOnlyDeserializer when ROHeap is shared).
      > >
      > > Original change's description:
      > > > Reland "[serializer] Allocate during deserialization"
      > > >
      > > > This is a reland of 5d7a29c9
      > > >
      > > > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > > > to not check the new space addresses until it's known that this is a new
      > > > space allocation. This fixes an UBSan failure during read-only space
      > > > deserialization, which happens before the new space is initialized.
      > > >
      > > > It also fixes some issues discovered by --stress-snapshot, around
      > > > serializing ThinStrings (which are now elided as part of serialization),
      > > > handle counts (I bumped the maximum handle count in that check), and
      > > > clearing map transitions (the map backpointer field needed a Smi
      > > > uninitialized value check).
      > > >
      > > > Original change's description:
      > > > > [serializer] Allocate during deserialization
      > > > >
      > > > > This patch removes the concept of reservations and a specialized
      > > > > deserializer allocator, and instead makes the deserializer allocate
      > > > > directly with the Heap's Allocate method.
      > > > >
      > > > > The major consequence of this is that the GC can now run during
      > > > > deserialization, which means that:
      > > > >
      > > > >   a) Deserialized objects are visible to the GC, and
      > > > >   b) Objects that the deserializer/deserialized objects point to can
      > > > >      move.
      > > > >
      > > > > Point a) is mostly not a problem due to previous work in making
      > > > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > > > size before any subsequent allocation/safepoint. We now additionally
      > > > > have to initialize the allocated space with a valid tagged value -- this
      > > > > is a magic Smi value to keep "uninitialized" checks simple.
      > > > >
      > > > > Point b) is solved by Handlifying the deserializer. This involves
      > > > > changing any vectors of objects into vectors of Handles, and any object
      > > > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > > > the object's address is no longer a stable hash).
      > > > >
      > > > > Back-references can no longer be direct chunk offsets, so instead the
      > > > > deserializer stores a Handle to each deserialized object, and the
      > > > > backreference is an index into this handle array. This encoding could
      > > > > be optimized in the future with e.g. a second pass over the serialized
      > > > > array which emits a different bytecode for objects that are and aren't
      > > > > back-referenced.
      > > > >
      > > > > Additionally, the slot-walk over objects to initialize them can no
      > > > > longer use absolute slot offsets, as again an object may move and its
      > > > > slot address would become invalid. Now, slots are walked as relative
      > > > > offsets to a Handle to the object, or as absolute slots for the case of
      > > > > root pointers. A concept of "slot accessor" is introduced to share the
      > > > > code between these two modes, and writing the slot (including write
      > > > > barriers) is abstracted into this accessor.
      > > > >
      > > > > Finally, the Code body walk is modified to deserialize all objects
      > > > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > > > during a RelocInfo walk.
      > > > >
      > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > > > size rather than byte size -- the size is expected to be tagged-aligned
      > > > > anyway, so now we get an extra few bits in the size encoding.
      > > > >
      > > > > Bug: chromium:1075999
      > > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > > Cr-Commit-Position: refs/heads/master@{#70229}
      > > >
      > > > Bug: chromium:1075999
      > > > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > Cr-Commit-Position: refs/heads/master@{#70267}
      > >
      > > Tbr: jgruber@chromium.org,ulan@chromium.org
      > > Bug: chromium:1075999
      > > Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70279}
      >
      > Tbr: jgruber@chromium.org,ulan@chromium.org
      > Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng,v8_linux64_tsan_no_cm_rel_ng,v8_linux64_tsan_isolates_rel_ng
      > Bug: chromium:1075999
      > Change-Id: I0b9b11644aebc4cc8b07c62a0f765b24e4d73d89
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2445872
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70288}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org,dinfuehr@chromium.org
      
      Bug: chromium:1075999, chromium:1135027
      Change-Id: I5d0d9e49c0302d94ff7291834f5f18e7a0839eb7
      Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng,v8_linux64_tsan_no_cm_rel_ng,v8_linux64_tsan_isolates_rel_ng
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2451030
      Reviewed-by: Adam Klein <adamk@chromium.org>
      Commit-Queue: Adam Klein <adamk@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70328}
    • [identity-map] Change resize heuristic · 76ad3ab5
      Leszek Swirski authored
      Change the resizing behaviour on insert to match that of the hash map
      in base. Specifically, resize when hitting 80% occupancy.
      
      Locally, I measure a ~6% improvement in serialization time from this
      change.
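
      A minimal sketch of the heuristic (illustrative only, not the actual
      IdentityMap code):

          #include <cstddef>

          // Grow the table once it passes 80% occupancy, matching base's
          // hash map. Written without floating point:
          // occupancy / capacity > 0.8.
          bool ShouldResize(size_t occupancy, size_t capacity) {
            return occupancy * 5 > capacity * 4;
          }

      For a capacity-64 table, this triggers once occupancy reaches 52
      entries (just over 80%).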
      
      Change-Id: I3fe84de39b2337859fe75fa6b3848198b82071ae
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2448798
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70324}
    • [identity-map] Cache the calculated Hash · 77cc96aa
      Leszek Swirski authored
      In IdentityMap, explicitly pass the key's hash so that it can be cached
      between Lookup and Insert.
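
      A small hedged sketch of the pattern this enables (a toy open-addressed
      table with hypothetical names, not the real IdentityMap), showing the
      hash being computed once and reused:

          #include <cstdint>
          #include <vector>

          // Toy table: assumes non-zero keys and that it never fills up.
          class CachedHashMap {
           public:
            explicit CachedHashMap(uint32_t capacity) : entries_(capacity) {}

            int* LookupOrInsert(uintptr_t key) {
              const uint32_t hash = Hash(key);            // hashed once here...
              if (int* existing = Find(key, hash)) return existing;
              return Insert(key, hash);                   // ...and reused here
            }

           private:
            struct Entry { uintptr_t key = 0; int value = 0; };

            static uint32_t Hash(uintptr_t key) {
              return static_cast<uint32_t>(key * 0x9E3779B9u);  // toy mixer
            }

            int* Find(uintptr_t key, uint32_t hash) {
              for (uint32_t i = hash % entries_.size(); entries_[i].key != 0;
                   i = (i + 1) % entries_.size()) {
                if (entries_[i].key == key) return &entries_[i].value;
              }
              return nullptr;
            }

            int* Insert(uintptr_t key, uint32_t hash) {
              uint32_t i = hash % entries_.size();
              while (entries_[i].key != 0) i = (i + 1) % entries_.size();
              entries_[i].key = key;
              return &entries_[i].value;
            }

            std::vector<Entry> entries_;
          };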
      
      Change-Id: Ib8a2d96cc399ae025f54c61c129dd4cd18d86c7e
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2448795
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
      Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70319}
    • [identity-map] Remove double-lookups in IdentityMap · 4e7c99ab
      Leszek Swirski authored
      Replace the pattern of calling 'Find' followed by 'Set' on IdentityMap
      with a single 'FindOrInsert' that explicitly returns whether an existing
      entry was found or a new entry was inserted. This replaces 'Get', which
      would return either an initialised or uninitialised entry (and callers
      would rely on default initialisation to check this).
      
      Also replace 'Set' with 'Insert', which explicitly requires that the
      element didn't exist before. This matches expectations where it was
      used (where those weren't replaced wholesale with 'FindOrInsert'), and
      makes the naming consistent with 'FindOrInsert'.
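
      A hedged sketch of the resulting interface shape (hypothetical code,
      not the real IdentityMap): FindOrInsert reports whether the entry
      already existed, and Insert asserts that it did not:

          #include <cassert>
          #include <cstdint>
          #include <vector>

          class FindOrInsertSketch {
           public:
            struct FindOrInsertResult {
              int* entry;           // always valid after the call
              bool already_exists;  // true if found, false if inserted
            };

            FindOrInsertResult FindOrInsert(uintptr_t key) {
              for (Slot& slot : slots_) {
                if (slot.key == key) return {&slot.value, true};
              }
              slots_.push_back({key, 0});
              return {&slots_.back().value, false};
            }

            // Unlike the old 'Set', Insert requires the key to be new.
            int* Insert(uintptr_t key) {
              FindOrInsertResult result = FindOrInsert(key);
              assert(!result.already_exists);
              return result.entry;
            }

           private:
            struct Slot { uintptr_t key; int value; };
            std::vector<Slot> slots_;
          };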
      
      Change-Id: I8fb76f4ac14fb92b88474965aafb1ace5fb79145
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2443135
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Reviewed-by: Maya Lekova <mslekova@chromium.org>
      Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
      Commit-Queue: Dominik Inführ <dinfuehr@chromium.org>
      Commit-Queue: Maya Lekova <mslekova@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70300}
  19. 02 Oct, 2020 3 commits
    • Reland^3 "[serializer] Allocate during deserialization" · 3f4e9bbe
      Leszek Swirski authored
      This is a reland of c4a062a9
      which was a reland of 28a30c57
      which was a reland of 5d7a29c9
      
      Fixes TSAN errors from non-atomic writes in the deserializer. Now all
      writes are (relaxed) atomic.
      
      Original change's description:
      > Reland^2 "[serializer] Allocate during deserialization"
      >
      > This is a reland of 28a30c57
      > which was a reland of 5d7a29c9
      >
      > The crashes were from calling RegisterDeserializerFinished on a null
      > Isolate pointer, for a deserializer that was never initialised
      > (specifically, ReadOnlyDeserializer when ROHeap is shared).
      >
      > Original change's description:
      > > Reland "[serializer] Allocate during deserialization"
      > >
      > > This is a reland of 5d7a29c9
      > >
      > > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > > to not check the new space addresses until it's known that this is a new
      > > space allocation. This fixes an UBSan failure during read-only space
      > > deserialization, which happens before the new space is initialized.
      > >
      > > It also fixes some issues discovered by --stress-snapshot, around
      > > serializing ThinStrings (which are now elided as part of serialization),
      > > handle counts (I bumped the maximum handle count in that check), and
      > > clearing map transitions (the map backpointer field needed a Smi
      > > uninitialized value check).
      > >
      > > Original change's description:
      > > > [serializer] Allocate during deserialization
      > > >
      > > > This patch removes the concept of reservations and a specialized
      > > > deserializer allocator, and instead makes the deserializer allocate
      > > > directly with the Heap's Allocate method.
      > > >
      > > > The major consequence of this is that the GC can now run during
      > > > deserialization, which means that:
      > > >
      > > >   a) Deserialized objects are visible to the GC, and
      > > >   b) Objects that the deserializer/deserialized objects point to can
      > > >      move.
      > > >
      > > > Point a) is mostly not a problem due to previous work in making
      > > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > > size before any subsequent allocation/safepoint. We now additionally
      > > > have to initialize the allocated space with a valid tagged value -- this
      > > > is a magic Smi value to keep "uninitialized" checks simple.
      > > >
      > > > Point b) is solved by Handlifying the deserializer. This involves
      > > > changing any vectors of objects into vectors of Handles, and any object
      > > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > > the object's address is no longer a stable hash).
      > > >
      > > > Back-references can no longer be direct chunk offsets, so instead the
      > > > deserializer stores a Handle to each deserialized object, and the
      > > > backreference is an index into this handle array. This encoding could
      > > > be optimized in the future with e.g. a second pass over the serialized
      > > > array which emits a different bytecode for objects that are and aren't
      > > > back-referenced.
      > > >
      > > > Additionally, the slot-walk over objects to initialize them can no
      > > > longer use absolute slot offsets, as again an object may move and its
      > > > slot address would become invalid. Now, slots are walked as relative
      > > > offsets to a Handle to the object, or as absolute slots for the case of
      > > > root pointers. A concept of "slot accessor" is introduced to share the
      > > > code between these two modes, and writing the slot (including write
      > > > barriers) is abstracted into this accessor.
      > > >
      > > > Finally, the Code body walk is modified to deserialize all objects
      > > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > > during a RelocInfo walk.
      > > >
      > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > > size rather than byte size -- the size is expected to be tagged-aligned
      > > > anyway, so now we get an extra few bits in the size encoding.
      > > >
      > > > Bug: chromium:1075999
      > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > Cr-Commit-Position: refs/heads/master@{#70229}
      > >
      > > Bug: chromium:1075999
      > > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70267}
      >
      > Tbr: jgruber@chromium.org,ulan@chromium.org
      > Bug: chromium:1075999
      > Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70279}
      
      Tbr: jgruber@chromium.org,ulan@chromium.org
      Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng,v8_linux64_tsan_no_cm_rel_ng,v8_linux64_tsan_isolates_rel_ng
      Bug: chromium:1075999
      Change-Id: I0b9b11644aebc4cc8b07c62a0f765b24e4d73d89
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2445872
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70288}
    • Revert "Reland^2 "[serializer] Allocate during deserialization"" · a81da102
      Clemens Backes authored
      This reverts commit c4a062a9.
      
      Reason for revert: TSan issues: https://ci.chromium.org/p/v8/builders/ci/V8%20Linux64%20TSAN/33504
      
      Original change's description:
      > Reland^2 "[serializer] Allocate during deserialization"
      >
      > This is a reland of 28a30c57
      > which was a reland of 5d7a29c9
      >
      > The crashes were from calling RegisterDeserializerFinished on a null
      > Isolate pointer, for a deserializer that was never initialised
      > (specifically, ReadOnlyDeserializer when ROHeap is shared).
      >
      > Original change's description:
      > > Reland "[serializer] Allocate during deserialization"
      > >
      > > This is a reland of 5d7a29c9
      > >
      > > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > > to not check the new space addresses until it's known that this is a new
      > > space allocation. This fixes an UBSan failure during read-only space
      > > deserialization, which happens before the new space is initialized.
      > >
      > > It also fixes some issues discovered by --stress-snapshot, around
      > > serializing ThinStrings (which are now elided as part of serialization),
      > > handle counts (I bumped the maximum handle count in that check), and
      > > clearing map transitions (the map backpointer field needed a Smi
      > > uninitialized value check).
      > >
      > > Original change's description:
      > > > [serializer] Allocate during deserialization
      > > >
      > > > This patch removes the concept of reservations and a specialized
      > > > deserializer allocator, and instead makes the deserializer allocate
      > > > directly with the Heap's Allocate method.
      > > >
      > > > The major consequence of this is that the GC can now run during
      > > > deserialization, which means that:
      > > >
      > > >   a) Deserialized objects are visible to the GC, and
      > > >   b) Objects that the deserializer/deserialized objects point to can
      > > >      move.
      > > >
      > > > Point a) is mostly not a problem due to previous work in making
      > > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > > size before any subsequent allocation/safepoint. We now additionally
      > > > have to initialize the allocated space with a valid tagged value -- this
      > > > is a magic Smi value to keep "uninitialized" checks simple.
      > > >
      > > > Point b) is solved by Handlifying the deserializer. This involves
      > > > changing any vectors of objects into vectors of Handles, and any object
      > > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > > the object's address is no longer a stable hash).
      > > >
      > > > Back-references can no longer be direct chunk offsets, so instead the
      > > > deserializer stores a Handle to each deserialized object, and the
      > > > backreference is an index into this handle array. This encoding could
      > > > be optimized in the future with e.g. a second pass over the serialized
      > > > array which emits a different bytecode for objects that are and aren't
      > > > back-referenced.
      > > >
      > > > Additionally, the slot-walk over objects to initialize them can no
      > > > longer use absolute slot offsets, as again an object may move and its
      > > > slot address would become invalid. Now, slots are walked as relative
      > > > offsets to a Handle to the object, or as absolute slots for the case of
      > > > root pointers. A concept of "slot accessor" is introduced to share the
      > > > code between these two modes, and writing the slot (including write
      > > > barriers) is abstracted into this accessor.
      > > >
      > > > Finally, the Code body walk is modified to deserialize all objects
      > > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > > during a RelocInfo walk.
      > > >
      > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > > size rather than byte size -- the size is expected to be tagged-aligned
      > > > anyway, so now we get an extra few bits in the size encoding.
      > > >
      > > > Bug: chromium:1075999
      > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > > Cr-Commit-Position: refs/heads/master@{#70229}
      > >
      > > Bug: chromium:1075999
      > > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70267}
      >
      > Tbr: jgruber@chromium.org,ulan@chromium.org
      > Bug: chromium:1075999
      > Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70279}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org
      
      Change-Id: Ib2f01db4cd9b55639d6a4af971bda865edb45e84
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: chromium:1075999
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2445250
      Reviewed-by: Clemens Backes <clemensb@chromium.org>
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70280}
    • Reland^2 "[serializer] Allocate during deserialization" · c4a062a9
      Leszek Swirski authored
      This is a reland of 28a30c57
      which was a reland of 5d7a29c9
      
      The crashes were from calling RegisterDeserializerFinished on a null
      Isolate pointer, for a deserializer that was never initialised
      (specifically, ReadOnlyDeserializer when ROHeap is shared).
      
      Original change's description:
      > Reland "[serializer] Allocate during deserialization"
      >
      > This is a reland of 5d7a29c9
      >
      > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > to not check the new space addresses until it's known that this is a new
      > space allocation. This fixes an UBSan failure during read-only space
      > deserialization, which happens before the new space is initialized.
      >
      > It also fixes some issues discovered by --stress-snapshot, around
      > serializing ThinStrings (which are now elided as part of serialization),
      > handle counts (I bumped the maximum handle count in that check), and
      > clearing map transitions (the map backpointer field needed a Smi
      > uninitialized value check).
      >
      > Original change's description:
      > > [serializer] Allocate during deserialization
      > >
      > > This patch removes the concept of reservations and a specialized
      > > deserializer allocator, and instead makes the deserializer allocate
      > > directly with the Heap's Allocate method.
      > >
      > > The major consequence of this is that the GC can now run during
      > > deserialization, which means that:
      > >
      > >   a) Deserialized objects are visible to the GC, and
      > >   b) Objects that the deserializer/deserialized objects point to can
      > >      move.
      > >
      > > Point a) is mostly not a problem due to previous work in making
      > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > size before any subsequent allocation/safepoint. We now additionally
      > > have to initialize the allocated space with a valid tagged value -- this
      > > is a magic Smi value to keep "uninitialized" checks simple.
      > >
      > > Point b) is solved by Handlifying the deserializer. This involves
      > > changing any vectors of objects into vectors of Handles, and any object
      > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > the object's address is no longer a stable hash).
      > >
      > > Back-references can no longer be direct chunk offsets, so instead the
      > > deserializer stores a Handle to each deserialized object, and the
      > > backreference is an index into this handle array. This encoding could
      > > be optimized in the future with e.g. a second pass over the serialized
      > > array which emits a different bytecode for objects that are and aren't
      > > back-referenced.
      > >
      > > Additionally, the slot-walk over objects to initialize them can no
      > > longer use absolute slot offsets, as again an object may move and its
      > > slot address would become invalid. Now, slots are walked as relative
      > > offsets to a Handle to the object, or as absolute slots for the case of
      > > root pointers. A concept of "slot accessor" is introduced to share the
      > > code between these two modes, and writing the slot (including write
      > > barriers) is abstracted into this accessor.
      > >
      > > Finally, the Code body walk is modified to deserialize all objects
      > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > during a RelocInfo walk.
      > >
      > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > size rather than byte size -- the size is expected to be tagged-aligned
      > > anyway, so now we get an extra few bits in the size encoding.
      > >
      > > Bug: chromium:1075999
      > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70229}
      >
      > Bug: chromium:1075999
      > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70267}
      
      Tbr: jgruber@chromium.org,ulan@chromium.org
      Bug: chromium:1075999
      Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70279}
  20. 01 Oct, 2020 2 commits
    • Revert "Reland "[serializer] Allocate during deserialization"" · c7c0e790
      Zhi An Ng authored
      This reverts commit 28a30c57.
      
      Reason for revert: Broke Test262 https://ci.chromium.org/p/v8/builders/ci/V8%20Linux%20-%20shared/38638?
      
      Original change's description:
      > Reland "[serializer] Allocate during deserialization"
      >
      > This is a reland of 5d7a29c9
      >
      > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > to not check the new space addresses until it's known that this is a new
      > space allocation. This fixes an UBSan failure during read-only space
      > deserialization, which happens before the new space is initialized.
      >
      > It also fixes some issues discovered by --stress-snapshot, around
      > serializing ThinStrings (which are now elided as part of serialization),
      > handle counts (I bumped the maximum handle count in that check), and
      > clearing map transitions (the map backpointer field needed a Smi
      > uninitialized value check).
      >
      > Original change's description:
      > > [serializer] Allocate during deserialization
      > >
      > > This patch removes the concept of reservations and a specialized
      > > deserializer allocator, and instead makes the deserializer allocate
      > > directly with the Heap's Allocate method.
      > >
      > > The major consequence of this is that the GC can now run during
      > > deserialization, which means that:
      > >
      > >   a) Deserialized objects are visible to the GC, and
      > >   b) Objects that the deserializer/deserialized objects point to can
      > >      move.
      > >
      > > Point a) is mostly not a problem due to previous work in making
      > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > size before any subsequent allocation/safepoint. We now additionally
      > > have to initialize the allocated space with a valid tagged value -- this
      > > is a magic Smi value to keep "uninitialized" checks simple.
      > >
      > > Point b) is solved by Handlifying the deserializer. This involves
      > > changing any vectors of objects into vectors of Handles, and any object
      > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > the object's address is no longer a stable hash).
      > >
      > > Back-references can no longer be direct chunk offsets, so instead the
      > > deserializer stores a Handle to each deserialized object, and the
      > > backreference is an index into this handle array. This encoding could
      > > be optimized in the future with e.g. a second pass over the serialized
      > > array which emits a different bytecode for objects that are and aren't
      > > back-referenced.
      > >
      > > Additionally, the slot-walk over objects to initialize them can no
      > > longer use absolute slot offsets, as again an object may move and its
      > > slot address would become invalid. Now, slots are walked as relative
      > > offsets to a Handle to the object, or as absolute slots for the case of
      > > root pointers. A concept of "slot accessor" is introduced to share the
      > > code between these two modes, and writing the slot (including write
      > > barriers) is abstracted into this accessor.
      > >
      > > Finally, the Code body walk is modified to deserialize all objects
      > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > during a RelocInfo walk.
      > >
      > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > size rather than byte size -- the size is expected to be tagged-aligned
      > > anyway, so now we get an extra few bits in the size encoding.
      > >
      > > Bug: chromium:1075999
      > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70229}
      >
      > Bug: chromium:1075999
      > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70267}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org
      
      Change-Id: Ieed68332ef6a7ad36db061e3f48be0f28673d7a2
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: chromium:1075999
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2441608
      Reviewed-by: Zhi An Ng <zhin@chromium.org>
      Commit-Queue: Zhi An Ng <zhin@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70268}
    • Reland "[serializer] Allocate during deserialization" · 28a30c57
      Leszek Swirski authored
      This is a reland of 5d7a29c9
      
      This reland shuffles around the order of checks in Heap::AllocateRawWith
      to not check the new space addresses until it's known that this is a new
      space allocation. This fixes an UBSan failure during read-only space
      deserialization, which happens before the new space is initialized.
      
      It also fixes some issues discovered by --stress-snapshot, around
      serializing ThinStrings (which are now elided as part of serialization),
      handle counts (I bumped the maximum handle count in that check), and
      clearing map transitions (the map backpointer field needed a Smi
      uninitialized value check).
      
      Original change's description:
      > [serializer] Allocate during deserialization
      >
      > This patch removes the concept of reservations and a specialized
      > deserializer allocator, and instead makes the deserializer allocate
      > directly with the Heap's Allocate method.
      >
      > The major consequence of this is that the GC can now run during
      > deserialization, which means that:
      >
      >   a) Deserialized objects are visible to the GC, and
      >   b) Objects that the deserializer/deserialized objects point to can
      >      move.
      >
      > Point a) is mostly not a problem due to previous work in making
      > deserialized objects "GC valid", i.e. making sure that they have a valid
      > size before any subsequent allocation/safepoint. We now additionally
      > have to initialize the allocated space with a valid tagged value -- this
      > is a magic Smi value to keep "uninitialized" checks simple.
      >
      > Point b) is solved by Handlifying the deserializer. This involves
      > changing any vectors of objects into vectors of Handles, and any object
      > keyed map into an IdentityMap (we can't use Handles as keys because
      > the object's address is no longer a stable hash).
      >
      > Back-references can no longer be direct chunk offsets, so instead the
      > deserializer stores a Handle to each deserialized object, and the
      > backreference is an index into this handle array. This encoding could
      > be optimized in the future with e.g. a second pass over the serialized
      > array which emits a different bytecode for objects that are and aren't
      > back-referenced.
      >
      > Additionally, the slot-walk over objects to initialize them can no
      > longer use absolute slot offsets, as again an object may move and its
      > slot address would become invalid. Now, slots are walked as relative
      > offsets to a Handle to the object, or as absolute slots for the case of
      > root pointers. A concept of "slot accessor" is introduced to share the
      > code between these two modes, and writing the slot (including write
      > barriers) is abstracted into this accessor.
      >
      > Finally, the Code body walk is modified to deserialize all objects
      > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > during a RelocInfo walk.
      >
      > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > size rather than byte size -- the size is expected to be tagged-aligned
      > anyway, so now we get an extra few bits in the size encoding.
      >
      > Bug: chromium:1075999
      > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70229}
      
      Bug: chromium:1075999
      Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70267}
  21. 30 Sep, 2020 2 commits
    • Revert "[serializer] Allocate during deserialization" · 74f3665c
      Leszek Swirski authored
      This reverts commit 5d7a29c9.
      
      Reason for revert: UBSan -- https://ci.chromium.org/p/v8/builders/ci/V8%20Linux64%20UBSan/13100
      
      Original change's description:
      > [serializer] Allocate during deserialization
      >
      > This patch removes the concept of reservations and a specialized
      > deserializer allocator, and instead makes the deserializer allocate
      > directly with the Heap's Allocate method.
      >
      > The major consequence of this is that the GC can now run during
      > deserialization, which means that:
      >
      >   a) Deserialized objects are visible to the GC, and
      >   b) Objects that the deserializer/deserialized objects point to can
      >      move.
      >
      > Point a) is mostly not a problem due to previous work in making
      > deserialized objects "GC valid", i.e. making sure that they have a valid
      > size before any subsequent allocation/safepoint. We now additionally
      > have to initialize the allocated space with a valid tagged value -- this
      > is a magic Smi value to keep "uninitialized" checks simple.
      >
      > Point b) is solved by Handlifying the deserializer. This involves
      > changing any vectors of objects into vectors of Handles, and any object
      > keyed map into an IdentityMap (we can't use Handles as keys because
      > the object's address is no longer a stable hash).
      >
      > Back-references can no longer be direct chunk offsets, so instead the
      > deserializer stores a Handle to each deserialized object, and the
      > backreference is an index into this handle array. This encoding could
      > be optimized in the future with e.g. a second pass over the serialized
      > array which emits a different bytecode for objects that are and aren't
      > back-referenced.
      >
      > Additionally, the slot-walk over objects to initialize them can no
      > longer use absolute slot offsets, as again an object may move and its
      > slot address would become invalid. Now, slots are walked as relative
      > offsets to a Handle to the object, or as absolute slots for the case of
      > root pointers. A concept of "slot accessor" is introduced to share the
      > code between these two modes, and writing the slot (including write
      > barriers) is abstracted into this accessor.
      >
      > Finally, the Code body walk is modified to deserialize all objects
      > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > during a RelocInfo walk.
      >
      > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > size rather than byte size -- the size is expected to be tagged-aligned
      > anyway, so now we get an extra few bits in the size encoding.
      >
      > Bug: chromium:1075999
      > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70229}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org
      
      Change-Id: I2bd792a24861e8f54897e51522769b50f8f814e2
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: chromium:1075999
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440827
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70231}
    • [serializer] Allocate during deserialization · 5d7a29c9
      Leszek Swirski authored
      This patch removes the concept of reservations and a specialized
      deserializer allocator, and instead makes the deserializer allocate
      directly with the Heap's Allocate method.
      
      The major consequence of this is that the GC can now run during
      deserialization, which means that:
      
        a) Deserialized objects are visible to the GC, and
        b) Objects that the deserializer/deserialized objects point to can
           move.
      
      Point a) is mostly not a problem due to previous work in making
      deserialized objects "GC valid", i.e. making sure that they have a valid
      size before any subsequent allocation/safepoint. We now additionally
      have to initialize the allocated space with a valid tagged value -- this
      is a magic Smi value to keep "uninitialized" checks simple.
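
      A minimal standalone sketch of the "fill with a magic Smi" idea, for
      illustration only -- ToyHeap, Tagged and kUninitializedMagicSmi are
      invented names, not V8's API:

        // Allocate GC-visible space and immediately fill it with a sentinel
        // that is itself a valid Smi, so a safepoint that happens before the
        // object is fully initialized never observes arbitrary bits.
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        using Tagged = std::uintptr_t;             // one tagged word
        constexpr Tagged kSmiTag = 0;              // low bit 0 => Smi (V8-style)
        constexpr Tagged kUninitializedMagicSmi = (0xDEAD << 1) | kSmiTag;

        struct ToyHeap {
          std::vector<Tagged> words;
          // Reserve `n` tagged words, pre-filled with the magic Smi.
          std::size_t Allocate(std::size_t n) {
            std::size_t start = words.size();
            words.resize(words.size() + n, kUninitializedMagicSmi);
            return start;
          }
          bool IsUninitialized(std::size_t slot) const {
            return words[slot] == kUninitializedMagicSmi;
          }
        };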
      
      Point b) is solved by Handlifying the deserializer. This involves
      changing any vectors of objects into vectors of Handles, and any object
      keyed map into an IdentityMap (we can't use Handles as keys because
      the object's address is no longer a stable hash).
      
      Back-references can no longer be direct chunk offsets, so instead the
      deserializer stores a Handle to each deserialized object, and the
      backreference is an index into this handle array. This encoding could
      be optimized in the future with e.g. a second pass over the serialized
      array which emits a different bytecode for objects that are and aren't
      back-referenced.
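
      A deliberately simplified sketch of index-based back-references; the
      ToyDeserializer/ToyHandle/ToyObject names are invented for illustration
      and are not V8 types:

        // A "handle" is one level of indirection that a moving GC can update;
        // a back-reference is just an index into the vector of handles
        // created so far, so it stays valid even if the object has moved.
        #include <cstddef>
        #include <string>
        #include <vector>

        struct ToyObject { std::string payload; };
        struct ToyHandle { ToyObject* location; };  // updated on compaction

        class ToyDeserializer {
         public:
          // Every materialized object gets a handle; its position in
          // back_refs_ is the id later back-reference bytecodes carry.
          std::size_t RegisterObject(ToyHandle* handle) {
            back_refs_.push_back(handle);
            return back_refs_.size() - 1;
          }
          // Resolution goes through the handle, never a raw chunk offset.
          ToyObject* ResolveBackRef(std::size_t index) const {
            return back_refs_[index]->location;
          }
         private:
          std::vector<ToyHandle*> back_refs_;
        };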
      
      Additionally, the slot-walk over objects to initialize them can no
      longer use absolute slot offsets, as again an object may move and its
      slot address would become invalid. Now, slots are walked as relative
      offsets to a Handle to the object, or as absolute slots for the case of
      root pointers. A concept of "slot accessor" is introduced to share the
      code between these two modes, and writing the slot (including write
      barriers) is abstracted into this accessor.
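
      The "slot accessor" split can be sketched as two small accessor types
      sharing one templated slot walk (hypothetical names, not V8's classes):

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        using Tagged = std::uintptr_t;

        // Slots addressed as (handle, index) so they survive object moves.
        struct HandleRelativeSlots {
          Tagged** handle;  // indirection cell for the host object
          void Write(std::size_t i, Tagged value) {
            (*handle)[i] = value;
            // A write barrier for the host object would be emitted here.
          }
        };

        // Slots addressed absolutely, e.g. root-list entries that never move.
        struct AbsoluteSlots {
          Tagged* base;
          void Write(std::size_t i, Tagged value) {
            base[i] = value;  // off-heap roots: no barrier needed
          }
        };

        // One initialization routine shared by both modes.
        template <typename SlotAccessor>
        void InitializeSlots(SlotAccessor accessor,
                             const std::vector<Tagged>& data) {
          for (std::size_t i = 0; i < data.size(); ++i) {
            accessor.Write(i, data[i]);
          }
        }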
      
      Finally, the Code body walk is modified to deserialize all objects
      referred to by RelocInfos before doing the RelocInfo walk itself. This
      is because RelocInfoIterator uses raw pointers, so we cannot allocate
      during a RelocInfo walk.
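
      The ordering constraint is the usual "materialize first, then walk raw
      pointers" pattern; a generic sketch (nothing below is V8 API):

        #include <cstddef>
        #include <functional>
        #include <vector>

        struct RelocEntry { std::size_t target_id; void* resolved = nullptr; };

        void PatchRelocations(
            std::vector<RelocEntry>& entries,
            const std::function<void*(std::size_t)>& materialize) {
          // Pass 1: deserialize every referenced object up front; this may
          // allocate and therefore move heap objects around.
          std::vector<void*> targets;
          targets.reserve(entries.size());
          for (const RelocEntry& e : entries) {
            targets.push_back(materialize(e.target_id));
          }
          // Pass 2: the raw-pointer walk performs no allocation, so the
          // iteration stays valid.
          for (std::size_t i = 0; i < entries.size(); ++i) {
            entries[i].resolved = targets[i];
          }
        }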
      
      As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      size rather than byte size -- the size is expected to be tagged-aligned
      anyway, so now we get an extra few bits in the size encoding.
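
      The size-encoding gain is simple arithmetic; a small sketch (constants
      are illustrative -- kTaggedSize is 4 with pointer compression and 8
      without):

        #include <cassert>
        #include <cstdint>

        constexpr uint32_t kTaggedSize = 8;
        constexpr uint32_t kTaggedSizeLog2 = 3;

        constexpr uint32_t EncodeSize(uint32_t size_in_bytes) {
          return size_in_bytes >> kTaggedSizeLog2;  // size is tagged-aligned
        }
        constexpr uint32_t DecodeSize(uint32_t encoded) {
          return encoded << kTaggedSizeLog2;
        }

        int main() {
          uint32_t size = 64;  // any multiple of kTaggedSize round-trips
          assert(DecodeSize(EncodeSize(size)) == size);
          // The same field width now covers sizes kTaggedSize times larger.
        }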
      
      Bug: chromium:1075999
      Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70229}
      5d7a29c9
  22. 08 Sep, 2020 1 commit
  23. 20 Aug, 2020 1 commit
  24. 13 Aug, 2020 1 commit
    • Leszek Swirski's avatar
      [runtime] Compress the off-heap string table · 279bd3e1
      Leszek Swirski authored
      Rather than an Object array, use a Tagged_t array to store the
      elements of the off-heap string table. This matches the old on-heap
      string table's behaviour, and recovers the memory regressions
      introduced by that work.
      
      To be able to do this, this also introduces a new slot type,
      OffHeapObjectSlot. This is because CompressedObjectSlot assumes that
      the slot is on-heap, and that it can mask the slot location to
      recover the isolate root. OffHeapObjectSlot doesn't define an
      operator*, and instead provides a `load(const Isolate*)` method.
      The other slots also gain this method so that they can use it in
      slot-templated functions. Also, the RootVisitor gains an
      OffHeapObjectSlot overload, which is UNREACHABLE by default and only
      needs to be defined by visitors that can access the string table.
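
      A simplified illustration of why the off-heap slot needs an explicit
      base pointer (toy types, not V8's slot classes): a compressed value is
      only the low 32 bits of an address, and decompression adds the heap
      base, which an on-heap slot can recover from its own address but an
      off-heap slot cannot.

        #include <cstdint>

        using Address = std::uintptr_t;
        static_assert(sizeof(Address) == 8, "sketch assumes a 64-bit host");
        constexpr Address kCageAlignment = Address{1} << 32;  // toy 4GB cage

        struct ToyOnHeapSlot {
          Address location;  // the slot itself lives inside the cage
          Address Load() const {
            Address base = location & ~(kCageAlignment - 1);  // mask to base
            uint32_t compressed =
                *reinterpret_cast<const uint32_t*>(location);
            return base + compressed;
          }
        };

        struct ToyOffHeapSlot {
          const uint32_t* location;  // the slot lives outside the cage
          // No operator*: callers must supply the base (via the Isolate in V8).
          Address Load(Address heap_base) const {
            return heap_base + *location;
          }
        };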
      
      As a drive-by, fix some non-atomic accesses to the off-heap string
      table, also using the new slot.
      
      Bug: chromium:1109553
      Bug: chromium:1115116
      Bug: chromium:1115559
      Bug: chromium:1115683
      Change-Id: I819ed7bf820e9ef98ad5d5f9d0d592efbb6f5aa6
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2352489
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#69381}
      279bd3e1
  25. 31 Jul, 2020 1 commit
    • Dan Elphick's avatar
      [heap] Share RO_SPACE pages with pointer compression · c7d22c49
      Dan Elphick authored
      This allows the configuration v8_enable_shared_ro_heap and
      v8_enable_pointer_compression on Linux and Android, although it still
      defaults to off.
      
      When pointer compression and read-only heap sharing are enabled, sharing
      is achieved by allocating ReadOnlyPages in shared memory that are
      retained in the shared ReadOnlyArtifacts object. These ReadOnlyPages are
      then remapped into the address space of the Isolate ultimately using
      mremap.
      
      To simplify the creation process the ReadOnlySpace memory for the first
      Isolate is created as before without any sharing. It is only when the
      ReadOnlySpace memory has been finalized that the shared memory is
      allocated and has its contents copied into it. The original memory is
      then released (with pointer compression this means it's just released
      back to the BoundedPageAllocator) and immediately re-allocated as a
      shared mapping.
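
      A hedged POSIX-level sketch of that flow, assuming Linux memfd_create;
      the real code goes through V8's page allocator and ultimately mremap,
      so this only illustrates the copy-then-share step:

        #include <cstring>
        #include <sys/mman.h>
        #include <unistd.h>

        // Copy the finalized read-only contents into a shared memory object.
        int CreateSharedReadOnlyPages(const void* finalized, size_t size) {
          int fd = memfd_create("ro_space", 0);  // anonymous shared memory
          if (fd < 0 || ftruncate(fd, static_cast<off_t>(size)) != 0) {
            return -1;
          }
          void* rw = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                          fd, 0);
          if (rw == MAP_FAILED) return -1;
          std::memcpy(rw, finalized, size);  // the one copy, at finalization
          munmap(rw, size);
          return fd;
        }

        // Each isolate then maps the same pages read-only; the physical
        // memory is shared between all of them.
        void* MapSharedReadOnlyPages(int fd, size_t size) {
          return mmap(nullptr, size, PROT_READ, MAP_SHARED, fd, 0);
        }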
      
      Because we would like to make v8_enable_shared_ro_heap default to true
      at some point but can't make this conditional on the value returned by
      a method in the code we are yet to compile, the code required for
      sharing has been mostly changed to use ifs with
      ReadOnlyHeap::IsReadOnlySpaceShared() instead of #ifdefs except where
      a compile error would result due to the absence of class members
      without sharing. IsReadOnlySpaceShared() will evaluate
      CanAllocateSharedPages in the platform PageAllocator (with pointer
      compression and sharing enabled) once and cache that value so sharing
      cannot be toggled during the lifetime of the process.
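
      The "evaluate once and cache" part boils down to a function-local
      static; CheckPlatformSupportsSharing below is a stand-in for the
      PageAllocator query, not a real V8 function:

        // Placeholder for the platform query the message refers to.
        bool CheckPlatformSupportsSharing() { return true; }

        bool IsReadOnlySpaceSharedCached() {
          // Initialized exactly once (thread-safe since C++11), so the
          // answer cannot change for the lifetime of the process.
          static const bool shared = CheckPlatformSupportsSharing();
          return shared;
        }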
      
      Bug: v8:10454
      Change-Id: I0236d752047ecce71bd64c159430517a712bc1e2
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2267300
      Commit-Queue: Dan Elphick <delphick@chromium.org>
      Reviewed-by: Igor Sheludko <ishell@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#69174}
      c7d22c49
  26. 28 Jul, 2020 1 commit
    • Ross McIlroy's avatar
      [TurboProp] Add reference map population to fast reg alloc. · e9a37bf8
      Ross McIlroy authored
      Adds support for populating reference maps to the fast
      register allocator. In order to calculate whether a stack slot
      is live at a given instruction, we use the dominator tree to
      build a bitmap of blocks which are dominated by each block.
      A variable's spill operand is classed as alive for any blocks that are
      dominated by the block it was defined in, until the instruction index
      of the spill operand's last use. As such, it may be classified as live
      down a branch where the spill operand is never used; however, this is
      safe since the spill slot won't be re-allocated until after its
      last-use instruction index in any case.
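
      An illustrative sketch of that liveness test with simplified data
      structures (SpillRange, ComputeDominatedSets and so on are invented
      names, not TurboProp's):

        #include <cstddef>
        #include <vector>

        struct SpillRange {
          std::size_t def_block;       // block the variable was defined in
          std::size_t last_use_index;  // last instruction using the slot
        };

        // idom[b] is the immediate dominator of b; the entry block is its
        // own immediate dominator. dominates[d][b] is true iff d dominates b.
        std::vector<std::vector<bool>> ComputeDominatedSets(
            const std::vector<std::size_t>& idom) {
          std::size_t n = idom.size();
          std::vector<std::vector<bool>> dominates(n, std::vector<bool>(n));
          for (std::size_t b = 0; b < n; ++b) {
            std::size_t d = b;
            while (true) {               // walk b's dominator-tree ancestors
              dominates[d][b] = true;
              if (d == idom[d]) break;   // reached the entry block
              d = idom[d];
            }
          }
          return dominates;
        }

        // The spill slot is treated as live anywhere in a block dominated by
        // its defining block, up to its last-use instruction index.
        bool SpillSlotLiveAt(const SpillRange& range,
                             const std::vector<std::vector<bool>>& dominates,
                             std::size_t block, std::size_t instr_index) {
          return dominates[range.def_block][block] &&
                 instr_index <= range.last_use_index;
        }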
      
      BUG=v8:9684
      
      Change-Id: I772374599ef916f57d82d468f66429e32c712ddf
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2298008
      Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#69108}
      e9a37bf8
  27. 16 Jul, 2020 1 commit
  28. 13 Jul, 2020 1 commit
  29. 10 Jul, 2020 1 commit
  30. 09 Jul, 2020 1 commit