1. 01 Oct, 2020 2 commits
    • Revert "Reland "[serializer] Allocate during deserialization"" · c7c0e790
      Zhi An Ng authored
      This reverts commit 28a30c57.
      
      Reason for revert: Broke Test262 https://ci.chromium.org/p/v8/builders/ci/V8%20Linux%20-%20shared/38638?
      
      Original change's description:
      > Reland "[serializer] Allocate during deserialization"
      >
      > This is a reland of 5d7a29c9
      >
      > This reland shuffles around the order of checks in Heap::AllocateRawWith
      > to not check the new space addresses until it's known that this is a new
      > space allocation. This fixes a UBSan failure during read-only space
      > deserialization, which happens before the new space is initialized.
      >
      > It also fixes some issues discovered by --stress-snapshot, around
      > serializing ThinStrings (which are now elided as part of serialization),
      > handle counts (I bumped the maximum handle count in that check), and
      > clearing map transitions (the map backpointer field needed a Smi
      > uninitialized value check).
      >
      > Original change's description:
      > > [serializer] Allocate during deserialization
      > >
      > > This patch removes the concept of reservations and a specialized
      > > deserializer allocator, and instead makes the deserializer allocate
      > > directly with the Heap's Allocate method.
      > >
      > > The major consequence of this is that the GC can now run during
      > > deserialization, which means that:
      > >
      > >   a) Deserialized objects are visible to the GC, and
      > >   b) Objects that the deserializer/deserialized objects point to can
      > >      move.
      > >
      > > Point a) is mostly not a problem due to previous work in making
      > > deserialized objects "GC valid", i.e. making sure that they have a valid
      > > size before any subsequent allocation/safepoint. We now additionally
      > > have to initialize the allocated space with a valid tagged value -- this
      > > is a magic Smi value to keep "uninitialized" checks simple.
      > >
      > > Point b) is solved by Handlifying the deserializer. This involves
      > > changing any vectors of objects into vectors of Handles, and any object
      > > keyed map into an IdentityMap (we can't use Handles as keys because
      > > the object's address is no longer a stable hash).
      > >
      > > Back-references can no longer be direct chunk offsets, so instead the
      > > deserializer stores a Handle to each deserialized object, and the
      > > backreference is an index into this handle array. This encoding could
      > > be optimized in the future with e.g. a second pass over the serialized
      > > array which emits a different bytecode for objects that are and aren't
      > > back-referenced.
      > >
      > > Additionally, the slot-walk over objects to initialize them can no
      > > longer use absolute slot offsets, as again an object may move and its
      > > slot address would become invalid. Now, slots are walked as relative
      > > offsets to a Handle to the object, or as absolute slots for the case of
      > > root pointers. A concept of "slot accessor" is introduced to share the
      > > code between these two modes, and writing the slot (including write
      > > barriers) is abstracted into this accessor.
      > >
      > > Finally, the Code body walk is modified to deserialize all objects
      > > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > > during a RelocInfo walk.
      > >
      > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > > size rather than byte size -- the size is expected to be tagged-aligned
      > > anyway, so now we get an extra few bits in the size encoding.
      > >
      > > Bug: chromium:1075999
      > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#70229}
      >
      > Bug: chromium:1075999
      > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70267}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org
      
      Change-Id: Ieed68332ef6a7ad36db061e3f48be0f28673d7a2
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: chromium:1075999
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2441608
      Reviewed-by: Zhi An Ng <zhin@chromium.org>
      Commit-Queue: Zhi An Ng <zhin@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70268}
    • Reland "[serializer] Allocate during deserialization" · 28a30c57
      Leszek Swirski authored
      This is a reland of 5d7a29c9
      
      This reland shuffles around the order of checks in Heap::AllocateRawWith
      to not check the new space addresses until it's known that this is a new
      space allocation. This fixes a UBSan failure during read-only space
      deserialization, which happens before the new space is initialized.
      
      It also fixes some issues discovered by --stress-snapshot, around
      serializing ThinStrings (which are now elided as part of serialization),
      handle counts (I bumped the maximum handle count in that check), and
      clearing map transitions (the map backpointer field needed a Smi
      uninitialized value check).
      
      Original change's description:
      > [serializer] Allocate during deserialization
      >
      > This patch removes the concept of reservations and a specialized
      > deserializer allocator, and instead makes the deserializer allocate
      > directly with the Heap's Allocate method.
      >
      > The major consequence of this is that the GC can now run during
      > deserialization, which means that:
      >
      >   a) Deserialized objects are visible to the GC, and
      >   b) Objects that the deserializer/deserialized objects point to can
      >      move.
      >
      > Point a) is mostly not a problem due to previous work in making
      > deserialized objects "GC valid", i.e. making sure that they have a valid
      > size before any subsequent allocation/safepoint. We now additionally
      > have to initialize the allocated space with a valid tagged value -- this
      > is a magic Smi value to keep "uninitialized" checks simple.
      >
      > Point b) is solved by Handlifying the deserializer. This involves
      > changing any vectors of objects into vectors of Handles, and any object
      > keyed map into an IdentityMap (we can't use Handles as keys because
      > the object's address is no longer a stable hash).
      >
      > Back-references can no longer be direct chunk offsets, so instead the
      > deserializer stores a Handle to each deserialized object, and the
      > backreference is an index into this handle array. This encoding could
      > be optimized in the future with e.g. a second pass over the serialized
      > array which emits a different bytecode for objects that are and aren't
      > back-referenced.
      >
      > Additionally, the slot-walk over objects to initialize them can no
      > longer use absolute slot offsets, as again an object may move and its
      > slot address would become invalid. Now, slots are walked as relative
      > offsets to a Handle to the object, or as absolute slots for the case of
      > root pointers. A concept of "slot accessor" is introduced to share the
      > code between these two modes, and writing the slot (including write
      > barriers) is abstracted into this accessor.
      >
      > Finally, the Code body walk is modified to deserialize all objects
      > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > during a RelocInfo walk.
      >
      > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > size rather than byte size -- the size is expected to be tagged-aligned
      > anyway, so now we get an extra few bits in the size encoding.
      >
      > Bug: chromium:1075999
      > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70229}
      
      Bug: chromium:1075999
      Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70267}
  2. 30 Sep, 2020 2 commits
    • Revert "[serializer] Allocate during deserialization" · 74f3665c
      Leszek Swirski authored
      This reverts commit 5d7a29c9.
      
      Reason for revert: UBSan -- https://ci.chromium.org/p/v8/builders/ci/V8%20Linux64%20UBSan/13100
      
      Original change's description:
      > [serializer] Allocate during deserialization
      >
      > This patch removes the concept of reservations and a specialized
      > deserializer allocator, and instead makes the deserializer allocate
      > directly with the Heap's Allocate method.
      >
      > The major consequence of this is that the GC can now run during
      > deserialization, which means that:
      >
      >   a) Deserialized objects are visible to the GC, and
      >   b) Objects that the deserializer/deserialized objects point to can
      >      move.
      >
      > Point a) is mostly not a problem due to previous work in making
      > deserialized objects "GC valid", i.e. making sure that they have a valid
      > size before any subsequent allocation/safepoint. We now additionally
      > have to initialize the allocated space with a valid tagged value -- this
      > is a magic Smi value to keep "uninitialized" checks simple.
      >
      > Point b) is solved by Handlifying the deserializer. This involves
      > changing any vectors of objects into vectors of Handles, and any object
      > keyed map into an IdentityMap (we can't use Handles as keys because
      > the object's address is no longer a stable hash).
      >
      > Back-references can no longer be direct chunk offsets, so instead the
      > deserializer stores a Handle to each deserialized object, and the
      > backreference is an index into this handle array. This encoding could
      > be optimized in the future with e.g. a second pass over the serialized
      > array which emits a different bytecode for objects that are and aren't
      > back-referenced.
      >
      > Additionally, the slot-walk over objects to initialize them can no
      > longer use absolute slot offsets, as again an object may move and its
      > slot address would become invalid. Now, slots are walked as relative
      > offsets to a Handle to the object, or as absolute slots for the case of
      > root pointers. A concept of "slot accessor" is introduced to share the
      > code between these two modes, and writing the slot (including write
      > barriers) is abstracted into this accessor.
      >
      > Finally, the Code body walk is modified to deserialize all objects
      > referred to by RelocInfos before doing the RelocInfo walk itself. This
      > is because RelocInfoIterator uses raw pointers, so we cannot allocate
      > during a RelocInfo walk.
      >
      > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      > size rather than byte size -- the size is expected to be tagged-aligned
      > anyway, so now we get an extra few bits in the size encoding.
      >
      > Bug: chromium:1075999
      > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#70229}
      
      TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org
      
      Change-Id: I2bd792a24861e8f54897e51522769b50f8f814e2
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: chromium:1075999
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440827
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70231}
    • [serializer] Allocate during deserialization · 5d7a29c9
      Leszek Swirski authored
      This patch removes the concept of reservations and a specialized
      deserializer allocator, and instead makes the deserializer allocate
      directly with the Heap's Allocate method.
      
      The major consequence of this is that the GC can now run during
      deserialization, which means that:
      
        a) Deserialized objects are visible to the GC, and
        b) Objects that the deserializer/deserialized objects point to can
           move.
      
      Point a) is mostly not a problem due to previous work in making
      deserialized objects "GC valid", i.e. making sure that they have a valid
      size before any subsequent allocation/safepoint. We now additionally
      have to initialize the allocated space with a valid tagged value -- this
      is a magic Smi value to keep "uninitialized" checks simple.
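
      A minimal sketch of the pre-filling idea (the sentinel value and all
      names are illustrative stand-ins, not V8's actual internals):

        #include <cstddef>
        #include <cstdint>

        using Address = uintptr_t;
        // With a low tag bit of 0 for Smis, this constant reads as a valid
        // tagged value: a "magic" Smi that real code never produces.
        constexpr Address kUninitializedSentinel = Address{0xdeadbeef} << 1;

        // Fill freshly allocated space slot by slot, so a GC that runs
        // before the object is fully deserialized never sees garbage.
        void FillWithSentinel(Address start, size_t size_in_bytes) {
          Address* slot = reinterpret_cast<Address*>(start);
          for (size_t i = 0; i < size_in_bytes / sizeof(Address); ++i) {
            slot[i] = kUninitializedSentinel;
          }
        }

        // The "uninitialized" check stays a single comparison.
        bool IsUninitialized(Address value) {
          return value == kUninitializedSentinel;
        }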
      
      Point b) is solved by Handlifying the deserializer. This involves
      changing any vectors of objects into vectors of Handles, and any object
      keyed map into an IdentityMap (we can't use Handles as keys because
      the object's address is no longer a stable hash).
      
      Back-references can no longer be direct chunk offsets, so instead the
      deserializer stores a Handle to each deserialized object, and the
      backreference is an index into this handle array. This encoding could
      be optimized in the future with e.g. a second pass over the serialized
      array which emits a different bytecode for objects that are and aren't
      back-referenced.
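
      A minimal sketch of the index-based back-reference scheme, with
      std::shared_ptr standing in for V8's Handle type:

        #include <cstddef>
        #include <memory>
        #include <utility>
        #include <vector>

        struct HeapObject {};
        template <typename T>
        using Handle = std::shared_ptr<T>;  // stand-in for V8 handles

        class BackRefTable {
         public:
          // Called once per newly deserialized object, in order; the
          // returned index is what a back-reference bytecode encodes,
          // instead of a chunk offset (which breaks once objects move).
          size_t Register(Handle<HeapObject> object) {
            back_refs_.push_back(std::move(object));
            return back_refs_.size() - 1;
          }

          // Called when a back-reference bytecode is read.
          Handle<HeapObject> Resolve(size_t index) const {
            return back_refs_[index];
          }

         private:
          std::vector<Handle<HeapObject>> back_refs_;
        };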
      
      Additionally, the slot-walk over objects to initialize them can no
      longer use absolute slot offsets, as again an object may move and its
      slot address would become invalid. Now, slots are walked as relative
      offsets to a Handle to the object, or as absolute slots for the case of
      root pointers. A concept of "slot accessor" is introduced to share the
      code between these two modes, and writing the slot (including write
      barriers) is abstracted into this accessor.
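
      A minimal sketch of the two slot-accessor modes (simplified stand-in
      types; a real implementation would also emit the write barrier in
      Write()):

        #include <cstddef>
        #include <memory>
        #include <utility>

        struct HeapObject {
          void* fields[8];
        };
        template <typename T>
        using Handle = std::shared_ptr<T>;  // stand-in for V8 handles

        // Addresses a slot as an offset from a movable object: the address
        // is re-derived through the handle on every access.
        class HandleSlotAccessor {
         public:
          HandleSlotAccessor(Handle<HeapObject> object, size_t index)
              : object_(std::move(object)), index_(index) {}
          void Write(void* value) { object_->fields[index_] = value; }

         private:
          Handle<HeapObject> object_;
          size_t index_;
        };

        // Addresses a root slot absolutely: roots do not move.
        class RootSlotAccessor {
         public:
          explicit RootSlotAccessor(void** slot) : slot_(slot) {}
          void Write(void* value) { *slot_ = value; }

         private:
          void** slot_;
        };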
      
      Finally, the Code body walk is modified to deserialize all objects
      referred to by RelocInfos before doing the RelocInfo walk itself. This
      is because RelocInfoIterator uses raw pointers, so we cannot allocate
      during a RelocInfo walk.
      
      As a drive-by, the VariableRawData bytecode is tweaked to use tagged
      size rather than byte size -- the size is expected to be tagged-aligned
      anyway, so now we get an extra few bits in the size encoding.
      
      Bug: chromium:1075999
      Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#70229}
  3. 13 Aug, 2020 1 commit
    • [runtime] Compress the off-heap string table · 279bd3e1
      Leszek Swirski authored
      Rather than an Object array, use a Tagged_t array to store the
      elements of the off-heap string table. This matches the old on-heap
      string table's behaviour, and recovers memory regressions from that
      work.
      
      To be able to do this, this also introduces a new slot type,
      OffHeapObjectSlot. This is because CompressedObjectSlot assumes that
      the slot is on-heap, and that it can mask the slot location to
      recover the isolate root. OffHeapObjectSlot doesn't define an
      operator*, and instead provides a `load(const Isolate*)` method.
      The other slots also gain this method so that they can use it in
      slot-templated functions. Also, the RootVisitor gains an
      OffHeapObjectSlot overload, which is UNREACHABLE by default and only
      needs to be defined by visitors that can access the string table.
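
      A minimal sketch of the difference (illustrative types only): an
      on-heap compressed slot can mask its own address to find the isolate
      root, while the off-heap slot must be handed the isolate explicitly.

        #include <cstdint>

        struct Isolate {
          uintptr_t root_address;  // base for decompressing tagged values
        };
        using Tagged_t = uint32_t;  // a compressed tagged pointer

        class OffHeapObjectSlot {
         public:
          explicit OffHeapObjectSlot(const Tagged_t* location)
              : location_(location) {}
          // Deliberately no operator*: an off-heap address cannot be masked
          // to recover the isolate root, so decompression takes the isolate
          // as a parameter.
          uintptr_t load(const Isolate* isolate) const {
            return isolate->root_address +
                   static_cast<uintptr_t>(*location_);
          }

         private:
          const Tagged_t* location_;
        };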
      
      As a drive-by, fix some non-atomic accesses to the off-heap string
      table, also using the new slot.
      
      Bug: chromium:1109553
      Bug: chromium:1115116
      Bug: chromium:1115559
      Bug: chromium:1115683
      Change-Id: I819ed7bf820e9ef98ad5d5f9d0d592efbb6f5aa6
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2352489
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#69381}
  4. 31 Jul, 2020 1 commit
    • [heap] Share RO_SPACE pages with pointer compression · c7d22c49
      Dan Elphick authored
      This allows the configuration v8_enable_shared_ro_heap and
      v8_enable_pointer_compression on Linux and Android, although it still
      defaults to off.
      
      When pointer compression and read-only heap sharing are enabled, sharing
      is achieved by allocating ReadOnlyPages in shared memory that are
      retained in the shared ReadOnlyArtifacts object. These ReadOnlyPages are
      then remapped into the address space of the Isolate, ultimately using
      mremap.
      
      To simplify the creation process the ReadOnlySpace memory for the first
      Isolate is created as before without any sharing. It is only when the
      ReadOnlySpace memory has been finalized that the shared memory is
      allocated and has its contents copied into it. The original memory is
      then released (with pointer compression this means it's just released
      back to the BoundedPageAllocator) and immediately re-allocated as a
      shared mapping.
      
      Because we would like to make v8_enable_shared_ro_heap default to true
      at some point but can't make this conditional on the value returned by
      a method in the code we are yet to compile, the code required for
      sharing has been mostly changed to use ifs with
      ReadOnlyHeap::IsReadOnlySpaceShared() instead of #ifdefs, except where
      a compile error would result from the absence of class members
      without sharing. IsReadOnlySpaceShared() will evaluate
      CanAllocateSharedPages in the platform PageAllocator (with pointer
      compression and sharing enabled) once and cache that value so sharing
      cannot be toggled during the lifetime of the process.
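
      A minimal sketch of the evaluate-once pattern (the macro names and the
      platform query are assumptions of this sketch, not V8's exact
      spelling):

        bool CanAllocateSharedPages() { return true; }  // stub platform query

        bool IsReadOnlySpaceShared() {
          // Initialized exactly once, on first use; the answer can then
          // never change for the lifetime of the process.
          static const bool shared =
        #if defined(SHARED_RO_HEAP) && defined(POINTER_COMPRESSION)
              CanAllocateSharedPages();
        #else
              false;
        #endif
          return shared;
        }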
      
      Bug: v8:10454
      Change-Id: I0236d752047ecce71bd64c159430517a712bc1e2
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2267300
      Commit-Queue: Dan Elphick <delphick@chromium.org>
      Reviewed-by: Igor Sheludko <ishell@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#69174}
  5. 28 Jul, 2020 1 commit
    • [TurboProp] Add reference map population to fast reg alloc. · e9a37bf8
      Ross McIlroy authored
      Adds support for populating reference maps to the fast
      register allocator. In order to calculate whether a stack slot
      is live at a given instruction, we use the dominator tree to
      build a bitmap of blocks which are dominated by each block.
      A variable's spill operand is classed as live for any blocks that are
      dominated by the block it was defined in, until the instruction index
      of the spill operand's last use. As such, it may be classified as live
      down a branch where the spill operand is never used; however, this is
      safe, since the spill slot won't be re-allocated until after its
      last-use instruction index in any case.
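
      A minimal sketch of that liveness rule (the dominance bitmaps are
      assumed to be precomputed from the dominator tree; all names are
      illustrative, not TurboProp's real data structures):

        #include <vector>

        struct SpillRange {
          int defining_block;        // block that defines the variable
          int last_use_instr_index;  // last use of the spill operand
        };

        // A spill slot is treated as live in every block dominated by its
        // defining block, up to its last-use instruction index. This can
        // over-approximate down branches that never use the slot, which is
        // safe: the slot is simply not reused until after that index.
        bool SpillSlotIsLiveAt(
            const std::vector<std::vector<bool>>& dominated_blocks,
            const SpillRange& range, int block, int instr_index) {
          return dominated_blocks[range.defining_block][block] &&
                 instr_index <= range.last_use_instr_index;
        }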
      
      BUG=v8:9684
      
      Change-Id: I772374599ef916f57d82d468f66429e32c712ddf
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2298008
      Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#69108}
  6. 09 Jun, 2020 1 commit
    • [utils] Add OwnedVector::NewForOverwrite · ff2e485f
      Clemens Backes authored
      The existing {OwnedVector::New} value-initializes all elements, which
      means zeroing them in the case of integral types. In many cases, though,
      we know that we will overwrite the content anyway, so the
      initialization is redundant.
      In the case of assembly buffers for wasm compilation, this zeroing
      accounted for several percent of execution time in some benchmarks.
      
      Hence this CL introduces a new {OwnedVector::NewForOverwrite} (along the
      lines of {std::make_unique_for_overwrite}), which only
      default-initializes the values (meaning no initialization for integral
      values).
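
      The distinction boils down to value-initialization (`new T[n]()`,
      which zeroes integral types) versus default-initialization
      (`new T[n]`, which leaves them indeterminate). A minimal sketch with a
      stand-in type:

        #include <cstddef>
        #include <memory>

        template <typename T>
        struct OwnedArray {  // stand-in for OwnedVector
          std::unique_ptr<T[]> data;
          size_t size;

          static OwnedArray New(size_t n) {
            return {std::unique_ptr<T[]>(new T[n]()), n};  // zeroed
          }
          static OwnedArray NewForOverwrite(size_t n) {
            return {std::unique_ptr<T[]>(new T[n]), n};  // no zeroing pass
          }
        };

        // A compilation buffer that is fully written before being read can
        // safely skip the zeroing:
        //   auto buffer = OwnedArray<unsigned char>::NewForOverwrite(65536);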
      
      R=thibaudm@chromium.org
      
      Bug: v8:10576
      Change-Id: I8d2806088acebe8a264dea2c7ed74b0423671d4f
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2237140
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Reviewed-by: Thibaud Michaud <thibaudm@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#68268}
  7. 18 May, 2020 1 commit
    • [utils] Synchronize across StdoutStream instances · a0687c71
      Clemens Backes authored
      We constantly fight against scrambled output with --print-wasm-code and
      other flags. Passing --single-threaded only partially mitigates this,
      because there could still be multiple isolates (e.g. Workers), and we
      sometimes failed to actually execute in a single thread even when that
      flag was set.
      Hence this CL solves the problem in a more fundamental way: Whenever a
      {StdoutStream} is constructed, it implicitly takes a global recursive
      mutex. The recursive mutex is needed because we still have some printing
      methods that don't take a stream as a parameter, and instead create
      their own instance of {StdoutStream}, which of course should not crash.
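
      A minimal sketch of the scheme (simplified; not V8's actual
      StdoutStream):

        #include <iostream>
        #include <mutex>

        class StdoutStream {
         public:
          // Constructing the stream takes a global recursive mutex, so
          // concurrent writers serialize, and printing helpers that create
          // their own nested StdoutStream do not deadlock.
          StdoutStream() : lock_(mutex_) {}

          template <typename T>
          StdoutStream& operator<<(const T& value) {
            std::cout << value;
            return *this;
          }

         private:
          static std::recursive_mutex mutex_;
          std::lock_guard<std::recursive_mutex> lock_;
        };
        std::recursive_mutex StdoutStream::mutex_;

        // Nested use is fine because the mutex is recursive:
        //   void PrintInner() { StdoutStream{} << "inner\n"; }
        //   void PrintOuter() { StdoutStream s; s << "outer: "; PrintInner(); }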
      
      The overhead of taking a mutex should be acceptable, since output to
      stdout mostly happens if special tracing flags have been passed, and is
      slow anyway.
      
      This CL ensures that the {StdoutStream} is used at least for
      --print-code, --print-wasm-code, and --trace-turbo-graph.
      More flags can later be ported on demand.
      
      The {JSHeapBroker} class was modified to not contain a {StdoutStream},
      but instead create one on demand.
      
      R=mlippautz@chromium.org, tebbi@chromium.org
      CC=ahaas@chromium.org
      
      Bug: v8:10506
      Change-Id: Ib9cf8d76aa79553b4215bb7775e6d47a8179aafa
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2201767
      Reviewed-by: Andreas Haas <ahaas@chromium.org>
      Reviewed-by: Michael Lippautz <mlippautz@chromium.org>
      Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#67855}
  8. 14 May, 2020 1 commit
    • [offthread] Add off thread deserialization · 595609fb
      Leszek Swirski authored
      Add a new OffThreadObjectDeserializer, which can deserialize a snapshot
      into an OffThreadIsolate.
      
      This involves templating the Deserializer base class on Isolate, and
      amending OffThreadHeap to be able to create Reservations same as the
      main-thread Heap can. Various off-thread incompatible methods are
      stubbed out as UNREACHABLE in OffThreadIsolate overloads.
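
      A minimal sketch of that templating (stand-in types; UNREACHABLE is
      modeled with std::abort):

        #include <cstdlib>

        struct Isolate {};
        struct OffThreadIsolate {};

        template <typename IsolateT>
        class Deserializer {
         public:
          explicit Deserializer(IsolateT* isolate) : isolate_(isolate) {}
          void RehashObjects();  // defined per isolate type below

         private:
          IsolateT* isolate_;
        };

        // Main-thread instantiation: the real work would happen here.
        template <>
        void Deserializer<Isolate>::RehashObjects() {}

        // Off-thread instantiation: not supported, stubbed out.
        template <>
        void Deserializer<OffThreadIsolate>::RehashObjects() {
          std::abort();  // UNREACHABLE
        }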
      
      There is currently no API entry into the off-thread deserialization, but
      under --stress-background-compile it now runs the CodeDeserializer (i.e.
      code cache deserialization) in a background thread.
      
      Bug: chromium:1075999
      
      Change-Id: I2453f51ae31df4d4b6aa94b0804a9d6d3a03781e
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2172741
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Igor Sheludko <ishell@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#67799}
  9. 17 Apr, 2020 2 commits
    • [base] Fix {StaticCharVector} and add {StaticOneByteVector} · e04eb281
      Clemens Backes authored
      {StaticCharVector}, according to its name, should return a
      {Vector<const char>}. For getting a {Vector<const uint8_t>}, the method
      should be called {StaticOneByteVector}, analogous to the
      {OneByteVector} methods that already exist.
      
      Also, {StaticCharVector} is constexpr, but {StaticOneByteVector} cannot
      be, since it contains a {reinterpret_cast}. The same holds for
      {Vector::cast} in general.
      
      This CL
      - changes the return type of {StaticCharVector} to be
        {Vector<const char>},
      - introduces a new {StaticOneByteVector} which returns
        {Vector<const uint8_t>},
      - fixes constexpr annotations at various methods returning {Vector}s,
      - refactors users of {StaticCharVector} to either use
        {StaticOneByteVector} instead, or work on {char} if that makes more
        sense.
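
      A minimal sketch of why one factory can be constexpr and the other
      cannot (simplified Vector type):

        #include <cstddef>
        #include <cstdint>

        template <typename T>
        struct Vector {
          const T* start;
          size_t length;
        };

        template <size_t N>
        constexpr Vector<const char> StaticCharVector(const char (&array)[N]) {
          return {array, N - 1};  // exclude the trailing '\0'
        }

        // Not constexpr: a reinterpret_cast is never allowed in a constant
        // expression, which is exactly the {Vector::cast} caveat above.
        template <size_t N>
        Vector<const uint8_t> StaticOneByteVector(const char (&array)[N]) {
          return {reinterpret_cast<const uint8_t*>(array), N - 1};
        }

        // constexpr auto kHello = StaticCharVector("hello");  // compile time
        // auto bytes = StaticOneByteVector("hello");          // runtime only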
      
      R=leszeks@chromium.org
      
      Bug: v8:10426
      Change-Id: I71e336097e41ad30f982aa6344ca3d67b3a01fe3
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2154196
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#67213}
    • [base][vector] Test constexpr factories · 80e5e2b4
      Clemens Backes authored
      Test some constexpr factories. StaticCharVector is not actually
      constexpr; this will be fixed in a follow-up CL.
      
      R=leszeks@chromium.org
      
      Bug: v8:10426
      Change-Id: I16fdf79cd7d4b3f54d7cf73e15bdff2306810f06
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2154192
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#67210}
  10. 05 Mar, 2020 3 commits
    • Reland "[wasm] Further reduce the size of WasmCode" · 13cdf3a7
      Clemens Backes authored
      This is a reland of 79398ab0
      
      Original change's description:
      > [wasm] Further reduce the size of WasmCode
      >
      > Also, save dynamic allocations (plus their memory overhead).
      > This is realized by storing the relocation information, source position
      > table, and protected instruction information together in one "metadata"
      > byte array.
      > For each of the three components, we just store their size, such that
      > the accessors can return the respective {Vector} views as before.
      >
      > This makes each WasmCode object 24 bytes smaller on 64-bit
      > architectures. It also saves a few more bytes per code object because
      > less padding is needed for the individual allocations, and each dynamic
      > allocation comes with some constant memory overhead.
      >
      > Since the protected instructions will just be stored in a byte array
      > now, some APIs are refactored to just return that byte array directly
      > (instead of an array of {ProtectedInstructionData}). This also
      > simplifies serialization and deserialization, and will allow for
      > switching to a more compact representation in the future.
      >
      > Drive-by: Add some more checks to {Vector::cast} to protect against
      >   undefined behaviour.
      >
      > R=ahaas@chromium.org
      >
      > Bug: v8:10254
      > Change-Id: I81ca847023841110e3e52cc402fcb0349325d7af
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2078545
      > Reviewed-by: Andreas Haas <ahaas@chromium.org>
      > Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      > Commit-Queue: Clemens Backes <clemensb@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#66596}
      
      Tbr: ahaas@chromium.org
      Bug: v8:10254
      Change-Id: Idcdcb4f13c3eb7a3f7fb5ef8a1229103ca0ae975
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2089934
      Reviewed-by: Clemens Backes <clemensb@chromium.org>
      Reviewed-by: Jakob Kummerow <jkummerow@chromium.org>
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#66598}
    • Revert "[wasm] Further reduce the size of WasmCode" · 28afd1c9
      Clemens Backes authored
      This reverts commit 79398ab0.
      
      Reason for revert: Makes UBSan unhappy: https://ci.chromium.org/p/v8/builders/ci/V8%20Linux64%20UBSan/10186
      
      Original change's description:
      > [wasm] Further reduce the size of WasmCode
      > 
      > Also, save dynamic allocations (plus their memory overhead).
      > This is realized by storing the relocation information, source position
      > table, and protected instruction information together in one "metadata"
      > byte array.
      > For each of the three components, we just store their size, such that
      > the accessors can return the respective {Vector} views as before.
      > 
      > This makes each WasmCode object 24 bytes smaller on 64-bit
      > architectures. It also saves a few more bytes per code object because
      > less padding is needed for the individual allocations, and each dynamic
      > allocation comes with some constant memory overhead.
      > 
      > Since the protected instructions will just be stored in a byte array
      > now, some APIs are refactored to just return that byte array directly
      > (instead of an array of {ProtectedInstructionData}). This also
      > simplifies serialization and deserialization, and will allow for
      > switching to a more compact representation in the future.
      > 
      > Drive-by: Add some more checks to {Vector::cast} to protect against
      >   undefined behaviour.
      > 
      > R=​ahaas@chromium.org
      > 
      > Bug: v8:10254
      > Change-Id: I81ca847023841110e3e52cc402fcb0349325d7af
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2078545
      > Reviewed-by: Andreas Haas <ahaas@chromium.org>
      > Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      > Commit-Queue: Clemens Backes <clemensb@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#66596}
      
      TBR=jkummerow@chromium.org,ahaas@chromium.org,clemensb@chromium.org,tebbi@chromium.org
      
      Change-Id: Id80aa82cfce8942879031032b322ee66855b5600
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: v8:10254
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2089933
      Reviewed-by: Clemens Backes <clemensb@chromium.org>
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#66597}
    • [wasm] Further reduce the size of WasmCode · 79398ab0
      Clemens Backes authored
      Also, save dynamic allocations (plus their memory overhead).
      This is realized by storing the relocation information, source position
      table, and protected instruction information together in one "metadata"
      byte array.
      For each of the three components, we just store their size, such that
      the accessors can return the respective {Vector} views as before.
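
      A minimal sketch of that layout (illustrative stand-in types):

        #include <cstddef>
        #include <cstdint>

        struct ByteView {
          const uint8_t* start;
          size_t length;
        };

        // One backing byte array holds reloc info, source positions, and
        // protected-instruction data back-to-back; only the sizes are
        // stored, and the accessors rebuild the three views on demand.
        class CodeMetadata {
         public:
          CodeMetadata(const uint8_t* data, size_t reloc_size,
                       size_t source_pos_size, size_t protected_size)
              : data_(data),
                reloc_size_(reloc_size),
                source_pos_size_(source_pos_size),
                protected_size_(protected_size) {}

          ByteView reloc_info() const { return {data_, reloc_size_}; }
          ByteView source_positions() const {
            return {data_ + reloc_size_, source_pos_size_};
          }
          ByteView protected_instructions() const {
            return {data_ + reloc_size_ + source_pos_size_, protected_size_};
          }

         private:
          const uint8_t* data_;  // one allocation instead of three
          size_t reloc_size_, source_pos_size_, protected_size_;
        };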
      
      This makes each WasmCode object 24 bytes smaller on 64-bit
      architectures. It also saves a few more bytes per code object because
      less padding is needed for the individual allocations, and each dynamic
      allocation comes with some constant memory overhead.
      
      Since the protected instructions will just be stored in a byte array
      now, some APIs are refactored to just return that byte array directly
      (instead of an array of {ProtectedInstructionData}). This also
      simplifies serialization and deserialization, and will allow for
      switching to a more compact representation in the future.
      
      Drive-by: Add some more checks to {Vector::cast} to protect against
        undefined behaviour.
      
      R=ahaas@chromium.org
      
      Bug: v8:10254
      Change-Id: I81ca847023841110e3e52cc402fcb0349325d7af
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2078545
      Reviewed-by: Andreas Haas <ahaas@chromium.org>
      Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#66596}
  11. 20 Feb, 2020 1 commit
    • [wasm] Avoid unnecessary jump tables · 1403fd7d
      Clemens Backes authored
      If multiple code spaces are created, each of them currently gets its own
      jump table (on 64-bit platforms). Since we try to allocate new code
      spaces right after existing ones, this is often not necessary. We could
      instead reuse the existing jump table(s).
      This saves code space for the unneeded jump tables and avoids the cost
      patching the redundant jump tables when we replace code objects.
      
      This CL implements this by checking whether an existing jump table (or
      pair of far jump table and (near) jump table) fully covers a new code
      space, and reuses the existing jump table in that case.
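
      A minimal sketch of the coverage check (the near-jump limit and all
      names are illustrative assumptions, not the real constants):

        #include <algorithm>
        #include <cstddef>
        #include <cstdint>

        using Address = uintptr_t;
        constexpr size_t kMaxNearJumpDistance = size_t{128} * 1024 * 1024;

        struct JumpTable {
          Address start;
          size_t size;
        };

        // An existing jump table can serve a new code space only if every
        // address in the space can reach every table slot with a near jump;
        // otherwise the new space needs its own table.
        bool TableCoversCodeSpace(const JumpTable& table, Address space_start,
                                  size_t space_size) {
          Address table_end = table.start + table.size;
          Address space_end = space_start + space_size;
          size_t dist_a = space_end > table.start
                              ? static_cast<size_t>(space_end - table.start)
                              : 0;
          size_t dist_b = table_end > space_start
                              ? static_cast<size_t>(table_end - space_start)
                              : 0;
          return std::max(dist_a, dist_b) <= kMaxNearJumpDistance;
        }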
      
      R=ahaas@chromium.org
      
      Change-Id: Id8751b9c4036cf8f85f9baa2b0be8b2cfb5716ff
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2043846
      Reviewed-by: Andreas Haas <ahaas@chromium.org>
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#66364}