- 25 Mar, 2021 2 commits
-
-
Michael Achenbach authored
This reverts commit db16dce2.

Reason for revert: https://ci.chromium.org/p/v8/builders/ci/V8%20Blink%20Linux%20Debug/8771

Original change's description:
> [api] Assign serial numbers when template infos are added to cache
>
> Instead of assigning serial numbers when the template infos are
> created, this patch creates serial numbers only when they are added to
> cache.
>
> This way only the ones that are first instantiated are allocated the
> fast template cache. Previously, various accessors and methods that
> would almost never get instantiated got assigned to the fast template
> cache.
>
> Bug: v8:11284
> Change-Id: I6b633e56e59cbfc3fa5d4ee2db53ca2849eecdd7
> Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2621081
> Reviewed-by: Camillo Bruni <cbruni@chromium.org>
> Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
> Commit-Queue: Sathya Gunasekaran <gsathya@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#73655}

Bug: v8:11284
Change-Id: I382915b2c1be1d87d7a7a961d13e1dd5e3951a4f
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2786844
Auto-Submit: Michael Achenbach <machenbach@chromium.org>
Commit-Queue: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
Bot-Commit: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
Cr-Commit-Position: refs/heads/master@{#73659}
-
Sathya Gunasekaran authored
Instead of assigning serial numbers when the template infos are created, this patch creates serial numbers only when they are added to cache. This way only the ones that are first instantiated are allocated the fast template cache. Previously, various accessors and methods that would almost never get instantiated got assigned to the fast template cache.

Bug: v8:11284
Change-Id: I6b633e56e59cbfc3fa5d4ee2db53ca2849eecdd7
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2621081
Reviewed-by: Camillo Bruni <cbruni@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Commit-Queue: Sathya Gunasekaran <gsathya@chromium.org>
Cr-Commit-Position: refs/heads/master@{#73655}
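A minimal sketch of the deferred serial-number idea described above; the class and cache here are illustrative stand-ins, not V8's actual TemplateInfo machinery:

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical template-info record: the serial number starts out unset and
// is only assigned when the template is first added to the instantiation cache.
struct TemplateInfo {
  static constexpr uint32_t kUnassignedSerial = 0;
  uint32_t serial_number = kUnassignedSerial;
};

class TemplateCache {
 public:
  // Assign a serial number lazily: templates that are never instantiated
  // never consume a slot in the fast cache.
  uint32_t Add(TemplateInfo* info) {
    if (info->serial_number == TemplateInfo::kUnassignedSerial) {
      info->serial_number = next_serial_++;
    }
    cache_[info->serial_number] = info;
    return info->serial_number;
  }

 private:
  uint32_t next_serial_ = 1;  // 0 is reserved for "unassigned".
  std::unordered_map<uint32_t, TemplateInfo*> cache_;
};
```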
-
- 13 Jan, 2021 1 commit
-
-
Mythri A authored
This is a reland of 8aa6b15f with a fix for TSAN failures.

Original change's description:
> Disable bytecode flushing once we toggle coverage mode.
>
> Changing coverage mode generated different bytecode in some cases.
> Hence it is not safe to flush bytecode once we toggle coverage mode.
>
> Bug: chromium:1147917
> Change-Id: I9e640aeaec664d3d4a4aaedf809c568e9ad924fc
> Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2615020
> Commit-Queue: Mythri Alle <mythria@chromium.org>
> Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#71985}

Bug: chromium:1147917
Change-Id: Ibd8c4feb8615ba7b92fe547c55d455958c94c526
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2624612
Commit-Queue: Mythri Alle <mythria@chromium.org>
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Cr-Commit-Position: refs/heads/master@{#72062}
-
- 09 Jan, 2021 1 commit
-
-
Sathya Gunasekaran authored
This reverts commit 8aa6b15f.

Reason for revert: broke TSAN https://ci.chromium.org/ui/p/v8/builders/ci/V8%20Linux64%20TSAN%20-%20stress-incremental-marking/1497/overview

Original change's description:
> Disable bytecode flushing once we toggle coverage mode.
>
> Changing coverage mode generated different bytecode in some cases.
> Hence it is not safe to flush bytecode once we toggle coverage mode.
>
> Bug: chromium:1147917
> Change-Id: I9e640aeaec664d3d4a4aaedf809c568e9ad924fc
> Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2615020
> Commit-Queue: Mythri Alle <mythria@chromium.org>
> Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#71985}

TBR=rmcilroy@chromium.org,mythria@chromium.org

Change-Id: Id4c95da337e291437b7856e2fe7004e1e6405515
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Bug: chromium:1147917
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2619402
Reviewed-by: Sathya Gunasekaran <gsathya@chromium.org>
Commit-Queue: Sathya Gunasekaran <gsathya@chromium.org>
Cr-Commit-Position: refs/heads/master@{#71989}
-
- 08 Jan, 2021 1 commit
-
-
Mythri A authored
Changing coverage mode generated different bytecode in some cases. Hence it is not safe to flush bytecode once we toggle coverage mode.

Bug: chromium:1147917
Change-Id: I9e640aeaec664d3d4a4aaedf809c568e9ad924fc
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2615020
Commit-Queue: Mythri Alle <mythria@chromium.org>
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#71985}
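A minimal sketch of the guard this change describes, assuming a single policy object consulted by the flushing code; the names are illustrative and do not mirror V8's internals:

```cpp
// Hypothetical heap-side switch: once coverage mode has been toggled,
// bytecode flushing stays disabled, since flushed functions would later be
// recompiled with different (coverage-instrumented) bytecode.
class FlushingPolicy {
 public:
  void NotifyCoverageModeChanged() { bytecode_flushing_disabled_ = true; }

  bool CanFlushBytecode() const { return !bytecode_flushing_disabled_; }

 private:
  bool bytecode_flushing_disabled_ = false;
};
```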
-
- 24 Nov, 2020 1 commit
-
-
Georg Neis authored
Apart from removing Min and Max (utils.h), this is mostly a renaming. In a few cases I had to add a cast. In a bunch of cases I had to use initializer lists to force call-by-value for static member constants because call-by-reference wouldn't compile (like in the previous CL). In a few places I used initializer lists in place of nested min/max operations.

Bug: v8:11074
Change-Id: I53a5411be6334ff41e7a8517e6b87fb46f14d086
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2545523
Commit-Queue: Georg Neis <neis@chromium.org>
Reviewed-by: Hannes Payer <hpayer@chromium.org>
Reviewed-by: Jakob Gruber <jgruber@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Cr-Commit-Position: refs/heads/master@{#71380}
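The initializer-list remark refers to the fact that std::min(a, b) binds its arguments to const references, which odr-uses a static member constant that is declared in a header but never defined out of line, whereas std::min({a, b}) copies the values into an std::initializer_list. A stand-alone illustration (the Limits class is invented for the example):

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>

struct Limits {
  // Declared in the class but not defined out of line.
  static const size_t kMaxSize = 4096;
};

size_t Clamp(size_t requested) {
  // std::min(requested, Limits::kMaxSize) would bind Limits::kMaxSize to a
  // const size_t&, which can fail to link without an out-of-line definition.
  // The initializer-list overload copies the values instead:
  return std::min({requested, Limits::kMaxSize});
}

int main() {
  std::cout << Clamp(10000) << "\n";  // prints 4096
}
```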
-
- 17 Nov, 2020 1 commit
-
-
Leszek Swirski authored
Add a "combination" assert scope class, which combines multiple existing assert scopes. This will allow scopes with functional overlap, e.g. DisallowGarbageCollection and DisallowHeapAllocation, to share an assert type rather than rather than requiring users to remember to set both. To demonstrate this, this redefines DisallowGarbageCollection to a combination of DisallowHeapAllocation and a new DisallowSafepoints, and some of the DCHECKs checking both are simplified to only check one or the other, as appropriate. The combination classes become subclasses of the existing assert scopes, so that they can be used in their place as e.g. a function parameter, e.g. DisallowGarbageCollection can be passed to a function expecting const DisallowHeapAllocation&. As a drive-by, this also changes the per-thread assert scopes to use a bitmask, rather than a bool array, to store their per-thread data. The per-isolate scopes already used a bitmask, so this unifies the behaviour between the two. Change-Id: I209e0a56f45e124c0ccadbd9fb77f39e070612fe Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2534814 Commit-Queue: Leszek Swirski <leszeks@chromium.org> Reviewed-by:
Igor Sheludko <ishell@chromium.org> Reviewed-by:
Georg Neis <neis@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#71231}
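A rough sketch of what such a combination scope can look like, under the simplifying assumption of plain thread-local booleans (V8's real assert scopes track per-thread bits and use DCHECKs rather than assert):

```cpp
#include <cassert>

// Simplified stand-ins for two existing per-thread assert scopes.
thread_local bool heap_allocation_allowed = true;
thread_local bool safepoints_allowed = true;

class DisallowHeapAllocation {
 public:
  DisallowHeapAllocation() : old_(heap_allocation_allowed) {
    heap_allocation_allowed = false;
  }
  ~DisallowHeapAllocation() { heap_allocation_allowed = old_; }

 private:
  bool old_;
};

class DisallowSafepoints {
 public:
  DisallowSafepoints() : old_(safepoints_allowed) { safepoints_allowed = false; }
  ~DisallowSafepoints() { safepoints_allowed = old_; }

 private:
  bool old_;
};

// The combination scope is simply all of its parts at once; because it
// derives from DisallowHeapAllocation it can still be passed wherever a
// const DisallowHeapAllocation& is expected.
class DisallowGarbageCollection : public DisallowHeapAllocation,
                                  public DisallowSafepoints {};

void WithinGcFreeRegion(const DisallowHeapAllocation&) {
  assert(!heap_allocation_allowed);
}

int main() {
  DisallowGarbageCollection no_gc;
  WithinGcFreeRegion(no_gc);  // usable in place of DisallowHeapAllocation
}
```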
-
- 11 Nov, 2020 1 commit
-
-
Ulan Degenbaev authored
The new predicate allows a background thread to check if the given object was recently allocated and may potentially be unsafe to read from the background thread. The current implementation has relatively high overhead as it loads two pointers per heap space. It will be optimized in the future.

Bug: v8:11148
Change-Id: I2a9dfb2c70de4b8214b8f8a35681a8bab1a63ca8
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2532296
Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
Cr-Commit-Position: refs/heads/master@{#71130}
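The "two pointers per heap space" remark suggests a bounds check against each space's currently active allocation area; a simplified sketch with invented names, not V8's Heap API:

```cpp
#include <cstdint>

using Address = uintptr_t;

// Simplified view of a space's linear allocation area. Anything at or above
// `top_at_last_safepoint` was allocated since the background thread last
// synchronized with the main thread, so it may still be uninitialized.
struct SpaceAllocationArea {
  Address top_at_last_safepoint;
  Address limit;
};

class Heap {
 public:
  // Returns true if `addr` may point into memory allocated after the last
  // safepoint, i.e. reading it from a background thread could race with the
  // main thread initializing the object.
  bool ObjectMayBeRecentlyAllocated(Address addr) const {
    for (const SpaceAllocationArea& area : areas_) {
      // Two pointer loads per space, as the commit message notes.
      if (addr >= area.top_at_last_safepoint && addr < area.limit) return true;
    }
    return false;
  }

 private:
  static constexpr int kNumSpaces = 3;  // e.g. new, old, code (illustrative)
  SpaceAllocationArea areas_[kNumSpaces] = {};
};
```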
-
- 16 Oct, 2020 1 commit
-
-
Pierre Langlois authored
Executable V8 pages include 3 reserved OS pages: one for the writable header and two as guards. On systems with 64k OS pages, the amount of allocatable space left for objects can then be much smaller than the page size, only 64k for each 256k page. This means regular code objects cannot be larger than 64k, while the maximum regular object size is fixed to 128k, half of the page size. As a result, code objects never reach this limit and we can end up filling regular pages with a few large code objects. To fix this, we change the maximum code object size to be a runtime value, set to half of the allocatable space per page. On systems with 64k OS pages, the limit will be 32k. Alternatively, we could increase the V8 page size to 512k on Arm64 linux so we wouldn't waste code space. However, systems with 4k OS pages are more common, and those with 64k pages tend to have more memory available, so we should be able to live with it.

Bug: v8:10808
Change-Id: I5d807e7a3df89f1e9c648899e9ba2f8e2648264c
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2460809
Reviewed-by: Igor Sheludko <ishell@chromium.org>
Reviewed-by: Georg Neis <neis@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Commit-Queue: Pierre Langlois <pierre.langlois@arm.com>
Cr-Commit-Position: refs/heads/master@{#70569}
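A sketch of the arithmetic described above, using the constants from the message (3 reserved OS pages per 256k executable V8 page); the function name is illustrative:

```cpp
#include <cstddef>
#include <iostream>

// Illustrative numbers only; the real values come from the OS page size and
// V8's build configuration.
constexpr size_t kV8PageSize = 256 * 1024;  // 256k executable V8 page
constexpr size_t kReservedOsPages = 3;      // writable header + 2 guard pages

size_t MaxRegularCodeObjectSize(size_t os_page_size) {
  size_t allocatable = kV8PageSize - kReservedOsPages * os_page_size;
  // Half of the allocatable area, so at least two regular code objects fit
  // per page before falling back to the large-object space.
  return allocatable / 2;
}

int main() {
  std::cout << MaxRegularCodeObjectSize(4 * 1024) << "\n";   // 4k OS pages
  std::cout << MaxRegularCodeObjectSize(64 * 1024) << "\n";  // 64k pages -> 32k
}
```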
-
- 15 Oct, 2020 1 commit
-
-
Javad Amiri authored
Bug: v8:9533
Change-Id: I912bd5acd2cdb4c9d111711d17a01ba635b76660
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2463006
Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Reviewed-by: Jakob Gruber <jgruber@chromium.org>
Cr-Commit-Position: refs/heads/master@{#70523}
-
- 07 Oct, 2020 1 commit
-
-
Leszek Swirski authored
This relands commit 3f4e9bbe. which was a reland of c4a062a9 which was a reland of 28a30c57 which was a reland of 5d7a29c9 The change had an issue that embedders implementing heap tracing (e.g. Unified Heap with Blink) could be passed an uninitialized pointer if marking happened during deserialization of an object containing such a pointer. Because of the 0xdeadbed0 uninitialized filler value, these embedders would then receive the value 0xdeadbed0deadbed0 as the 'pointer', and crash on dereference. There is, however, special handling already for null pointers in heap tracing, also for dealing with not-yet initialized values. So, we can make the uninitialized Smi filler be 0x00000000, and that will make such embedded fields have a nullptr representation, making them follow the normal uninitialized value bailouts. In addition, it relands the following dependent changes, which are relanding unchanged and are followup performance improvements. Relanding them in the same change should allow for cleaner reverts should they be needed. This relands commit 76ad3ab5 [identity-map] Change resize heuristic This relands commit 77cc96aa [identity-map] Cache the calculated Hash This relands commit bee5b996 [serializer] Remove Deserializer::Initialize This relands commit c8f73f22 [serializer] Cache instance type in PostProcessNewObject This relands commit 4e7c99ab [identity-map] Remove double-lookups in IdentityMap Original change's description: > Reland^3 "[serializer] Allocate during deserialization" > > This is a reland of c4a062a9 > which was a reland of 28a30c57 > which was a reland of 5d7a29c9 > > Fixes TSAN errors from non-atomic writes in the deserializer. Now all > writes are (relaxed) atomic. > > Original change's description: > > Reland^2 "[serializer] Allocate during deserialization" > > > > This is a reland of 28a30c57 > > which was a reland of 5d7a29c9 > > > > The crashes were from calling RegisterDeserializerFinished on a null > > Isolate pointer, for a deserializer that was never initialised > > (specifically, ReadOnlyDeserializer when ROHeap is shared). > > > > Original change's description: > > > Reland "[serializer] Allocate during deserialization" > > > > > > This is a reland of 5d7a29c9 > > > > > > This reland shuffles around the order of checks in Heap::AllocateRawWith > > > to not check the new space addresses until it's known that this is a new > > > space allocation. This fixes an UBSan failure during read-only space > > > deserialization, which happens before the new space is initialized. > > > > > > It also fixes some issues discovered by --stress-snapshot, around > > > serializing ThinStrings (which are now elided as part of serialization), > > > handle counts (I bumped the maximum handle count in that check), and > > > clearing map transitions (the map backpointer field needed a Smi > > > uninitialized value check). > > > > > > Original change's description: > > > > [serializer] Allocate during deserialization > > > > > > > > This patch removes the concept of reservations and a specialized > > > > deserializer allocator, and instead makes the deserializer allocate > > > > directly with the Heap's Allocate method. > > > > > > > > The major consequence of this is that the GC can now run during > > > > deserialization, which means that: > > > > > > > > a) Deserialized objects are visible to the GC, and > > > > b) Objects that the deserializer/deserialized objects point to can > > > > move. 
> > > > > > > > Point a) is mostly not a problem due to previous work in making > > > > deserialized objects "GC valid", i.e. making sure that they have a valid > > > > size before any subsequent allocation/safepoint. We now additionally > > > > have to initialize the allocated space with a valid tagged value -- this > > > > is a magic Smi value to keep "uninitialized" checks simple. > > > > > > > > Point b) is solved by Handlifying the deserializer. This involves > > > > changing any vectors of objects into vectors of Handles, and any object > > > > keyed map into an IdentityMap (we can't use Handles as keys because > > > > the object's address is no longer a stable hash). > > > > > > > > Back-references can no longer be direct chunk offsets, so instead the > > > > deserializer stores a Handle to each deserialized object, and the > > > > backreference is an index into this handle array. This encoding could > > > > be optimized in the future with e.g. a second pass over the serialized > > > > array which emits a different bytecode for objects that are and aren't > > > > back-referenced. > > > > > > > > Additionally, the slot-walk over objects to initialize them can no > > > > longer use absolute slot offsets, as again an object may move and its > > > > slot address would become invalid. Now, slots are walked as relative > > > > offsets to a Handle to the object, or as absolute slots for the case of > > > > root pointers. A concept of "slot accessor" is introduced to share the > > > > code between these two modes, and writing the slot (including write > > > > barriers) is abstracted into this accessor. > > > > > > > > Finally, the Code body walk is modified to deserialize all objects > > > > referred to by RelocInfos before doing the RelocInfo walk itself. This > > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate > > > > during a RelocInfo walk. > > > > > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged > > > > size rather than byte size -- the size is expected to be tagged-aligned > > > > anyway, so now we get an extra few bits in the size encoding. > > > > > > > > Bug: chromium:1075999 > > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e > > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451 > > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > > > Cr-Commit-Position: refs/heads/master@{#70229} Bug: chromium:1075999 Change-Id: Ib514a4ef16bd02bfb60d046ecbf8fae1ead64a98 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2452689 Commit-Queue: Leszek Swirski <leszeks@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org>
Reviewed-by: Jakob Gruber <jgruber@chromium.org>
Cr-Commit-Position: refs/heads/master@{#70366}
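The filler-value reasoning above reduces to a tiny sketch: if the uninitialized tagged filler is all zero bits, an embedder field that has not been written yet reads back as nullptr and takes the existing null-pointer bailout (the values and names here are purely illustrative):

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
  // A freshly allocated, not-yet-deserialized slot, pre-filled with the
  // "uninitialized" tagged filler (previously 0xdeadbed0deadbed0).
  constexpr uint64_t kUninitializedFiller = 0x0000000000000000;
  uint64_t slot = kUninitializedFiller;

  // An embedder reading the slot as a raw pointer before it is initialized
  // now sees nullptr and can take the normal "not yet set" path.
  void* embedder_field;
  std::memcpy(&embedder_field, &slot, sizeof(embedder_field));
  std::cout << (embedder_field == nullptr ? "uninitialized\n" : "set\n");
}
```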
-
- 05 Oct, 2020 1 commit
-
-
Adam Klein authored
This reverts commit 3f4e9bbe, along with the following dependent changes (reverted to make this a clean revert): 76ad3ab5 [identity-map] Change resize heuristic 77cc96aa [identity-map] Cache the calculated Hash bee5b996 [serializer] Remove Deserializer::Initialize c8f73f22 [serializer] Cache instance type in PostProcessNewObject 4e7c99ab [identity-map] Remove double-lookups in IdentityMap Reason for revert: major crash spike on Canary (https://crbug.com/1135027) Original change's description: > Reland^3 "[serializer] Allocate during deserialization" > > This is a reland of c4a062a9 > which was a reland of 28a30c57 > which was a reland of 5d7a29c9 > > Fixes TSAN errors from non-atomic writes in the deserializer. Now all > writes are (relaxed) atomic. > > Original change's description: > > Reland^2 "[serializer] Allocate during deserialization" > > > > This is a reland of 28a30c57 > > which was a reland of 5d7a29c9 > > > > The crashes were from calling RegisterDeserializerFinished on a null > > Isolate pointer, for a deserializer that was never initialised > > (specifically, ReadOnlyDeserializer when ROHeap is shared). > > > > Original change's description: > > > Reland "[serializer] Allocate during deserialization" > > > > > > This is a reland of 5d7a29c9 > > > > > > This reland shuffles around the order of checks in Heap::AllocateRawWith > > > to not check the new space addresses until it's known that this is a new > > > space allocation. This fixes an UBSan failure during read-only space > > > deserialization, which happens before the new space is initialized. > > > > > > It also fixes some issues discovered by --stress-snapshot, around > > > serializing ThinStrings (which are now elided as part of serialization), > > > handle counts (I bumped the maximum handle count in that check), and > > > clearing map transitions (the map backpointer field needed a Smi > > > uninitialized value check). > > > > > > Original change's description: > > > > [serializer] Allocate during deserialization > > > > > > > > This patch removes the concept of reservations and a specialized > > > > deserializer allocator, and instead makes the deserializer allocate > > > > directly with the Heap's Allocate method. > > > > > > > > The major consequence of this is that the GC can now run during > > > > deserialization, which means that: > > > > > > > > a) Deserialized objects are visible to the GC, and > > > > b) Objects that the deserializer/deserialized objects point to can > > > > move. > > > > > > > > Point a) is mostly not a problem due to previous work in making > > > > deserialized objects "GC valid", i.e. making sure that they have a valid > > > > size before any subsequent allocation/safepoint. We now additionally > > > > have to initialize the allocated space with a valid tagged value -- this > > > > is a magic Smi value to keep "uninitialized" checks simple. > > > > > > > > Point b) is solved by Handlifying the deserializer. This involves > > > > changing any vectors of objects into vectors of Handles, and any object > > > > keyed map into an IdentityMap (we can't use Handles as keys because > > > > the object's address is no longer a stable hash). > > > > > > > > Back-references can no longer be direct chunk offsets, so instead the > > > > deserializer stores a Handle to each deserialized object, and the > > > > backreference is an index into this handle array. This encoding could > > > > be optimized in the future with e.g. 
a second pass over the serialized > > > > array which emits a different bytecode for objects that are and aren't > > > > back-referenced. > > > > > > > > Additionally, the slot-walk over objects to initialize them can no > > > > longer use absolute slot offsets, as again an object may move and its > > > > slot address would become invalid. Now, slots are walked as relative > > > > offsets to a Handle to the object, or as absolute slots for the case of > > > > root pointers. A concept of "slot accessor" is introduced to share the > > > > code between these two modes, and writing the slot (including write > > > > barriers) is abstracted into this accessor. > > > > > > > > Finally, the Code body walk is modified to deserialize all objects > > > > referred to by RelocInfos before doing the RelocInfo walk itself. This > > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate > > > > during a RelocInfo walk. > > > > > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged > > > > size rather than byte size -- the size is expected to be tagged-aligned > > > > anyway, so now we get an extra few bits in the size encoding. > > > > > > > > Bug: chromium:1075999 > > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e > > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451 > > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > > > Cr-Commit-Position: refs/heads/master@{#70229} > > > > > > Bug: chromium:1075999 > > > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54 > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828 > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > > Auto-Submit: Leszek Swirski <leszeks@chromium.org> > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > > > Cr-Commit-Position: refs/heads/master@{#70267} > > > > Tbr: jgruber@chromium.org,ulan@chromium.org > > Bug: chromium:1075999 > > Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991 > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#70279} > > Tbr: jgruber@chromium.org,ulan@chromium.org > Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng,v8_linux64_tsan_no_cm_rel_ng,v8_linux64_tsan_isolates_rel_ng > Bug: chromium:1075999 > Change-Id: I0b9b11644aebc4cc8b07c62a0f765b24e4d73d89 > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2445872 > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > Auto-Submit: Leszek Swirski <leszeks@chromium.org> > Reviewed-by: Dominik Inführ <dinfuehr@chromium.org> > Cr-Commit-Position: refs/heads/master@{#70288} TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org,dinfuehr@chromium.org Bug: chromium:1075999, chromium:1135027 Change-Id: I5d0d9e49c0302d94ff7291834f5f18e7a0839eb7 Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng,v8_linux64_tsan_no_cm_rel_ng,v8_linux64_tsan_isolates_rel_ng Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2451030Reviewed-by:
Adam Klein <adamk@chromium.org> Commit-Queue: Adam Klein <adamk@chromium.org> Cr-Commit-Position: refs/heads/master@{#70328}
-
- 02 Oct, 2020 3 commits
-
-
Leszek Swirski authored
This is a reland of c4a062a9 which was a reland of 28a30c57 which was a reland of 5d7a29c9 Fixes TSAN errors from non-atomic writes in the deserializer. Now all writes are (relaxed) atomic. Original change's description: > Reland^2 "[serializer] Allocate during deserialization" > > This is a reland of 28a30c57 > which was a reland of 5d7a29c9 > > The crashes were from calling RegisterDeserializerFinished on a null > Isolate pointer, for a deserializer that was never initialised > (specifically, ReadOnlyDeserializer when ROHeap is shared). > > Original change's description: > > Reland "[serializer] Allocate during deserialization" > > > > This is a reland of 5d7a29c9 > > > > This reland shuffles around the order of checks in Heap::AllocateRawWith > > to not check the new space addresses until it's known that this is a new > > space allocation. This fixes an UBSan failure during read-only space > > deserialization, which happens before the new space is initialized. > > > > It also fixes some issues discovered by --stress-snapshot, around > > serializing ThinStrings (which are now elided as part of serialization), > > handle counts (I bumped the maximum handle count in that check), and > > clearing map transitions (the map backpointer field needed a Smi > > uninitialized value check). > > > > Original change's description: > > > [serializer] Allocate during deserialization > > > > > > This patch removes the concept of reservations and a specialized > > > deserializer allocator, and instead makes the deserializer allocate > > > directly with the Heap's Allocate method. > > > > > > The major consequence of this is that the GC can now run during > > > deserialization, which means that: > > > > > > a) Deserialized objects are visible to the GC, and > > > b) Objects that the deserializer/deserialized objects point to can > > > move. > > > > > > Point a) is mostly not a problem due to previous work in making > > > deserialized objects "GC valid", i.e. making sure that they have a valid > > > size before any subsequent allocation/safepoint. We now additionally > > > have to initialize the allocated space with a valid tagged value -- this > > > is a magic Smi value to keep "uninitialized" checks simple. > > > > > > Point b) is solved by Handlifying the deserializer. This involves > > > changing any vectors of objects into vectors of Handles, and any object > > > keyed map into an IdentityMap (we can't use Handles as keys because > > > the object's address is no longer a stable hash). > > > > > > Back-references can no longer be direct chunk offsets, so instead the > > > deserializer stores a Handle to each deserialized object, and the > > > backreference is an index into this handle array. This encoding could > > > be optimized in the future with e.g. a second pass over the serialized > > > array which emits a different bytecode for objects that are and aren't > > > back-referenced. > > > > > > Additionally, the slot-walk over objects to initialize them can no > > > longer use absolute slot offsets, as again an object may move and its > > > slot address would become invalid. Now, slots are walked as relative > > > offsets to a Handle to the object, or as absolute slots for the case of > > > root pointers. A concept of "slot accessor" is introduced to share the > > > code between these two modes, and writing the slot (including write > > > barriers) is abstracted into this accessor. 
> > > > > > Finally, the Code body walk is modified to deserialize all objects > > > referred to by RelocInfos before doing the RelocInfo walk itself. This > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate > > > during a RelocInfo walk. > > > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged > > > size rather than byte size -- the size is expected to be tagged-aligned > > > anyway, so now we get an extra few bits in the size encoding. > > > > > > Bug: chromium:1075999 > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451 > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > > Cr-Commit-Position: refs/heads/master@{#70229} > > > > Bug: chromium:1075999 > > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54 > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828 > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > Auto-Submit: Leszek Swirski <leszeks@chromium.org> > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#70267} > > Tbr: jgruber@chromium.org,ulan@chromium.org > Bug: chromium:1075999 > Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991 > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > Cr-Commit-Position: refs/heads/master@{#70279} Tbr: jgruber@chromium.org,ulan@chromium.org Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng,v8_linux64_tsan_no_cm_rel_ng,v8_linux64_tsan_isolates_rel_ng Bug: chromium:1075999 Change-Id: I0b9b11644aebc4cc8b07c62a0f765b24e4d73d89 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2445872 Commit-Queue: Leszek Swirski <leszeks@chromium.org> Auto-Submit: Leszek Swirski <leszeks@chromium.org> Reviewed-by:
Dominik Inführ <dinfuehr@chromium.org> Cr-Commit-Position: refs/heads/master@{#70288}
-
Clemens Backes authored
This reverts commit c4a062a9. Reason for revert: TSan issues: https://ci.chromium.org/p/v8/builders/ci/V8%20Linux64%20TSAN/33504 Original change's description: > Reland^2 "[serializer] Allocate during deserialization" > > This is a reland of 28a30c57 > which was a reland of 5d7a29c9 > > The crashes were from calling RegisterDeserializerFinished on a null > Isolate pointer, for a deserializer that was never initialised > (specifically, ReadOnlyDeserializer when ROHeap is shared). > > Original change's description: > > Reland "[serializer] Allocate during deserialization" > > > > This is a reland of 5d7a29c9 > > > > This reland shuffles around the order of checks in Heap::AllocateRawWith > > to not check the new space addresses until it's known that this is a new > > space allocation. This fixes an UBSan failure during read-only space > > deserialization, which happens before the new space is initialized. > > > > It also fixes some issues discovered by --stress-snapshot, around > > serializing ThinStrings (which are now elided as part of serialization), > > handle counts (I bumped the maximum handle count in that check), and > > clearing map transitions (the map backpointer field needed a Smi > > uninitialized value check). > > > > Original change's description: > > > [serializer] Allocate during deserialization > > > > > > This patch removes the concept of reservations and a specialized > > > deserializer allocator, and instead makes the deserializer allocate > > > directly with the Heap's Allocate method. > > > > > > The major consequence of this is that the GC can now run during > > > deserialization, which means that: > > > > > > a) Deserialized objects are visible to the GC, and > > > b) Objects that the deserializer/deserialized objects point to can > > > move. > > > > > > Point a) is mostly not a problem due to previous work in making > > > deserialized objects "GC valid", i.e. making sure that they have a valid > > > size before any subsequent allocation/safepoint. We now additionally > > > have to initialize the allocated space with a valid tagged value -- this > > > is a magic Smi value to keep "uninitialized" checks simple. > > > > > > Point b) is solved by Handlifying the deserializer. This involves > > > changing any vectors of objects into vectors of Handles, and any object > > > keyed map into an IdentityMap (we can't use Handles as keys because > > > the object's address is no longer a stable hash). > > > > > > Back-references can no longer be direct chunk offsets, so instead the > > > deserializer stores a Handle to each deserialized object, and the > > > backreference is an index into this handle array. This encoding could > > > be optimized in the future with e.g. a second pass over the serialized > > > array which emits a different bytecode for objects that are and aren't > > > back-referenced. > > > > > > Additionally, the slot-walk over objects to initialize them can no > > > longer use absolute slot offsets, as again an object may move and its > > > slot address would become invalid. Now, slots are walked as relative > > > offsets to a Handle to the object, or as absolute slots for the case of > > > root pointers. A concept of "slot accessor" is introduced to share the > > > code between these two modes, and writing the slot (including write > > > barriers) is abstracted into this accessor. > > > > > > Finally, the Code body walk is modified to deserialize all objects > > > referred to by RelocInfos before doing the RelocInfo walk itself. 
This > > > is because RelocInfoIterator uses raw pointers, so we cannot allocate > > > during a RelocInfo walk. > > > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged > > > size rather than byte size -- the size is expected to be tagged-aligned > > > anyway, so now we get an extra few bits in the size encoding. > > > > > > Bug: chromium:1075999 > > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e > > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451 > > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > > Cr-Commit-Position: refs/heads/master@{#70229} > > > > Bug: chromium:1075999 > > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54 > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828 > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > Auto-Submit: Leszek Swirski <leszeks@chromium.org> > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#70267} > > Tbr: jgruber@chromium.org,ulan@chromium.org > Bug: chromium:1075999 > Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991 > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > Cr-Commit-Position: refs/heads/master@{#70279} TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org Change-Id: Ib2f01db4cd9b55639d6a4af971bda865edb45e84 No-Presubmit: true No-Tree-Checks: true No-Try: true Bug: chromium:1075999 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2445250Reviewed-by:
Clemens Backes <clemensb@chromium.org> Commit-Queue: Clemens Backes <clemensb@chromium.org> Cr-Commit-Position: refs/heads/master@{#70280}
-
Leszek Swirski authored
This is a reland of 28a30c57 which was a reland of 5d7a29c9 The crashes were from calling RegisterDeserializerFinished on a null Isolate pointer, for a deserializer that was never initialised (specifically, ReadOnlyDeserializer when ROHeap is shared). Original change's description: > Reland "[serializer] Allocate during deserialization" > > This is a reland of 5d7a29c9 > > This reland shuffles around the order of checks in Heap::AllocateRawWith > to not check the new space addresses until it's known that this is a new > space allocation. This fixes an UBSan failure during read-only space > deserialization, which happens before the new space is initialized. > > It also fixes some issues discovered by --stress-snapshot, around > serializing ThinStrings (which are now elided as part of serialization), > handle counts (I bumped the maximum handle count in that check), and > clearing map transitions (the map backpointer field needed a Smi > uninitialized value check). > > Original change's description: > > [serializer] Allocate during deserialization > > > > This patch removes the concept of reservations and a specialized > > deserializer allocator, and instead makes the deserializer allocate > > directly with the Heap's Allocate method. > > > > The major consequence of this is that the GC can now run during > > deserialization, which means that: > > > > a) Deserialized objects are visible to the GC, and > > b) Objects that the deserializer/deserialized objects point to can > > move. > > > > Point a) is mostly not a problem due to previous work in making > > deserialized objects "GC valid", i.e. making sure that they have a valid > > size before any subsequent allocation/safepoint. We now additionally > > have to initialize the allocated space with a valid tagged value -- this > > is a magic Smi value to keep "uninitialized" checks simple. > > > > Point b) is solved by Handlifying the deserializer. This involves > > changing any vectors of objects into vectors of Handles, and any object > > keyed map into an IdentityMap (we can't use Handles as keys because > > the object's address is no longer a stable hash). > > > > Back-references can no longer be direct chunk offsets, so instead the > > deserializer stores a Handle to each deserialized object, and the > > backreference is an index into this handle array. This encoding could > > be optimized in the future with e.g. a second pass over the serialized > > array which emits a different bytecode for objects that are and aren't > > back-referenced. > > > > Additionally, the slot-walk over objects to initialize them can no > > longer use absolute slot offsets, as again an object may move and its > > slot address would become invalid. Now, slots are walked as relative > > offsets to a Handle to the object, or as absolute slots for the case of > > root pointers. A concept of "slot accessor" is introduced to share the > > code between these two modes, and writing the slot (including write > > barriers) is abstracted into this accessor. > > > > Finally, the Code body walk is modified to deserialize all objects > > referred to by RelocInfos before doing the RelocInfo walk itself. This > > is because RelocInfoIterator uses raw pointers, so we cannot allocate > > during a RelocInfo walk. > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged > > size rather than byte size -- the size is expected to be tagged-aligned > > anyway, so now we get an extra few bits in the size encoding. 
> > > > Bug: chromium:1075999 > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451 > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#70229} > > Bug: chromium:1075999 > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54 > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828 > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > Auto-Submit: Leszek Swirski <leszeks@chromium.org> > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > Cr-Commit-Position: refs/heads/master@{#70267} Tbr: jgruber@chromium.org,ulan@chromium.org Bug: chromium:1075999 Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Commit-Queue: Leszek Swirski <leszeks@chromium.org> Cr-Commit-Position: refs/heads/master@{#70279}
-
- 01 Oct, 2020 2 commits
-
-
Zhi An Ng authored
This reverts commit 28a30c57. Reason for revert: Broke Test262 https://ci.chromium.org/p/v8/builders/ci/V8%20Linux%20-%20shared/38638? Original change's description: > Reland "[serializer] Allocate during deserialization" > > This is a reland of 5d7a29c9 > > This reland shuffles around the order of checks in Heap::AllocateRawWith > to not check the new space addresses until it's known that this is a new > space allocation. This fixes an UBSan failure during read-only space > deserialization, which happens before the new space is initialized. > > It also fixes some issues discovered by --stress-snapshot, around > serializing ThinStrings (which are now elided as part of serialization), > handle counts (I bumped the maximum handle count in that check), and > clearing map transitions (the map backpointer field needed a Smi > uninitialized value check). > > Original change's description: > > [serializer] Allocate during deserialization > > > > This patch removes the concept of reservations and a specialized > > deserializer allocator, and instead makes the deserializer allocate > > directly with the Heap's Allocate method. > > > > The major consequence of this is that the GC can now run during > > deserialization, which means that: > > > > a) Deserialized objects are visible to the GC, and > > b) Objects that the deserializer/deserialized objects point to can > > move. > > > > Point a) is mostly not a problem due to previous work in making > > deserialized objects "GC valid", i.e. making sure that they have a valid > > size before any subsequent allocation/safepoint. We now additionally > > have to initialize the allocated space with a valid tagged value -- this > > is a magic Smi value to keep "uninitialized" checks simple. > > > > Point b) is solved by Handlifying the deserializer. This involves > > changing any vectors of objects into vectors of Handles, and any object > > keyed map into an IdentityMap (we can't use Handles as keys because > > the object's address is no longer a stable hash). > > > > Back-references can no longer be direct chunk offsets, so instead the > > deserializer stores a Handle to each deserialized object, and the > > backreference is an index into this handle array. This encoding could > > be optimized in the future with e.g. a second pass over the serialized > > array which emits a different bytecode for objects that are and aren't > > back-referenced. > > > > Additionally, the slot-walk over objects to initialize them can no > > longer use absolute slot offsets, as again an object may move and its > > slot address would become invalid. Now, slots are walked as relative > > offsets to a Handle to the object, or as absolute slots for the case of > > root pointers. A concept of "slot accessor" is introduced to share the > > code between these two modes, and writing the slot (including write > > barriers) is abstracted into this accessor. > > > > Finally, the Code body walk is modified to deserialize all objects > > referred to by RelocInfos before doing the RelocInfo walk itself. This > > is because RelocInfoIterator uses raw pointers, so we cannot allocate > > during a RelocInfo walk. > > > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged > > size rather than byte size -- the size is expected to be tagged-aligned > > anyway, so now we get an extra few bits in the size encoding. 
> > > > Bug: chromium:1075999 > > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451 > > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#70229} > > Bug: chromium:1075999 > Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54 > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828 > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > Auto-Submit: Leszek Swirski <leszeks@chromium.org> > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > Cr-Commit-Position: refs/heads/master@{#70267} TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org Change-Id: Ieed68332ef6a7ad36db061e3f48be0f28673d7a2 No-Presubmit: true No-Tree-Checks: true No-Try: true Bug: chromium:1075999 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2441608Reviewed-by:
Zhi An Ng <zhin@chromium.org> Commit-Queue: Zhi An Ng <zhin@chromium.org> Cr-Commit-Position: refs/heads/master@{#70268}
-
Leszek Swirski authored
This is a reland of 5d7a29c9 This reland shuffles around the order of checks in Heap::AllocateRawWith to not check the new space addresses until it's known that this is a new space allocation. This fixes an UBSan failure during read-only space deserialization, which happens before the new space is initialized. It also fixes some issues discovered by --stress-snapshot, around serializing ThinStrings (which are now elided as part of serialization), handle counts (I bumped the maximum handle count in that check), and clearing map transitions (the map backpointer field needed a Smi uninitialized value check). Original change's description: > [serializer] Allocate during deserialization > > This patch removes the concept of reservations and a specialized > deserializer allocator, and instead makes the deserializer allocate > directly with the Heap's Allocate method. > > The major consequence of this is that the GC can now run during > deserialization, which means that: > > a) Deserialized objects are visible to the GC, and > b) Objects that the deserializer/deserialized objects point to can > move. > > Point a) is mostly not a problem due to previous work in making > deserialized objects "GC valid", i.e. making sure that they have a valid > size before any subsequent allocation/safepoint. We now additionally > have to initialize the allocated space with a valid tagged value -- this > is a magic Smi value to keep "uninitialized" checks simple. > > Point b) is solved by Handlifying the deserializer. This involves > changing any vectors of objects into vectors of Handles, and any object > keyed map into an IdentityMap (we can't use Handles as keys because > the object's address is no longer a stable hash). > > Back-references can no longer be direct chunk offsets, so instead the > deserializer stores a Handle to each deserialized object, and the > backreference is an index into this handle array. This encoding could > be optimized in the future with e.g. a second pass over the serialized > array which emits a different bytecode for objects that are and aren't > back-referenced. > > Additionally, the slot-walk over objects to initialize them can no > longer use absolute slot offsets, as again an object may move and its > slot address would become invalid. Now, slots are walked as relative > offsets to a Handle to the object, or as absolute slots for the case of > root pointers. A concept of "slot accessor" is introduced to share the > code between these two modes, and writing the slot (including write > barriers) is abstracted into this accessor. > > Finally, the Code body walk is modified to deserialize all objects > referred to by RelocInfos before doing the RelocInfo walk itself. This > is because RelocInfoIterator uses raw pointers, so we cannot allocate > during a RelocInfo walk. > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged > size rather than byte size -- the size is expected to be tagged-aligned > anyway, so now we get an extra few bits in the size encoding. 
> > Bug: chromium:1075999 > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451 > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Cr-Commit-Position: refs/heads/master@{#70229} Bug: chromium:1075999 Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828 Commit-Queue: Leszek Swirski <leszeks@chromium.org> Auto-Submit: Leszek Swirski <leszeks@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org>
Reviewed-by: Jakob Gruber <jgruber@chromium.org>
Cr-Commit-Position: refs/heads/master@{#70267}
-
- 30 Sep, 2020 2 commits
-
-
Leszek Swirski authored
This reverts commit 5d7a29c9. Reason for revert: UBSan -- https://ci.chromium.org/p/v8/builders/ci/V8%20Linux64%20UBSan/13100 Original change's description: > [serializer] Allocate during deserialization > > This patch removes the concept of reservations and a specialized > deserializer allocator, and instead makes the deserializer allocate > directly with the Heap's Allocate method. > > The major consequence of this is that the GC can now run during > deserialization, which means that: > > a) Deserialized objects are visible to the GC, and > b) Objects that the deserializer/deserialized objects point to can > move. > > Point a) is mostly not a problem due to previous work in making > deserialized objects "GC valid", i.e. making sure that they have a valid > size before any subsequent allocation/safepoint. We now additionally > have to initialize the allocated space with a valid tagged value -- this > is a magic Smi value to keep "uninitialized" checks simple. > > Point b) is solved by Handlifying the deserializer. This involves > changing any vectors of objects into vectors of Handles, and any object > keyed map into an IdentityMap (we can't use Handles as keys because > the object's address is no longer a stable hash). > > Back-references can no longer be direct chunk offsets, so instead the > deserializer stores a Handle to each deserialized object, and the > backreference is an index into this handle array. This encoding could > be optimized in the future with e.g. a second pass over the serialized > array which emits a different bytecode for objects that are and aren't > back-referenced. > > Additionally, the slot-walk over objects to initialize them can no > longer use absolute slot offsets, as again an object may move and its > slot address would become invalid. Now, slots are walked as relative > offsets to a Handle to the object, or as absolute slots for the case of > root pointers. A concept of "slot accessor" is introduced to share the > code between these two modes, and writing the slot (including write > barriers) is abstracted into this accessor. > > Finally, the Code body walk is modified to deserialize all objects > referred to by RelocInfos before doing the RelocInfo walk itself. This > is because RelocInfoIterator uses raw pointers, so we cannot allocate > during a RelocInfo walk. > > As a drive-by, the VariableRawData bytecode is tweaked to use tagged > size rather than byte size -- the size is expected to be tagged-aligned > anyway, so now we get an extra few bits in the size encoding. > > Bug: chromium:1075999 > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451 > Commit-Queue: Leszek Swirski <leszeks@chromium.org> > Reviewed-by: Jakob Gruber <jgruber@chromium.org> > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Cr-Commit-Position: refs/heads/master@{#70229} TBR=ulan@chromium.org,jgruber@chromium.org,leszeks@chromium.org Change-Id: I2bd792a24861e8f54897e51522769b50f8f814e2 No-Presubmit: true No-Tree-Checks: true No-Try: true Bug: chromium:1075999 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440827 Commit-Queue: Leszek Swirski <leszeks@chromium.org> Reviewed-by:
Leszek Swirski <leszeks@chromium.org> Cr-Commit-Position: refs/heads/master@{#70231}
-
Leszek Swirski authored
This patch removes the concept of reservations and a specialized deserializer allocator, and instead makes the deserializer allocate directly with the Heap's Allocate method. The major consequence of this is that the GC can now run during deserialization, which means that: a) Deserialized objects are visible to the GC, and b) Objects that the deserializer/deserialized objects point to can move. Point a) is mostly not a problem due to previous work in making deserialized objects "GC valid", i.e. making sure that they have a valid size before any subsequent allocation/safepoint. We now additionally have to initialize the allocated space with a valid tagged value -- this is a magic Smi value to keep "uninitialized" checks simple. Point b) is solved by Handlifying the deserializer. This involves changing any vectors of objects into vectors of Handles, and any object keyed map into an IdentityMap (we can't use Handles as keys because the object's address is no longer a stable hash). Back-references can no longer be direct chunk offsets, so instead the deserializer stores a Handle to each deserialized object, and the backreference is an index into this handle array. This encoding could be optimized in the future with e.g. a second pass over the serialized array which emits a different bytecode for objects that are and aren't back-referenced. Additionally, the slot-walk over objects to initialize them can no longer use absolute slot offsets, as again an object may move and its slot address would become invalid. Now, slots are walked as relative offsets to a Handle to the object, or as absolute slots for the case of root pointers. A concept of "slot accessor" is introduced to share the code between these two modes, and writing the slot (including write barriers) is abstracted into this accessor. Finally, the Code body walk is modified to deserialize all objects referred to by RelocInfos before doing the RelocInfo walk itself. This is because RelocInfoIterator uses raw pointers, so we cannot allocate during a RelocInfo walk. As a drive-by, the VariableRawData bytecode is tweaked to use tagged size rather than byte size -- the size is expected to be tagged-aligned anyway, so now we get an extra few bits in the size encoding. Bug: chromium:1075999 Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451 Commit-Queue: Leszek Swirski <leszeks@chromium.org> Reviewed-by:
Jakob Gruber <jgruber@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Cr-Commit-Position: refs/heads/master@{#70229}
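A rough sketch of the handle-indexed back-reference scheme this commit describes; the types below are stand-ins (a real Handle stays valid across GC moves, which a shared_ptr merely approximates for illustration):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Stand-ins for a heap object and a GC-safe handle to it.
struct HeapObject { int payload = 0; };
using Handle = std::shared_ptr<HeapObject>;

class Deserializer {
 public:
  // Every deserialized object gets a handle; its position in `back_refs_`
  // is the back-reference index that later bytecodes use.
  size_t RecordObject(Handle object) {
    back_refs_.push_back(std::move(object));
    return back_refs_.size() - 1;
  }

  // A back-reference is no longer a chunk offset (objects can move once the
  // GC may run during deserialization); it is an index into the handle list.
  Handle ResolveBackReference(size_t index) const { return back_refs_.at(index); }

 private:
  std::vector<Handle> back_refs_;
};

int main() {
  Deserializer d;
  size_t index = d.RecordObject(std::make_shared<HeapObject>());
  d.ResolveBackReference(index)->payload = 42;
}
```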
-
- 22 Sep, 2020 1 commit
-
-
Dominik Inführ authored
Added scopes to disallow/allow GCs from happening using a DCHECK. It is stricter than DisallowHeapAllocation, since this also doesn't allow safepoints. As soon as Turbofan is ready, we can replace all usages of DisallowHeapAllocation with DisallowGarbageCollection.

Bug: v8:10315
Change-Id: I12c144ec099d9af57d692ff343adbe7aec46c0c7
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2362960
Reviewed-by: Igor Sheludko <ishell@chromium.org>
Reviewed-by: Jakob Gruber <jgruber@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Reviewed-by: Georg Neis <neis@chromium.org>
Commit-Queue: Dominik Inführ <dinfuehr@chromium.org>
Cr-Commit-Position: refs/heads/master@{#70042}
-
- 14 Sep, 2020 1 commit
-
-
Daniel Bevenius authored
This commit adds a check in Heap::AllocateRaw when setting the large_object variable, for allocations of AllocationType kCode, to take the CodeSpace's area size into account. The motivation for this change is that without this check it is possible that size_in_bytes is less than 128, and hence not considered a large object, but it might be larger than the available space in code_space->AreaSize(), which will cause the object to be created in the CodeLargeObjectSpace. This will later cause a segmentation fault when calling the following chain of functions:

  if (!large_object) {
    MemoryChunk::FromHeapObject(heap_object)
        ->GetCodeObjectRegistry()
        ->RegisterNewlyAllocatedCodeObject(heap_object.address());
  }

We (Red Hat) ran into this issue when running Node.js v12.16.1 in combination with yarn on aarch64 (this was the only architecture this happened on).

Bug: v8:10808
Change-Id: I0c396b0eb64bc4cc91d9a3be521254f3130eac7b
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2390665
Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Cr-Commit-Position: refs/heads/master@{#69876}
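The fix amounts to comparing code allocations against the code space's usable area rather than only the generic large-object threshold; a simplified sketch with made-up constants, not the actual Heap::AllocateRaw code:

```cpp
#include <cstddef>

enum class AllocationType { kOld, kCode };

// Illustrative threshold; the real value is a V8 heap constant.
constexpr size_t kMaxRegularHeapObjectSize = 128 * 1024;

size_t CodeSpaceAreaSize() {
  // On systems with 64k OS pages the usable area of a code page can be much
  // smaller than kMaxRegularHeapObjectSize (see the commit message above).
  return 64 * 1024;
}

bool IsLargeObject(size_t size_in_bytes, AllocationType type) {
  if (type == AllocationType::kCode) {
    // Without this clause an allocation of, say, 80k would not be treated as
    // large even though it cannot fit in a regular code page.
    return size_in_bytes > CodeSpaceAreaSize();
  }
  return size_in_bytes > kMaxRegularHeapObjectSize;
}
```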
-
- 31 Aug, 2020 1 commit
-
-
Jake Hughes authored
With conservative stack scanning enabled, a snapshot of the call stack upon entry to GC will be used to determine part of the root-set. When the collector walks the stack, it looks at each value and determines whether it could be a potential on-heap object pointer. However, unlike with Handles, these on-stack pointers aren't guaranteed to point to the start of the object: the compiler may decide to hide these pointers, and create interior pointers in C++ frames which the GC doesn't know about. The solution to this is to include an object start bitmap in the header of each page. Each bit in the bitmap represents a word in the page payload and is set when an object is allocated at that word. This means that when the collector finds an arbitrary potential pointer into the page, it can walk backwards through the bitmap until it finds the relevant object's base pointer. To prevent the bitmap becoming stale after compaction, it is rebuilt during object sweeping. This is experimental, and currently only works with inline allocation disabled and single generational collection.

Bug: v8:10614
Change-Id: I28ebd9562f58f335f8b3c2d1189cdf39feaa1f52
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2375195
Commit-Queue: Anton Bikineev <bikineev@chromium.org>
Reviewed-by: Michael Achenbach <machenbach@chromium.org>
Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
Reviewed-by: Anton Bikineev <bikineev@chromium.org>
Cr-Commit-Position: refs/heads/master@{#69615}
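A condensed sketch of the lookup that an object-start bitmap enables, with one bit per word of page payload; the sizes and names are illustrative:

```cpp
#include <bitset>
#include <cstddef>
#include <cstdint>

constexpr size_t kPageSize = 1 << 18;  // 256k page (illustrative)
constexpr size_t kWordSize = sizeof(void*);
constexpr size_t kWordsPerPage = kPageSize / kWordSize;

class ObjectStartBitmap {
 public:
  // Called on allocation: mark the word at which an object begins.
  void SetBit(uintptr_t object_start, uintptr_t page_start) {
    bits_.set((object_start - page_start) / kWordSize);
  }

  // Given an arbitrary (possibly interior) pointer into the page, walk
  // backwards to the closest set bit: that word is the object's base pointer.
  uintptr_t FindBasePointer(uintptr_t maybe_interior, uintptr_t page_start) const {
    size_t index = (maybe_interior - page_start) / kWordSize;
    while (index > 0 && !bits_.test(index)) --index;
    return page_start + index * kWordSize;
  }

 private:
  std::bitset<kWordsPerPage> bits_;
};
```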
-
- 14 Aug, 2020 1 commit
-
-
Leszek Swirski authored
This patch introduces a new LocalIsolate and LocalFactory, which use LocalHeap and replace OffThreadIsolate and OffThreadFactory. This allows us to remove those classes, as well as the related OffThreadSpace, OffThreadLargeObjectSpace, OffThreadHeap, and OffThreadTransferHandle. OffThreadLogger becomes LocalLogger. LocalHeap behaves more like Heap than OffThreadHeap did, so this allows us to additionally remove the concept of "Finish" and "Publish" that the OffThreadIsolate had, and allows us to internalize strings directly with the newly-concurrent string table (where the implementation can now move to FactoryBase). This patch also removes the off-thread support from the deserializer entirely, as well as removing the LocalIsolateWrapper which allowed run-time distinction between Isolate and OffThreadIsolate. LocalHeap doesn't support the reservation model used by the deserializer, and we will likely move the deserializer to use LocalIsolate unconditionally once we figure out the details of how to do this. Bug: chromium:1011762 Change-Id: I1a1a0a72952b19a8a4c167c11a863c153a1252fc Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2315990 Commit-Queue: Andreas Haas <ahaas@chromium.org> Auto-Submit: Leszek Swirski <leszeks@chromium.org> Reviewed-by:
Andreas Haas <ahaas@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Reviewed-by:
Jakob Gruber <jgruber@chromium.org> Reviewed-by:
Dominik Inführ <dinfuehr@chromium.org> Cr-Commit-Position: refs/heads/master@{#69397}
-
- 12 Aug, 2020 2 commits
-
-
Dominik Inführ authored
Move external memory counters out of IsolateData back into Heap. The class ExternalMemoryAccounting now stores all counters and is responsible for updates. This change will allow turning counters into atomic variables. Bug: v8:10315 Change-Id: I2abeda298d3cfcc630fd04ca78a3d6d703e3b419 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2346647Reviewed-by:
Jakob Gruber <jgruber@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Commit-Queue: Dominik Inführ <dinfuehr@chromium.org> Cr-Commit-Position: refs/heads/master@{#69356}
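A minimal sketch of what counters behind an accounting class with atomic updates could look like; the class and member names are assumptions for illustration, not V8's actual ExternalMemoryAccounting.

#include <atomic>
#include <cstdint>

// Illustrative sketch: a counter for externally allocated memory that may
// be updated from multiple threads. Relaxed ordering is assumed to be
// sufficient because the value is only used as a statistic / GC heuristic.
class ExternalMemoryAccounting {
 public:
  void Update(int64_t delta) {
    total_.fetch_add(delta, std::memory_order_relaxed);
  }
  int64_t total() const {
    return total_.load(std::memory_order_relaxed);
  }

 private:
  std::atomic<int64_t> total_{0};
};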
-
Dominik Inführ authored
The only user was ArrayBufferTracker, which has already been removed. Bug: v8:10064 Change-Id: I97f8ed0727abec01b3b65ba965026f61fb9acb85 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2346406 Commit-Queue: Dominik Inführ <dinfuehr@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#69354}
-
- 06 Aug, 2020 1 commit
-
-
Leszek Swirski authored
Changes the isolate's string table into an off-heap structure. This allows the string table to be resized without allocating on the V8 heap, and potentially triggering a GC. This allows existing strings to be inserted into the string table without requiring allocation. This has two important benefits: 1) It allows the deserializer to insert strings directly into the string table, rather than having to defer string insertion until deserialization completes. 2) It simplifies the concurrent string table lookup to allow resizing the table inside the write lock, therefore eliminating the race where two concurrent lookups could both resize the table. The off-heap string table has the following properties: 1) The general hashmap behaviour matches the HashTable, i.e. open addressing, power-of-two sized, quadratic probing. This could, of course, now be changed. 2) The empty and deleted sentinels are changed to Smi 0 and 1, respectively, to make those comparisons a bit cheaper and not require roots access. 3) When the HashTable is resized, the old elements array is kept alive in a linked list of previous arrays, so that concurrent lookups don't lose the data they're accessing. This linked list is cleared by the GC, as then we know that all threads are in a safepoint. 4) The GC treats the hash table entries as weak roots, and only walks them for non-live reference clearing and for evacuation. 5) Since there is no longer a FixedArray to serialize for the startup snapshot, there is now a custom serialization of the string table, and the string table root is considered unserializable during weak root iteration. As a bonus, the custom serialization is more efficient, as it skips non-string entries. As a drive-by, rename LookupStringExists_NoAllocate to TryStringToIndexOrLookupExisting, to make it clearer that it returns a non-string for the case when the string is an array index. As another drive-by, extract StringSet into a separate header. Bug: v8:10729 Change-Id: I9c990fb2d74d1fe222920408670974a70e969bca Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2339104 Commit-Queue: Leszek Swirski <leszeks@chromium.org> Reviewed-by:
Jakob Gruber <jgruber@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#69270}
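A minimal sketch of the probing scheme in point 1 and the sentinel choice in point 2, using plain integers in place of tagged Smi values; the helper names are assumptions, not the actual string table code.

#include <cstdint>
#include <vector>

// Sentinels chosen so empty/deleted checks are cheap integer compares.
constexpr uintptr_t kEmpty = 0;    // stands in for Smi 0
constexpr uintptr_t kDeleted = 1;  // stands in for Smi 1

// Open-addressed, power-of-two sized table with quadratic probing.
// `hash` and `matches` are assumed helpers; the table is assumed to always
// keep at least one empty slot (standard open-addressing invariant).
template <typename Key, typename Hash, typename Matches>
uintptr_t* Lookup(std::vector<uintptr_t>& table, const Key& key,
                  Hash hash, Matches matches) {
  const size_t mask = table.size() - 1;  // size is a power of two
  size_t index = hash(key) & mask;
  uintptr_t* first_deleted = nullptr;
  for (size_t step = 1;; ++step) {
    uintptr_t entry = table[index];
    if (entry == kEmpty) {
      // Not found; prefer an earlier deleted slot for insertion if any.
      return first_deleted ? first_deleted : &table[index];
    }
    if (entry == kDeleted) {
      if (!first_deleted) first_deleted = &table[index];
    } else if (matches(entry, key)) {
      return &table[index];  // found the existing string
    }
    index = (index + step) & mask;  // triangular-number (quadratic) probing
  }
}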
-
- 04 Aug, 2020 1 commit
-
-
Dominik Inführ authored
This is a reland of b354e344. This CL adds 3 fixes: * Unprotect code object before creating filler * Allow AllocationObserver::Step to add more AllocationObservers * Update limit in NewSpace::UpdateLinearAllocationArea Original change's description: > [heap] Refactor allocation observer in AllocationCounter > > Moves accounting of allocation observers into the AllocationCounter > class. This CL removes top_on_previous_step_ for counters that are > increased regularly in the slow path of the allocation functions. > > AdvanceAllocationObservers() informs the AllocationCounter about > allocated bytes, InvokeAllocationObservers() needs to be invoked when > an allocation step is reached. NextBytes() returns the number of bytes > until the next AllocationObserver::Step needs to run. > > Bug: v8:10315 > Change-Id: I8b6eb8719ab032d44ee0614d2a0f2645bfce9df6 > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2320650 > Commit-Queue: Dominik Inführ <dinfuehr@chromium.org> > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Cr-Commit-Position: refs/heads/master@{#69170} Bug: v8:10315 Change-Id: I89ab4d5069a234a293471f613dab16b47d8fff89 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2332805Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Commit-Queue: Dominik Inführ <dinfuehr@chromium.org> Cr-Commit-Position: refs/heads/master@{#69216}
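A minimal sketch of the counter/observer interplay described in the quoted change (AdvanceAllocationObservers, InvokeAllocationObservers, NextBytes); the class shapes are simplified assumptions, not V8's actual AllocationCounter.

#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Simplified observer: expects a Step call roughly every step_size bytes.
class AllocationObserver {
 public:
  explicit AllocationObserver(size_t step_size) : step_size_(step_size) {}
  virtual ~AllocationObserver() = default;
  virtual void Step(size_t bytes_since_last_step) = 0;
  size_t step_size() const { return step_size_; }

 private:
  size_t step_size_;
};

// Tracks allocated bytes and tells the allocator how far away the next
// observer step is.
class AllocationCounter {
 public:
  void AddAllocationObserver(AllocationObserver* observer) {
    observers_.push_back(observer);
  }

  // Fast path: just account for bytes allocated since the last call.
  void AdvanceAllocationObservers(size_t bytes) { current_bytes_ += bytes; }

  // Slow path: invoked when the limit derived from NextBytes() is reached.
  void InvokeAllocationObservers() {
    // Index-based loop so that an observer's Step may add new observers
    // (one of the fixes mentioned in the reland) without invalidating
    // the iteration.
    for (size_t i = 0; i < observers_.size(); ++i) {
      AllocationObserver* observer = observers_[i];
      if (current_bytes_ >= observer->step_size()) {
        observer->Step(current_bytes_);
      }
    }
    current_bytes_ = 0;
  }

  // Bytes until the next AllocationObserver::Step needs to run.
  size_t NextBytes() const {
    size_t next = std::numeric_limits<size_t>::max();
    for (const AllocationObserver* observer : observers_) {
      size_t remaining = observer->step_size() > current_bytes_
                             ? observer->step_size() - current_bytes_
                             : 0;
      next = std::min(next, remaining);
    }
    return next;
  }

 private:
  std::vector<AllocationObserver*> observers_;
  size_t current_bytes_ = 0;
};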
-
- 01 Aug, 2020 1 commit
-
-
Dominik Inführ authored
This reverts commit b354e344. Reason for revert: Clusterfuzz found issues with this CL. Original change's description: > [heap] Refactor allocation observer in AllocationCounter > > Moves accounting of allocation observers into the AllocationCounter > class. This CL removes top_on_previous_step_ for counters that are > increased regularly in the slow path of the allocation functions. > > AdvanceAllocationObservers() informs the AllocationCounter about > allocated bytes, InvokeAllocationObservers() needs to be invoked when > an allocation step is reached. NextBytes() returns the number of bytes > until the next AllocationObserver::Step needs to run. > > Bug: v8:10315 > Change-Id: I8b6eb8719ab032d44ee0614d2a0f2645bfce9df6 > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2320650 > Commit-Queue: Dominik Inführ <dinfuehr@chromium.org> > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Cr-Commit-Position: refs/heads/master@{#69170} TBR=ulan@chromium.org,dinfuehr@chromium.org Change-Id: Icd713207bfb2085421fd82009be24a0211ae86da No-Presubmit: true No-Tree-Checks: true No-Try: true Bug: v8:10315 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2332667Reviewed-by:
Dominik Inführ <dinfuehr@chromium.org> Commit-Queue: Dominik Inführ <dinfuehr@chromium.org> Cr-Commit-Position: refs/heads/master@{#69187}
-
- 31 Jul, 2020 1 commit
-
-
Dominik Inführ authored
Moves accounting of allocation observers into the AllocationCounter class. This CL removes top_on_previous_step_ for counters that are increased regularly in the slow path of the allocation functions. AdvanceAllocationObservers() informs the AllocationCounter about allocated bytes, InvokeAllocationObservers() needs to be invoked when an allocation step is reached. NextBytes() returns the number of bytes until the next AllocationObserver::Step needs to run. Bug: v8:10315 Change-Id: I8b6eb8719ab032d44ee0614d2a0f2645bfce9df6 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2320650 Commit-Queue: Dominik Inführ <dinfuehr@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#69170}
-
- 14 Jul, 2020 1 commit
-
-
Dominik Inführ authored
Fix a data race between concurrent threads that allocate (and thereby access gc_state_) and the main thread starting tear down. Bug: v8:10315 Change-Id: Icc24811e43268512c8d7fdaf92ecd3fc7b3ecd57 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2297390Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Commit-Queue: Dominik Inführ <dinfuehr@chromium.org> Cr-Commit-Position: refs/heads/master@{#68853}
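One plausible shape for such a fix, sketched under the assumption that the shared flag simply becomes an atomic; the names and the relaxed ordering are illustrative, and the actual CL may synchronize differently.

#include <atomic>

// Illustrative only: a heap-global state flag read by allocating threads
// and written by the main thread when tear-down starts.
enum HeapState { NOT_IN_GC, MARKING, SWEEPING, TEAR_DOWN };

std::atomic<HeapState> gc_state{NOT_IN_GC};

// Main thread, at the start of tear-down.
void StartTearDown() { gc_state.store(TEAR_DOWN, std::memory_order_relaxed); }

// Concurrent allocation path: a racy plain read here was the bug; an
// atomic load makes the access well-defined.
bool CanAllocate() {
  return gc_state.load(std::memory_order_relaxed) != TEAR_DOWN;
}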
-
- 02 Jul, 2020 1 commit
-
-
Dominik Inführ authored
Restructure code to make slow path of allocation more obvious. Bug: v8:10315 Change-Id: Ic3e3b866b144b6f2877acac4accf87377f757172 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2276273 Commit-Queue: Dominik Inführ <dinfuehr@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#68651}
-
- 18 Jun, 2020 1 commit
-
-
Dan Elphick authored
This reverts commit f78d69fa. With https://chromium-review.googlesource.com/c/v8/v8/+/2243216, incorrect MemoryChunk::FromHeapObject uses are now fixed. Original change's description: > Revert "[heap] Make ReadOnlySpace use bump pointer allocation" > > This reverts commit 81c34968 and also > 490f3580 which depends on the former. > > Reason for revert: Break CFI tests in chromium https://ci.chromium.org/p/chromium/builders/ci/Linux%20CFI/17438 > Original change's description: > > [heap] Make ReadOnlySpace use bump pointer allocation > > > > This changes ReadOnlySpace to no longer be a PagedSpace but instead it > > is now a BaseSpace. BasicSpace is a new base class that Space inherits > > from and which has no allocation methods and does not dictate how the > > pages should be held. > > > > ReadOnlySpace unlike Space holds its pages as a > > std::vector<ReadOnlyPage>, where ReadOnlyPage directly subclasses > > BasicMemoryChunk, meaning they do not have prev_ and next_ pointers and > > cannot be held in a heap::List. This is desirable since with pointer > > compression we would like to remap these pages to different memory > > addresses which would be impossible with a heap::List. > > > > Since ReadOnlySpace no longer uses most of the code from the other > > Spaces it makes sense to simplify its memory allocation to use a simple > > bump pointer and always allocate a new page whenever an allocation > > exceeds the remaining space on the final page. > > > > Change-Id: Iee6d9f96cfb174b4026ee671ee4f897909b38418 > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2209060 > > Commit-Queue: Dan Elphick <delphick@chromium.org> > > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#68137} > > TBR=ulan@chromium.org,delphick@chromium.org > > # Not skipping CQ checks because original CL landed > 1 day ago. > > Change-Id: I68c9834872e55eb833be081f8ff99b786bfa9894 > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2232552 > Commit-Queue: Dan Elphick <delphick@chromium.org> > Reviewed-by: Dan Elphick <delphick@chromium.org> > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Cr-Commit-Position: refs/heads/master@{#68211} TBR=ulan@chromium.org,delphick@chromium.org # Not skipping CQ checks because original CL landed > 1 day ago. Change-Id: Id5b3cce41b5dec1dca816c05848d183790b1cc05 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2250254Reviewed-by:
Dan Elphick <delphick@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Commit-Queue: Dan Elphick <delphick@chromium.org> Cr-Commit-Position: refs/heads/master@{#68407}
-
- 17 Jun, 2020 1 commit
-
-
Dan Elphick authored
Since ReadOnlySpace pages will soon not be MemoryChunks, change most uses of MemoryChunk::FromHeapObject and FromAddress to use the BasicMemoryChunk variants and the new MemoryChunk::cast function that takes a BasicMemoryChunk and DCHECKs !InReadOnlySpace(). To enable this, it also moves into BasicMemoryChunk several MemoryChunk functions that just require a BasicMemoryChunk. Bug: v8:10454 Change-Id: I80875b2c2446937ac2c2bc9287d36e71cc050c38 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2243216 Commit-Queue: Dan Elphick <delphick@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#68390}
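A small sketch of a checked downcast like the one described, with a stand-in class hierarchy and a plain assert in place of V8's DCHECK; it is illustrative only.

#include <cassert>

// Illustrative hierarchy: MemoryChunk extends BasicMemoryChunk, and
// read-only pages are only BasicMemoryChunks.
class BasicMemoryChunk {
 public:
  virtual ~BasicMemoryChunk() = default;
  bool InReadOnlySpace() const { return in_read_only_space_; }

 protected:
  bool in_read_only_space_ = false;
};

class MemoryChunk : public BasicMemoryChunk {
 public:
  // Downcast that documents (and checks in debug builds) the invariant
  // that read-only pages must never be treated as full MemoryChunks.
  static MemoryChunk* cast(BasicMemoryChunk* chunk) {
    assert(!chunk->InReadOnlySpace());  // stand-in for V8's DCHECK
    return static_cast<MemoryChunk*>(chunk);
  }
};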
-
- 05 Jun, 2020 1 commit
-
-
Dan Elphick authored
This reverts commit 81c34968 and also 490f3580 which depends on the former. Reason for revert: Break CFI tests in chromium https://ci.chromium.org/p/chromium/builders/ci/Linux%20CFI/17438 Original change's description: > [heap] Make ReadOnlySpace use bump pointer allocation > > This changes ReadOnlySpace to no longer be a PagedSpace but instead it > is now a BaseSpace. BasicSpace is a new base class that Space inherits > from and which has no allocation methods and does not dictate how the > pages should be held. > > ReadOnlySpace unlike Space holds its pages as a > std::vector<ReadOnlyPage>, where ReadOnlyPage directly subclasses > BasicMemoryChunk, meaning they do not have prev_ and next_ pointers and > cannot be held in a heap::List. This is desirable since with pointer > compression we would like to remap these pages to different memory > addresses which would be impossible with a heap::List. > > Since ReadOnlySpace no longer uses most of the code from the other > Spaces it makes sense to simplify its memory allocation to use a simple > bump pointer and always allocate a new page whenever an allocation > exceeds the remaining space on the final page. > > Change-Id: Iee6d9f96cfb174b4026ee671ee4f897909b38418 > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2209060 > Commit-Queue: Dan Elphick <delphick@chromium.org> > Reviewed-by: Ulan Degenbaev <ulan@chromium.org> > Cr-Commit-Position: refs/heads/master@{#68137} TBR=ulan@chromium.org,delphick@chromium.org # Not skipping CQ checks because original CL landed > 1 day ago. Change-Id: I68c9834872e55eb833be081f8ff99b786bfa9894 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2232552 Commit-Queue: Dan Elphick <delphick@chromium.org> Reviewed-by:
Dan Elphick <delphick@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#68211}
-
- 03 Jun, 2020 1 commit
-
-
Dan Elphick authored
This changes ReadOnlySpace to no longer be a PagedSpace; instead it is now a BaseSpace. BaseSpace is a new base class that Space inherits from, which has no allocation methods and does not dictate how the pages should be held. ReadOnlySpace, unlike Space, holds its pages as a std::vector<ReadOnlyPage>, where ReadOnlyPage directly subclasses BasicMemoryChunk, meaning the pages do not have prev_ and next_ pointers and cannot be held in a heap::List. This is desirable since, with pointer compression, we would like to remap these pages to different memory addresses, which would be impossible with a heap::List. Since ReadOnlySpace no longer uses most of the code from the other Spaces, it makes sense to simplify its memory allocation to use a simple bump pointer and always allocate a new page whenever an allocation exceeds the remaining space on the final page. Change-Id: Iee6d9f96cfb174b4026ee671ee4f897909b38418 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2209060 Commit-Queue: Dan Elphick <delphick@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#68137}
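A minimal sketch of the bump-pointer scheme described above: allocate by advancing a top pointer and start a fresh page when the current one cannot fit the request. The page size, alignment, and use of malloc are assumptions to keep the sketch self-contained.

#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Illustrative bump-pointer space; assumes size_in_bytes <= kPageSize.
class BumpPointerSpace {
 public:
  static constexpr size_t kPageSize = 256 * 1024;  // assumed page size

  void* Allocate(size_t size_in_bytes) {
    size_in_bytes = (size_in_bytes + 7) & ~size_t{7};  // 8-byte alignment
    if (top_ + size_in_bytes > limit_) AllocateNewPage();
    void* result = reinterpret_cast<void*>(top_);
    top_ += size_in_bytes;
    return result;
  }

 private:
  void AllocateNewPage() {
    // A real space would map pages via the memory allocator; malloc is a
    // stand-in so the sketch stays self-contained.
    pages_.push_back(std::malloc(kPageSize));
    top_ = reinterpret_cast<uintptr_t>(pages_.back());
    limit_ = top_ + kPageSize;
  }

  std::vector<void*> pages_;
  uintptr_t top_ = 0;
  uintptr_t limit_ = 0;
};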
-
- 18 May, 2020 1 commit
-
-
Dan Elphick authored
Splits out MemoryAllocator and CodeRangeAddressHint into memory-allocator.h Bug: v8:10473, v8:10506 Change-Id: I0855f23dd0374ddd68493ee05af7a3a00c84660d Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2203206 Auto-Submit: Dan Elphick <delphick@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Reviewed-by:
Peter Marshall <petermarshall@chromium.org> Commit-Queue: Peter Marshall <petermarshall@chromium.org> Cr-Commit-Position: refs/heads/master@{#67857}
-
- 15 May, 2020 1 commit
-
-
Dan Elphick authored
Splits out all of SemiSpace, NewSpace and related classes into new-spaces.h. Bug: v8:10473, v8:10506 Change-Id: I97ecceaf5df41263cc8ea75ff0018442bfeffa66 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2202903 Auto-Submit: Dan Elphick <delphick@chromium.org> Commit-Queue: Ulan Degenbaev <ulan@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#67831}
-
- 14 May, 2020 1 commit
-
-
Dan Elphick authored
Splits out all of PagedSpace and subclasses into paged-spaces.h. Also moves CodeObjectRegistry to code-object-registry.h. Bug: v8:10473, v8:10506 Change-Id: I35fab1e545e958eb32f3e39a5e2ce8fb087c2a53 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2201763Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Commit-Queue: Dan Elphick <delphick@chromium.org> Cr-Commit-Position: refs/heads/master@{#67811}
-
- 06 May, 2020 1 commit
-
-
Nico Hartmann authored
Bug: v8:10391 Change-Id: Ic92cdaca38c2181427cc12ec5e572d5964afe704 Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2152647Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Reviewed-by:
Igor Sheludko <ishell@chromium.org> Reviewed-by:
Jakob Gruber <jgruber@chromium.org> Commit-Queue: Nico Hartmann <nicohartmann@chromium.org> Cr-Commit-Position: refs/heads/master@{#67601}
-
- 05 May, 2020 1 commit
-
-
Dan Elphick authored
Also makes memory-chunk.h accessible from outside heap, which allows removing some heap-inl.h includes. Bug: v8:10473, v8:10496 Change-Id: Iec4fc5ce8ad201f6ee5fd924cc3cd935324429fc Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2172088 Commit-Queue: Jakob Gruber <jgruber@chromium.org> Auto-Submit: Dan Elphick <delphick@chromium.org> Reviewed-by:
Jakob Gruber <jgruber@chromium.org> Reviewed-by:
Ulan Degenbaev <ulan@chromium.org> Cr-Commit-Position: refs/heads/master@{#67551}
-