Commit 9ab6621a authored by Dan Elphick, committed by Commit Bot

Reland "Reland "[heap] Move initial objects into RO_SPACE""

This is a reland of 6c68efac

Updated Heap::CommittedMemory and related functions to iterate over all
spaces rather than including them manually, which can lead to a space
being overlooked. Also adds a test to ensure this is the case.
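
As a reading aid, here is a minimal standalone sketch (illustrative only,
not the V8 implementation) of the pattern being applied: compute totals by
iterating the heap's space list, so a newly added space such as RO_SPACE
cannot be silently left out of any aggregate.

    // Illustrative sketch (not V8 code); space names mirror the diff below.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Space {
      const char* name;
      size_t committed_bytes;
    };

    // Mirrors the loop-based rewrite of Heap::CommittedMemory and friends:
    // walk every space the heap knows about instead of naming each one.
    size_t CommittedMemory(const std::vector<Space>& spaces) {
      size_t total = 0;
      for (const Space& space : spaces) total += space.committed_bytes;
      return total;
    }

    int main() {
      std::vector<Space> spaces = {
          {"new_space", 1u << 20},  {"old_space", 8u << 20},
          {"code_space", 2u << 20}, {"map_space", 1u << 20},
          {"read_only_space", 64u << 10},  // Included with no further edits.
      };
      std::printf("committed: %zu bytes\n", CommittedMemory(spaces));
      return 0;
    }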

Original change's description:
> Revert "Reland "[heap] Move initial objects into RO_SPACE""
>
> This reverts commit 6c68efac.
>
> Reason for revert: https://bugs.chromium.org/p/v8/issues/detail?id=7668
>
> Original change's description:
> > Reland "[heap] Move initial objects into RO_SPACE"
> >
> > This is a reland of f8ae62fe
> >
> > Original change's description:
> > > [heap] Move initial objects into RO_SPACE
> > >
> > > This moves:
> > > * the main oddballs (null, undefined, hole, true, false) as well as
> > > their supporting maps (also adds hole as an internalized string to make
> > > this work).
> > > * most of the internalized strings
> > > * the struct maps
> > > * empty array
> > > * empty enum cache
> > > * the contents of the initial string table
> > > * the weak_cell_cache for any map in RO_SPACE (and eagerly creates the
> > > value to avoid writing to it during run-time)
> > >
> > > The StartupSerializer stats change as follows:
> > >
> > >      RO_SPACE  NEW_SPACE  OLD_SPACE  CODE_SPACE  MAP_SPACE  LO_SPACE
> > > old         0          0     270264       32608      12144         0
> > > new     21776          0     253168       32608       8184         0
> > > Overall memory usage has increased by 720 bytes (45 maps × 16-byte
> > > weak cells, matching the WEAK_CELL_TYPE row below) due to the eager
> > > initialization of the Map weak cell caches.
> > >
> > > Also extends --serialization-statistics to print out separate instance
> > > type stats for objects in RO_SPACE as shown here:
> > >
> > >   Read Only Instance types (count and bytes):
> > >        404      16736  ONE_BYTE_INTERNALIZED_STRING_TYPE
> > >          2         32  HEAP_NUMBER_TYPE
> > >          5        240  ODDBALL_TYPE
> > >         45       3960  MAP_TYPE
> > >          1         16  BYTE_ARRAY_TYPE
> > >          1         24  TUPLE2_TYPE
> > >          1         16  FIXED_ARRAY_TYPE
> > >          1         32  DESCRIPTOR_ARRAY_TYPE
> > >         45        720  WEAK_CELL_TYPE
> > >
> > > Bug: v8:7464
> > > Change-Id: I12981c39c82a7057f68bbbe03f89fb57b0b4c6a6
> > > Reviewed-on: https://chromium-review.googlesource.com/973722
> > > Commit-Queue: Dan Elphick <delphick@chromium.org>
> > > Reviewed-by: Hannes Payer <hpayer@chromium.org>
> > > Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
> > > Reviewed-by: Yang Guo <yangguo@chromium.org>
> > > Cr-Commit-Position: refs/heads/master@{#52435}
> >
> > Bug: v8:7464
> > Change-Id: I50427edfeb53ca80ec4cf46566368fb2213ccf7b
> > Reviewed-on: https://chromium-review.googlesource.com/999654
> > Commit-Queue: Dan Elphick <delphick@chromium.org>
> > Reviewed-by: Yang Guo <yangguo@chromium.org>
> > Reviewed-by: Hannes Payer <hpayer@chromium.org>
> > Cr-Commit-Position: refs/heads/master@{#52638}
>
> TBR=rmcilroy@chromium.org,yangguo@chromium.org,hpayer@chromium.org,mlippautz@chromium.org,delphick@chromium.org
>
> # Not skipping CQ checks because original CL landed > 1 day ago.
>
> Bug: v8:7464,v8:7668
> Change-Id: I10aa03623b51e997f95a3715ea9f0bf5d29d2cdb
> Reviewed-on: https://chromium-review.googlesource.com/1016600
> Commit-Queue: Peter Marshall <petermarshall@chromium.org>
> Reviewed-by: Peter Marshall <petermarshall@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#52667}

Cq-Include-Trybots: luci.chromium.try:linux_chromium_rel_ng
Change-Id: If4b7490c8c4d31612de8ec132de334955a319b11
Bug: v8:7464, v8:7668
Reviewed-on: https://chromium-review.googlesource.com/1019020
Reviewed-by: Jakob Gruber <jgruber@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Commit-Queue: Dan Elphick <delphick@chromium.org>
Cr-Commit-Position: refs/heads/master@{#52689}
parent b730f5eb
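
The mechanism behind the move is a new TENURED_READ_ONLY pretenure flag
that routes eligible allocations to RO_SPACE while the heap is still being
set up. A condensed, self-contained sketch of that routing (illustrative
only; the authoritative logic is in the heap.h and heap-inl.h hunks below):

    // Illustrative sketch (not V8 code) of the new allocation routing.
    #include <cassert>

    enum PretenureFlag { NOT_TENURED, TENURED, TENURED_READ_ONLY };
    enum AllocationSpace { NEW_SPACE, OLD_SPACE, RO_SPACE };

    // Condensed form of Heap::SelectSpace after this change.
    AllocationSpace SelectSpace(PretenureFlag pretenure) {
      switch (pretenure) {
        case TENURED_READ_ONLY: return RO_SPACE;
        case TENURED:           return OLD_SPACE;
        case NOT_TENURED:       return NEW_SPACE;
      }
      return NEW_SPACE;  // Unreachable for valid flags.
    }

    int main() {
      // Callers ask for TENURED_READ_ONLY only while
      // Heap::CanAllocateInReadOnlySpace() holds (see heap-inl.h below).
      assert(SelectSpace(TENURED_READ_ONLY) == RO_SPACE);
      assert(SelectSpace(TENURED) == OLD_SPACE);
      assert(SelectSpace(NOT_TENURED) == NEW_SPACE);
      return 0;
    }
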
......@@ -341,6 +341,10 @@ void i::V8::FatalProcessOutOfMemory(i::Isolate* isolate, const char* location,
intptr_t start_marker;
heap_stats.start_marker = &start_marker;
+  size_t ro_space_size;
+  heap_stats.ro_space_size = &ro_space_size;
+  size_t ro_space_capacity;
+  heap_stats.ro_space_capacity = &ro_space_capacity;
size_t new_space_size;
heap_stats.new_space_size = &new_space_size;
size_t new_space_capacity;
......
......@@ -571,10 +571,10 @@ inline std::ostream& operator<<(std::ostream& os, WriteBarrierKind kind) {
}
// A flag that indicates whether objects should be pretenured when
-// allocated (allocated directly into the old generation) or not
-// (allocated in the young generation if the object size and type
+// allocated (allocated directly into either the old generation or read-only
+// space), or not (allocated in the young generation if the object size and type
// allows).
-enum PretenureFlag { NOT_TENURED, TENURED };
+enum PretenureFlag { NOT_TENURED, TENURED, TENURED_READ_ONLY };
inline std::ostream& operator<<(std::ostream& os, const PretenureFlag& flag) {
switch (flag) {
......@@ -582,6 +582,8 @@ inline std::ostream& operator<<(std::ostream& os, const PretenureFlag& flag) {
return os << "NotTenured";
case TENURED:
return os << "Tenured";
+    case TENURED_READ_ONLY:
+      return os << "TenuredReadOnly";
}
UNREACHABLE();
}
......
......@@ -686,7 +686,11 @@ Handle<SeqOneByteString> Factory::AllocateRawOneByteInternalizedString(
Map* map = *one_byte_internalized_string_map();
int size = SeqOneByteString::SizeFor(length);
-  HeapObject* result = AllocateRawWithImmortalMap(size, TENURED, map);
+  HeapObject* result = AllocateRawWithImmortalMap(
+      size,
+      isolate()->heap()->CanAllocateInReadOnlySpace() ? TENURED_READ_ONLY
+                                                      : TENURED,
+      map);
Handle<SeqOneByteString> answer(SeqOneByteString::cast(result), isolate());
answer->set_length(length);
answer->set_hash_field(hash_field);
......@@ -730,7 +734,11 @@ Handle<String> Factory::AllocateInternalizedStringImpl(T t, int chars,
size = SeqTwoByteString::SizeFor(chars);
}
-  HeapObject* result = AllocateRawWithImmortalMap(size, TENURED, map);
+  HeapObject* result = AllocateRawWithImmortalMap(
+      size,
+      isolate()->heap()->CanAllocateInReadOnlySpace() ? TENURED_READ_ONLY
+                                                      : TENURED,
+      map);
Handle<String> answer(String::cast(result), isolate());
answer->set_length(chars);
answer->set_hash_field(hash_field);
......@@ -1630,13 +1638,14 @@ Handle<PropertyCell> Factory::NewPropertyCell(Handle<Name> name) {
return cell;
}
-Handle<WeakCell> Factory::NewWeakCell(Handle<HeapObject> value) {
+Handle<WeakCell> Factory::NewWeakCell(Handle<HeapObject> value,
+                                      PretenureFlag pretenure) {
// It is safe to dereference the value because we are embedding it
// in cell and not inspecting its fields.
AllowDeferredHandleDereference convert_to_cell;
STATIC_ASSERT(WeakCell::kSize <= kMaxRegularHeapObjectSize);
  HeapObject* result =
-      AllocateRawWithImmortalMap(WeakCell::kSize, TENURED, *weak_cell_map());
+      AllocateRawWithImmortalMap(WeakCell::kSize, pretenure, *weak_cell_map());
Handle<WeakCell> cell(WeakCell::cast(result), isolate());
cell->initialize(*value);
return cell;
......
......@@ -442,7 +442,8 @@ class V8_EXPORT_PRIVATE Factory {
Handle<PropertyCell> NewPropertyCell(Handle<Name> name);
-  Handle<WeakCell> NewWeakCell(Handle<HeapObject> value);
+  Handle<WeakCell> NewWeakCell(Handle<HeapObject> value,
+                               PretenureFlag pretenure = TENURED);
Handle<FeedbackCell> NewNoClosuresCell(Handle<HeapObject> value);
Handle<FeedbackCell> NewOneClosureCell(Handle<HeapObject> value);
......
......@@ -183,6 +183,7 @@ AllocationResult Heap::AllocateRaw(int size_in_bytes, AllocationSpace space,
DCHECK(isolate_->serializer_enabled());
#endif
DCHECK(!large_object);
+    DCHECK(CanAllocateInReadOnlySpace());
allocation = read_only_space_->AllocateRaw(size_in_bytes, alignment);
} else {
// NEW_SPACE is not allowed here.
......@@ -261,6 +262,12 @@ void Heap::OnMoveEvent(HeapObject* target, HeapObject* source,
}
}
+bool Heap::CanAllocateInReadOnlySpace() {
+  return !deserialization_complete_ &&
+         (isolate()->serializer_enabled() ||
+          !isolate()->initialized_from_snapshot());
+}
void Heap::UpdateAllocationsHash(HeapObject* object) {
Address object_address = object->address();
MemoryChunk* memory_chunk = MemoryChunk::FromAddress(object_address);
......
......@@ -264,15 +264,25 @@ size_t Heap::Capacity() {
size_t Heap::OldGenerationCapacity() {
if (!HasBeenSetUp()) return 0;
-  return old_space_->Capacity() + code_space_->Capacity() +
-         map_space_->Capacity() + lo_space_->SizeOfObjects();
+  PagedSpaces spaces(this, PagedSpaces::SpacesSpecifier::kAllPagedSpaces);
+  size_t total = 0;
+  for (PagedSpace* space = spaces.next(); space != nullptr;
+       space = spaces.next()) {
+    total += space->Capacity();
+  }
+  return total + lo_space_->SizeOfObjects();
}
size_t Heap::CommittedOldGenerationMemory() {
if (!HasBeenSetUp()) return 0;
-  return old_space_->CommittedMemory() + code_space_->CommittedMemory() +
-         map_space_->CommittedMemory() + lo_space_->Size();
+  PagedSpaces spaces(this, PagedSpaces::SpacesSpecifier::kAllPagedSpaces);
+  size_t total = 0;
+  for (PagedSpace* space = spaces.next(); space != nullptr;
+       space = spaces.next()) {
+    total += space->CommittedMemory();
+  }
+  return total + lo_space_->Size();
}
size_t Heap::CommittedMemory() {
......@@ -285,11 +295,12 @@ size_t Heap::CommittedMemory() {
size_t Heap::CommittedPhysicalMemory() {
if (!HasBeenSetUp()) return 0;
-  return new_space_->CommittedPhysicalMemory() +
-         old_space_->CommittedPhysicalMemory() +
-         code_space_->CommittedPhysicalMemory() +
-         map_space_->CommittedPhysicalMemory() +
-         lo_space_->CommittedPhysicalMemory();
+  size_t total = 0;
+  for (SpaceIterator it(this); it.has_next();) {
+    total += it.next()->CommittedPhysicalMemory();
+  }
+  return total;
}
size_t Heap::CommittedMemoryExecutable() {
......@@ -380,6 +391,15 @@ void Heap::PrintShortHeapStatistics() {
" available: %6" PRIuS " KB\n",
memory_allocator()->Size() / KB,
memory_allocator()->Available() / KB);
+  PrintIsolate(isolate_,
+               "Read-only space, used: %6" PRIuS
+               " KB"
+               ", available: %6" PRIuS
+               " KB"
+               ", committed: %6" PRIuS " KB\n",
+               read_only_space_->Size() / KB,
+               read_only_space_->Available() / KB,
+               read_only_space_->CommittedMemory() / KB);
PrintIsolate(isolate_, "New space, used: %6" PRIuS
" KB"
", available: %6" PRIuS
......@@ -4141,6 +4161,8 @@ bool Heap::ConfigureHeapDefault() { return ConfigureHeap(0, 0, 0); }
void Heap::RecordStats(HeapStats* stats, bool take_snapshot) {
*stats->start_marker = HeapStats::kStartMarker;
*stats->end_marker = HeapStats::kEndMarker;
+  *stats->ro_space_size = read_only_space_->Size();
+  *stats->ro_space_capacity = read_only_space_->Capacity();
*stats->new_space_size = new_space_->Size();
*stats->new_space_capacity = new_space_->Capacity();
*stats->old_space_size = old_space_->SizeOfObjects();
......@@ -4181,8 +4203,13 @@ void Heap::RecordStats(HeapStats* stats, bool take_snapshot) {
}
size_t Heap::PromotedSpaceSizeOfObjects() {
-  return old_space_->SizeOfObjects() + code_space_->SizeOfObjects() +
-         map_space_->SizeOfObjects() + lo_space_->SizeOfObjects();
+  PagedSpaces spaces(this, PagedSpaces::SpacesSpecifier::kAllPagedSpaces);
+  size_t total = 0;
+  for (PagedSpace* space = spaces.next(); space != nullptr;
+       space = spaces.next()) {
+    total += space->SizeOfObjects();
+  }
+  return total + lo_space_->SizeOfObjects();
}
uint64_t Heap::PromotedExternalMemorySize() {
......
......@@ -947,6 +947,7 @@ class Heap {
inline void OnMoveEvent(HeapObject* target, HeapObject* source,
int size_in_bytes);
+  inline bool CanAllocateInReadOnlySpace();
bool deserialization_complete() const { return deserialization_complete_; }
bool HasLowAllocationRate();
......@@ -1811,7 +1812,16 @@ class Heap {
// Selects the proper allocation space based on the pretenuring decision.
static AllocationSpace SelectSpace(PretenureFlag pretenure) {
-    return (pretenure == TENURED) ? OLD_SPACE : NEW_SPACE;
+    switch (pretenure) {
+      case TENURED_READ_ONLY:
+        return RO_SPACE;
+      case TENURED:
+        return OLD_SPACE;
+      case NOT_TENURED:
+        return NEW_SPACE;
+      default:
+        UNREACHABLE();
+    }
}
static size_t DefaultGetExternallyAllocatedMemoryInBytesCallback() {
......@@ -2136,9 +2146,11 @@ class Heap {
V8_WARN_UNUSED_RESULT AllocationResult
AllocatePartialMap(InstanceType instance_type, int instance_size);
void FinalizePartialMap(Map* map);
// Allocate empty fixed typed array of given type.
-  V8_WARN_UNUSED_RESULT AllocationResult
-  AllocateEmptyFixedTypedArray(ExternalArrayType array_type);
+  V8_WARN_UNUSED_RESULT AllocationResult AllocateEmptyFixedTypedArray(
+      ExternalArrayType array_type, AllocationSpace space = OLD_SPACE);
void set_force_oom(bool value) { force_oom_ = value; }
......@@ -2485,30 +2497,32 @@ class HeapStats {
static const int kEndMarker = 0xDECADE01;
intptr_t* start_marker; // 0
-  size_t* new_space_size;                 // 1
-  size_t* new_space_capacity;             // 2
-  size_t* old_space_size;                 // 3
-  size_t* old_space_capacity;             // 4
-  size_t* code_space_size;                // 5
-  size_t* code_space_capacity;            // 6
-  size_t* map_space_size;                 // 7
-  size_t* map_space_capacity;             // 8
-  size_t* lo_space_size;                  // 9
-  size_t* global_handle_count;            // 10
-  size_t* weak_global_handle_count;       // 11
-  size_t* pending_global_handle_count;    // 12
-  size_t* near_death_global_handle_count; // 13
-  size_t* free_global_handle_count;       // 14
-  size_t* memory_allocator_size;          // 15
-  size_t* memory_allocator_capacity;      // 16
-  size_t* malloced_memory;                // 17
-  size_t* malloced_peak_memory;           // 18
-  size_t* objects_per_type;               // 19
-  size_t* size_per_type;                  // 20
-  int* os_error;                          // 21
-  char* last_few_messages;                // 22
-  char* js_stacktrace;                    // 23
-  intptr_t* end_marker;                   // 24
+  size_t* ro_space_size;                  // 1
+  size_t* ro_space_capacity;              // 2
+  size_t* new_space_size;                 // 3
+  size_t* new_space_capacity;             // 4
+  size_t* old_space_size;                 // 5
+  size_t* old_space_capacity;             // 6
+  size_t* code_space_size;                // 7
+  size_t* code_space_capacity;            // 8
+  size_t* map_space_size;                 // 9
+  size_t* map_space_capacity;             // 10
+  size_t* lo_space_size;                  // 11
+  size_t* global_handle_count;            // 12
+  size_t* weak_global_handle_count;       // 13
+  size_t* pending_global_handle_count;    // 14
+  size_t* near_death_global_handle_count; // 15
+  size_t* free_global_handle_count;       // 16
+  size_t* memory_allocator_size;          // 17
+  size_t* memory_allocator_capacity;      // 18
+  size_t* malloced_memory;                // 19
+  size_t* malloced_peak_memory;           // 20
+  size_t* objects_per_type;               // 21
+  size_t* size_per_type;                  // 22
+  int* os_error;                          // 23
+  char* last_few_messages;                // 24
+  char* js_stacktrace;                    // 25
+  intptr_t* end_marker;                   // 26
};
......
......@@ -315,6 +315,7 @@ HeapObject* PagedSpace::TryAllocateLinearlyAligned(
AllocationResult PagedSpace::AllocateRawUnaligned(
int size_in_bytes, UpdateSkipList update_skip_list) {
+  DCHECK_IMPLIES(identity() == RO_SPACE, heap()->CanAllocateInReadOnlySpace());
if (!EnsureLinearAllocationArea(size_in_bytes)) {
return AllocationResult::Retry(identity());
}
......@@ -330,7 +331,8 @@ AllocationResult PagedSpace::AllocateRawUnaligned(
AllocationResult PagedSpace::AllocateRawAligned(int size_in_bytes,
AllocationAlignment alignment) {
-  DCHECK(identity() == OLD_SPACE);
+  DCHECK(identity() == OLD_SPACE || identity() == RO_SPACE);
+  DCHECK_IMPLIES(identity() == RO_SPACE, heap()->CanAllocateInReadOnlySpace());
int allocation_size = size_in_bytes;
HeapObject* object = TryAllocateLinearlyAligned(&allocation_size, alignment);
if (object == nullptr) {
......
......@@ -1923,7 +1923,8 @@ void PagedSpace::Verify(ObjectVisitor* visitor) {
// be in map space.
Map* map = object->map();
CHECK(map->IsMap());
-    CHECK(heap()->map_space()->Contains(map));
+    CHECK(heap()->map_space()->Contains(map) ||
+          heap()->read_only_space()->Contains(map));
// Perform space-specific object verification.
VerifyObject(object);
......@@ -2374,10 +2375,11 @@ void NewSpace::Verify() {
HeapObject* object = HeapObject::FromAddress(current);
// The first word should be a map, and we expect all map pointers to
-    // be in map space.
+    // be in map space or read-only space.
Map* map = object->map();
CHECK(map->IsMap());
-    CHECK(heap()->map_space()->Contains(map));
+    CHECK(heap()->map_space()->Contains(map) ||
+          heap()->read_only_space()->Contains(map));
// The object should not be code or a map.
CHECK(!object->IsMap());
......@@ -3452,10 +3454,11 @@ void LargeObjectSpace::Verify() {
CHECK(object->address() == page->area_start());
// The first word should be a map, and we expect all map pointers to be
-  // in map space.
+  // in map space or read-only space.
Map* map = object->map();
CHECK(map->IsMap());
-  CHECK(heap()->map_space()->Contains(map));
+  CHECK(heap()->map_space()->Contains(map) ||
+        heap()->read_only_space()->Contains(map));
// We have only code, sequential strings, external strings (sequential
// strings that have been morphed into external strings), thin strings
......
......@@ -25,13 +25,19 @@ Serializer<AllocatorT>::Serializer(Isolate* isolate)
if (FLAG_serialization_statistics) {
instance_type_count_ = NewArray<int>(kInstanceTypes);
instance_type_size_ = NewArray<size_t>(kInstanceTypes);
+    read_only_instance_type_count_ = NewArray<int>(kInstanceTypes);
+    read_only_instance_type_size_ = NewArray<size_t>(kInstanceTypes);
for (int i = 0; i < kInstanceTypes; i++) {
instance_type_count_[i] = 0;
instance_type_size_[i] = 0;
+      read_only_instance_type_count_[i] = 0;
+      read_only_instance_type_size_[i] = 0;
}
} else {
instance_type_count_ = nullptr;
instance_type_size_ = nullptr;
+    read_only_instance_type_count_ = nullptr;
+    read_only_instance_type_size_ = nullptr;
}
#endif // OBJECT_PRINT
}
......@@ -43,16 +49,24 @@ Serializer<AllocatorT>::~Serializer() {
if (instance_type_count_ != nullptr) {
DeleteArray(instance_type_count_);
DeleteArray(instance_type_size_);
+    DeleteArray(read_only_instance_type_count_);
+    DeleteArray(read_only_instance_type_size_);
}
#endif // OBJECT_PRINT
}
#ifdef OBJECT_PRINT
template <class AllocatorT>
-void Serializer<AllocatorT>::CountInstanceType(Map* map, int size) {
+void Serializer<AllocatorT>::CountInstanceType(Map* map, int size,
+                                               AllocationSpace space) {
int instance_type = map->instance_type();
-  instance_type_count_[instance_type]++;
-  instance_type_size_[instance_type] += size;
+  if (space != RO_SPACE) {
+    instance_type_count_[instance_type]++;
+    instance_type_size_[instance_type] += size;
+  } else {
+    read_only_instance_type_count_[instance_type]++;
+    read_only_instance_type_size_[instance_type] += size;
+  }
}
#endif // OBJECT_PRINT
......@@ -72,6 +86,21 @@ void Serializer<AllocatorT>::OutputStatistics(const char* name) {
}
INSTANCE_TYPE_LIST(PRINT_INSTANCE_TYPE)
#undef PRINT_INSTANCE_TYPE
+  size_t read_only_total = 0;
+#define UPDATE_TOTAL(Name) \
+  read_only_total += read_only_instance_type_size_[Name];
+  INSTANCE_TYPE_LIST(UPDATE_TOTAL)
+#undef UPDATE_TOTAL
+  if (read_only_total > 0) {
+    PrintF("\n  Read Only Instance types (count and bytes):\n");
+#define PRINT_INSTANCE_TYPE(Name)                                           \
+  if (read_only_instance_type_count_[Name]) {                               \
+    PrintF("%10d %10" PRIuS "  %s\n", read_only_instance_type_count_[Name], \
+           read_only_instance_type_size_[Name], #Name);                     \
+  }
+    INSTANCE_TYPE_LIST(PRINT_INSTANCE_TYPE)
+#undef PRINT_INSTANCE_TYPE
+  }
PrintF("\n");
#endif // OBJECT_PRINT
}
......@@ -362,7 +391,7 @@ void Serializer<AllocatorT>::ObjectSerializer::SerializePrologue(
#ifdef OBJECT_PRINT
if (FLAG_serialization_statistics) {
-    serializer_->CountInstanceType(map, size);
+    serializer_->CountInstanceType(map, size, space);
}
#endif // OBJECT_PRINT
......
......@@ -226,7 +226,7 @@ class Serializer : public SerializerDeserializer {
void OutputStatistics(const char* name);
#ifdef OBJECT_PRINT
-  void CountInstanceType(Map* map, int size);
+  void CountInstanceType(Map* map, int size, AllocationSpace space);
#endif // OBJECT_PRINT
#ifdef DEBUG
......@@ -256,6 +256,8 @@ class Serializer : public SerializerDeserializer {
static const int kInstanceTypes = LAST_TYPE + 1;
int* instance_type_count_;
size_t* instance_type_size_;
+  int* read_only_instance_type_count_;
+  size_t* read_only_instance_type_size_;
#endif // OBJECT_PRINT
#ifdef DEBUG
......
......@@ -18570,6 +18570,43 @@ THREADED_TEST(GetHeapStatistics) {
CHECK_NE(static_cast<int>(heap_statistics.used_heap_size()), 0);
}
+TEST(GetHeapSpaceStatistics) {
+  LocalContext c1;
+  v8::Isolate* isolate = c1->GetIsolate();
+  v8::HandleScope scope(isolate);
+  v8::HeapStatistics heap_statistics;
+  // Force allocation in LO_SPACE so that every space has non-zero size.
+  v8::internal::Isolate* i_isolate =
+      reinterpret_cast<v8::internal::Isolate*>(isolate);
+  (void)i_isolate->factory()->TryNewFixedArray(512 * 1024);
+  isolate->GetHeapStatistics(&heap_statistics);
+  // Ensure that the sum of all the spaces matches the totals from
+  // GetHeapStatistics.
+  size_t total_size = 0u;
+  size_t total_used_size = 0u;
+  size_t total_available_size = 0u;
+  size_t total_physical_size = 0u;
+  for (size_t i = 0; i < isolate->NumberOfHeapSpaces(); ++i) {
+    v8::HeapSpaceStatistics space_statistics;
+    isolate->GetHeapSpaceStatistics(&space_statistics, i);
+    CHECK_NOT_NULL(space_statistics.space_name());
+    CHECK_GT(space_statistics.space_size(), 0u);
+    total_size += space_statistics.space_size();
+    CHECK_GT(space_statistics.space_used_size(), 0u);
+    total_used_size += space_statistics.space_used_size();
+    total_available_size += space_statistics.space_available_size();
+    CHECK_GT(space_statistics.physical_space_size(), 0u);
+    total_physical_size += space_statistics.physical_space_size();
+  }
+  CHECK_EQ(total_size, heap_statistics.total_heap_size());
+  CHECK_EQ(total_used_size, heap_statistics.used_heap_size());
+  CHECK_EQ(total_available_size, heap_statistics.total_available_size());
+  CHECK_EQ(total_physical_size, heap_statistics.total_physical_size());
+}
TEST(NumberOfNativeContexts) {
static const size_t kNumTestContexts = 10;
i::Isolate* isolate = CcTest::i_isolate();
......