Commit c808a644 authored by erik.corry@gmail.com

Avoid extra GCs when deserializing during incremental marking.

Review URL: http://codereview.chromium.org/8276030

git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@9626 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
parent 2a4245e0
@@ -1909,11 +1909,24 @@ intptr_t FreeList::SumFreeLists() {
 bool NewSpace::ReserveSpace(int bytes) {
   // We can't reliably unpack a partial snapshot that needs more new space
-  // space than the minimum NewSpace size.
+  // space than the minimum NewSpace size.  The limit can be set lower than
+  // the end of new space either because there is more space on the next page
+  // or because we have lowered the limit in order to get periodic incremental
+  // marking.  The most reliable way to ensure that there is linear space is
+  // to do the allocation, then rewind the limit.
   ASSERT(bytes <= InitialCapacity());
-  Address limit = allocation_info_.limit;
+  MaybeObject* maybe = AllocateRawInternal(bytes);
+  Object* object = NULL;
+  if (!maybe->ToObject(&object)) return false;
+  HeapObject* allocation = HeapObject::cast(object);
   Address top = allocation_info_.top;
-  return limit - top >= bytes;
+  if ((top - bytes) == allocation->address()) {
+    allocation_info_.top = allocation->address();
+    return true;
+  }
+  // There may be a borderline case here where the allocation succeeded, but
+  // the limit and top have moved on to a new page.  In that case we try again.
+  return ReserveSpace(bytes);
 }