Commit fc6ff534 authored by mlippautz, committed by Commit bot

Reland of "[heap] Clean up stale store buffer entries for aborted pages."

This reverts commit d4fc4a8c.

1.  Let X be the aborted slot (slot in an evacuated object in an aborted page)
2.  Assume X contains a pointer to Y and that Y is in the new space, so X is in
    the store buffer.
3.  Store buffer rebuilding will not filter out X (it checks InNewSpace(Y)).
4.  The current mark-sweep finishes. The slot X is in free space and is also in
    the store buffer.
5.  A string of length 9 "abcdefghi" is allocated in the new space. The string
    looks like |MAP|LENGTH|hgfedcba|NNNNNNNi| in memory, where NNNNNNN is
    previous garbage. Let's assume that NNNNNNN0 was pointing to a new space
    object before.
6.  Scavenge happens.
7.  Slot X is still in free space and in the store buffer. [It causes a
    scavenge of the object Y in
    store_buffer()->IteratePointersToNewSpace(&Scavenger::ScavengeObject), but
    that is not important.]
8.  Our string is promoted and allocated over the slot X, so that NNNNNNNi is
    written into X.
9.  The scavenge finishes.
10. Another scavenge starts.
11. We crash in
    store_buffer()->IteratePointersToNewSpace(&Scavenger::ScavengeObject) when
    processing slot X, because it does not point to a valid map (a toy model
    of this hazard is sketched below).
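To make the failure mode concrete, here is a minimal, self-contained toy model
of the scenario above. It is not V8 code; the types, names, and memory layout
are purely illustrative. A remembered slot address stays recorded after its
memory is reused, a promoted string's payload partially overwrites the slot,
and a later pass that dereferences the recorded address reads a value that is
no longer a pointer.

  // Toy model of a stale remembered-slot entry (illustrative only, not V8).
  #include <cstdint>
  #include <cstring>
  #include <iostream>
  #include <vector>

  int main() {
    // A "page" of old-space memory, viewed as words. words[2] plays the role
    // of the aborted slot X; it currently holds a (fake) pointer to Y.
    std::uint64_t words[8] = {};
    std::uint64_t* slot_x = &words[2];
    *slot_x = 0x1000;  // stands in for a pointer to the new-space object Y

    // The store buffer records the *address* of the slot, not its value.
    std::vector<std::uint64_t*> store_buffer = {slot_x};

    // Compaction of the page is aborted; slot X is now in free space, but
    // without filtering, its store buffer entry survives.

    // A promoted 9-character string is later allocated over this memory.
    // On a little-endian layout its payload overwrites the low byte of
    // slot X with 'i', while the other bytes keep whatever was there
    // before -- the |NNNNNNNi| pattern from the scenario above.
    unsigned char* raw = reinterpret_cast<unsigned char*>(words);
    std::memcpy(raw + 8, "abcdefghi", 9);

    // The next scavenge walks the store buffer and interprets whatever the
    // recorded slots contain as heap pointers; slot X no longer holds one.
    for (std::uint64_t* slot : store_buffer) {
      std::cout << "recorded slot now holds 0x" << std::hex << *slot
                << ", which is not a valid pointer\n";
    }
    return 0;
  }

The change below addresses this by filtering the existing store buffer entries
of aborted pages and marking those pages for rescanning
(set_scan_on_scavenge(true)), so no stale slot address survives into the next
scavenge.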

BUG=chromium:524425, chromium:564498
LOG=N
R=hpayer@chromium.org, ulan@chromium.org

Review URL: https://codereview.chromium.org/1494503004

Cr-Commit-Position: refs/heads/master@{#32514}
parent 535e221d
@@ -683,6 +683,8 @@ void MarkCompactCollector::CollectEvacuationCandidates(PagedSpace* space) {
   PageIterator it(space);
   while (it.has_next()) {
     Page* p = it.next();
+    // Invariant: No page should be marked as aborted after a GC.
+    DCHECK(!p->IsFlagSet(Page::COMPACTION_WAS_ABORTED));
     if (p->NeverEvacuate()) continue;
     if (p->IsFlagSet(Page::POPULAR_PAGE)) {
       // This page had slots buffer overflow on previous GC, skip it.
@@ -3278,8 +3280,13 @@ void MarkCompactCollector::EvacuatePagesInParallel() {
         //   happens upon moving (which we potentially didn't do).
         // - Leave the page in the list of pages of a space since we could not
         //   fully evacuate it.
+        // - Mark them for rescanning for store buffer entries as we otherwise
+        //   might have stale store buffer entries that become "valid" again
+        //   after reusing the memory. Note that all existing store buffer
+        //   entries of such pages are filtered before rescanning.
         DCHECK(p->IsEvacuationCandidate());
         p->SetFlag(Page::COMPACTION_WAS_ABORTED);
+        p->set_scan_on_scavenge(true);
         abandoned_pages++;
         break;
       case MemoryChunk::kCompactingFinalize:
@@ -3658,14 +3665,6 @@ void MarkCompactCollector::EvacuateNewSpaceAndCandidates() {
     heap_->IterateRoots(&updating_visitor, VISIT_ALL_IN_SWEEP_NEWSPACE);
   }
-  {
-    GCTracer::Scope gc_scope(heap()->tracer(),
-                             GCTracer::Scope::MC_UPDATE_OLD_TO_NEW_POINTERS);
-    StoreBufferRebuildScope scope(heap_, heap_->store_buffer(),
-                                  &Heap::ScavengeStoreBufferCallback);
-    heap_->store_buffer()->IteratePointersToNewSpace(&UpdatePointer);
-  }
   int npages = evacuation_candidates_.length();
   {
     GCTracer::Scope gc_scope(
@@ -3754,6 +3753,16 @@ void MarkCompactCollector::EvacuateNewSpaceAndCandidates() {
     SweepAbortedPages();
   }
+  {
+    // Note that this phase needs to happen after making aborted pages iterable
+    // in the previous (sweeping) phase.
+    GCTracer::Scope gc_scope(heap()->tracer(),
+                             GCTracer::Scope::MC_UPDATE_OLD_TO_NEW_POINTERS);
+    StoreBufferRebuildScope scope(heap_, heap_->store_buffer(),
+                                  &Heap::ScavengeStoreBufferCallback);
+    heap_->store_buffer()->IteratePointersToNewSpace(&UpdatePointer);
+  }
   heap_->isolate()->inner_pointer_to_code_cache()->Flush();
   // The hashing of weak_object_to_code_table is no longer valid.
@@ -328,7 +328,8 @@ class MemoryChunk {
     PRE_FREED,
     // |COMPACTION_WAS_ABORTED|: Indicates that the compaction in this page
-    // has been aborted and needs special handling by the sweeper.
+    // has been aborted and needs special handling, i.e., sweeper and store
+    // buffer.
     COMPACTION_WAS_ABORTED,
     // Last flag, keep at bottom.