Commit bed054c4 authored by mtrofin, committed by Commit bot

[turbofan] Splintering: special case deoptimizing blocks.

This avoids a whole-range traversal each time we encounter a deferred
block (or a succession of them). The traversal (in the removed
IsIntervalAlreadyExcluded) is unnecessary: a range with a liveness hole
covering the deferred blocks shouldn't be listed in the in/out sets of
those blocks in the first place.

It turns out the root cause (which made it look as if we had to
special-case ranges with holes, as the removed comment described) was
deferred blocks containing a deoptimization call. Such a block places
the live range in the block's in_set, but splitting would then fail
because the start and split positions coincide. Everywhere else, a
deferred block has at least one instruction besides the use, such as a
jump, ahead of which we perform the lower cut of the splintering. In
the usual case, this choice of position keeps moves off the hot path
(any moves land before the jump, but still inside the deferred block).

With a deoptimization call, that's not the case: the block contains
just one instruction, the deoptimization call itself. So we perform the
second cut of the splintering right after the block. Since there is no
control flow from the deoptimization block to any functional block
(control flow goes to the exit block), the range connector won't insert
moves on the hot path. We may still want to examine what happens at the
exit block, and perhaps teach the range connector to ignore control
flow that appears to come from blocks ending in deoptimization calls.

Review URL: https://codereview.chromium.org/1323473003

Cr-Commit-Position: refs/heads/master@{#30447}
parent 08ee2132
@@ -53,19 +53,6 @@ void AssociateDeferredBlockSequences(InstructionSequence *code) {
 }
 
-// If the live range has a liveness hole right between start and end,
-// we don't need to splinter it.
-bool IsIntervalAlreadyExcluded(const LiveRange *range, LifetimePosition start,
-                               LifetimePosition end) {
-  for (UseInterval *interval = range->first_interval(); interval != nullptr;
-       interval = interval->next()) {
-    if (interval->start() <= start && start < interval->end()) return false;
-    if (interval->start() < end && end <= interval->end()) return false;
-  }
-  return true;
-}
-
 void CreateSplinter(TopLevelLiveRange *range, RegisterAllocationData *data,
                     LifetimePosition first_cut, LifetimePosition last_cut) {
   DCHECK(!range->IsSplinter());
@@ -82,8 +69,6 @@ void CreateSplinter(TopLevelLiveRange *range, RegisterAllocationData *data,
   LifetimePosition start = Max(first_cut, range->Start());
   LifetimePosition end = Min(last_cut, range->End());
 
-  // Skip ranges that have a hole where the deferred block(s) are.
-  if (IsIntervalAlreadyExcluded(range, start, end)) return;
 
   if (start < end) {
     // Ensure the original range has a spill range associated, before it gets
@@ -130,8 +115,12 @@ void SplinterRangesInDeferredBlocks(RegisterAllocationData *data) {
       const BitVector *in_set = in_sets[block->rpo_number().ToInt()];
       InstructionBlock *last = code->InstructionBlockAt(last_deferred);
       const BitVector *out_set = LiveRangeBuilder::ComputeLiveOut(last, data);
-      last_cut = LifetimePosition::GapFromInstructionIndex(
-          last->last_instruction_index());
+      int last_index = last->last_instruction_index();
+      if (code->InstructionAt(last_index)->opcode() ==
+          ArchOpcode::kArchDeoptimize) {
+        ++last_index;
+      }
+      last_cut = LifetimePosition::GapFromInstructionIndex(last_index);
 
       BitVector ranges_to_splinter(*in_set, zone);
       ranges_to_splinter.Union(*out_set);