Commit b0cfb778 authored by Milad Farazmand, committed by Commit Bot

PPC/S390: [lite] Allocate feedback vectors lazily

Port: 7629afdb

Original Commit Message:

    Allocate feedback vectors lazily when the function's interrupt budget has
    reached a specified threshold. This cl introduces a new field in the
    ClosureFeedbackCellArray to track the interrupt budget for allocating
    feedback vectors. Using the interrupt budget on the bytecode array could
    cause problems when there are closures across native contexts and we may
    delay allocating feedback vectors in one of them causing unexpected
    performance cliffs. In the long term we may want to remove interrupt budget
    from bytecode array and use context specific budget for tiering up decisions
    as well.

Change-Id: I261a7f7cedbdaa3be2d0cf22bfa701598f749fd9
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/1539794
Reviewed-by: Junliang Yan <jyan@ca.ibm.com>
Commit-Queue: Junliang Yan <jyan@ca.ibm.com>
Cr-Commit-Position: refs/heads/master@{#60479}
parent dfc0100a
@@ -1101,11 +1101,17 @@ void Builtins::Generate_InterpreterEntryTrampoline(MacroAssembler* masm) {
   FrameScope frame_scope(masm, StackFrame::MANUAL);
   __ PushStandardFrame(closure);

-  // Reset code age.
-  __ mov(r8, Operand(BytecodeArray::kNoAgeBytecodeAge));
-  __ StoreByte(r8, FieldMemOperand(kInterpreterBytecodeArrayRegister,
-                                   BytecodeArray::kBytecodeAgeOffset),
-               r0);
+  // Reset code age and the OSR arming. The OSR field and BytecodeAgeOffset are
+  // 8-bit fields next to each other, so we could just optimize by writing a
+  // 16-bit. These static asserts guard our assumption is valid.
+  STATIC_ASSERT(BytecodeArray::kBytecodeAgeOffset ==
+                BytecodeArray::kOSRNestingLevelOffset + kCharSize);
+  STATIC_ASSERT(BytecodeArray::kNoAgeBytecodeAge == 0);
+  __ li(r8, Operand(0));
+  __ StoreHalfWord(r8,
+                   FieldMemOperand(kInterpreterBytecodeArrayRegister,
+                                   BytecodeArray::kOSRNestingLevelOffset),
+                   r0);

   // Load initial bytecode offset.
   __ mov(kInterpreterBytecodeOffsetRegister,
@@ -1152,11 +1152,17 @@ void Builtins::Generate_InterpreterEntryTrampoline(MacroAssembler* masm) {
   FrameScope frame_scope(masm, StackFrame::MANUAL);
   __ PushStandardFrame(closure);

-  // Reset code age.
-  __ mov(r1, Operand(BytecodeArray::kNoAgeBytecodeAge));
-  __ StoreByte(r1, FieldMemOperand(kInterpreterBytecodeArrayRegister,
-                                   BytecodeArray::kBytecodeAgeOffset),
-               r0);
+  // Reset code age and the OSR arming. The OSR field and BytecodeAgeOffset are
+  // 8-bit fields next to each other, so we could just optimize by writing a
+  // 16-bit. These static asserts guard our assumption is valid.
+  STATIC_ASSERT(BytecodeArray::kBytecodeAgeOffset ==
+                BytecodeArray::kOSRNestingLevelOffset + kCharSize);
+  STATIC_ASSERT(BytecodeArray::kNoAgeBytecodeAge == 0);
+  __ lghi(r1, Operand(0));
+  __ StoreHalfWord(r1,
+                   FieldMemOperand(kInterpreterBytecodeArrayRegister,
+                                   BytecodeArray::kOSRNestingLevelOffset),
+                   r0);

   // Load the initial bytecode offset.
   __ mov(kInterpreterBytecodeOffsetRegister,