Commit 651f4cca authored by Dan Elphick, committed by Commit Bot

[interpreter] Optimize return bytecodes on arm

Tweaks AdvanceBytecodeOffsetOrReturn so that the sequence of (cmp, beq)+
instructions is converted to (cmp, cmpne+, beq), saving an instruction
for each return bytecode after the first. In practice this saves a
single instruction.

Bug: v8:9771
Change-Id: I7cf2d5ae27ff5495808792aa4c953b97c2bb5b71
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/1853246
Commit-Queue: Dan Elphick <delphick@chromium.org>
Reviewed-by: Santiago Aboy Solanes <solanes@chromium.org>
Cr-Commit-Position: refs/heads/master@{#64232}
parent f1565606
@@ -992,12 +992,18 @@ static void AdvanceBytecodeOffsetOrReturn(MacroAssembler* masm,
   __ bind(&process_bytecode);
   // Bailout to the return label if this is a return bytecode.
-#define JUMP_IF_EQUAL(NAME)                                                    \
-  __ cmp(bytecode, Operand(static_cast<int>(interpreter::Bytecode::k##NAME))); \
-  __ b(if_return, eq);
+  // Create cmp, cmpne, ..., cmpne to check for a return bytecode.
+  Condition flag = al;
+#define JUMP_IF_EQUAL(NAME)                                                    \
+  __ cmp(bytecode, Operand(static_cast<int>(interpreter::Bytecode::k##NAME)),  \
+         flag);                                                                \
+  flag = ne;
   RETURN_BYTECODE_LIST(JUMP_IF_EQUAL)
 #undef JUMP_IF_EQUAL
+  __ b(if_return, eq);
   // Otherwise, load the size of the current bytecode and advance the offset.
   __ ldr(scratch1, MemOperand(bytecode_size_table, bytecode, LSL, 2));
   __ add(bytecode_offset, bytecode_offset, scratch1);
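The flag-chaining trick in the hunk above can be modeled outside of assembly: on ARM, a cmpne is a compare that executes only while the Z flag is clear, so once any compare in the chain matches, the remaining compares are skipped and Z survives to the final beq. A minimal Python sketch of that semantics follows; the opcode values are hypothetical stand-ins for the entries of RETURN_BYTECODE_LIST, not real V8 opcodes.

```python
# Model of the cmp, cmpne, ..., cmpne, beq chain emitted by the patched
# JUMP_IF_EQUAL macro. The opcode values are made up for illustration.
RETURN_OPCODES = (0x95, 0xA5)

def is_return_bytecode(bytecode, return_opcodes=RETURN_OPCODES):
    z = False  # Z flag treated as clear on entry, so the first cmp
               # (emitted with condition "al") always executes.
    for opcode in return_opcodes:
        if not z:                    # "cmpne": runs only while Z is clear
            z = (bytecode == opcode)
    return z                         # the final "beq if_return" is taken iff Z is set
```

One compare executes per opcode plus a single branch at the end, instead of a compare-and-branch pair per opcode as in the old sequence.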