Commit 13136918 authored by Thibaud Michaud, committed by Commit Bot

[wasm] Deserialization: adjust batch size

Instead of processing batches with a fixed number of functions, process
batches with approximately the same number of bytes. This prevents
disproportionately large batches from blocking the pipeline.

R=ahaas@chromium.org

Bug: v8:11164
Change-Id: I7fe57abac13c5fb749a002e339c5a9b2dab607be
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2567699
Reviewed-by: Andreas Haas <ahaas@chromium.org>
Commit-Queue: Thibaud Michaud <thibaudm@chromium.org>
Cr-Commit-Position: refs/heads/master@{#71530}
parent f4638380
@@ -608,16 +608,19 @@ bool NativeModuleDeserializer::Read(Reader* reader) {
   auto batch = std::make_unique<std::vector<DeserializationUnit>>();
   int num_batches = 0;
+  const byte* batch_start = reader->current_location();
   for (uint32_t i = first_wasm_fn; i < total_fns; ++i) {
     DeserializationUnit unit = ReadCodeAndAlloc(i, reader);
     if (unit.code) {
       batch->push_back(std::move(unit));
     }
-    constexpr int kBatchSize = 100;
-    if (batch->size() == kBatchSize) {
+    uint64_t batch_size_in_bytes = reader->current_location() - batch_start;
+    constexpr int kMinBatchSizeInBytes = 100000;
+    if (batch_size_in_bytes >= kMinBatchSizeInBytes) {
       reloc_queue.Add(std::move(batch));
       num_batches++;
       batch = std::make_unique<std::vector<DeserializationUnit>>();
+      batch_start = reader->current_location();
     }
   }
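For illustration only, a minimal standalone C++ sketch of the same idea: flush a batch once it spans at least a fixed number of bytes rather than a fixed number of items, so a run of unusually large items cannot turn a fixed-count batch into a disproportionately large one. The Item struct, Flush function, and the smaller threshold below are hypothetical stand-ins invented for this sketch, not the NativeModuleDeserializer API; only the flush-on-byte-threshold pattern mirrors the patch above.

    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    // Hypothetical stand-in for a DeserializationUnit: only its byte size matters here.
    struct Item {
      std::size_t size_in_bytes;
    };

    // Hypothetical stand-in for handing a finished batch to the next pipeline stage.
    void Flush(std::vector<Item> batch, int* num_batches) {
      ++*num_batches;
      std::cout << "batch " << *num_batches << ": " << batch.size() << " items\n";
    }

    int main() {
      // Items of very different sizes, like wasm functions of varying length.
      std::vector<Item> input;
      for (int i = 0; i < 50; ++i) {
        input.push_back({i % 10 == 0 ? std::size_t{40000} : std::size_t{500}});
      }

      // Flush a batch once it covers at least this many bytes; the patch uses
      // kMinBatchSizeInBytes = 100000, a smaller value is used here for the demo.
      constexpr std::size_t kMinBatchSizeInBytes = 50000;

      std::vector<Item> batch;
      std::size_t batch_size_in_bytes = 0;
      int num_batches = 0;
      for (const Item& item : input) {
        batch.push_back(item);
        batch_size_in_bytes += item.size_in_bytes;
        if (batch_size_in_bytes >= kMinBatchSizeInBytes) {
          Flush(std::move(batch), &num_batches);
          batch.clear();  // reset the moved-from vector before reuse
          batch_size_in_bytes = 0;
        }
      }
      // Don't lose the trailing items that never reached the threshold.
      if (!batch.empty()) Flush(std::move(batch), &num_batches);
    }

Sizing batches by bytes bounds the amount of work handed to the next stage per batch even when item sizes vary by orders of magnitude, which is the property the commit message relies on.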