Commit eb57c722 authored by Daniel Lehmann, committed by Commit Bot

[wasm] Add missing scopes for code modification

This is the second of two CLs (see crrev.com/c/2835237) to bring
write-protection to the WebAssembly code space. The previous CL changed
the page permissions from W^X (only one of writable or executable can
be active at a time, never both) to write-protection (due to concurrent
execution in the main thread). However, write-protection still did not
work, because in several places the code space was modified without
first switching it to writable.

This CL fixes --wasm-write-protect-code-memory such that it can be
enabled again (with potentially high overhead due to frequent page
protection switches). It does so by adding the missing switches to
writable via {NativeModuleModificationScope} objects (similar to the
already existing {CodeSpaceWriteScope} objects for Apple M1 hardware).
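
As an illustration only (this is not V8's actual implementation; the
region/size parameters and the direct use of mprotect() are assumptions
of the sketch), such a scope can be pictured as an RAII object that
makes the code pages writable on entry and restores the executable
protection on exit:

  #include <cassert>
  #include <cstddef>
  #include <sys/mman.h>

  // Minimal sketch of a code-write scope. Assumes {region} is
  // page-aligned and {size} is a multiple of the page size.
  class ScopedCodeWriteAccess {
   public:
    ScopedCodeWriteAccess(void* region, size_t size)
        : region_(region), size_(size) {
      // Make the pages writable so generated code can be patched.
      int ret = mprotect(region_, size_, PROT_READ | PROT_WRITE);
      assert(ret == 0);
      (void)ret;
    }
    ~ScopedCodeWriteAccess() {
      // Restore read+execute when leaving the scope. Each entry/exit is
      // a page-protection switch, which is where the potentially high
      // overhead mentioned above comes from.
      int ret = mprotect(region_, size_, PROT_READ | PROT_EXEC);
      assert(ret == 0);
      (void)ret;
    }

   private:
    void* region_;
    size_t size_;
  };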

This CL also fixes a race condition between checking the current write
permission and actually setting it, by protecting the counter of
currently active writers with the same lock as the {WasmCodeAllocator}
itself. (Before multi-threaded compilation, this was not necessary.)
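
A hedged sketch of that fix with simplified names (the real counter is
{writers_count_} in the {WasmCodeAllocator}, see the diff below;
{CodeAllocatorSketch} and {SwitchWritable} are made up for the
example): the writer count is read and updated under the same lock that
guards the permission switch, so two threads can no longer both observe
"not writable yet" and then race on switching:

  #include <mutex>

  class CodeAllocatorSketch {
   public:
    void AddWriter() {
      std::lock_guard<std::mutex> guard(mutex_);
      // Check and update under one lock: only the first writer actually
      // switches the pages to writable.
      if (writers_count_++ == 0) SwitchWritable(true);
    }
    void RemoveWriter() {
      std::lock_guard<std::mutex> guard(mutex_);
      // The last writer to leave restores write protection.
      if (--writers_count_ == 0) SwitchWritable(false);
    }

   private:
    void SwitchWritable(bool writable) {
      // Placeholder for the actual page-permission change.
    }

    std::mutex mutex_;
    int writers_count_ = 0;
  };

Only the first writer flips the pages to writable and only the last one
flips them back, which keeps the number of expensive protection changes
low even with many concurrent writers.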

Finally, this CL changes the {Mutex} protecting the {WasmCodeAllocator}
to a {RecursiveMutex}, because the same thread can acquire it multiple
times within one call hierarchy, which would deadlock with a regular
mutex. Since {TryLock()} on a {RecursiveMutex} already held by the same
thread never fails, the (now failing) DCHECKs are removed as well.
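
For illustration with standard C++ types (std::recursive_mutex stands
in for {base::RecursiveMutex}; the functions below are invented for the
example), recursive locking by the owning thread succeeds instead of
deadlocking, and its try_lock() also succeeds, so a check that relies
on TryLock() failing while the lock is held no longer works:

  #include <cassert>
  #include <mutex>

  std::recursive_mutex allocation_mutex;

  void Inner() {
    // Called while Outer() already holds the mutex; with a plain
    // std::mutex this second lock would deadlock.
    std::lock_guard<std::recursive_mutex> guard(allocation_mutex);
  }

  void Outer() {
    std::lock_guard<std::recursive_mutex> guard(allocation_mutex);
    Inner();
    // try_lock() on a recursive mutex already owned by this thread
    // succeeds (it just adds another ownership level), so an assertion
    // built on TryLock() failing while the lock is held always fires.
    bool acquired = allocation_mutex.try_lock();
    assert(acquired);
    if (acquired) allocation_mutex.unlock();
  }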

R=clemensb@chromium.org
CC=jkummerow@chromium.org

Bug: v8:11663
Change-Id: I4db27ad0a9348021b0b663dbe88b3432a4d8d6b5
Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_rel_ng
Cq-Include-Trybots: luci.v8.try:v8_linux64_tsan_isolates_rel_ng
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2835238
Commit-Queue: Daniel Lehmann <dlehmann@google.com>
Reviewed-by: Clemens Backes <clemensb@chromium.org>
Cr-Commit-Position: refs/heads/master@{#74163}
parent 896f9c23
@@ -164,6 +164,8 @@ class V8_BASE_EXPORT RecursiveMutex final {
   // successfully locked.
   bool TryLock() V8_WARN_UNUSED_RESULT;
 
+  V8_INLINE void AssertHeld() const { DCHECK_LT(0, level_); }
+
  private:
   // The implementation-defined native handle type.
 #if V8_OS_POSIX
[The diff of one file is collapsed and not shown here.]
@@ -459,6 +459,8 @@ class WasmCodeAllocator {
   DisjointAllocationPool freed_code_space_;
   std::vector<VirtualMemory> owned_code_space_;
 
+  int writers_count_{0};
+
   // End of fields protected by {mutex_}.
   ////////////////////////////////////////////////////////////////////////////
@@ -466,8 +468,6 @@ class WasmCodeAllocator {
   std::atomic<size_t> generated_code_size_{0};
   std::atomic<size_t> freed_code_size_{0};
-  bool is_writable_ = false;
-
   std::shared_ptr<Counters> async_counters_;
 };
@@ -576,7 +576,7 @@ class V8_EXPORT_PRIVATE NativeModule final {
   uint32_t GetFunctionIndexFromJumpTableSlot(Address slot_address) const;
 
   bool SetWritable(bool writable) {
-    base::MutexGuard guard{&allocation_mutex_};
+    base::RecursiveMutexGuard guard{&allocation_mutex_};
     return code_allocator_.SetWritable(writable);
   }
@@ -789,7 +789,12 @@ class V8_EXPORT_PRIVATE NativeModule final {
   std::unique_ptr<uint32_t[]> num_liftoff_function_calls_;
 
   // This mutex protects concurrent calls to {AddCode} and friends.
-  mutable base::Mutex allocation_mutex_;
+  // TODO(dlehmann): Revert this to a regular {Mutex} again.
+  // This needs to be a {RecursiveMutex} only because of
+  // {NativeModuleModificationScope} usages, which are (1) either at places
+  // that already hold the {allocation_mutex_} or (2) because of multiple open
+  // {NativeModuleModificationScope}s in the call hierarchy. Both are fixable.
+  mutable base::RecursiveMutex allocation_mutex_;
 
   ////////////////////////////////////////////////////////////////////////////
   // Protected by {allocation_mutex_}:
@@ -830,7 +835,6 @@ class V8_EXPORT_PRIVATE NativeModule final {
   // End of fields protected by {allocation_mutex_}.
   ////////////////////////////////////////////////////////////////////////////
 
-  int modification_scope_depth_ = 0;
   UseTrapHandler use_trap_handler_ = kNoTrapHandler;
   bool lazy_compile_frozen_ = false;
   std::atomic<size_t> liftoff_bailout_count_{0};
@@ -567,6 +567,8 @@ class CopyAndRelocTask : public JobTask {
   void Run(JobDelegate* delegate) override {
     CODE_SPACE_WRITE_SCOPE
+    NativeModuleModificationScope native_module_modification_scope(
+        deserializer_->native_module_);
     do {
       auto batch = from_queue_->Pop();
       if (batch.empty()) break;