Commit 0f1fbfbe authored by Jakob Gruber, committed by V8 LUCI CQ

[osr] Refactor TieringManager::MaybeOptimizeFrame

This started out as a minor code move of the early-OSR logic, but
became a more general refactor of the tiering decisions.

Early-OSR: the intent here is to trigger OSR as soon as possible
when matching OSR'd code is already cached. Move this check out of
ShouldOptimize (since it has side effects) and into a dedicated
function that's called early in the decision process.

Note that with this change, we no longer trigger normal TF optimization
along with the OSR request - TF tiering heuristics are already complex
enough; let's not add yet another special case right now.
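
As a rough, self-contained model of that shape (stand-in types and
hypothetical helper names such as TryRequestOsrForCachedCode and the
kMaxOsrUrgency value - not the actual V8 classes or flow): the cache
check becomes its own early, side-effecting step, ShouldOptimize stays
a pure decision, and a successful early-OSR request returns without
also marking the function for Turbofan.

// Illustrative model only: hypothetical stand-in types, not V8 internals.
struct Function {
  bool has_cached_osr_code = false;  // Matching OSR'd code already compiled?
  int osr_urgency = 0;               // Higher urgency -> more loops trigger OSR.
  bool marked_for_turbofan = false;
};

constexpr int kMaxOsrUrgency = 6;  // Placeholder value.

// Dedicated, side-effecting early step: if OSR'd code is already cached,
// request OSR at the next opportunity by maxing out the urgency.
bool TryRequestOsrForCachedCode(Function& f) {
  if (!f.has_cached_osr_code) return false;
  f.osr_urgency = kMaxOsrUrgency;
  return true;
}

// Pure decision function; no side effects live here anymore.
bool ShouldOptimize(const Function& f) {
  return false;  // Heuristics elided (ticks, bytecode size, feedback, ...).
}

void MaybeOptimizeFrame(Function& f) {
  // Early-OSR is handled first; on success we return without also
  // requesting a normal Turbofan tier-up.
  if (TryRequestOsrForCachedCode(f)) return;
  if (ShouldOptimize(f)) f.marked_for_turbofan = true;
}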

Other refactors:

- Clarify terminology around OSR. None of the functions in TM actually
  perform OSR; instead, they only increase the OSR urgency, effectively
  increasing the set of loops that will trigger OSR compilation.
- Clarify the control flow through the tiering decisions. Notably,
  we only increment OSR urgency when normal tierup has previously been
  requested. Also, there is a bytecode size limit involved. These
  conditions were previously hidden inside other functions (see the
  sketch after this list).
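
A minimal sketch of that clarified flow, under assumed names and limits
(LoopShouldTriggerOsr, MaybeIncrementOsrUrgency, and the placeholder
constants are illustrations, not the real TieringManager code): urgency
only widens the set of loops whose JumpLoop may trigger OSR, and it is
only bumped once tier-up has already been requested and the bytecode is
not too large.

// Illustrative sketch only; names, limits, and structure are assumptions.
#include <cstddef>

struct Function {
  bool tierup_requested = false;  // Normal (non-OSR) optimization requested?
  int osr_urgency = 0;            // Loops with depth < urgency may trigger OSR.
  size_t bytecode_size = 0;
};

constexpr int kMaxOsrUrgency = 6;                     // Placeholder.
constexpr size_t kOsrBytecodeSizeLimit = 60 * 1024;   // Placeholder.

// A JumpLoop back edge requests OSR compilation only if the loop is
// shallow enough relative to the current urgency.
bool LoopShouldTriggerOsr(const Function& f, int loop_depth) {
  return loop_depth < f.osr_urgency;
}

void MaybeIncrementOsrUrgency(Function& f) {
  // Only bump urgency once normal tier-up has already been requested,
  // and only for functions under the bytecode size limit.
  if (!f.tierup_requested) return;
  if (f.bytecode_size > kOsrBytecodeSizeLimit) return;
  if (f.osr_urgency < kMaxOsrUrgency) ++f.osr_urgency;
}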

Bug: v8:12161
Change-Id: I8f58b4332bd9851c6b299655ce840555fb7efa92
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3529448
Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
Commit-Queue: Jakob Gruber <jgruber@chromium.org>
Cr-Commit-Position: refs/heads/main@{#79512}
parent 4557c3f4
@@ -32,8 +32,8 @@ class TieringManager {
   void NotifyICChanged() { any_ic_changed_ = true; }
 
-  void AttemptOnStackReplacement(UnoptimizedFrame* frame,
-                                 int nesting_levels = 1);
+  // After this request, the next JumpLoop will perform OSR.
+  void RequestOsrAtNextOpportunity(JSFunction function);
 
   // For use when a JSFunction is available.
   static int InterruptBudgetFor(Isolate* isolate, JSFunction function);
 
@@ -43,12 +43,10 @@
  private:
   // Make the decision whether to optimize the given function, and mark it for
   // optimization if the decision was 'yes'.
-  void MaybeOptimizeFrame(JSFunction function, JavaScriptFrame* frame,
+  // This function is also responsible for bumping the OSR urgency.
+  void MaybeOptimizeFrame(JSFunction function, UnoptimizedFrame* frame,
                           CodeKind code_kind);
-  // Potentially attempts OSR from and returns whether no other
-  // optimization attempts should be made.
-  bool MaybeOSR(JSFunction function, UnoptimizedFrame* frame);
 
   OptimizationDecision ShouldOptimize(JSFunction function, CodeKind code_kind,
                                       JavaScriptFrame* frame);
   void Optimize(JSFunction function, CodeKind code_kind,
...
@@ -12,15 +12,13 @@
 namespace v8 {
 namespace internal {
 
-// This enum are states that how many OSR code caches belong to a SFI. Without
-// this enum, need to check all OSR code cache entries to know whether a
-// JSFunction's SFI has OSR code cache. The enum value kCachedMultiple is for
-// doing time-consuming loop check only when the very unlikely state change
-// kCachedMultiple -> { kCachedOnce | kCachedMultiple }.
+// This enum is a performance optimization for accessing the OSR code cache -
+// we can skip cache iteration in many cases unless there are multiple entries
+// for a particular SharedFunctionInfo.
 enum OSRCodeCacheStateOfSFI : uint8_t {
-  kNotCached,       // Likely state, no OSR code cache
-  kCachedOnce,      // Unlikely state, one OSR code cache
-  kCachedMultiple,  // Very unlikely state, multiple OSR code caches
+  kNotCached,       // Likely state.
+  kCachedOnce,      // Unlikely state, one entry.
+  kCachedMultiple,  // Very unlikely state, multiple entries.
 };
 
 class V8_EXPORT OSROptimizedCodeCache : public WeakFixedArray {
...
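
To illustrate the intent of the new comment in the hunk above, here is a
small sketch of how such a per-SharedFunctionInfo state can short-circuit
lookups. The real cache is backed by a WeakFixedArray (as the class
declaration shows); the Cache and Entry types and the Lookup function
below are made up for illustration.

// Illustrative sketch only; types and functions here are assumptions.
#include <cstdint>
#include <optional>
#include <vector>

enum OSRCodeCacheStateOfSFI : uint8_t { kNotCached, kCachedOnce, kCachedMultiple };

struct Entry { int sfi_id; int osr_offset; int code_id; };

struct Cache {
  std::vector<Entry> entries;

  std::optional<int> Lookup(int sfi_id, int osr_offset,
                            OSRCodeCacheStateOfSFI state) const {
    // Fast path: this SFI has no cached OSR code, so skip iteration entirely.
    if (state == kNotCached) return std::nullopt;
    // Slower path: scan the entries. Rare, since most SFIs never cache OSR
    // code, and very few cache more than one entry.
    for (const Entry& e : entries) {
      if (e.sfi_id == sfi_id && e.osr_offset == osr_offset) return e.code_id;
    }
    return std::nullopt;
  }
};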
@@ -581,10 +581,8 @@ RUNTIME_FUNCTION(Runtime_OptimizeOsr) {
   function->MarkForOptimization(isolate, CodeKind::TURBOFAN,
                                 ConcurrencyMode::kNotConcurrent);
 
-  // Make the profiler arm all back edges in unoptimized code.
   if (it.frame()->is_unoptimized()) {
-    isolate->tiering_manager()->AttemptOnStackReplacement(
-        UnoptimizedFrame::cast(it.frame()), BytecodeArray::kMaxOsrUrgency);
+    isolate->tiering_manager()->RequestOsrAtNextOpportunity(*function);
   }
 
   return ReadOnlyRoots(isolate).undefined_value();
...