1. 23 May, 2022 1 commit
  2. 13 May, 2022 1 commit
  3. 10 May, 2022 1 commit
  4. 03 May, 2022 1 commit
  5. 02 May, 2022 1 commit
    • Reland "Reland "[osr] Use the new OSR cache"" · 0e9a55d2
      Jakob Linke authored
      This is a reland of commit 91453880
      
      Fixed: properly reference the ClearedValue in CSA (i.e. without
      the cage_base upper 32 bits).
      
      Original change's description:
      > Reland "[osr] Use the new OSR cache"
      >
      > This is a reland of commit 91da3883
      >
      > Fixed: Use an X register for JumpIfCodeTIsMarkedForDeoptimization
      > on arm64.
      >
      > Original change's description:
      > > [osr] Use the new OSR cache
      > >
      > > This CL switches over our OSR system to be based on the feedback
      > > vector osr caches.
      > >
      > > - OSRing to Sparkplug is fully separated from OSR urgency. If
      > >   SP code exists, we simply jump to it, no need to maintain an
      > >   installation request.
      > > - Each JumpLoop checks its dedicated FeedbackVector cache slot.
      > >   If a valid target code object exists, we enter it *without*
      > >   calling into runtime to fetch the code object.
      > > - Finally, OSR urgency still remains as the heuristic for
      > >   requesting Turbofan OSR compile jobs. Note it no longer has a
      > >   double purpose of being a generic untargeted installation
      > >   request.
      > >
      > > With the new system in place, we can remove now-unnecessary
      > > hacks:
      > >
      > > - Early OSR tierup is replaced by the standard OSR system. Any
      > >   present OSR code is automatically entered.
      > > - The synchronous OSR compilation fallback is removed. With
      > >   precise installation (= per-JumpLoop-bytecode) we no longer
      > >   have the problem of 'getting unlucky' with JumpLoop/cache entry
      > >   mismatches. Execution has moved on while compiling? Simply spawn
      > >   a new concurrent compile job.
      > > - Remove the synchronous (non-OSR) Turbofan compile request now
      > >   that we always enter available OSR code as early as possible.
      > > - Tiering into Sparkplug no longer messes with OSR state.
      > >
      > > Bug: v8:12161
      > > Change-Id: I0a85e53d363504b7dac174dbaf69c03c35e66700
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3596167
      > > Commit-Queue: Jakob Linke <jgruber@chromium.org>
      > > Auto-Submit: Jakob Linke <jgruber@chromium.org>
      > > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Cr-Commit-Position: refs/heads/main@{#80147}
      >
      > Bug: v8:12161
      > Change-Id: Ib3597cf1d99cdb5d0f2c5ac18e311914f376231d
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3606232
      > Auto-Submit: Jakob Linke <jgruber@chromium.org>
      > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Cr-Commit-Position: refs/heads/main@{#80167}
      
      Bug: v8:12161,chromium:1320189
      Change-Id: Ibd9a2ab61f51ebb32a3f5a66f7c602faead71c3e
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3620273
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Commit-Queue: Jakob Linke <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#80306}
  6. 29 Apr, 2022 1 commit
    • Revert "Reland "[osr] Use the new OSR cache"" · 896f6e74
      Rohan Pavone authored
      This reverts commit 91453880.
      
      Reason for revert: Breaking the Fuchsia Deterministic Builder
      
      Original change's description:
      > Reland "[osr] Use the new OSR cache"
      >
      > This is a reland of commit 91da3883
      >
      > Fixed: Use an X register for JumpIfCodeTIsMarkedForDeoptimization
      > on arm64.
      >
      > Original change's description:
      > > [osr] Use the new OSR cache
      > >
      > > This CL switches over our OSR system to be based on the feedback
      > > vector osr caches.
      > >
      > > - OSRing to Sparkplug is fully separated from OSR urgency. If
      > >   SP code exists, we simply jump to it, no need to maintain an
      > >   installation request.
      > > - Each JumpLoop checks its dedicated FeedbackVector cache slot.
      > >   If a valid target code object exists, we enter it *without*
      > >   calling into runtime to fetch the code object.
      > > - Finally, OSR urgency still remains as the heuristic for
      > >   requesting Turbofan OSR compile jobs. Note it no longer has a
      > >   double purpose of being a generic untargeted installation
      > >   request.
      > >
      > > With the new system in place, we can remove now-unnecessary
      > > hacks:
      > >
      > > - Early OSR tierup is replaced by the standard OSR system. Any
      > >   present OSR code is automatically entered.
      > > - The synchronous OSR compilation fallback is removed. With
      > >   precise installation (= per-JumpLoop-bytecode) we no longer
      > >   have the problem of 'getting unlucky' with JumpLoop/cache entry
      > >   mismatches. Execution has moved on while compiling? Simply spawn
      > >   a new concurrent compile job.
      > > - Remove the synchronous (non-OSR) Turbofan compile request now
      > >   that we always enter available OSR code as early as possible.
      > > - Tiering into Sparkplug no longer messes with OSR state.
      > >
      > > Bug: v8:12161
      > > Change-Id: I0a85e53d363504b7dac174dbaf69c03c35e66700
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3596167
      > > Commit-Queue: Jakob Linke <jgruber@chromium.org>
      > > Auto-Submit: Jakob Linke <jgruber@chromium.org>
      > > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > > Cr-Commit-Position: refs/heads/main@{#80147}
      >
      > Bug: v8:12161
      > Change-Id: Ib3597cf1d99cdb5d0f2c5ac18e311914f376231d
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3606232
      > Auto-Submit: Jakob Linke <jgruber@chromium.org>
      > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Cr-Commit-Position: refs/heads/main@{#80167}
      
      Bug: v8:12161
      Change-Id: I73e2d98660e9edfbe07a152a14402380ea9227de
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3615219
      Reviewed-by: Deepti Gandluri <gdeepti@chromium.org>
      Commit-Queue: Deepti Gandluri <gdeepti@chromium.org>
      Owners-Override: Deepti Gandluri <gdeepti@chromium.org>
      Bot-Commit: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
      Cr-Commit-Position: refs/heads/main@{#80287}
  7. 27 Apr, 2022 1 commit
  8. 26 Apr, 2022 1 commit
    • Reland "[osr] Use the new OSR cache" · 91453880
      Jakob Gruber authored
      This is a reland of commit 91da3883
      
      Fixed: Use an X register for JumpIfCodeTIsMarkedForDeoptimization
      on arm64.
      
      Original change's description:
      > [osr] Use the new OSR cache
      >
      > This CL switches over our OSR system to be based on the feedback
      > vector osr caches.
      >
      > - OSRing to Sparkplug is fully separated from OSR urgency. If
      >   SP code exists, we simply jump to it, no need to maintain an
      >   installation request.
      > - Each JumpLoop checks its dedicated FeedbackVector cache slot.
      >   If a valid target code object exists, we enter it *without*
      >   calling into runtime to fetch the code object.
      > - Finally, OSR urgency still remains as the heuristic for
      >   requesting Turbofan OSR compile jobs. Note it no longer has a
      >   double purpose of being a generic untargeted installation
      >   request.
      >
      > With the new system in place, we can remove now-unnecessary
      > hacks:
      >
      > - Early OSR tierup is replaced by the standard OSR system. Any
      >   present OSR code is automatically entered.
      > - The synchronous OSR compilation fallback is removed. With
      >   precise installation (= per-JumpLoop-bytecode) we no longer
      >   have the problem of 'getting unlucky' with JumpLoop/cache entry
      >   mismatches. Execution has moved on while compiling? Simply spawn
      >   a new concurrent compile job.
      > - Remove the synchronous (non-OSR) Turbofan compile request now
      >   that we always enter available OSR code as early as possible.
      > - Tiering into Sparkplug no longer messes with OSR state.
      >
      > Bug: v8:12161
      > Change-Id: I0a85e53d363504b7dac174dbaf69c03c35e66700
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3596167
      > Commit-Queue: Jakob Linke <jgruber@chromium.org>
      > Auto-Submit: Jakob Linke <jgruber@chromium.org>
      > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Cr-Commit-Position: refs/heads/main@{#80147}
      
      Bug: v8:12161
      Change-Id: Ib3597cf1d99cdb5d0f2c5ac18e311914f376231d
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3606232
      Auto-Submit: Jakob Linke <jgruber@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#80167}
  9. 25 Apr, 2022 3 commits
    • Revert "[osr] Use the new OSR cache" · c34b7b41
      Nico Hartmann authored
      This reverts commit 91da3883.
      
      Reason for revert: https://ci.chromium.org/ui/p/v8/builders/ci/V8%20Linux64%20-%20arm64%20-%20sim%20-%20pointer%20compression%20-%20builder/21150/overview
      
      Original change's description:
      > [osr] Use the new OSR cache
      >
      > This CL switches over our OSR system to be based on the feedback
      > vector osr caches.
      >
      > - OSRing to Sparkplug is fully separated from OSR urgency. If
      >   SP code exists, we simply jump to it, no need to maintain an
      >   installation request.
      > - Each JumpLoop checks its dedicated FeedbackVector cache slot.
      >   If a valid target code object exists, we enter it *without*
      >   calling into runtime to fetch the code object.
      > - Finally, OSR urgency still remains as the heuristic for
      >   requesting Turbofan OSR compile jobs. Note it no longer has a
      >   double purpose of being a generic untargeted installation
      >   request.
      >
      > With the new system in place, we can remove now-unnecessary
      > hacks:
      >
      > - Early OSR tierup is replaced by the standard OSR system. Any
      >   present OSR code is automatically entered.
      > - The synchronous OSR compilation fallback is removed. With
      >   precise installation (= per-JumpLoop-bytecode) we no longer
      >   have the problem of 'getting unlucky' with JumpLoop/cache entry
      >   mismatches. Execution has moved on while compiling? Simply spawn
      >   a new concurrent compile job.
      > - Remove the synchronous (non-OSR) Turbofan compile request now
      >   that we always enter available OSR code as early as possible.
      > - Tiering into Sparkplug no longer messes with OSR state.
      >
      > Bug: v8:12161
      > Change-Id: I0a85e53d363504b7dac174dbaf69c03c35e66700
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3596167
      > Commit-Queue: Jakob Linke <jgruber@chromium.org>
      > Auto-Submit: Jakob Linke <jgruber@chromium.org>
      > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      > Cr-Commit-Position: refs/heads/main@{#80147}
      
      Bug: v8:12161
      Change-Id: I4a6955f4f20b6f3b13e98d5600c7c6a5205915bc
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3605608
      Auto-Submit: Nico Hartmann <nicohartmann@chromium.org>
      Owners-Override: Nico Hartmann <nicohartmann@chromium.org>
      Reviewed-by: Nico Hartmann <nicohartmann@chromium.org>
      Commit-Queue: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
      Bot-Commit: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
      Cr-Commit-Position: refs/heads/main@{#80148}
    • [osr] Use the new OSR cache · 91da3883
      Jakob Gruber authored
      This CL switches over our OSR system to be based on the feedback
      vector osr caches.
      
      - OSRing to Sparkplug is fully separated from OSR urgency. If
        SP code exists, we simply jump to it, no need to maintain an
        installation request.
      - Each JumpLoop checks its dedicated FeedbackVector cache slot.
        If a valid target code object exists, we enter it *without*
        calling into runtime to fetch the code object.
      - Finally, OSR urgency still remains as the heuristic for
        requesting Turbofan OSR compile jobs. Note it no longer has a
        double purpose of being a generic untargeted installation
        request.
      
      With the new system in place, we can remove now-unnecessary
      hacks:
      
      - Early OSR tierup is replaced by the standard OSR system. Any
        present OSR code is automatically entered.
      - The synchronous OSR compilation fallback is removed. With
        precise installation (= per-JumpLoop-bytecode) we no longer
        have the problem of 'getting unlucky' with JumpLoop/cache entry
        mismatches. Execution has moved on while compiling? Simply spawn
        a new concurrent compile job.
      - Remove the synchronous (non-OSR) Turbofan compile request now
        that we always enter available OSR code as early as possible.
      - Tiering into Sparkplug no longer messes with OSR state.
      
      Bug: v8:12161
      Change-Id: I0a85e53d363504b7dac174dbaf69c03c35e66700
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3596167
      Commit-Queue: Jakob Linke <jgruber@chromium.org>
      Auto-Submit: Jakob Linke <jgruber@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#80147}
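      A minimal C++ sketch of the per-JumpLoop fast path this CL describes. The types and helpers here (CachedCode, FeedbackSlot, enter_osr_code, request_osr_compile) are hypothetical stand-ins, not V8's actual CSA or assembler code; the point is only that cached OSR code is entered directly, without a runtime call, and the urgency heuristic is consulted only when no code is cached.
      ```
      #include <functional>
      #include <optional>

      // Hypothetical stand-ins for V8-internal types; not the real API.
      struct CachedCode {
        bool marked_for_deoptimization = false;
      };

      struct FeedbackSlot {
        // Cleared when the cached OSR code is invalidated.
        std::optional<CachedCode> osr_code;
      };

      constexpr int kMaxOsrUrgency = 7;  // Urgency fits in 3 bits.

      // Conceptual equivalent of the JumpLoop check: enter cached OSR code
      // directly (no runtime call), otherwise fall back to the urgency
      // heuristic that may request a concurrent Turbofan OSR compile job.
      void OnJumpLoop(FeedbackSlot& slot, int& osr_urgency, int loop_depth,
                      const std::function<void(const CachedCode&)>& enter_osr_code,
                      const std::function<void()>& request_osr_compile) {
        if (slot.osr_code && !slot.osr_code->marked_for_deoptimization) {
          enter_osr_code(*slot.osr_code);  // Precise, per-JumpLoop installation.
          return;
        }
        if (osr_urgency < kMaxOsrUrgency) ++osr_urgency;
        if (osr_urgency > loop_depth) {
          request_osr_compile();
        }
      }
      ```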
    • Reland "[interpreter] Optimize strict equal boolean" · fce1047f
      jameslahm authored
      This is a reland of commit 62632c08.
      Reason for previous revert: Performance regressions crbug.com/1315724.
      The reland only optimizes strict-equal comparisons against a boolean
      literal like "a===true" or "a===false", for which we generate
      TestReferenceEqual rather than TestStrictEqual. It also adds a typed
      optimization for ReferenceEqual when all inputs are boolean and one of
      them is a boolean constant.
      
      Original change's description:
      > [interpreter] Optimize strict equal boolean
      >
      > For strict equal boolean literal like "a===true"
      > or "a===false", we could generate TestReferenceEqual
      > rather than TestStrictEqual. And in `execution_result()->IsTest()`
      > case, we could directly emit JumpIfTrue/JumpIfFalse.
      >
      > E.g.
      > ```
      > a === true
      > ```
      > Generated Bytecode From:
      > ```
      > LdaGlobal
      > Star1
      > LdaTrue
      > TestEqualStrict
      > ```
      > To:
      > ```
      > LdaGlobal
      > Star1
      > LdaTrue
      > TestReferenceEqual
      > ```
      >
      > E.g.
      > ```
      > if (a === true)
      > ```
      > Generated Bytecode From:
      > ```
      > LdaGlobal
      > Star1
      > LdaTrue
      > TestEqualStrict
      > JumpIfFalse
      > ```
      > To
      > ```
      > LdaGlobal
      > JumpIfTrue
      > Jump
      > ```
      >
      >
      > Bug: v8:6403
      > Change-Id: Ieaca147acd2d523ac0d2466e7861afb2d29a1310
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3568923
      > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      > Commit-Queue: 王澳 <wangao.james@bytedance.com>
      > Cr-Commit-Position: refs/heads/main@{#79935}
      
      Bug: v8:6403
      Change-Id: I2ae3ab57dce85313af200fa522e3632af5c3a554
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3592039
      Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Commit-Queue: Jakob Linke <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#80141}
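      Reference equality suffices here because true and false are singleton values, so a strict comparison against a boolean literal reduces to an identity check. A hedged illustration using a made-up tagged-value model (HeapObject, Value, and the singleton names are invented, not V8's real object layout):
      ```
      #include <cassert>

      // Toy model: each boolean is a unique singleton object, so "x === true"
      // only needs to compare references, never values.
      struct HeapObject {};
      struct Value { const HeapObject* ptr; };

      const HeapObject kTrueSingleton{};
      const HeapObject kFalseSingleton{};

      // Conceptual TestReferenceEqual against the 'true' literal.
      inline bool StrictEqualToTrue(Value v) { return v.ptr == &kTrueSingleton; }

      int main() {
        Value a{&kTrueSingleton};
        Value b{&kFalseSingleton};
        assert(StrictEqualToTrue(a));
        assert(!StrictEqualToTrue(b));
        return 0;
      }
      ```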
  10. 22 Apr, 2022 1 commit
  11. 20 Apr, 2022 1 commit
  12. 19 Apr, 2022 1 commit
    • Revert "[interpreter] Optimize strict equal boolean" · 3b772a23
      Jakob Linke authored
      This reverts commit 62632c08.
      
      Reason for revert: Performance regressions crbug.com/1315724
      
      Original change's description:
      > [interpreter] Optimize strict equal boolean
      >
      > For strict equal boolean literal like "a===true"
      > or "a===false", we could generate TestReferenceEqual
      > rather than TestStrictEqual. And in `execution_result()->IsTest()`
      > case, we could directly emit JumpIfTrue/JumpIfFalse.
      >
      > E.g.
      > ```
      > a === true
      > ```
      > Generated Bytecode From:
      > ```
      > LdaGlobal
      > Star1
      > LdaTrue
      > TestEqualStrict
      > ```
      > To:
      > ```
      > LdaGlobal
      > Star1
      > LdaTrue
      > TestReferenceEqual
      > ```
      >
      > E.g.
      > ```
      > if (a === true)
      > ```
      > Generated Bytecode From:
      > ```
      > LdaGlobal
      > Star1
      > LdaTrue
      > TestEqualStrict
      > JumpIfFalse
      > ```
      > To
      > ```
      > LdaGlobal
      > JumpIfTrue
      > Jump
      > ```
      >
      >
      > Bug: v8:6403
      > Change-Id: Ieaca147acd2d523ac0d2466e7861afb2d29a1310
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3568923
      > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      > Commit-Queue: 王澳 <wangao.james@bytedance.com>
      > Cr-Commit-Position: refs/heads/main@{#79935}
      
      Bug: v8:6403, chromium:1315724
      Change-Id: I65c520590093724e838f738c795d229687efb9de
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3592752
      Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Commit-Queue: Jakob Linke <jgruber@chromium.org>
      Bot-Commit: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
      Cr-Commit-Position: refs/heads/main@{#80010}
  13. 14 Apr, 2022 1 commit
  14. 13 Apr, 2022 2 commits
  15. 12 Apr, 2022 1 commit
    • [interpreter] Optimize strict equal boolean · 62632c08
      jameslahm authored
      For a strict-equal comparison against a boolean literal like "a===true"
      or "a===false", we could generate TestReferenceEqual
      rather than TestStrictEqual. In the `execution_result()->IsTest()`
      case, we could directly emit JumpIfTrue/JumpIfFalse.
      
      E.g.
      ```
      a === true
      ```
      Generated Bytecode From:
      ```
      LdaGlobal
      Star1
      LdaTrue
      TestEqualStrict
      ```
      To:
      ```
      LdaGlobal
      Star1
      LdaTrue
      TestReferenceEqual
      ```
      
      E.g.
      ```
      if (a === true)
      ```
      Generated Bytecode From:
      ```
      LdaGlobal
      Star1
      LdaTrue
      TestEqualStrict
      JumpIfFalse
      ```
      To
      ```
      LdaGlobal
      JumpIfTrue
      Jump
      ```
      
      
      Bug: v8:6403
      Change-Id: Ieaca147acd2d523ac0d2466e7861afb2d29a1310
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3568923
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      Commit-Queue: 王澳 <wangao.james@bytedance.com>
      Cr-Commit-Position: refs/heads/main@{#79935}
  16. 11 Apr, 2022 1 commit
    • Reland "[osr] Add an install-by-offset mechanism" · b8473c52
      Jakob Gruber authored
      This is a reland of commit 51b99213
      
      Fixed in reland:
      - bytecode_age was incorrectly still accessed as an int8 (instead
        of int16).
      - age and osr state were incorrectly reset on ia32 (16-bit write
        instead of 32-bit).
      
      Original change's description:
      > [osr] Add an install-by-offset mechanism
      >
      > .. for concurrent OSR. There, the challenge is to hit the correct
      > JumpLoop bytecode once compilation completes, since execution has
      > moved on in the meantime.
      >
      > This CL adds a new mechanism to request installation at a specific
      > bytecode offset. We add a new `osr_install_target` field to the
      > BytecodeArray:
      >
      >   bitfield struct OSRUrgencyAndInstallTarget extends uint16 {
      >     osr_urgency: uint32: 3 bit;
      >     osr_install_target: uint32: 13 bit;
      >   }
      >
      >   // [...]
      >   osr_urgency_and_install_target: OSRUrgencyAndInstallTarget;
      >   bytecode_age: uint16;  // Only 3 bits used.
      >   // [...]
      >
      > Note urgency and install target are packed into one 16 bit field,
      > we can thus merge both checks into one comparison within JumpLoop.
      > Note also that these fields are adjacent to the bytecode age; we
      > still reset both OSR state and age with a single (now 32-bit)
      > store.
      >
      > The install target is the lowest 13 bits of the bytecode offset.
      > When set, every reached JumpLoop will check `is this my offset?`,
      > and if yes, jump into runtime to tier up.
      >
      > Drive-by: Rename BaselineAssembler::LoadByteField to LoadWord8Field.
      >
      > Bug: v8:12161
      > Change-Id: I275d468b19df3a4816392a2fec0713a8d211ef80
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3571812
      > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > Commit-Queue: Jakob Linke <jgruber@chromium.org>
      > Cr-Commit-Position: refs/heads/main@{#79853}
      
      Bug: v8:12161
      Change-Id: I7c59b2a2aacb1d7d40fdf39396ec9d8d48b0b9ac
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3578543
      Reviewed-by: Clemens Backes <clemensb@chromium.org>
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Commit-Queue: Jakob Linke <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#79911}
  17. 07 Apr, 2022 2 commits
    • Revert "[osr] Add an install-by-offset mechanism" · bb5cc0d5
      Leszek Swirski authored
      This reverts commit 51b99213.
      
      Reason for revert: Speculative revert for MSAN failure  https://ci.chromium.org/ui/p/v8/builders/ci/V8%20Linux%20-%20arm64%20-%20sim%20-%20MSAN/43080/overview
      
      Original change's description:
      > [osr] Add an install-by-offset mechanism
      >
      > .. for concurrent OSR. There, the challenge is to hit the correct
      > JumpLoop bytecode once compilation completes, since execution has
      > moved on in the meantime.
      >
      > This CL adds a new mechanism to request installation at a specific
      > bytecode offset. We add a new `osr_install_target` field to the
      > BytecodeArray:
      >
      >   bitfield struct OSRUrgencyAndInstallTarget extends uint16 {
      >     osr_urgency: uint32: 3 bit;
      >     osr_install_target: uint32: 13 bit;
      >   }
      >
      >   // [...]
      >   osr_urgency_and_install_target: OSRUrgencyAndInstallTarget;
      >   bytecode_age: uint16;  // Only 3 bits used.
      >   // [...]
      >
      > Note urgency and install target are packed into one 16 bit field,
      > we can thus merge both checks into one comparison within JumpLoop.
      > Note also that these fields are adjacent to the bytecode age; we
      > still reset both OSR state and age with a single (now 32-bit)
      > store.
      >
      > The install target is the lowest 13 bits of the bytecode offset.
      > When set, every reached JumpLoop will check `is this my offset?`,
      > and if yes, jump into runtime to tier up.
      >
      > Drive-by: Rename BaselineAssembler::LoadByteField to LoadWord8Field.
      >
      > Bug: v8:12161
      > Change-Id: I275d468b19df3a4816392a2fec0713a8d211ef80
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3571812
      > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > Commit-Queue: Jakob Linke <jgruber@chromium.org>
      > Cr-Commit-Position: refs/heads/main@{#79853}
      
      Bug: v8:12161
      Change-Id: I0c47499544465c80b5b23a492c00ec1c62815caa
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3576121
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Owners-Override: Leszek Swirski <leszeks@chromium.org>
      Commit-Queue: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
      Bot-Commit: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
      Cr-Commit-Position: refs/heads/main@{#79855}
    • [osr] Add an install-by-offset mechanism · 51b99213
      Jakob Gruber authored
      .. for concurrent OSR. There, the challenge is to hit the correct
      JumpLoop bytecode once compilation completes, since execution has
      moved on in the meantime.
      
      This CL adds a new mechanism to request installation at a specific
      bytecode offset. We add a new `osr_install_target` field to the
      BytecodeArray:
      
        bitfield struct OSRUrgencyAndInstallTarget extends uint16 {
          osr_urgency: uint32: 3 bit;
          osr_install_target: uint32: 13 bit;
        }
      
        // [...]
        osr_urgency_and_install_target: OSRUrgencyAndInstallTarget;
        bytecode_age: uint16;  // Only 3 bits used.
        // [...]
      
      Note urgency and install target are packed into one 16 bit field,
      we can thus merge both checks into one comparison within JumpLoop.
      Note also that these fields are adjacent to the bytecode age; we
      still reset both OSR state and age with a single (now 32-bit)
      store.
      
      The install target is the lowest 13 bits of the bytecode offset.
      When set, every reached JumpLoop will check `is this my offset?`,
      and if yes, jump into runtime to tier up.
      
      Drive-by: Rename BaselineAssembler::LoadByteField to LoadWord8Field.
      
      Bug: v8:12161
      Change-Id: I275d468b19df3a4816392a2fec0713a8d211ef80
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3571812
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Commit-Queue: Jakob Linke <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#79853}
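      A hedged C++ sketch of the packing and the JumpLoop-side check described in this CL. The helper names and the two separate comparisons are illustrative only; the real implementation folds both checks into a single comparison and uses V8's BytecodeArray accessors.
      ```
      #include <cstdint>

      constexpr uint32_t kUrgencyBits = 3;
      constexpr uint32_t kUrgencyMask = (1u << kUrgencyBits) - 1;  // 0b111
      constexpr uint32_t kInstallTargetBits = 13;
      constexpr uint32_t kInstallTargetMask = (1u << kInstallTargetBits) - 1;

      // osr_urgency occupies the low 3 bits, osr_install_target the next 13
      // (the low 13 bits of the requested JumpLoop's bytecode offset).
      inline uint16_t PackOsrState(uint32_t urgency, uint32_t bytecode_offset) {
        uint32_t target = bytecode_offset & kInstallTargetMask;
        return static_cast<uint16_t>((target << kUrgencyBits) |
                                     (urgency & kUrgencyMask));
      }

      // Each JumpLoop asks: does the urgency heuristic fire, or is this the
      // specific offset an installation was requested for?
      // (A zero target can be reserved to mean "no installation requested".)
      inline bool JumpLoopShouldEnterRuntime(uint16_t osr_state,
                                             uint32_t my_offset,
                                             uint32_t my_loop_depth) {
        uint32_t urgency = osr_state & kUrgencyMask;
        uint32_t target = osr_state >> kUrgencyBits;
        return urgency > my_loop_depth ||
               target == (my_offset & kInstallTargetMask);
      }

      // Because the 16-bit OSR state sits next to the 16-bit bytecode_age,
      // both can still be reset with a single 32-bit store.
      inline void ResetOsrStateAndAge(uint32_t* osr_state_and_age) {
        *osr_state_and_age = 0;
      }
      ```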
  18. 31 Mar, 2022 1 commit
    • [maglev] Add lazy deopts · 0df9606d
      Leszek Swirski authored
      Nodes can now hold a LazyDeoptSafepoint which stores the frame state in
      case they trigger a lazy deopt. OpProperties have a new CanLazyDeopt
      bit, and codegen emits a safepoint table entry + lazy deopt for all
      nodes with this bit. Also, we now check the deoptimized code bit on
      entry into the maglev compiled function.
      
      An example use of these lazy deopts is added as a PropertyCell fast path
      for LdaGlobal, which adds a code dependency on the property cell.
      
      Bug: v8:7700
      Change-Id: I663db38dfa7325d38fc6d5f079d263a958074e36
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3557251
      Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      Reviewed-by: Jakob Linke <jgruber@chromium.org>
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#79688}
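      A conceptual sketch of the shape of this change. FrameState, OpProperties, and Node below are simplified stand-ins rather than Maglev's real classes: nodes whose properties include a CanLazyDeopt bit carry the frame state needed to rebuild the interpreter frame, and codegen records safepoint/lazy-deopt info only for them.
      ```
      #include <cstdint>
      #include <optional>
      #include <vector>

      // Simplified stand-ins for illustration only.
      struct FrameState {
        int bytecode_offset = 0;
        std::vector<int> live_values;  // Whatever is needed to rebuild the frame.
      };

      class OpProperties {
       public:
        static constexpr uint32_t kCanLazyDeopt = 1u << 0;
        explicit OpProperties(uint32_t bits) : bits_(bits) {}
        bool can_lazy_deopt() const { return (bits_ & kCanLazyDeopt) != 0; }

       private:
        uint32_t bits_;
      };

      struct Node {
        OpProperties properties;
        // The "LazyDeoptSafepoint": present only for nodes that can lazy-deopt.
        std::optional<FrameState> lazy_deopt_safepoint;
      };

      // During code generation, nodes with the bit set get a safepoint table
      // entry plus lazy-deopt metadata derived from their recorded frame state.
      void RecordLazyDeoptIfNeeded(const Node& node,
                                   std::vector<FrameState>& lazy_deopt_table) {
        if (node.properties.can_lazy_deopt() && node.lazy_deopt_safepoint) {
          lazy_deopt_table.push_back(*node.lazy_deopt_safepoint);
        }
      }
      ```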
  19. 23 Mar, 2022 1 commit
  20. 17 Mar, 2022 1 commit
  21. 14 Mar, 2022 1 commit
  22. 10 Mar, 2022 2 commits
  23. 09 Mar, 2022 1 commit
  24. 08 Mar, 2022 1 commit
    • [ic] name Set/Define/Store property operations more consistently · 0d1ffe30
      Joyee Cheung authored
      For background and reasoning, see
      https://docs.google.com/document/d/1jvSEvXFHRkxg4JX-j6ho3nRqAF8vZI2Ai7RI8AY54gM/edit
      This is the first step towards pulling the DefineNamedOwn operation out
      of StoreIC.
      
      Summary of the renamed identifiers:
      
      Bytecodes:
      
      - StaNamedProperty -> SetNamedProperty: calls StoreIC and emitted for
        normal named property sets like obj.x = 1.
      - StaNamedOwnProperty -> DefineNamedOwnProperty: calls
        DefineNamedOwnIC (previously StoreOwnIC), and emitted for
        initialization of named properties in object literals and named
        public class fields.
      - StaKeyedProperty -> SetKeyedProperty: calls KeyedStoreIC and emitted
        for keyed property sets like obj[x] = 1.
      - StaKeyedPropertyAsDefine -> DefineKeyedOwnProperty: calls
        DefineKeyedOwnIC (previously KeyedDefineOwnIC) and emitted for
        initialization of private class fields and computed public class
        fields.
      - StaDataPropertyInLiteral -> DefineKeyedOwnPropertyInLiteral: calls
        DefineKeyedOwnPropertyInLiteral runtime function (previously
        DefineDataPropertyInLiteral) and emitted for initialization of keyed
        properties in object literals and static class initializers. (note
        that previously the StoreDataPropertyInLiteral runtime function name
        was taken by object spreads and array literal creation instead)
      - LdaKeyedProperty -> GetKeyedProperty, LdaNamedProperty ->
        GetNamedProperty, LdaNamedPropertyFromSuper ->
        GetNamedPropertyFromSuper: we drop the Sta prefix for the property
        store operations since the accumulator use is implicit and to make
        the wording more natural, for symmetry the Lda prefix for the
        property load operations is also dropped.
      
      opcodes:
      
      - (JS)StoreNamed -> (JS)SetNamedProperty: implements set semantics for
        named properties, compiled from SetNamedProperty (previously
        StaNamedProperty) and lowers to StoreIC or Runtime::kSetNamedProperty
      - (JS)StoreNamedOwn -> (JS)DefineNamedOwnProperty: implements define
        semantics for initializing named own properties in object literal and
        public class fields, compiled from DefineNamedOwnProperty (previously
        StaNamedOwnProperty) and lowers to DefineNamedOwnIC
        (previously StoreOwnIC)
      - (JS)StoreProperty -> (JS)SetKeyedProperty: implements set semantics
        for keyed properties, only compiled from SetKeyedProperty(previously
        StaKeyedProperty) and lowers to KeyedStoreIC
      - (JS)DefineProperty -> (JS)DefineKeyedOwnProperty: implements define
        semantics for initialization of private class fields and computed
        public class fields, compiled from DefineKeyedOwnProperty (previously
        StaKeyedPropertyAsDefine) and calls DefineKeyedOwnIC (previously
        KeyedDefineOwnIC).
      - (JS)StoreDataPropertyInLiteral ->
        (JS)DefineKeyedOwnPropertyInLiteral: implements define semantics for
        initialization of keyed properties in object literals and static
        class initializers, compiled from DefineKeyedOwnPropertyInLiteral
        (previously StaDataPropertyInLiteral) and calls the
        DefineKeyedOwnPropertyInLiteral runtime function (previously
        DefineDataPropertyInLiteral).
      
      Runtime:
      - DefineDataPropertyInLiteral -> DefineKeyedOwnPropertyInLiteral:
        following the bytecode/opcodes change, this is used by
        DefineKeyedOwnPropertyInLiteral (previously StaDataPropertyInLiteral)
        for object and class literal initialization.
      - StoreDataPropertyInLiteral -> DefineKeyedOwnPropertyInLiteral_Simple:
        it's just a simplified version of DefineDataPropertyInLiteral that
        does not update feedback or perform function name configuration.
        This is used by object spread and array literal creation. Since we
        are renaming DefineDataPropertyInLiteral to
        DefineKeyedOwnPropertyInLiteral, rename this simplified version with
        a `_Simple` suffix. We can consider merging it into
        DefineKeyedOwnPropertyInLiteral in the future. See
        https://docs.google.com/document/d/1jvSEvXFHRkxg4JX-j6ho3nRqAF8vZI2Ai7RI8AY54gM/edit?disco=AAAAQQIz6mU
      - Other changes following the bytecode/IR changes
      
      IC:
      
      - StoreOwn -> DefineNamedOwn: used for initialization of named
        properties in object literals and named public class fields.
        - StoreOwnIC -> DefineNamedOwnIC
        - StoreMode::kStoreOwn -> StoreMode::kDefineNamedOwn
        - StoreICMode::kStoreOwn -> StoreICMode::kDefineNamedOwn
        - IsStoreOwn() -> IsDefineNamedOwn()
      - DefineOwn -> DefineKeyedOwn: IsDefineOwnIC() was already just
        IsDefineKeyedOwnIC(), and IsAnyDefineOwn() includes both named and
        keyed defines so we don't need an extra generic predicate.
        - StoreMode::kDefineOwn -> StoreMode::kDefineKeyedOwn
        - StoreICMode::kDefineOwn -> StoreICMode::kDefineKeyedOwn
        - IsDefineOwn() -> IsDefineKeyedOwn()
        - IsDefineOwnIC() -> IsDefineKeyedOwnIC()
        - Removing IsKeyedDefineOwnIC() as its now a duplicate of
          IsDefineKeyedOwnIC()
      - KeyedDefineOwnIC -> DefineKeyedOwnIC,
        KeyedDefineOwnGenericGenerator() -> DefineKeyedOwnGenericGenerator:
        make the ordering of terms more consistent
      - IsAnyStoreOwn() -> IsAnyDefineOwn(): this includes the renamed and
        DefineNamedOwn and DefineKeyedOwn. Also is_any_store_own() is
        removed since it's just a duplicate of this.
      - IsKeyedStoreOwn() -> IsDefineNamedOwn(): it's unclear where the
        "keyed" part came from, but it's only used when DefineNamedOwnIC
        (previously StoreOwnIC) reuses KeyedStoreIC, so rename it accordingly
      
      Interpreter & compiler:
      - BytecodeArrayBuilder: following bytecode changes
          - StoreNamedProperty -> SetNamedProperty
        - StoreNamedOwnProperty -> DefineNamedOwnProperty
        - StoreKeyedProperty -> SetKeyedProperty
        - DefineKeyedProperty -> DefineKeyedOwnProperty
        - StoreDataPropertyInLiteral -> DefineKeyedOwnPropertyInLiteral
      - FeedbackSlotKind:
        - kDefineOwnKeyed -> kDefineKeyedOwn: make the ordering of terms more
          consistent
        - kStoreOwnNamed -> kDefineNamedOwn: following the IC change
        - kStoreNamed{Sloppy|Strict} -> kSetNamed{Sloppy|Strict}: only
          used in StoreIC for set semantics
        - kStoreKeyed{Sloppy|Strict} -> kSetKeyed{Sloppy|Strict}: only used
          in KeyedStoreIC for set semantics
        - kStoreDataPropertyInLiteral -> kDefineKeyedOwnPropertyInLiteral:
          following the IC change
      - BytecodeGraphBuilder
        - StoreMode::kNormal, kOwn -> NamedStoreMode::kSet, kDefineOwn: this
          is only used by BytecodeGraphBuilder::BuildNamedStore() to tell the
          difference between SetNamedProperty and DefineNamedOwnProperty
          operations.
      
      Not changed:
      
      - StoreIC and KeyedStoreIC currently contain mixed logic for both Set
        and Define operations, and the paths are controlled by feedback. The
        plan is to refactor the hierarchy like this:
        ```
        - StoreIC
          - DefineNamedOwnIC
          - SetNamedIC (there could also be a NamedStoreIC if that's helpful)
          - KeyedStoreIC
            - SetKeyedIC
            - DefineKeyedOwnIC
            - DefineKeyedOwnICLiteral (could be merged into DefineKeyedOwnIC)
            - StoreInArrayLiteralIC
          - ...
        ```
        StoreIC and KeyedStoreIC would then contain helpers shared by their
        subclasses, therefore it still makes sense to keep the word "Store"
        in their names since they would be generic base classes for both set
        and define operations.
      - The Lda and Sta prefixes of bytecodes not involving object properties
        (e.g. Ldar, Star, LdaZero) are kept, since this patch focuses on
        property operations, and distinction between Set and Define might be
        less relevant or nonexistent for bytecodes not involving object
        properties. We could consider rename some of them in future patches
        if that's helpful though.
      
      Bug: v8:12548
      Change-Id: Ia36997b02f59a87da3247f20e0560a7eb13077f3
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3481475
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      Reviewed-by: Igor Sheludko <ishell@chromium.org>
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Reviewed-by: Shu-yu Guo <syg@chromium.org>
      Reviewed-by: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      Commit-Queue: Joyee Cheung <joyee@igalia.com>
      Cr-Commit-Position: refs/heads/main@{#79409}
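      The renames above all hinge on the distinction between "set" semantics (which consult the prototype chain and may invoke setters) and "define" semantics (which unconditionally create an own data property, as in object literals and class fields). A toy C++ model of that distinction; ToyObject and its helpers are invented for illustration and are unrelated to V8's real object model:
      ```
      #include <functional>
      #include <map>
      #include <string>

      // Invented for illustration; not V8's object model.
      struct ToyObject {
        std::map<std::string, int> own_properties;
        std::map<std::string, std::function<void(int)>> setters;
        ToyObject* prototype = nullptr;
      };

      // "Set" semantics (SetNamedProperty / StoreIC): walk the prototype chain;
      // an inherited setter intercepts the assignment.
      void SetProperty(ToyObject& obj, const std::string& name, int value) {
        for (ToyObject* o = &obj; o != nullptr; o = o->prototype) {
          if (o->own_properties.count(name)) break;  // Plain data property found.
          auto it = o->setters.find(name);
          if (it != o->setters.end()) {
            it->second(value);  // obj.x = 1 can run an inherited setter.
            return;
          }
        }
        obj.own_properties[name] = value;  // Otherwise assign on the receiver.
      }

      // "Define" semantics (DefineNamedOwnProperty / DefineNamedOwnIC): always
      // creates an own data property and ignores setters on the prototype chain.
      void DefineOwnProperty(ToyObject& obj, const std::string& name, int value) {
        obj.own_properties[name] = value;
      }
      ```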
  25. 03 Mar, 2022 1 commit
  26. 24 Feb, 2022 1 commit
  27. 23 Feb, 2022 1 commit
  28. 15 Feb, 2022 2 commits
  29. 11 Feb, 2022 1 commit
    • [compiler] Change liveness to use a flat array · 3d02ccf7
      Leszek Swirski authored
      Bytecode liveness needs a mapping from offset to liveness. This was
      previously a hashmap with a very weak hash (the identity function) and
      both inserts and lookups showed up as a non-trivial costs during
      compilation.
      
      Now, replace the hashmap with a simple flat array of liveness, indexed
      by offset, pre-sized to the size of the bytecode. This will have a lot
      of empty entries, but will have much better runtime performance and
      probably ends up not much less memory efficient than a hashmap if the
      hashmap has to resize inside the Zone, and is likely negligible compared
      to the other compilation memory overheads.
      
      Change-Id: Id21375bfcbf0d53b5ed9c41f30cdf7fde66ee699
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3455802
      Reviewed-by: Tobias Tebbi <tebbi@chromium.org>
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#79049}
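      A small sketch of the data-structure swap described above. The Liveness payload and class name are placeholders, not V8's bytecode-liveness classes: a flat vector indexed by bytecode offset, pre-sized to the bytecode length, replaces the offset-keyed hashmap.
      ```
      #include <cstddef>
      #include <vector>

      struct Liveness {
        std::vector<bool> live_registers;  // Placeholder liveness payload.
      };

      class OffsetToLiveness {
       public:
        // Pre-size to the bytecode length; most slots stay null (offsets that
        // are not the start of a bytecode), trading memory for O(1) lookups
        // with no hashing.
        explicit OffsetToLiveness(size_t bytecode_length)
            : by_offset_(bytecode_length, nullptr) {}

        void Insert(size_t offset, Liveness* liveness) {
          by_offset_[offset] = liveness;
        }
        Liveness* Lookup(size_t offset) const { return by_offset_[offset]; }

       private:
        std::vector<Liveness*> by_offset_;
      };
      ```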
  30. 10 Feb, 2022 1 commit
  31. 04 Feb, 2022 1 commit
  32. 02 Feb, 2022 1 commit
  33. 27 Jan, 2022 1 commit
    • [interpreter] Make JumpLoop kill its block · 2e8703aa
      Leszek Swirski authored
      Add JumpLoop to the list of bytecodes that unconditionally exit a
      block, so that bytecodes are not emitted after a JumpLoop until there's
      a bound label.
      
      As a drive-by, fix the bytecode random iterator's initialisation to use
      'done()' directly (the old condition worked for Return, but was failing
      for wide JumpLoops that ended the bytecode).
      
      Change-Id: I63910602efbac8ad2b995a8fe6559a9f8f4b83b9
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3419919
      Commit-Queue: Leszek Swirski <leszeks@chromium.org>
      Auto-Submit: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      Commit-Queue: Toon Verwaest <verwaest@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#78806}
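      A hedged sketch of what "kill its block" means in practice; the Bytecode enum below is a made-up subset, not V8's real bytecode list. JumpLoop joins the set of bytecodes that unconditionally end the current basic block, so the emitter treats anything after it as dead until a label is bound.
      ```
      #include <cstdint>

      enum class Bytecode : uint8_t { kLdaZero, kJump, kJumpLoop, kReturn, kThrow };

      // Bytecodes after which control never falls through to the next bytecode.
      inline bool UnconditionallyEndsBlock(Bytecode b) {
        switch (b) {
          case Bytecode::kJump:
          case Bytecode::kJumpLoop:  // Newly included by this change.
          case Bytecode::kReturn:
          case Bytecode::kThrow:
            return true;
          default:
            return false;
        }
      }
      ```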
  34. 24 Jan, 2022 1 commit
    • Reland "[class] implement reparsing of class instance member initializers" · 0e07eb53
      Joyee Cheung authored
      This is a reland of 91f08378
      
      When the class scope does not need a context, the deserialized
      outer scope of the initializer scope would not be the class scope,
      and we should not and do not need to use it to fix up the allocation
      information of the context-allocated variables. The original patch
      did not consider this case and resulted in a regression when we
      tried to reparse the initializer function to look for destructuring
      assignment errors. This fixes the regression by not deserializing
      the class scope that's going to be reparsed, and using the positions
      of the scopes to tell whether the scope info matches the reparsed
      scope and can be used to fix up the allocation info.
      
      Original change's description:
      > [class] implement reparsing of class instance member initializers
      >
      > Previously, since the source code for the synthetic class instance
      > member initializer function was recorded as the span from the first
      > initializer to the last initializer, there was no way to reparse the
      > class and recompile the initializer function. It was working for
      > most use cases because the code for the initializer function was
      > generated eagerly and it was usually alive as long as the class was
      > alive, so the initializer wouldn't normally be lazily parsed. This
      > didn't work, however, when the class was snapshotted with
      > v8::SnapshotCreator::FunctionCodeHandling::kClear,
      > because then we needed to recompile the initializer when the class
      > was instantiated. This patch implements the reparsing so that
      > these classes can work with FunctionCodeHandling::kClear.
      >
      > This patch refactors ParserBase::ParseClassLiteral() so that we can
      > reuse it for both parsing the class body normally and reparsing it
      > to collect initializers. When reparsing the synthetic initializer
      > function, we rewind the scanner to the beginning of the class, and
      > parse the class body to collect the initializers. During the
      > reparsing, field initializers are parsed with the full parser while
      > methods of the class are pre-parsed.
      >
      > A few notable changes:
      >
      > - Extended the source range of the initializer function to cover the
      >   entire class so that we can rewind the scanner to parse the class
      >   body to collect initializers (previously, it starts from the first
      >   field initializer and ends at the last initializer). This resulted in
      >   some expectation changes in the debugger tests, though the
      >   initializers remain debuggable.
      > - A temporary ClassScope is created during reparsing. After the class
      >   is reparsed, we use the information from the ScopeInfo to update
      >   the allocated indices of the variables in the ClassScope.
      >
      > Bug: v8:10704
      > Change-Id: Ifb6431a1447d8844f2a548283d59158742fe9027
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2988830
      > Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      > Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      > Commit-Queue: Joyee Cheung <joyee@igalia.com>
      > Cr-Commit-Position: refs/heads/main@{#78299}
      
      Bug: chromium:1278086, chromium:1278085, v8:10704
      Change-Id: Iea4f1f6dc398846cbe322adc16f6fffd6d2dfdf3
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3325912
      Reviewed-by: Toon Verwaest <verwaest@chromium.org>
      Commit-Queue: Joyee Cheung <joyee@igalia.com>
      Cr-Commit-Position: refs/heads/main@{#78745}