1. 24 Jun, 2021 3 commits
  2. 19 May, 2021 1 commit
    • 
      [compiler] Use kAssumeMemoryFence in two critical ref creation spots · 1e2b9c5e
      Jakob Gruber authored
      Using kAssumeMemoryFence works around the fact that the graph stores
      handles (and not refs). The assumption is that any handle inserted
      into the graph is safe to read; but we don't preserve the reason why
      it is safe to read. Thus we must over-approximate here and assume the
      existence of a memory fence.
      
      Note this is only valid if all spots that insert handles into the
      graph ensure that the handle can safely be read.
      
      In the future, we should consider having the graph store ObjectRefs or
      ObjectData pointer instead, which would make new ref construction here
      unnecessary.
      
      Bug: v8:7790,chromium:1209798
      Change-Id: Ic22340ea9f34a24be530a3c62c8309d25e108f3f
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2902742
      Reviewed-by: Georg Neis <neis@chromium.org>
      Commit-Queue: Jakob Gruber <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#74653}
      1e2b9c5e
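      A rough sketch of the idea behind kAssumeMemoryFence (illustrative only; HeapObject,
      Ref, MakeRef, MakeRefAssumeMemoryFence and the published flag below are hypothetical
      stand-ins, not V8's actual classes): the normal ref-creation path demands proof that
      the object was safely published, while the assume-fence path trusts the caller's
      guarantee, which is the over-approximation the commit describes.

      // Illustrative sketch only, not V8's API.
      #include <atomic>
      #include <cassert>
      #include <cstdio>

      struct HeapObject { int field; };        // hypothetical heap object
      struct Ref { const HeapObject* data; };  // hypothetical ObjectRef stand-in

      std::atomic<bool> published{false};      // fence-protected publication flag

      // Normal path: only allowed once publication has been observed.
      Ref MakeRef(const HeapObject* handle) {
        assert(published.load(std::memory_order_acquire) && "not proven safe");
        return Ref{handle};
      }

      // kAssumeMemoryFence-style path: the caller asserts a fence already happened
      // (the handle was inserted into the graph by code that guaranteed safety),
      // so no proof is rechecked here.
      Ref MakeRefAssumeMemoryFence(const HeapObject* handle) {
        return Ref{handle};  // over-approximation: trust the caller
      }

      int main() {
        HeapObject obj{42};
        published.store(true, std::memory_order_release);
        std::printf("%d %d\n", MakeRef(&obj).data->field,
                    MakeRefAssumeMemoryFence(&obj).data->field);
      }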
  3. 05 May, 2021 1 commit
  4. 23 Mar, 2021 1 commit
    • 
      [turbofan] Introduce LoadImmutable, use it in wasm compiler · f6ee9ed0
      Manos Koukoutos authored
      LoadImmutable represents a load from a position in memory that is known
      to be immutable, e.g. an immutable IsolateRoot or an immutable field of
      a WasmInstanceObject. Because the returned value cannot change through
      the execution of a function, LoadImmutable is a pure operator and does
      not have effect or control edges.
      This will allow more aggressive optimizations of loads of fields of
      the Isolate and Instance that are known to be immutable.
      This requires that the memory in question has been initialized at function
      start, even through inlining.
      
      Note: We may reconsider this approach once we have escape analysis for
      wasm, and replace it with immutable load/initialize operators that live
      inside the effect chain and are less restricted.
      
      Bug: v8:11510
      Change-Id: I5e8e4f27d7008f39f01175ffa95a9c531ba63e66
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2775568
      Reviewed-by: Andreas Haas <ahaas@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Commit-Queue: Manos Koukoutos <manoskouk@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#73594}
      f6ee9ed0
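      As a minimal illustration of why making the operator pure helps (hypothetical types
      below, not TurboFan's real node or value-numbering classes): with no effect or
      control edges, two LoadImmutable nodes with the same inputs are interchangeable, so
      a cache keyed on (operator, input) can hand back a single shared node.

      #include <cstdio>
      #include <map>
      #include <string>
      #include <utility>

      // Hypothetical pure node: operator name plus a single immediate input.
      struct Node { std::string op; int input; };

      int main() {
        // Deduplication keyed on (operator, input): sound only because the load is
        // pure, i.e. its result can never change during execution of the function.
        std::map<std::pair<std::string, int>, Node> cache;
        auto get_or_create = [&](const std::string& op, int input) -> Node* {
          auto key = std::make_pair(op, input);
          auto it = cache.find(key);
          if (it == cache.end()) it = cache.emplace(key, Node{op, input}).first;
          return &it->second;
        };
        Node* a = get_or_create("LoadImmutable", /*offset*/ 16);
        Node* b = get_or_create("LoadImmutable", /*offset*/ 16);
        std::printf("deduplicated: %s\n", a == b ? "yes" : "no");  // yes
      }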
  5. 22 Mar, 2021 1 commit
  6. 21 Jan, 2021 1 commit
  7. 07 Jan, 2021 1 commit
  8. 10 Nov, 2020 1 commit
  9. 28 Oct, 2020 1 commit
  10. 23 Jun, 2020 1 commit
  11. 29 Apr, 2020 1 commit
    • 
      Reland "Reland "[turbofan][csa] optimize Smi untagging better"" · 9e9cd5df
      Tobias Tebbi authored
      This is a reland of 43b885a8
      This fixes another signed overflow in the unit test.
      
      Original change's description:
      > Reland "[turbofan][csa] optimize Smi untagging better"
      >
      > This is a reland of ff22ae80
      >
      > Original change's description:
      > > [turbofan][csa] optimize Smi untagging better
      > >
      > > - Introduce new operator variants for signed right-shifts with the
      > >   additional information that they always shift out zeros.
      > > - Use these new operators for Smi untagging.
      > > - Merge left-shifts with a preceding Smi-untagging shift.
      > > - Optimize comparisons of Smi-untagging shifts to operate on the
      > >   unshifted word.
      > > - Optimize 64bit comparisons of values expanded from 32bit to use
      > >   a 32bit comparison instead.
      > > - Change CodeStubAssembler::UntagSmi to first sign-extend and then
      > >   right-shift to enable better address computations for Smi indices.
      > >
      > > Bug: v8:9962
      > > Change-Id: If91300f365e8f01457aebf0bd43bdf88b305c460
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2135734
      > > Commit-Queue: Tobias Tebbi <tebbi@chromium.org>
      > > Reviewed-by: Georg Neis <neis@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#67378}
      >
      > Bug: v8:9962
      > Change-Id: Ieab0755806c95fb50022eb17596fb0c95f36004c
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2170001
      > Commit-Queue: Tobias Tebbi <tebbi@chromium.org>
      > Commit-Queue: Georg Neis <neis@chromium.org>
      > Auto-Submit: Tobias Tebbi <tebbi@chromium.org>
      > Reviewed-by: Georg Neis <neis@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#67430}
      
      Bug: v8:9962
      TBR: neis@chromium.org
      Change-Id: I79883db546bf37873b3727b8023ef688507091d9
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2169103
      Commit-Queue: Tobias Tebbi <tebbi@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#67464}
      9e9cd5df
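      The comparison optimization above rests on a simple arithmetic fact. The sketch
      below (illustrative only; it assumes a 64-bit Smi layout with the payload in the
      upper 32 bits, and Tag/Untag are hypothetical helpers) shows that comparing two
      untagged Smis gives the same answer as comparing the tagged words directly, because
      the untagging shift only ever shifts out zero bits.

      #include <cassert>
      #include <cstdint>
      #include <cstdio>

      constexpr int kSmiShift = 32;  // assumed tag layout for this sketch

      int64_t Tag(int32_t value) {
        // Move the 32-bit payload into the upper half; the low 32 bits are zero.
        return static_cast<int64_t>(
            static_cast<uint64_t>(static_cast<int64_t>(value)) << kSmiShift);
      }

      int32_t Untag(int64_t smi) {
        // Arithmetic right shift; it only ever shifts out the zero tag bits.
        return static_cast<int32_t>(smi >> kSmiShift);
      }

      int main() {
        int64_t a = Tag(-5), b = Tag(7);
        bool untagged_lt = Untag(a) < Untag(b);  // compare the untagged payloads
        bool tagged_lt = a < b;                  // compare the tagged words directly
        assert(untagged_lt == tagged_lt);        // same result either way
        std::printf("equal results: %d\n", untagged_lt == tagged_lt);
      }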
  12. 28 Apr, 2020 2 commits
    • 
      Revert "Reland "[turbofan][csa] optimize Smi untagging better"" · bef5b85d
      Clemens Backes authored
      This reverts commit 43b885a8.
      
      Reason for revert: Still fails on UBSan: https://ci.chromium.org/p/v8/builders/ci/V8%20Linux64%20UBSan/10873
      
      Original change's description:
      > Reland "[turbofan][csa] optimize Smi untagging better"
      > 
      > This is a reland of ff22ae80
      > 
      > Original change's description:
      > > [turbofan][csa] optimize Smi untagging better
      > > 
      > > - Introduce new operator variants for signed right-shifts with the
      > >   additional information that they always shift out zeros.
      > > - Use these new operators for Smi untagging.
      > > - Merge left-shifts with a preceding Smi-untagging shift.
      > > - Optimize comparisons of Smi-untagging shifts to operate on the
      > >   unshifted word.
      > > - Optimize 64bit comparisons of values expanded from 32bit to use
      > >   a 32bit comparison instead.
      > > - Change CodeStubAssembler::UntagSmi to first sign-extend and then
      > >   right-shift to enable better address computations for Smi indices.
      > > 
      > > Bug: v8:9962
      > > Change-Id: If91300f365e8f01457aebf0bd43bdf88b305c460
      > > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2135734
      > > Commit-Queue: Tobias Tebbi <tebbi@chromium.org>
      > > Reviewed-by: Georg Neis <neis@chromium.org>
      > > Cr-Commit-Position: refs/heads/master@{#67378}
      > 
      > Bug: v8:9962
      > Change-Id: Ieab0755806c95fb50022eb17596fb0c95f36004c
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2170001
      > Commit-Queue: Tobias Tebbi <tebbi@chromium.org>
      > Commit-Queue: Georg Neis <neis@chromium.org>
      > Auto-Submit: Tobias Tebbi <tebbi@chromium.org>
      > Reviewed-by: Georg Neis <neis@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#67430}
      
      TBR=neis@chromium.org,tebbi@chromium.org
      
      Change-Id: I49e19811ebcecb846f61291bc0c4a0d8b0bc4cff
      No-Presubmit: true
      No-Tree-Checks: true
      No-Try: true
      Bug: v8:9962
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2168876
      Reviewed-by: Clemens Backes <clemensb@chromium.org>
      Commit-Queue: Clemens Backes <clemensb@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#67431}
      bef5b85d
    • 
      Reland "[turbofan][csa] optimize Smi untagging better" · 43b885a8
      Tobias Tebbi authored
      This is a reland of ff22ae80
      
      Original change's description:
      > [turbofan][csa] optimize Smi untagging better
      > 
      > - Introduce new operator variants for signed right-shifts with the
      >   additional information that they always shift out zeros.
      > - Use these new operators for Smi untagging.
      > - Merge left-shifts with a preceding Smi-untagging shift.
      > - Optimize comparisons of Smi-untagging shifts to operate on the
      >   unshifted word.
      > - Optimize 64bit comparisons of values expanded from 32bit to use
      >   a 32bit comparison instead.
      > - Change CodeStubAssembler::UntagSmi to first sign-extend and then
      >   right-shift to enable better address computations for Smi indices.
      > 
      > Bug: v8:9962
      > Change-Id: If91300f365e8f01457aebf0bd43bdf88b305c460
      > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2135734
      > Commit-Queue: Tobias Tebbi <tebbi@chromium.org>
      > Reviewed-by: Georg Neis <neis@chromium.org>
      > Cr-Commit-Position: refs/heads/master@{#67378}
      
      Bug: v8:9962
      Change-Id: Ieab0755806c95fb50022eb17596fb0c95f36004c
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2170001
      Commit-Queue: Tobias Tebbi <tebbi@chromium.org>
      Commit-Queue: Georg Neis <neis@chromium.org>
      Auto-Submit: Tobias Tebbi <tebbi@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#67430}
      43b885a8
  13. 17 Mar, 2020 1 commit
  14. 07 Oct, 2019 1 commit
  15. 12 Aug, 2019 2 commits
    • 
      [compiler] Remove LoadStackPointer and related machinery · 5b2ab2f6
      Jakob Gruber authored
      Now that all uses of LoadStackPointer have been removed, this CL cleans
      up related code:
      
      - Removed LoadStackPointer.
      - Removed ArchStackPointer.
      - Removed IA32StackCheck.
      - Removed X64StackCheck.
      - Removed StackCheckMatcher.
      
      All stack checks now follow a simple path without matchers or special
      register constraints: they load the limit and pass it to
      StackPointerGreaterThan, which is finally handled by code generation.
      
      Bug: v8:9534
      Change-Id: Ib1d7be1502a471541d6441f3261aac0c949525fb
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/1748737
      Commit-Queue: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#63166}
      5b2ab2f6
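      A rough C++ analogue of the simplified stack check path (illustrative only; Instance
      and its stack_limit field are assumptions, and taking a local's address merely
      approximates reading the machine stack pointer): load the limit, then compare the
      stack pointer against it, which is what StackPointerGreaterThan expresses.

      #include <cstdint>
      #include <cstdio>

      struct Instance {
        uintptr_t stack_limit;  // hypothetical field holding the limit address
      };

      bool StackPointerGreaterThan(const Instance& instance) {
        int marker;  // its address approximates the current stack pointer
        return reinterpret_cast<uintptr_t>(&marker) > instance.stack_limit;
      }

      int main() {
        Instance instance{/*stack_limit=*/4096};  // far below any real stack
        std::printf("stack ok: %d\n", StackPointerGreaterThan(instance));
      }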
    • 
      [wasm] Update the stack check and remove WasmStackCheckMatcher · 376c7b61
      Jakob Gruber authored
      The matcher used to be needed to avoid first moving rsp to an
      allocated register for LoadStackPointer. This is no longer the case
      with the new stack check structure based on StackPointerGreaterThan.
      This CL updates the wasm stack check and removes now-unneeded
      matchers.
      
      The generated stack check code remains unchanged from before:
      
      // Load the stack limit through the instance then compare against rsp.
      REX.W movq rcx,[rbp-0x10]
      REX.W movq rcx,[rcx+0x2f]
      REX.W cmpq rsp,[rcx]
      
      // And on ia32:
      mov ecx,[ebp-0x8]
      mov ecx,[ecx+0x17]
      cmp esp,[ecx]
      
      Bug: v8:9534
      Change-Id: I9240ad922d19d498a2661c143b12d629ac14d093
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/1748733
      Commit-Queue: Jakob Gruber <jgruber@chromium.org>
      Reviewed-by: Michael Starzinger <mstarzinger@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#63165}
      376c7b61
  16. 28 May, 2019 1 commit
  17. 24 May, 2019 1 commit
  18. 15 May, 2019 1 commit
  19. 12 Apr, 2019 1 commit
  20. 29 Mar, 2019 1 commit
  21. 13 Feb, 2019 1 commit
  22. 10 Jan, 2019 1 commit
  23. 08 Jan, 2019 1 commit
  24. 07 Jan, 2019 1 commit
  25. 14 Dec, 2018 1 commit
  26. 23 Jul, 2018 1 commit
  27. 17 Jul, 2018 1 commit
  28. 12 Jul, 2018 1 commit
  29. 02 Jul, 2018 1 commit
  30. 14 Jun, 2018 1 commit
    • 
      Fix stack check pattern matching for CSA code · 9ff644ae
      jgruber authored
      The stack check instruction sequence is pattern-matched in
      instruction-selector-{ia32,x64}.cc and replaced with its own specialized
      opcode, for which we later generate an efficient stack check in a single
      instruction.
      
      But this pattern matching has never worked for CSA-generated code. The
      matcher expected LoadStackPointer in the right operand and the external
      reference load in the left operand. CSA generated exactly the opposite order.
      
      This CL does a few things; it
      1. reverts the recent change to load the limit from smi roots:
         Revert "[csa] Load the stack limit from smi roots"
         This reverts commit 507c29c9.
      2. tweaks the CSA instruction sequence to output what the matcher expects.
      3. refactors stack check matching into a new StackCheckMatcher class.
      4. typifies CSA::PerformStackCheck as a drive-by.
      
      Bug: v8:6666,v8:7844
      Change-Id: I9bb879ac10bfe7187750c5f9e7834dc4accf28b5
      Reviewed-on: https://chromium-review.googlesource.com/1099068
      Reviewed-by: Leszek Swirski <leszeks@chromium.org>
      Reviewed-by: Sigurd Schneider <sigurds@chromium.org>
      Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
      Commit-Queue: Jakob Gruber <jgruber@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#53737}
      9ff644ae
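      The sketch below (hypothetical node representation, not the real matcher)
      illustrates the order sensitivity described above: a matcher that accepts only one
      operand order never fires on the flipped sequence that CSA used to emit.

      #include <cstdio>
      #include <string>

      struct Node { std::string left, right; };  // hypothetical two-input node

      // Mirrors the old matcher's assumption: a fixed operand order, with the
      // external-reference load on the left and the stack pointer on the right.
      bool MatchOneOrderOnly(const Node& n) {
        return n.left == "ExternalRef(stack_limit)" && n.right == "LoadStackPointer";
      }

      int main() {
        Node turbofan_order{"ExternalRef(stack_limit)", "LoadStackPointer"};
        Node csa_order{"LoadStackPointer", "ExternalRef(stack_limit)"};  // flipped
        std::printf("%d %d\n", MatchOneOrderOnly(turbofan_order),
                    MatchOneOrderOnly(csa_order));  // 1 0: the CSA form never matched
      }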
  31. 13 Jun, 2018 1 commit
    • 
      [ia32] Bugfix for jump optimization · 418bf412
      Kanghua Yu authored
      The jump optimization may run the Turbofan pipeline twice for each TF/CS
      builtin, and relies on the fact that the number of j/jmp instructions
      generated is always the same.
      {AddMatcher::SwapInputs} must therefore be aware of the two code-generation
      passes and must not flip child nodes between them.
      
      For example:
      
      1: Int32Add(2, 3)   --- We shouldn't swap inputs #2 and #3 in this situation
      2: Int32Sub(4, 5)
      3: Int32Add(6, 7)
      4: ...
      5: ...
      6: ...
      7: ...
      
      R=danno@chromium.org
      
      Bug: v8:7839
      Change-Id: Ia97de3ab28294e595ac27b5898c099c0d782e9f9
      Reviewed-on: https://chromium-review.googlesource.com/1098678
      Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
      Commit-Queue: Kanghua Yu <kanghua.yu@intel.com>
      Cr-Commit-Position: refs/heads/master@{#53705}
      418bf412
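      A minimal sketch of the stability property the fix needs (hypothetical Node and
      Canonicalize below, not V8's AddMatcher): whether the inputs of a commutative node
      get swapped must depend only on the nodes themselves, so that the second pipeline
      run makes exactly the same choice and the number of generated jumps stays identical.

      #include <algorithm>
      #include <cstdio>
      #include <utility>

      struct Node { int id; };  // hypothetical graph node

      std::pair<Node, Node> Canonicalize(Node a, Node b) {
        // Stable rule: smaller id first, regardless of which pass we are in.
        if (a.id > b.id) std::swap(a, b);
        return {a, b};
      }

      int main() {
        Node x{7}, y{3};
        auto first_pass = Canonicalize(x, y);
        auto second_pass = Canonicalize(x, y);  // the re-run decides identically
        std::printf("stable: %d\n", first_pass.first.id == second_pass.first.id);
      }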
  32. 23 May, 2018 1 commit
  33. 05 Mar, 2018 1 commit
  34. 13 Jul, 2017 1 commit
  35. 02 Mar, 2017 1 commit
    • 
      [wasm] change reducer order in WASM pipeline to make build predictable again · 12ce15c3
      tebbi authored
      BinopMatcher does not notify the reducers using it when it flips the inputs of commutative operators, so value numbering is not re-executed in that case. Together with the fact that value numbering might still reduce such a modified node when a hash collision merges the buckets of two equivalent nodes, this leads to unpredictable behaviour.
      
      This is the easiest fix for the problem: always run value numbering last. This is also a performance improvement, because value numbering never changes nodes but only replaces them.
      
      R=mstarzinger@chromium.org
      
      Review-Url: https://codereview.chromium.org/2728983002
      Cr-Commit-Position: refs/heads/master@{#43552}
      12ce15c3
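      A small sketch of the hazard described above (hypothetical node and cache types):
      value numbering hashes a node by (operator, inputs), so if a later reducer silently
      flips the inputs of a commutative node, the cached key goes stale and an equivalent
      node arriving afterwards is no longer merged. Running value numbering last
      sidesteps this.

      #include <cstdio>
      #include <map>
      #include <string>
      #include <tuple>

      struct Node { std::string op; int lhs, rhs; };  // hypothetical binop node

      using Key = std::tuple<std::string, int, int>;
      Key MakeKey(const Node& n) { return {n.op, n.lhs, n.rhs}; }

      int main() {
        std::map<Key, Node*> cache;
        Node a{"Int32Add", 2, 3};
        cache[MakeKey(a)] = &a;         // value numbering hashes (Add, 2, 3)
        a.lhs = 3; a.rhs = 2;           // another reducer flips the inputs
        Node b{"Int32Add", 3, 2};       // an equivalent node shows up later
        std::printf("merged: %d\n",     // prints 0: the stale key misses
                    static_cast<int>(cache.count(MakeKey(b))));
      }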
  36. 24 Feb, 2017 1 commit