1. 01 Apr, 2022 1 commit
    • [heap] Remove sweeping_slot_set_ from MemoryChunk · ca505562
      Dominik Inführ authored
      Since the new space is always empty after a full GC, the old-to-new
      remembered set is also always empty at that point. This means we can
      get rid of the sweeping_slot_set_.
      
      This slot set existed to let the main thread insert into the
      old-to-new remembered set non-atomically. The sweeping slot set was
      owned by the sweeper, which deleted slots in free memory from it,
      while the main thread started out with an empty old-to-new remembered
      set. After sweeping, both slot sets were merged again.
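
      The sketch below illustrates that old scheme; the names (Chunk,
      SweepFreeRange, MergeAfterSweeping) and the std::set containers are
      simplifications for illustration, not V8's real MemoryChunk or
      RememberedSet types.

        // Hypothetical illustration only; the real slot sets are bucketed
        // bitmaps on MemoryChunk, not std::set.
        #include <cstdint>
        #include <set>

        using Address = std::uintptr_t;

        struct Chunk {
          std::set<Address> old_to_new_slots;   // main thread inserts here, non-atomically
          std::set<Address> sweeping_slot_set;  // owned by the sweeper while sweeping
        };

        // Sweeper: drop recorded slots that fall into memory it just freed.
        void SweepFreeRange(Chunk& chunk, Address start, Address end) {
          auto& set = chunk.sweeping_slot_set;
          set.erase(set.lower_bound(start), set.lower_bound(end));
        }

        // Once sweeping of the page is done, both slot sets are merged again.
        void MergeAfterSweeping(Chunk& chunk) {
          chunk.old_to_new_slots.insert(chunk.sweeping_slot_set.begin(),
                                        chunk.sweeping_slot_set.end());
          chunk.sweeping_slot_set.clear();
        }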
      
      The sweeper now needs to behave differently during GC: when sweeping
      a page during a full GC, it needs to delete old-to-new slots in free
      memory.
      
      Outside of GC, the sweeper is no longer allowed to remove slots from
      the old-to-new remembered set, since that would race with the main
      thread, which adds slots to that set while the sweeper is running.
      However, there should be no recorded slots in free memory anyway.
      DCHECKing this is tricky, though, because we would need to
      synchronize with the main thread right-trimming objects, and at least
      String::MakeThin only deletes slots after the map release-store.
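
      A minimal sketch of the new behavior, with made-up names
      (ProcessFreeRange, SweepingMode) standing in for the actual sweeper
      code:

        #include <cstdint>
        #include <set>

        using Address = std::uintptr_t;

        struct Chunk {
          std::set<Address> old_to_new_slots;  // the single remaining old-to-new set
        };

        enum class SweepingMode { kDuringFullGC, kOutsideGC };

        void ProcessFreeRange(Chunk& chunk, Address start, Address end,
                              SweepingMode mode) {
          if (mode == SweepingMode::kDuringFullGC) {
            // During a full GC the sweeper removes old-to-new slots that point
            // into the memory it just freed.
            auto& set = chunk.old_to_new_slots;
            set.erase(set.lower_bound(start), set.lower_bound(end));
          }
          // Outside of GC the set is left untouched: the main thread may be
          // adding slots concurrently, and a DCHECK that the freed range holds
          // no slots would race with right-trimming and String::MakeThin.
        }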
      
      Bug: v8:12760
      Change-Id: Ic0301851a714e894c3040595f456ab93b5875c81
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3560638
      Reviewed-by: Michael Lippautz <mlippautz@chromium.org>
      Commit-Queue: Dominik Inführ <dinfuehr@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#79713}
  2. 09 Mar, 2022 1 commit
    • [heap] Improve accounting of PagedSpace::CommittedPhysicalMemory() · 25981026
      Dominik Inführ authored
      Instead of using the high water mark to determine this metric, we use
      a bitset of all active/used system pages on a V8 heap page. Each time
      we allocate a LAB on a page, we add the system pages of that memory
      range to the bitset. During sweeping we rebuild the bitset from
      scratch and use it to replace the old one, since the GC may have
      discarded free pages; we DCHECK that the sweeper only ever removes
      pages. This has the nice benefit of ensuring that we don't miss any
      allocations (as we currently do for concurrent allocations).
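
      Roughly, the bookkeeping could look like the following sketch;
      ActiveSystemPages and its methods are illustrative names, and it
      assumes at most 64 system pages per heap page (e.g. a 256k page with
      4k OS pages):

        #include <cstddef>
        #include <cstdint>

        class ActiveSystemPages {
         public:
          // Mark every system page overlapped by [start_offset, end_offset) as
          // active, e.g. when a LAB is handed out from this heap page.
          void Add(size_t start_offset, size_t end_offset, size_t system_page_size) {
            size_t first = start_offset / system_page_size;
            size_t last = (end_offset + system_page_size - 1) / system_page_size;
            for (size_t i = first; i < last; i++) bitset_ |= uint64_t{1} << i;
          }

          // The sweeper rebuilds the bitset from scratch and installs it; pages
          // can only disappear, which is what the DCHECK above is about.
          void Replace(uint64_t rebuilt) { bitset_ = rebuilt; }

          uint64_t bits() const { return bitset_; }

         private:
          uint64_t bitset_ = 0;  // one bit per system page of this heap page
        };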
      
      CommittedPhysicalMemory for a page is then calculated by counting the
      set bits in the bitset and multiplying the count by the system page
      size. This should be simpler to verify and should track the "real"
      effective size more precisely.
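
      In other words, something along these lines (a sketch, not the actual
      PagedSpace code):

        #include <bit>      // std::popcount, C++20
        #include <cstddef>
        #include <cstdint>

        size_t CommittedPhysicalMemoryForPage(uint64_t active_system_pages,
                                              size_t system_page_size) {
          // Number of active system pages times the system page size.
          return static_cast<size_t>(std::popcount(active_system_pages)) *
                 system_page_size;
        }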
      
      One case where we are somewhat less precise than the current
      implementation is LABs. To reduce complexity, we now treat all pages
      of a LAB allocation as active immediately. The current implementation
      tries to account only the actually used part of the LAB when the LAB
      is later changed; this is more complex to track correctly, and it
      also fails to account the currently used LAB in the effective size.
      
      Change-Id: Ia83df9ad5fbb852f0717c4c396b5074604bd21e9
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3497363
      Reviewed-by: Michael Lippautz <mlippautz@chromium.org>
      Commit-Queue: Dominik Inführ <dinfuehr@chromium.org>
      Cr-Commit-Position: refs/heads/main@{#79428}
  3. 18 Feb, 2022 1 commit
  4. 10 Dec, 2021 1 commit
  5. 19 Aug, 2021 1 commit
  6. 20 Jul, 2021 1 commit
  7. 16 Oct, 2020 1 commit
    • [heap] Make maximum regular code object size a runtime value. · f4376ec8
      Pierre Langlois authored
      Executable V8 pages include 3 reserved OS pages: one for the writable
      header and two as guards. On systems with 64k OS pages, the amount of
      allocatable space left for objects can therefore be much smaller than
      the page size: only 64k out of each 256k page.
      
      This means regular code objects cannot be larger than 64k, while the
      maximum regular object size is fixed at 128k, half the page size. As
      a result, code objects never reach this limit, and we can end up
      filling regular pages with just a few large code objects.
      
      To fix this, we change the maximum code object size to be a runtime
      value, set to half of the allocatable space per page. On systems with
      64k OS pages, the limit will be 32k.
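
      For illustration, the arithmetic from the description above; the
      constants and helper names are taken from the text, not from the
      actual MemoryChunk layout code:

        #include <cstddef>

        constexpr size_t kPageSize = 256 * 1024;  // V8 code page
        constexpr size_t kReservedOsPages = 3;    // writable header + 2 guard pages

        constexpr size_t AllocatableCodeSpace(size_t os_page_size) {
          return kPageSize - kReservedOsPages * os_page_size;
        }

        constexpr size_t MaxRegularCodeObjectSize(size_t os_page_size) {
          return AllocatableCodeSpace(os_page_size) / 2;  // half the allocatable area
        }

        static_assert(AllocatableCodeSpace(64 * 1024) == 64 * 1024);      // 256k - 3 * 64k
        static_assert(MaxRegularCodeObjectSize(64 * 1024) == 32 * 1024);  // new limit
        static_assert(MaxRegularCodeObjectSize(4 * 1024) == 122 * 1024);  // with 4k OS pages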
      
      Alternatively, we could increase the V8 page size to 512k on Arm64
      Linux so we wouldn't waste code space. However, systems with 4k OS
      pages are more common, and those with 64k pages tend to have more
      memory available, so we should be able to live with it.
      
      Bug: v8:10808
      Change-Id: I5d807e7a3df89f1e9c648899e9ba2f8e2648264c
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2460809
      Reviewed-by: Igor Sheludko <ishell@chromium.org>
      Reviewed-by: Georg Neis <neis@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Commit-Queue: Pierre Langlois <pierre.langlois@arm.com>
      Cr-Commit-Position: refs/heads/master@{#70569}
  8. 31 Aug, 2020 1 commit
    • [heap] Add object start bitmap for conservative stack scanning · 5f6aa2e5
      Jake Hughes authored
      With conservative stack scanning enabled, a snapshot of the call
      stack upon entry to GC will be used to determine part of the root
      set. When the collector walks the stack, it looks at each value and
      determines whether it could be a potential on-heap object pointer.
      However, unlike with Handles, these on-stack pointers aren't
      guaranteed to point to the start of an object: the compiler may
      decide to hide these pointers and create interior pointers in C++
      frames which the GC doesn't know about.
      
      The solution to this is to include an object start bitmap in the
      header of each page. Each bit in the bitmap represents a word in the
      page payload and is set when an object is allocated at that word.
      This means that when the collector finds an arbitrary potential
      pointer into the page, it can walk backwards through the bitmap until
      it finds the relevant object's base pointer. To prevent the bitmap
      from becoming stale after compaction, it is rebuilt during object
      sweeping.
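
      A rough sketch of the idea; ObjectStartBitmap here is an illustrative
      stand-in with simplified sizes, not the actual implementation:

        #include <bitset>
        #include <cstddef>

        constexpr size_t kPageSize = 256 * 1024;
        constexpr size_t kWordSize = sizeof(void*);

        class ObjectStartBitmap {
         public:
          // Called on allocation: record the word at which an object starts.
          void SetBit(size_t offset_in_page) { bits_.set(offset_in_page / kWordSize); }

          // Given an arbitrary (possibly interior) pointer offset, walk backwards
          // to the closest preceding set bit, i.e. the base of the object.
          size_t FindBasePointer(size_t offset_in_page) const {
            size_t index = offset_in_page / kWordSize;
            while (index > 0 && !bits_.test(index)) index--;
            return index * kWordSize;
          }

          // Cleared and rebuilt during sweeping so it never goes stale after
          // compaction.
          void Reset() { bits_.reset(); }

         private:
          std::bitset<kPageSize / kWordSize> bits_;  // one bit per payload word
        };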
      
      This is experimental and currently only works with inline allocation
      disabled and with single-generation collection.
      
      Bug: v8:10614
      Change-Id: I28ebd9562f58f335f8b3c2d1189cdf39feaa1f52
      Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2375195
      Commit-Queue: Anton Bikineev <bikineev@chromium.org>
      Reviewed-by: Michael Achenbach <machenbach@chromium.org>
      Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
      Reviewed-by: Anton Bikineev <bikineev@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#69615}
  9. 12 Aug, 2020 1 commit
  10. 10 Jul, 2020 1 commit