1. 20 Dec, 2017 1 commit
  2. 14 Nov, 2017 1 commit
  3. 04 Sep, 2017 1 commit
  4. 25 Aug, 2017 1 commit
  5. 14 Aug, 2017 1 commit
    • [heap] Refactor object marking state (part 2). · 19ae2fc1
      Ulan Degenbaev authored
      This follows up 4af9cfcc by separating the incremental marking state
      from the full MC marking state. Runtime and tests now use only the
      incremental marking state; the full MC marking state is used by the
      mark-compactor during the atomic pause.
      
      This separation decouples the atomicity of markbit accesses during
      incremental marking from the atomicity used during the full MC.
      
      Bug: chromium:694255
      TBR: mlippautz@chromium.org
      Change-Id: Ia409ab06515cd0d1403a272a016633295c0d6692
      Reviewed-on: https://chromium-review.googlesource.com/612350
      Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#47336}
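
      A minimal, hypothetical sketch of the separation described above, assuming a
      templated marking state whose markbit accesses are atomic for the incremental
      (concurrent-with-the-mutator) case and plain for the atomic-pause case; the
      names (MarkingState, MarkBit, AccessMode) are illustrative, not V8's classes:

      #include <atomic>
      #include <cstdint>

      enum class AccessMode { ATOMIC, NON_ATOMIC };

      // One mark bit per object, stored as a byte for simplicity.
      struct MarkBit {
        std::atomic<uint8_t> value{0};
      };

      template <AccessMode mode>
      class MarkingState {
       public:
        // White -> grey transition; returns false if the object was already marked.
        bool WhiteToGrey(MarkBit* bit) {
          if (mode == AccessMode::ATOMIC) {
            uint8_t expected = 0;
            // Concurrent markers and the mutator may race on the bit, so use CAS.
            return bit->value.compare_exchange_strong(expected, 1);
          }
          // Atomic pause: nothing else touches the bit, plain load/store suffices.
          if (bit->value.load(std::memory_order_relaxed) != 0) return false;
          bit->value.store(1, std::memory_order_relaxed);
          return true;
        }
      };

      // Incremental marking runs while the mutator runs -> atomic accesses.
      using IncrementalMarkingState = MarkingState<AccessMode::ATOMIC>;
      // The full mark-compactor marks inside the atomic pause -> non-atomic is fine.
      using FullMCMarkingState = MarkingState<AccessMode::NON_ATOMIC>;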
  6. 10 Aug, 2017 1 commit
    • [heap] Refactor object marking state. · 4af9cfcc
      Ulan Degenbaev authored
      This patch merges ObjectMarking and MarkingState. The new marking state
      encapsulates object marking, live byte tracking, and access atomicity.
      
      The old ObjectMarking calls are now replaced with calls to the marking
      state. For example:
      ObjectMarking::WhiteToGrey<kAtomicity>(obj, marking_state(obj))
      becomes
      marking_state()->WhiteToGrey(obj)
      
      This simplifies custom handling of live bytes and makes it possible to choose
      the atomicity of markbit accesses depending on the collector's state.
      
      This also decouples the marking bitmap from the marking code, which will
      allow a different data structure to be used for mark bits in the future.
      
      Bug: chromium:694255
      Change-Id: Ifb4bc0144187bac1c08f6bc74a9d5c618fe77740
      Reviewed-on: https://chromium-review.googlesource.com/602132
      Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
      Reviewed-by: Michael Lippautz <mlippautz@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#47288}
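
      A minimal, hypothetical sketch of the merged interface described above, with the
      marking state also doing the live-byte bookkeeping; HeapObject and the member
      names are simplified stand-ins, not the real V8 types:

      #include <cstddef>

      struct HeapObject {
        int size_in_bytes;
        bool mark_bit = false;
      };

      class MarkingState {
       public:
        // White -> grey: set the mark bit and account for live bytes in one call,
        // so callers no longer do the bookkeeping themselves.
        bool WhiteToGrey(HeapObject* obj) {
          if (obj->mark_bit) return false;  // Already grey or black.
          obj->mark_bit = true;
          live_bytes_ += obj->size_in_bytes;
          return true;
        }
        size_t live_bytes() const { return live_bytes_; }

       private:
        size_t live_bytes_ = 0;
      };

      // Before: ObjectMarking::WhiteToGrey<kAtomicity>(obj, marking_state(obj))
      // After:  marking_state()->WhiteToGrey(obj)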
  7. 03 Aug, 2017 1 commit
  8. 02 Aug, 2017 2 commits
  9. 28 Jul, 2017 1 commit
  10. 27 Jul, 2017 2 commits
  11. 20 Jul, 2017 2 commits
  12. 17 Jul, 2017 2 commits
  13. 13 Jul, 2017 1 commit
  14. 12 Jul, 2017 2 commits
  15. 10 Jul, 2017 2 commits
  16. 07 Jul, 2017 1 commit
  17. 06 Jul, 2017 1 commit
  18. 03 Jul, 2017 2 commits
  19. 30 Jun, 2017 1 commit
    • [heap] Redo scavenging logic · ebc98f7f
      Michael Lippautz authored
      Replace the second-level visitation with much simpler logic that
      dispatches the special cases separately. All other cases can
      use a dispatch that just evacuates an object based on its size.
      
      This is similar to the logic used in the mark-compact collector. The
      goal is to align behaviors as much as possible, highlighting and 
      fixing performance issues in the different behaviors.
      
      This CL is as mechanical as possible. A follow-up will clean
      up the naming scheme and dispatching.
      
      Bug: chromium:738368
      Change-Id: Ia5a426c5ebb25230000b127580c300c97cff8b1b
      Reviewed-on: https://chromium-review.googlesource.com/558060
      Commit-Queue: Michael Lippautz <mlippautz@chromium.org>
      Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
      Cr-Commit-Position: refs/heads/master@{#46364}
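
      A hypothetical, much-simplified sketch of the dispatch described above: a couple
      of special object kinds get dedicated handlers, and every other object is
      evacuated based only on its size. The kinds and class names are illustrative:

      #include <cstddef>
      #include <vector>

      enum class ObjectKind { kThinString, kConsString, kOther };

      struct HeapObject {
        ObjectKind kind;
        size_t size;
        const char* payload;
      };

      class Scavenger {
       public:
        size_t EvacuateObject(HeapObject* obj) {
          switch (obj->kind) {
            // The special cases are dispatched separately...
            case ObjectKind::kThinString:
              return EvacuateThinString(obj);
            case ObjectKind::kConsString:
              return EvacuateShortcutCandidate(obj);
            // ...all remaining objects only need their size to be copied.
            default:
              return EvacuateDefault(obj, obj->size);
          }
        }

       private:
        size_t EvacuateThinString(HeapObject* obj) {
          // A thin string could be replaced by its forwarded target; this sketch
          // just falls back to a plain copy.
          return EvacuateDefault(obj, obj->size);
        }
        size_t EvacuateShortcutCandidate(HeapObject* obj) {
          // A cons string with an empty second part could be shortcut; fall back.
          return EvacuateDefault(obj, obj->size);
        }
        size_t EvacuateDefault(HeapObject* obj, size_t size) {
          to_space_.insert(to_space_.end(), obj->payload, obj->payload + size);
          return size;
        }
        std::vector<char> to_space_;
      };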
  20. 26 Jun, 2017 1 commit
    • Make some functions that are hit during renderer startup available for inlining · 777da354
      hans authored
      This is a step towards closing the perf gap between the MSVC build (which uses
      link-time optimization) and Clang (where LTO isn't ready on Windows yet). We did
      a study (see bug) to see which non-inlined functions are hit a lot during renderer
      start-up and which would be inlined during LTO. This should benefit performance
      in all builds that currently don't use LTO (Android, Linux, Mac) as well as
      the Win/Clang build.
      
      The binary size of chrome_child.dll increases by 2KB with this.
      
      BUG=chromium:728324
      CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_chromium_compile_dbg_ng;master.tryserver.chromium.mac:mac_chromium_compile_dbg_ng
      
      Review-Url: https://codereview.chromium.org/2950993002
      Cr-Commit-Position: refs/heads/master@{#46229}
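
      A generic illustration (not one of the functions touched by this change) of the
      pattern involved: moving a small, hot function's definition into its header so
      every caller sees the body and the compiler can inline it without LTO. The file
      and function names are made up:

      // hot_helpers.h -- hypothetical header.
      #ifndef HOT_HELPERS_H_
      #define HOT_HELPERS_H_

      // Defined in the header (and marked inline to satisfy the one-definition
      // rule), so any translation unit can inline the call instead of paying for
      // a cross-object-file function call in non-LTO builds.
      inline int HashKey(int key) { return key * 31 + 7; }

      #endif  // HOT_HELPERS_H_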
  21. 25 Jun, 2017 1 commit
  22. 23 Jun, 2017 2 commits
  23. 06 Oct, 2016 1 commit
    • [heap] Remove PromotionMode used by Scavenger · f88fe51a
      mlippautz authored
      The scavenger should never consider mark bits for promotion/copy decisions, as
      this creates weird lifetimes at the start of incremental marking. E.g. consider
      an object marked black by the marker at the start of incremental marking: a
      scavenge would promote it to the old generation although it could --and for
      short-lived objects actually does-- become unreachable during marking.
      
      Also, keeping this invariant significantly simplifies young generation mark
      compacting as we can compare against the scavenging decision without keeping
      different sets of markbits.
      
      BUG=chromium:651354
      R=hpayer@chromium.org
      
      Review-Url: https://codereview.chromium.org/2397713002
      Cr-Commit-Position: refs/heads/master@{#40026}
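
      A hypothetical, simplified sketch of a promotion decision that ignores mark bits
      entirely: the only input is whether the object already survived a previous
      scavenge, approximated here by an age mark in new space. The layout and names
      are illustrative, not V8's:

      #include <cstdint>

      struct NewSpace {
        uintptr_t start;
        uintptr_t age_mark;  // Everything allocated before the last scavenge sits below this.
        uintptr_t top;
      };

      // Mark bits set by incremental marking play no role here, so an object marked
      // black that dies young can still be reclaimed by a later scavenge.
      inline bool ShouldBePromoted(const NewSpace& space, uintptr_t object_address) {
        return object_address >= space.start && object_address < space.age_mark;
      }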
  24. 04 Aug, 2016 1 commit
  25. 01 Jun, 2016 1 commit
    • Immediately promote marked objects during scavenge · dc78e0d4
      hlopko authored
      It happens that a scavenger runs during incremental marking. Currently the scavenger does not care about MarkCompact's mark bits. When an object is alive and marked, and at least one scavenge happens during incremental marking, the object will be copied once to the other semispace in the new_space and then once to the old_space. For surviving objects this is useless extra work.
      
      In our current attempts (https://codereview.chromium.org/1988623002) to ensure marked objects are scavenged, all marked objects will survive, and therefore there will be many objects that are uselessly copied.
      
      This CL modifies our promotion logic so that when incremental marking is in progress and the object is marked, we promote it unconditionally.
      
      BUG=
      LOG=no
      
      Review-Url: https://codereview.chromium.org/2005173003
      Cr-Commit-Position: refs/heads/master@{#36643}
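
      A hypothetical, simplified sketch of the promotion rule this CL describes: while
      incremental marking is running, an object the marker has already marked is
      promoted straight to the old generation instead of being copied within new
      space. Types and fields are illustrative stand-ins:

      struct Heap {
        bool incremental_marking_in_progress;
      };

      struct HeapObject {
        bool marked;         // Mark bit set by the incremental marker.
        bool survived_once;  // Already survived a previous scavenge.
      };

      inline bool ShouldPromote(const Heap& heap, const HeapObject& obj) {
        // A marked object survives the ongoing marking cycle anyway, so copying it
        // around inside new space is wasted work: promote it immediately.
        if (heap.incremental_marking_in_progress && obj.marked) return true;
        // Otherwise fall back to the usual age-based rule.
        return obj.survived_once;
      }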
  26. 24 May, 2016 2 commits
  27. 23 May, 2016 1 commit
  28. 19 May, 2016 1 commit
    • [heap] Get rid of the wrapper in remembered-set.h · 3ddb2249
      ahaas authored
      This patch moves the wrapper code from the remembered-set to the
      scavenger and the mark-compact code.
      
      The wrapper code inspected a slot address to see if the object that
      belongs to the address is in the from-space. If it was in the
      from-space, then a callback was executed on the object. If the object
      got moved to the to-space, then the wrapper returned KEEP_SLOT,
      otherwise REMOVE_SLOT.
      
      This logic does not really belong to the remembered set, so I moved it
      away from there.
      
      R=ulan@chromium.org
      
      Review-Url: https://codereview.chromium.org/1994933002
      Cr-Commit-Position: refs/heads/master@{#36364}
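
      A hypothetical, simplified sketch of the wrapper logic described above: check
      whether the slot points into from-space, run the callback if so, and tell the
      remembered set whether to keep or drop the slot. Names and types are
      illustrative, not V8's:

      #include <functional>

      enum class SlotCallbackResult { KEEP_SLOT, REMOVE_SLOT };
      enum class Space { kFromSpace, kToSpace, kOldSpace };

      struct HeapObject {
        Space space;
      };

      using Slot = HeapObject**;  // A slot holds a pointer to a heap object.

      SlotCallbackResult CheckAndScavengeObject(
          Slot slot, const std::function<void(Slot)>& scavenge_object) {
        HeapObject* object = *slot;
        if (object == nullptr || object->space != Space::kFromSpace) {
          // The slot does not point into from-space; this GC does not care about it.
          return SlotCallbackResult::REMOVE_SLOT;
        }
        scavenge_object(slot);  // Copies or promotes the object and updates *slot.
        // A target still in new space (to-space) means the slot must stay in the
        // remembered set; a promoted target means the slot can be dropped.
        return ((*slot)->space == Space::kToSpace) ? SlotCallbackResult::KEEP_SLOT
                                                   : SlotCallbackResult::REMOVE_SLOT;
      }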
  29. 02 Feb, 2016 1 commit
  30. 12 Jan, 2016 1 commit
    • [heap] Use HashMap as scratchpad backing store · 55422bdd
      mlippautz authored
      We use a scratchpad to remember visited allocation sites for post processing
      (making tenure decisions). The previous implementation used a rooted FixedArray
      with constant length (256) to remember all sites. Updating the scratchpad is a
      bottleneck in any parallel/concurrent implementation of newspace evacuation.
      
      The new implementation uses a HashMap with allocation sites as keys and
      temporary counts as values. During evacuation we collect a local hashmap of
      visited allocation sites. Upon merging the local hashmap back into a global one
      we update potential forward pointers of compacted allocation sites.  The
      scavenger can directly enter its entries into the global hashmap. Note that the
      actual memento found count is still kept on the AllocationSite as it needs to
      survive scavenges and full GCs.
      
      BUG=chromium:524425
      LOG=N
      R=hpayer@chromium.org
      
      Review URL: https://codereview.chromium.org/1535723002
      
      Cr-Commit-Position: refs/heads/master@{#33233}
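
      A hypothetical, simplified sketch of the scheme described above: each evacuation
      task records allocation-site hits in a local hash map and merges it into a
      global map afterwards, instead of appending to a fixed-size rooted scratchpad.
      Names and types are illustrative, not V8's:

      #include <unordered_map>

      using AllocationSite = const void*;  // Stand-in for a real AllocationSite pointer.
      using PretenuringFeedback = std::unordered_map<AllocationSite, int>;

      class LocalFeedback {
       public:
        // Called for every memento found during evacuation; purely task-local.
        void Record(AllocationSite site) { ++counts_[site]; }

        // Merged once per task, under whatever lock protects the global map. A
        // compacted allocation site would have its key redirected to its forwarding
        // address before merging.
        void MergeInto(PretenuringFeedback* global) const {
          for (const auto& entry : counts_) {
            (*global)[entry.first] += entry.second;
          }
        }

       private:
        PretenuringFeedback counts_;
      };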
  31. 04 Dec, 2015 1 commit