Opened 6 years ago

Closed 6 years ago

#8609 closed bug (fixed)

Clean up block allocator

Reported by: ezyang
Owned by: ezyang
Priority: low
Milestone:
Component: Runtime System
Version: 7.7
Keywords: patch
Cc: simonmar
Operating System: Unknown/Multiple
Architecture: Unknown/Multiple
Type of failure: Runtime performance bug
Test Case:
Blocked By:
Blocking:
Related Tickets:
Differential Rev(s):
Wiki Page:

Description

For fun, I was reimplementing GHC's block allocator in Rust, and I noticed several things the current block allocator code does that result in unnecessary work:

  • Upon a perfect match, alloc_mega_group initializes the returned block group (via initGroup). This is unnecessary, because the caller of alloc_mega_group is already responsible for initializing the block group. (A minimal model of this call path follows the patch below.)
  • The initGroup function uses the blocks field to figure out how many block descriptors it needs to initialize. However, in the case of a multi-megablock group, blocks will be larger than BLOCKS_PER_MBLOCK, so this process will march into the first block and write information to block descriptors that will later be overwritten by user data. So initGroup should cap the count at the block descriptors of the first megablock. One can go even further: if a block group spans multiple megablocks, then in practice the only block descriptor we care about is the very first one, since we're not allowed to have internal GC'd pointers into the block group (not every address in the group has a valid bdescr associated with it). See the sketch after this list.
  • The commentary should say a bit about mblock groups. Essentially, the mblock free list has a different set of invariants: the first mblock's start pointers are guaranteed to be initialized (the mblock was initMBlock'd), and the blocks and link fields of the first bdescr will be set to something appropriate; there are no other guarantees.
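
To make the second point concrete, here is a small standalone sketch, not RTS code: the constants assume the common 64-bit layout (4K blocks, 1M megablocks, 64-byte bdescrs), with the real definitions living in includes/rts/storage/Block.h. It shows why blocks exceeds BLOCKS_PER_MBLOCK for a multi-megablock group and what the capped descriptor count looks like:

#include <stdio.h>

/* Illustrative constants for the common 64-bit layout; the real
 * definitions live in includes/rts/storage/Block.h. */
#define BLOCK_SIZE        4096u
#define MBLOCK_SIZE       (1024u * 1024u)
#define BDESCR_SIZE       64u                      /* sizeof(bdescr) */

/* The first FIRST_BLOCK_OFF bytes of a megablock hold the bdescr array,
 * one descriptor per block: here 16K, i.e. the space of 4 blocks. */
#define FIRST_BLOCK_OFF   ((MBLOCK_SIZE / BLOCK_SIZE) * BDESCR_SIZE)
#define BLOCKS_PER_MBLOCK ((MBLOCK_SIZE - FIRST_BLOCK_OFF) / BLOCK_SIZE) /* 252 */

/* blocks field of an n-megablock group (mirrors MBLOCK_GROUP_BLOCKS) */
#define MBLOCK_GROUP_BLOCKS(n) \
    (BLOCKS_PER_MBLOCK + ((n) - 1u) * (MBLOCK_SIZE / BLOCK_SIZE))

/* How many descriptors initGroup may touch: only BLOCKS_PER_MBLOCK exist
 * in the first megablock, and for a multi-megablock group only the head
 * descriptor matters, so cap at 1 (or at BLOCKS_PER_MBLOCK for the more
 * conservative variant mentioned above). */
static unsigned descriptors_to_init(unsigned blocks)
{
    return blocks > BLOCKS_PER_MBLOCK ? 1u : blocks;
}

int main(void)
{
    printf("1-block group:  init %u bdescr(s)\n", descriptors_to_init(1u));
    printf("1-mblock group: init %u bdescr(s)\n",
           descriptors_to_init(BLOCKS_PER_MBLOCK));
    printf("2-mblock group: blocks = %u, init %u bdescr(s)\n",
           MBLOCK_GROUP_BLOCKS(2u),
           descriptors_to_init(MBLOCK_GROUP_BLOCKS(2u)));
    return 0;
}

With these numbers a 2-mblock group reports 508 blocks, but only 252 descriptors exist in the first megablock; an uncapped loop would write roughly 256 bogus descriptors straight into the start of user data.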

Here are the proposed functional changes (obviously some extra documentation is needed; replace the XXX-marked expression with BLOCKS_PER_MBLOCK if you want the more conservative change for larger-than-mblock block chains):

diff --git a/rts/sm/BlockAlloc.c b/rts/sm/BlockAlloc.c
index 18c167f..33530d8 100644
--- a/rts/sm/BlockAlloc.c
+++ b/rts/sm/BlockAlloc.c
@@ -170,7 +170,7 @@ initGroup(bdescr *head)
   bdescr *bd;
   W_ i, n;
 
-  n = head->blocks;
+  n = head->blocks > BLOCKS_PER_MBLOCK ? 1 /* XXX */ : head->blocks;
   head->free   = head->start;
   head->link   = NULL;
   for (i=1, bd = head+1; i < n; i++, bd++) {
@@ -278,7 +278,6 @@ alloc_mega_group (StgWord mblocks)
             } else {
                 free_mblock_list = bd->link;
             }
-            initGroup(bd);
             return bd;
         }
         else if (bd->blocks > n)
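
And to make the first point concrete, a minimal model of the caller side, using hypothetical stand-in types rather than the real RTS structs: allocGroup reinitializes the group itself as soon as alloc_mega_group returns, which is what makes the initGroup call removed above pure duplicated work.

#include <stddef.h>

typedef struct bdescr_ {
    struct bdescr_ *link;
    void           *start;
    void           *free;
    unsigned        blocks;
} bdescr;

/* Stand-in for initGroup: reset the head descriptor (the real function
 * also walks the per-block descriptors, as in the patch above). */
static void initGroup(bdescr *head)
{
    head->free = head->start;
    head->link = NULL;
}

/* Perfect-match branch of alloc_mega_group, post-patch: just unlink the
 * group from the free list and return it raw. */
static bdescr *alloc_mega_group_hit(bdescr **free_mblock_list)
{
    bdescr *bd = *free_mblock_list;
    *free_mblock_list = bd->link;
    return bd;                  /* no initGroup here any more */
}

/* Simplified large-allocation path of allocGroup: it always calls
 * initGroup on whatever alloc_mega_group hands back, so the call that
 * used to sit inside alloc_mega_group was redundant. */
static bdescr *allocGroup_large(bdescr **free_mblock_list, unsigned blocks)
{
    bdescr *bd = alloc_mega_group_hit(free_mblock_list);
    bd->blocks = blocks;        /* MBLOCK_GROUP_BLOCKS(mblocks) in the RTS */
    initGroup(bd);              /* the one initialization that is needed */
    return bd;
}

int main(void)
{
    bdescr mg = { NULL, NULL, NULL, 0 };   /* pretend free 2-mblock group */
    bdescr *free_mblock_list = &mg;
    allocGroup_large(&free_mblock_list, 508u);
    return 0;
}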

Change History (3)

comment:1 Changed 6 years ago by simonmar

Looks good to me! Nice catch on the duplicate initGroup call.

comment:2 Changed 6 years ago by Edward Z. Yang <ezyang@…>

In 38d17a0cdfee64cc73537e7bb96eb2f25df9ec92/ghc:

Clean up block allocator, fixes #8609

Signed-off-by: Edward Z. Yang <ezyang@cs.stanford.edu>

comment:3 Changed 6 years ago by ezyang

Resolution: fixed
Status: new → closed