Changes between Version 4 and Version 5 of Commentary/Compiler/CodeLayout


Timestamp: Jun 12, 2018 12:14:39 PM
Author: AndreasK

Legend:

    (no marker)  Unmodified
    -            Removed
    +            Added
  • Commentary/Compiler/CodeLayout

    v4 → v5
    }}}

    - FS seems like it is just an edge case for the new layout algorithm.
    - With lambda I have no idea what is up. The benchmark basically runs two functions: `main = do { mainSimple ; mainMonad }`.
    -
    - It seems like some edge case I didn't consider. But so far I haven't really been able to pinpoint why it gets faster by so much.
    -
    - Removing either one, the new layout code is faster. Using -XStrict[Data] the new one is faster.
    - However, when padding the whole executable to change alignment the difference stays about the same. So it doesn't seem to be an alignment issue.
    -
    - Performance counters indicate the DSB buffer being the culprit. But I haven't yet been able to find out how.
    === Things left to do:

    - It's not clear why the lambda case is so slow.
    -
    - Besides that, another question is how calls would be best handled.
    + A question is how calls would be best handled.
    If we have a small function f we really want to keep a sequence like

    […]

    right after B is gone. (Cache lines have been evicted, buffers invalidated, ...).

    + For now it seems ignoring call edges leads to better performance.
    +
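As a rough illustration of what ignoring call edges means for layout, the toy sketch below builds a small weighted CFG, filters out call-return edges, and then greedily picks fall-through pairs by edge weight. The types and names (EdgeKind, fallThroughs, the weights) are invented for this example; this is not GHC's CFG representation and not the code in D4726.

{{{#!haskell
-- Toy sketch only: the types and the greedy pairing below are invented
-- for illustration and are not GHC's data structures or D4726's code.
import Data.List (sortBy)
import Data.Ord  (Down (..), comparing)

type BlockId = Int

data EdgeKind = Branch | CallReturn deriving (Eq, Show)

data Edge = Edge { from :: BlockId, to :: BlockId
                 , kind :: EdgeKind, weight :: Int }

-- Drop call-return edges before layout, so they cannot pull the return
-- continuation away from the more informative branch edges.
layoutEdges :: [Edge] -> [Edge]
layoutEdges = filter ((/= CallReturn) . kind)

-- Greedily pick fall-through pairs, heaviest edge first.  Each block may
-- fall through into at most one successor and be fallen into at most
-- once (cycle checks omitted for brevity).
fallThroughs :: [Edge] -> [(BlockId, BlockId)]
fallThroughs es = foldl pick [] (sortBy (comparing (Down . weight)) es)
  where
    pick acc e
      | from e `elem` map fst acc || to e `elem` map snd acc = acc
      | otherwise = acc ++ [(from e, to e)]

main :: IO ()
main = do
  let cfg = [ Edge 0 1 Branch     90   -- likely branch of a check in block 0
            , Edge 0 2 Branch     10   -- unlikely branch
            , Edge 1 3 CallReturn 90   -- return continuation of a call in block 1
            , Edge 2 3 Branch     10 ]
  print (fallThroughs cfg)                -- call-return edge considered
  print (fallThroughs (layoutEdges cfg))  -- call-return edge ignored
}}}

In the first result the heavy call-return edge pulls block 3 directly behind the call in block 1; with call-return edges ignored, the pairing is driven purely by the branch weights of the check in block 0.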
    === Conclusion

    - After some tweaking this patch https://phabricator.haskell.org/D4726 lowered runtime by 0.5% on Haswell and Skylake.
    + After some tweaking this patch https://phabricator.haskell.org/D4726 lowered runtime on Haswell and Skylake.

    The primary change made compared to the above was to ignore edges based on call returns for code layout.
    […]
    code right after the check.

    - This isn't much but it allows us to take advantage of things like likelihood information so it opens GHC op to new optimizations.
    + This isn't much but it allows us to take advantage of things like likelihood information so it opens GHC up to new optimizations.
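The likelihood remark can be made slightly more concrete with another hypothetical sketch: a branch-likelihood annotation is mapped to the edge weights that a layout pass like the one sketched above would consume. Likelihood and branchWeights are invented names, not GHC's API, and the numbers are arbitrary.

{{{#!haskell
-- Hypothetical only: 'Likelihood' and 'branchWeights' are invented names,
-- not GHC's API; they just show likelihood data becoming edge weights.
data Likelihood = Likely | Unlikely | Unknown deriving Show

-- Weights for the (taken, fall-through) successors of a conditional.
branchWeights :: Likelihood -> (Int, Int)
branchWeights Likely   = (90, 10)
branchWeights Unlikely = (10, 90)
branchWeights Unknown  = (50, 50)

main :: IO ()
main = mapM_ print [ (l, branchWeights l) | l <- [Likely, Unlikely, Unknown] ]
}}}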