= Why we should split off CPR analysis from the demand analyser =

This page collects some illustrative arguments. Here's the summary:

Pro:
 - Demand analysis is a backwards analysis, while CPR is a forward analysis.
 - Separation of concerns: strictness/usage analysis is independent of CPR, while CPR relies on strictness info being present. The current interleaving makes you ask at every line of code "Is this relevant to CPR?"
 - The "virgin run" business goes away.

Cons:
 - Possible code duplication. Counter-argument: extract the overlapping logic into a "projection-based analysis" skeleton and instantiate it with demand/CPR.
 - CPR and strictness feel like they are dual to one another, so it makes sense to compute them together. Counter-argument: strictness can be computed independently of CPR, whereas CPR analysis needs a sound approximation of strictness info. Also, see the whole point about forward vs. backward analysis.
 - Compiler performance: an additional pass slows things down. Counter-argument: the whole additional non-virgin run business just hides that CPR analysis depends on strictness analysis. We unnecessarily recompute CPR information on unsound approximations of strictness info until the strictness info hits a fixed point. We need to recompute CPR information then anyway, so all the prior attempts at coming up with CPR info based on unsound strictness approximations seem unnecessary.

== Forward vs. Backward analysis ==

Joachim was probably the first to realise this, but CPR analysis is a forward analysis: https://ghc.haskell.org/trac/ghc/ticket/12364#comment:3. That led to a reeeeeeally complicated, duplicated `Case` case in his take on nested CPR analysis: https://phabricator.haskell.org/D4244#inline-35408. For that reason, I feel strongly about splitting off CPR before we try this again.

Note that this isn't an issue for non-nested CPR (at least I couldn't come up with an example). But `Note [CPR in a product case alternative]` suggests we already suffer in a similar way.
The idea is the following:

{{{
module Foo where

f :: Int -> (Int, Int)
f n = (n+1, n+2)
{-# NOINLINE f #-}

g :: Int -> Int
g n = case f n of (a, b) -> a
{-# NOINLINE g #-}
}}}

Imagine we'd do nested CPR. The current approach would compute a CPR signature for `f` of `m(tm(t),tm(t))`, stating that it constructs a pair of constructed (and terminating) `Int`s. So far, so good! Now look at the call site in `g`. The demand analyser handles the `case` backwards, because there might be worthwhile strictness info to unleash on the scrutinee. However, that's bad for CPR when the scrutinee is an application, as the example demonstrates: at the time we see `a` in the case alternative, we don't yet know that `a` really has the CPR property once we WW'd `f`. Hence `g` itself doesn't have the CPR property, rendering the whole business of CPRing `f` useless (harmful, even).

Note that for CPR for sum types to be useful, we need at least nested CPR of depth 2, which has all the same problems (https://ghc.haskell.org/trac/ghc/ticket/12364#comment:3) wrt. termination and analysis "direction".

== Separation of concerns ==

Take [https://github.com/ghc/ghc/blob/a5373c1fe172dee31e07bcb7c7f6caff1035e6ba/compiler/stranal/DmdAnal.hs#L663 this binding]. Is it related to strictness analysis? Or just important for CPR analysis? Or take the whole [https://github.com/ghc/ghc/blob/a5373c1fe172dee31e07bcb7c7f6caff1035e6ba/compiler/stranal/DmdAnal.hs#L1029 lazy_fv business]. Do I have to pay attention to (and re-read) the related notes when all I'm doing is hunting down a bug in CPR analysis? Assume this hack is only necessary because we do usage and strictness analysis in the same pass. Does it have side-effects on the precision of CPR analysis? I really can't tell without spending a few hours fully understanding it through reading notes and printf debugging.
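To make the worker/wrapper angle of the `f`/`g` example concrete, here is a hedged source-level sketch of the split that CPR would enable for `f`. The name `fWorker` is hypothetical (the real split happens in Core, where GHC would call the worker `$wf`), and the comments describe the analysis-order problem rather than anything GHC literally does at source level:

```haskell
{-# LANGUAGE UnboxedTuples #-}

-- Hypothetical worker: returns the components in an unboxed tuple,
-- so no pair is allocated for the result.
fWorker :: Int -> (# Int, Int #)
fWorker n = (# n + 1, n + 2 #)
{-# NOINLINE fWorker #-}

-- Wrapper: reboxes the result. It inlines into callers, so a caller
-- that immediately scrutinises the pair could cancel the allocation.
f :: Int -> (Int, Int)
f n = case fWorker n of (# a, b #) -> (a, b)

-- But the demand analyser visits the case alternative before the
-- scrutinee, so when it reaches `a` it doesn't yet know that the
-- scrutinee's result is constructed: `g` gets no CPR property, and
-- the reboxing in `f`'s wrapper is not cancelled at this call site.
g :: Int -> Int
g n = case f n of (a, _) -> a
{-# NOINLINE g #-}
```

The sketch only illustrates the shapes involved; the payoff of CPR is precisely that callers like `g` should see through the wrapper, which the backwards `case` traversal prevents here.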
Beyond better CPR, does the [https://github.com/ghc/ghc/blob/a5373c1fe172dee31e07bcb7c7f6caff1035e6ba/compiler/stranal/DmdAnal.hs#L73 weird additional non-virgin run] due to `Note [CPR for thunks]` and `Note [Optimistic CPR in the "virgin" case]` improve any strictness or usage info, or could we drop it when CPR is split off? I sure hope so (implying there's no information flowing from CPR to strictness), but we can't know until we try it out.

The point I'm trying to make: any benefits of interleaving the analyses that I could think of don't make up for the nasty side-effects of the interleaving. I don't see the principle behind it.

Note that the same argument probably applies to splitting usage and strictness analysis. The latter doesn't need the LetUp case (we even give up precision for it) and it necessitates the strange lazy_fv business ([https://ghc.haskell.org/trac/ghc/ticket/14816#comment:13 yuck]). But we'll leave that for another time.