Opened 4 years ago

Last modified 2 years ago

#10818 new bug

GHC 7.10.2 takes much longer to compile some packages

Reported by: kazu-yamamoto
Owned by: dfeuer
Priority: high
Milestone:
Component: Compiler
Version: 7.10.2
Keywords:
Cc: gidyn, maoe, j.waldmann, dfeuer, RyanGlScott
Operating System: Unknown/Multiple
Architecture: Unknown/Multiple
Type of failure: Compile-time performance bug
Test Case:
Blocked By:
Blocking:
Related Tickets: #9630
Differential Rev(s):
Wiki Page:

Description

If we compile the iproute package, GHC 7.10.2 takes much longer than GHC 7.8.4. See the report from Travis CI:

https://travis-ci.org/kazu-yamamoto/iproute/builds/77427248

I can reproduce this on my local MacBook Air:

  • GHC 7.8.4: cabal build 8.81s user 0.94s system 103% cpu 9.430 total
  • GHC 7.10.2: cabal build 37.86s user 3.49s system 98% cpu 42.066 total

Attachments (1)

rules-after (14.8 KB) - added by dfeuer 2 years ago.
Specialization rules after -fspecialise-aggressively commit


Change History (25)

comment:1 Changed 4 years ago by hvr

Priority: normal → high
Type of failure: None/Unknown → Compile-time performance bug

comment:2 Changed 4 years ago by gidyn

Cc: gidyn added

comment:3 Changed 4 years ago by maoe

Cc: maoe added

comment:4 Changed 4 years ago by gidyn

I tried this with the Haskell Docker images for 7.10.2 and 7.8.4, and the build times are similar.

comment:5 Changed 4 years ago by j.waldmann

Here are two more instances where compile time increases substantially:

      7.6  7.8  7.10  
tpdb   4    6    13     (minutes of travis build time)
obdd   2    2     5

https://travis-ci.org/jwaldmann/haskell-tpdb/builds/108860258 https://travis-ci.org/jwaldmann/haskell-obdd/builds/108860079

What do such experiments actually measure? Travis builds for different GHC versions use identical source for the primary build target, so that's a good start. But the transitive dependencies of the packages could include something that depends on the compiler version, via cabal flags or ifdefs in the code (e.g. pulling in different packages, or compiling with different options), so GHC might actually see very different source texts. I see no easy way to check this.

So it's not necessarily a drop in compiler performance. But it is a regression in "developer experience", which could be balanced by "user experience", i.e., better runtime performance of the compiled program. This could be measured if "cabal test" contained performance tests; my two packages above currently don't have these.
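
As a minimal sketch of what such a runtime test could look like (purely illustrative: the module name, workload, and time budget below are invented, not taken from tpdb or obdd), a test executable could fail whenever a representative workload exceeds a CPU-time budget:

module Main (main) where

import Control.Monad (unless)
import System.CPUTime (getCPUTime)
import System.Exit (exitFailure)

-- Hypothetical workload standing in for the package's real code paths.
workload :: Int -> Int
workload n = sum [ x * x | x <- [1 .. n] ]

main :: IO ()
main = do
  start <- getCPUTime
  let r = workload 2000000
  r `seq` return ()
  end <- getCPUTime
  -- getCPUTime reports picoseconds; convert to seconds.
  let seconds = fromIntegral (end - start) / 1e12 :: Double
  putStrLn ("result = " ++ show r ++ ", cpu time = " ++ show seconds ++ "s")
  -- Fail the test if the run blows the (arbitrary) time budget.
  unless (seconds < 1.0) exitFailure

Registered as a test-suite in the .cabal file, a check along these lines would at least show whether a slower compile buys a faster program.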

Last edited 4 years ago by j.waldmann

comment:6 Changed 4 years ago by rwbarton

Hmm, I wouldn't take those Travis times very seriously... I ran approximately the same sequence of commands locally, and cabal build took 3 seconds, not the 2.5 minutes reported by Travis. (Edit: talking about building j.waldmann's haskell-obdd.)

Last edited 4 years ago by rwbarton

comment:7 Changed 4 years ago by thomie

Can you try with 7.10.3 instead of 7.10.2?

comment:8 Changed 4 years ago by j.waldmann

Yes, it's not clear (to me) what the Travis timing data actually measures. I was hoping it's CPU time allocated to the build process. Of course, if it's wall-clock time then all bets are off. Or if the time includes waiting for the cabal downloads...

> 3 sec for obdd

Really? On my local machine, it's more like 30 seconds (single-threaded build).

comment:9 Changed 4 years ago by rwbarton

rwbarton@morphism:/tmp/haskell-obdd$ cabal-super2 clean
cleaning...
rwbarton@morphism:/tmp/haskell-obdd$ cabal-super2 configure -w ghc-7.10.1 --with-hc-pkg ghc-pkg-7.10.1 --enable-tests --enable-benchmarks --disable-shared
Resolving dependencies...
Configuring obdd-0.3.3...
rwbarton@morphism:/tmp/haskell-obdd$ time cabal-super2 build
Building obdd-0.3.3...
Preprocessing library obdd-0.3.3...
[1 of 7] Compiling OBDD.IntIntMap   ( src/OBDD/IntIntMap.hs, dist/build/OBDD/IntIntMap.o )
[2 of 7] Compiling OBDD.VarIntIntMap ( src/OBDD/VarIntIntMap.hs, dist/build/OBDD/VarIntIntMap.o )
[3 of 7] Compiling OBDD.Data        ( src/OBDD/Data.hs, dist/build/OBDD/Data.o )
[4 of 7] Compiling OBDD.Make        ( src/OBDD/Make.hs, dist/build/OBDD/Make.o )
[5 of 7] Compiling OBDD.Operation   ( src/OBDD/Operation.hs, dist/build/OBDD/Operation.o )
[6 of 7] Compiling OBDD.Property    ( src/OBDD/Property.hs, dist/build/OBDD/Property.o )
[7 of 7] Compiling OBDD             ( src/OBDD.hs, dist/build/OBDD.o )
Preprocessing test suite 'obdd-placement' for obdd-0.3.3...
[1 of 1] Compiling Main             ( examples/Placement.hs, dist/build/obdd-placement/obdd-placement-tmp/Main.o )
Linking dist/build/obdd-placement/obdd-placement ...
Preprocessing test suite 'obdd-queens' for obdd-0.3.3...
[1 of 1] Compiling Main             ( examples/Queens.hs, dist/build/obdd-queens/obdd-queens-tmp/Main.o )
Linking dist/build/obdd-queens/obdd-queens ...

real	0m3.187s
user	0m2.916s
sys	0m0.212s

(I have a lot of cabal versions installed.)

This is without a sandbox, if that matters somehow.

comment:10 Changed 4 years ago by j.waldmann

Oh, I was timing "cabal install" (not just "cabal build") because I assume that is what is being timed on Travis.

Meanwhile, I did repeat the builds with 7.10.3 instead of 7.10.2, see https://travis-ci.org/jwaldmann/haskell-tpdb/builds/109177528 https://travis-ci.org/jwaldmann/haskell-obdd/builds/109179019 and results look similar to before.

comment:11 Changed 4 years ago by rwbarton

Travis shows the execution time for each command at the right. So https://travis-ci.org/jwaldmann/haskell-obdd/jobs/108860083#L233 shows cabal build taking ~150 seconds, for example.

Though I don't know how to reconcile that with the whole job taking "5 min 16 sec", since the other times reported add up to less than a minute.

I'm assuming Travis build machines are under huge load and memory pressure, and it's probably unreasonable to expect anything out of the build times beyond "if you make the build more efficient, the numbers will go down, probably".

comment:12 Changed 4 years ago by rwbarton

I have to say I don't really understand cabal install taking 30 seconds either; unless it includes either building your dependencies (random, mtl) or the slowest operation known to man, "Updating documentation index" in the global package database.

comment:13 Changed 4 years ago by j.waldmann

... both! (well, update doc index in the sandbox only)

comment:14 Changed 4 years ago by j.waldmann

Cc: j.waldmann added

comment:15 Changed 4 years ago by j.waldmann

Here is some more timing info for travis:

I added "time" measurements around the cabal commands https://travis-ci.org/jwaldmann/haskell-tpdb/builds/109678365

So the timing they are showing is indeed wall clock.

Still, sometimes "user time" was larger than "real time". I suspect this is because cabal uses -j $ncpus by default. I think for cabal install --dependencies this means compiling the dependencies in parallel, which is OK, but for cabal build it means passing -j $ncpus to ghc, which decreases performance, sometimes drastically.

So I put an explicit -j 1 on these commands, https://travis-ci.org/jwaldmann/haskell-tpdb/builds/109683703 and now the timings look at least somewhat reasonable.

comment:16 Changed 3 years ago by bgamari

I'm having a look at this now; it seems the culprit in iproute is Data.IP.Addr, whose compilation time has increased by the better part of a second with -O1 between 7.8.4 and 7.10.3. I'll try to look into why tomorrow.

comment:17 Changed 2 years ago by dfeuer

Git bisect indicates that the regression in Data.IP.Addr occurred in b9e49d3e9580e13d89efd1f779cb76f610e0d6e0 "Add -fspecialise-aggressively". Compiler allocation with -O increased from 1.53GB to 2.25GB and compilation time increased from 1.8s to 2.6s. Is this to be expected?

comment:18 Changed 2 years ago by dfeuer

Cc: dfeuer added

comment:19 Changed 2 years ago by simonpj

I'm still puzzled by the quadrupling in compile time described in the Description. Is that reproducible? Increasing from 1.8s to 2.6s seems much less dramatic.

Bisection is great! I would not have suspected that patch. Inspecting it, I think that perhaps dfuns are specialised after the patch but not before. Try -dshow-passes before and after to show code size, and -ddump-rules to show what specialisation rules have been created (at least for top-level things). Then it's worth checking that the extra specialisations make sense.

(I'm assuming that this program does not actually say -fspecialise-aggressively?)
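
To make the suggestion concrete, here is a hedged, self-contained sketch (the class and module names are invented; nothing here comes from iproute) of the dfun situation: an instance with a class constraint in its context gets a dictionary function, and compiling with the suggested flags shows whether the patch makes GHC specialise it at concrete types.

-- Compile with: ghc -O -dshow-passes -ddump-rules DFunSketch.hs
-- and compare the emitted SPEC rules before and after the suspect commit.
-- Whether a rule actually appears depends on the GHC version and flags.
module DFunSketch where

class Describe a where
  describe :: a -> String

instance Describe Int where
  describe n = "Int " ++ show n

-- This instance has a Describe constraint in its context, so GHC generates a
-- dictionary function (dfun) taking a Describe dictionary as an argument.
instance Describe a => Describe [a] where
  describe xs = unwords (map describe xs)

-- A use at the concrete type [Int]; specialising the dfun here is the kind
-- of extra work (and extra SPEC rule) that -ddump-rules would reveal.
report :: [Int] -> String
report = describe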

comment:20 Changed 2 years ago by dfeuer

@simonpj, no, I cannot reproduce the claimed regression. I only see the regression Ben noted. I will investigate that one further today.

comment:21 Changed 2 years ago by dfeuer

No, the program doesn't say -fspecialise-aggressively. -dshow-passes indicates that everything stays the same up until specialization, with terms: 2,528, types: 2,420, coercions: 214. In specialization, it previously rose to terms: 2,601, types: 2,487, coercions: 214, but after the change it instead jumps to terms: 3,468, types: 3,731, coercions: 214. Looking at -ddump-rules, the difference is pretty dramatic. Before, we had just

"SPEC Control.Monad.when @ (Text.Appar.Parser.MkParser GHC.Base.String)" [ALWAYS]
    forall ($dMonad :: Monad (MkParser String)).
      when @ (MkParser String) $dMonad
      = $swhen

After the change, we have many, many rules (I'll attach them). While the specializations all look fairly reasonable to me, I'm not sure we really want quite that many by default.
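
For context, here is a simplified stand-in for an appar-style parser (these are not appar's actual definitions, just an illustration) showing why an overloaded combinator like when, used at a concrete parser type, is a natural target for such a SPEC rule:

module ParserSketch where

import Control.Monad (when)

-- A state-passing parser, loosely in the style of appar's MkParser.
newtype P s a = P { runP :: s -> (Maybe a, s) }

instance Functor (P s) where
  fmap f (P g) = P (\s -> case g s of (ma, s') -> (fmap f ma, s'))

instance Applicative (P s) where
  pure a = P (\s -> (Just a, s))
  P f <*> P g = P go
    where
      go s =
        case f s of
          (Nothing, s') -> (Nothing, s')
          (Just h, s')  ->
            case g s' of
              (ma, s'') -> (fmap h ma, s'')

instance Monad (P s) where
  return = pure
  P g >>= k = P go
    where
      go s =
        case g s of
          (Nothing, s') -> (Nothing, s')
          (Just a, s')  -> runP (k a) s'

-- A parser that always fails without consuming input.
failP :: P s a
failP = P (\s -> (Nothing, s))

-- `when` used at the concrete type P String: specialising it here would give
-- a rule of the same shape as "SPEC Control.Monad.when @ (MkParser String)".
rejectDigit :: Char -> P String ()
rejectDigit c = when (c >= '0' && c <= '9') failP

Each overloaded combinator used this way at a concrete type is a candidate for its own rule, which is why the dump grows so quickly once specialisation becomes more eager.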

Changed 2 years ago by dfeuer

Attachment: rules-after added

Specialization rules after -fspecialise-aggressively commit

comment:22 Changed 2 years ago by dfeuer

It looks like the same commit that caused the regression in Data.IP.Addr here was also responsible for the much more dramatic Language.Haskell.Extension regression described in #9630.

comment:23 Changed 2 years ago by RyanGlScott

Cc: RyanGlScott added

comment:24 Changed 2 years ago by bgamari

Owner: set to dfeuer