Opened 5 years ago
Last modified 4 years ago
#9276 new task
audit ghc floating point support for IEEE (non)compliance
Reported by: | carter | Owned by: | carter |
---|---|---|---|
Priority: | normal | Milestone: | |
Component: | Compiler | Version: | 7.8.2 |
Keywords: | | Cc: | jrp |
Operating System: | Unknown/Multiple | Architecture: | Unknown/Multiple |
Type of failure: | None/Unknown | Test Case: | |
Blocked By: | | Blocking: | #3070, #8364, #9530 |
Related Tickets: | #9304 | Differential Rev(s): | |
Wiki Page: |
Description
As best I can determine, GHC has never been closely audited for conformance to the IEEE-754 floating-point standard, and it is currently some way from providing a compliant implementation.
This impacts a number of other tasks I wish to do for GHC, and much of my own use of Haskell is in floating-point-heavy workloads, so I will do a bit of leg work to:
a) improve test suite support for checking for compliance (a few example spot checks are sketched below)
b) write some patches to provide portable, compliant primops for the operations which need compiler support
c) try to determine how to allow the GHC optimizer to be a bit more aggressive, in a sound way, in the presence of floating point.
(this may grow into a few subtickets, we'll see)
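To give a flavour of item (a), here are a few stand-alone spot checks of the kind such a testsuite might contain; these are hypothetical examples written for this ticket, not the actual GHC testsuite:

```haskell
-- Hypothetical IEEE-754 spot checks; illustrative only, not GHC testsuite code.
main :: IO ()
main = mapM_ check
  [ ("0/0 is NaN"        , isNaN (0 / 0 :: Double))
  , ("1/0 is +Infinity"  , let x = 1 / 0 :: Double in isInfinite x && x > 0)
  , ("NaN /= NaN"        , (0 / 0 :: Double) /= (0 / 0))
  , ("negate 0 is -0.0"  , isNegativeZero (negate 0 :: Double))
  , ("-0.0 == 0.0"       , negate 0 == (0 :: Double))
  ]
  where
    check (name, ok) = putStrLn ((if ok then "PASS: " else "FAIL: ") ++ name)
```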
Change History (16)
comment:2 Changed 5 years ago by
You can check if the host and the target have the same kind of FP (easy if host==target) and only constant fold under that condition.
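A minimal sketch of that guard (not GHC code; the `FloatSemantics` descriptor is invented for illustration, standing in for whatever platform information GHC actually carries):

```haskell
-- Only fold a floating-point operation at compile time when the host's
-- arithmetic is known to match the target's (trivially true when
-- compiling natively).
data FloatSemantics = IEEE754Binary | NonIEEE   -- hypothetical descriptor
  deriving (Eq, Show)

canFoldFloat :: FloatSemantics -> FloatSemantics -> Bool
canFoldFloat host target = host == target

-- Fold a Double addition only under that condition; otherwise return
-- Nothing and leave the expression for the target to evaluate at run time.
foldDoubleAdd :: FloatSemantics -> FloatSemantics -> Double -> Double -> Maybe Double
foldDoubleAdd host target x y
  | canFoldFloat host target = Just (x + y)
  | otherwise                = Nothing
```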
comment:3 Changed 5 years ago by
Yeah, I'm not concerned with even doing optimization yet, just with making sure all the suitable primitives are exposed :)
comment:4 Changed 5 years ago by
Blocking: | 9304 added |
---|
comment:5 Changed 5 years ago by
@augustss: Ideally, adapters should be used to emulate the target's floating-point behaviour. I'm not sure how complex that is, but you shouldn't lose any optimisations when cross compiling -- what if cross compiling is the normal case, e.g. for embedded targets?
comment:6 Changed 5 years ago by
Yeah, improving optimization requires a pretty precise soft-float model of the target hardware's floating-point semantics, with roughly three modes:
- IEEE / machine model -- same result as if run as a normal program on the target
- fast-math model -- assume associativity; assume NaNs never happen
- excess precision -- use extra precision in intermediate computations to provide as many bits of precision as possible
Adding that sort of machinery to GHC is a bit out of scope for just an audit (and any induced patches to provide missing operations), but it becomes possible once such an audit is done. (Also a LOT of work.)
I want to get this done for 7.10; adding optimization on top can be on the table later, though! :)
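To make the three modes concrete, here is a rough sketch; none of this is GHC code, and the names are invented for illustration:

```haskell
-- Hypothetical model of the three folding modes described above.
data FPModel
  = IEEEMachine      -- fold only when the result is bit-identical to what
                     -- the target would compute at run time
  | FastMath         -- may assume associativity and that NaNs never occur
  | ExcessPrecision  -- may carry extra precision in intermediate results

-- Reassociating (a + b) + c to a + (b + c) is not value-preserving under
-- IEEE semantics, so a folder should only do it in fast-math mode.
mayReassociate :: FPModel -> Bool
mayReassociate FastMath = True
mayReassociate _        = False
```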
comment:7 Changed 5 years ago by
Though if someone writes that soft-model tooling, at least for case (1) (the IEEE / machine model), maybe it could happen faster! :)
comment:8 Changed 5 years ago by
Blocking: | 8364 added |
---|
comment:9 Changed 5 years ago by
Cc: | jrp added |
---|
comment:10 Changed 5 years ago by
Blocking: | 9530 added |
---|
comment:11 Changed 5 years ago by
Blocking: | 3070 added |
---|
comment:12 Changed 5 years ago by
Milestone: | 7.10.1 → 7.12.1 |
---|---|
Priority: | high → normal |
Re-milestoning to 7.12.
comment:14 Changed 4 years ago by
Blocking: | 9304 removed |
---|
comment:16 Changed 4 years ago by
Milestone: | 8.0.1 |
---|
The constant folder doesn't actually fold floating-point operations yet.
It's a hard thing to do without blowing up cross compilation. Also, Inf/NaN can't currently be represented in the pipeline.
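As a stand-alone illustration of the representation problem (assuming float literals are carried as `Rational`, as in the Core `Literal` type of this era): a `Rational` is a finite ratio of `Integer`s, so NaN and the infinities have no encoding in it and are lost on a round trip.

```haskell
-- Demonstration only, not GHC internals: IEEE special values cannot
-- survive a trip through Rational.
main :: IO ()
main = do
  let nan, inf :: Double
      nan = 0 / 0
      inf = 1 / 0
  print (toRational nan)  -- some ordinary finite rational, not a NaN
  print (toRational inf)  -- likewise finite, not an infinity
  -- the exact round-trip value is platform dependent, but it is never NaN
  print (isNaN (fromRational (toRational nan) :: Double))  -- False
```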