Opened 11 years ago

Last modified 2 years ago

#3070 new bug

floor(0/0) should not be defined

Reported by: carette Owned by: squadette
Priority: lowest Milestone:
Component: Prelude Version: 6.10.1
Keywords: Cc: squadette@…, mihai.maruseac@…, anton.nik@…, cblp
Operating System: Unknown/Multiple Architecture: Unknown/Multiple
Type of failure: None/Unknown Test Case:
Blocked By: #9276 Blocking:
Related Tickets: #10754 Differential Rev(s):
Wiki Page:


floor(0/0) returns some giant negative integer - it should return NaN or undefined.

The bug appears to be in some implementation of 'properFraction' in the standard library.
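
For concreteness, a minimal reproduction of the report (the exact Integer produced is implementation-dependent garbage, so it is printed rather than asserted):

```haskell
-- Reproduction of the ticket: 0/0 is NaN, yet floor does not fail;
-- it returns an arbitrary, meaningless Integer derived from NaN's bit pattern.
main :: IO ()
main = do
  let nan = 0 / 0 :: Double
  print (isNaN nan)             -- True: the input really is NaN
  print (floor nan :: Integer)  -- a giant integer, not an error
```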

[from Andrej Bauer, from Barak Pearlmutter (from ???)]

Change History (33)

comment:1 Changed 11 years ago by igloo

difficulty: Unknown
Milestone: 6.12.1

comment:2 Changed 11 years ago by crutcher

If decodeFloat (0.0/0.0) were changed to be undefined, then floor would be undefined.

Is there any support for this?

comment:3 Changed 11 years ago by squadette

Cc: squadette@… added
Owner: set to squadette

Same thing happens with exponent, significand, scaleFloat, truncate, round, and ceiling.

I believe this patch should fix the problem:

  1. Of course, we get some performance hit, with two ifs for every call to those functions, and the appearance of `undefined` (might it confuse the optimizer somehow?). However, most of these functions do not seem very performance-critical (except maybe scaleFloat).

  2. However, I believe that getting

significand (0/0) = -0.75

is not right, however fast it is.

  3. withValidDouble could probably be made a monad of some sort?
  4. properFraction should be reindented if this patch is applied; I did not do so, to minimize changes.
  5. Maybe we need a special compiler flag meaning "do not bother checking floats for validity; let the user handle the details."
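
Since the patch itself is not reproduced in this ticket, the following is only a guess at what the withValidDouble guard mentioned above might look like (the name comes from the comment; the implementation is purely illustrative):

```haskell
-- Hypothetical sketch: run a function only on finite, non-NaN arguments,
-- otherwise fail loudly instead of returning garbage.
withValidDouble :: (Double -> a) -> Double -> a
withValidDouble f x
  | isNaN x      = error "withValidDouble: NaN argument"
  | isInfinite x = error "withValidDouble: infinite argument"
  | otherwise    = f x

-- Usage: a guarded truncate, in the spirit of the proposed patch.
safeTruncate :: Double -> Integer
safeTruncate = withValidDouble truncate

main :: IO ()
main = print (safeTruncate 3.9)  -- 3
```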

Thank you,

comment:4 Changed 11 years ago by squadette

Well, it seems like this whole idea of correctness really contradicts some folks' expectations about speed: ("This is really a problem for me when doing signal processing, since for writing to a common audio file format or listening to a signal data has to be converted from Double to Int16.")

If we add even more specializations for primitive types, what will happen to correctness?

Haskell 98 seems to ignore the question of invalid floating-point arguments, while recognizing the existence of NaNs :)

Maybe we should just explicitly state somewhere that we sacrifice correctness for speed with respect to Float <-> Int domain discrepancies.

comment:5 Changed 11 years ago by squadette

Or perhaps the problem is purely one of expectations.

People are used to the fact that evaluating 0/0 gives a division-by-zero runtime error.

Haskell, however, is so lazy that it does not bother exiting. Maybe a special command-line argument is needed -- terminate on invalid FP operations. This could be useful for beginners, gentle introductions, and such.

As for the day-to-day programming we just declare that Double cannot be converted to integral types anyway, so a) invalid operations give NaN, and b) all the functions will give you literal garbage when out of the target range (incl. NaN and Infinity).

AFAIU, floating-point developers are used to managing evaluation details carefully anyway.

comment:6 Changed 10 years ago by igloo

Type of failure: None/Unknown

See also #3676

comment:7 Changed 10 years ago by carette

I don't think that exact patch is quite the way to go. Actually, I think that decodeFloat should be fixed to throw an exception when encountering NaN or infinities -- that's much more compatible with IEEE 754-2008. Also, there should probably be an isFinite :: Double -> Bool function, which would help guard against 'weird' floats with one check, not two.
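
The proposed isFinite guard can be sketched directly in terms of the predicates RealFloat already provides (a real primitive could do it with a single test of the exponent bits, as the comment suggests; this version is just the obvious composition):

```haskell
-- carette's proposed guard: True only for ordinary, finite doubles.
isFinite :: Double -> Bool
isFinite x = not (isNaN x || isInfinite x)

main :: IO ()
main = mapM_ (print . isFinite) [1.5, 0 / 0, 1 / 0]  -- True, False, False
```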

comment:8 Changed 10 years ago by simonpj


We are stalled on this (and I think a couple of other fp-related tickets) because we lack the sophistication in numerical methods to develop the Right Answer, and no consensus has emerged. Would you, and/or others interested in numerical aspects of programming, like to figure out what we should do, make a proposal, and ultimately send us a patch? If it's just left for GHC HQ I fear that nothing may happen.


comment:9 Changed 10 years ago by carette

I can do that - but not before mid-January.

comment:10 Changed 10 years ago by igloo


comment:11 Changed 10 years ago by igloo

Priority: normal → low

comment:12 Changed 9 years ago by igloo


comment:13 Changed 9 years ago by igloo


comment:14 Changed 8 years ago by igloo


comment:15 Changed 8 years ago by igloo

Priority: low → lowest

comment:16 Changed 8 years ago by mihai.maruseac

Cc: mihai.maruseac@… added

comment:17 Changed 8 years ago by simonmar

closed #5683 as a duplicate of this ticket (see more commentary there).

comment:18 Changed 8 years ago by tristes_tigres

I already wrote a few things over at the ticket closed as a duplicate, and would like to add a comment regarding the discussion above. It is quite correct that explicit checking for undefined operations like 0.0/0.0 adversely affects performance. What is worse, it affects the performance of even the vast majority of operations, where NaN would not arise. And that is precisely why NaNs were introduced: so that you don't have to check for invalid operations, but can carry the computation through to its conclusion, and if it blew up along the way, you'll see it, and see exactly which parts were affected, since the result of every operation involving a NaN is also NaN. The IEEE 754 standard authors even thought of providing a way to indicate the source of a NaN: the special bit pattern indicating NaN has some "vacant" range that an implementation may use to indicate the source.
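
The propagation behaviour described above can be seen in a few lines of Haskell; nothing here is hypothetical, this is how NaN already behaves:

```haskell
-- NaN propagation in action: the invalid operation poisons every later
-- result, so the blow-up is visible at the end without any per-step checks.
main :: IO ()
main = do
  let xs = [1.0, 2.0, 0 / 0, 4.0] :: [Double]
  print (sum xs)          -- NaN
  print (isNaN (sum xs))  -- True
```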

In general, it is a real pity that Haskell's floating-point design is so lacking in thought; it could have been a great tool for numerical analysts. For instance, the big thing that Haskell got wrong is that FP operations are not pure in the sense of being fully determined by their operands. IEEE 754 provides for rounding-direction flags, and those can be used to check the stability of numerical algorithms experimentally. But how to fit this into the Haskell type system is a pretty fundamental question.

comment:19 Changed 8 years ago by simonpj

tristes_tigres, if you have some suggestions for how Haskell's FP design could be made more thoughtful, and more useful for numerical analysts, I think folks would definitely entertain them. People with serious numerical-analysis expertise in the Haskell community are rare, so if you are such a person we'd gladly listen to you. Don't take the status quo as cast in stone.


comment:20 Changed 8 years ago by lelf

Cc: anton.nik@… added

comment:21 Changed 8 years ago by tristes_tigres

Re: simonpj

I am not sure that it is appropriate to call my limited knowledge of numerical analysis "serious expertise". But I do have some suggestions, for what it's worth, where Haskell handling of FP could be improved.

To start from the bug under discussion, I think that the suggestions to throw errors on round(0.0/0.0) are somewhat misguided and defeat the purpose of having NaNs in the first place.

a) Inserting an "if" check will slow things down (disrupting the processor pipeline) every time round/ceil/floor/etc. is called, even though NaNs may occur in only a tiny fraction of cases.

b) Throwing an exception on encountering NaN may lead to termination of the program, with many possibilities of bad things happening. It is possible that the result of the invalid operation is not actually needed, and occurred in some code left in production by mistake (see the notorious case of the Ariane 5 rocket blowing up over an FP exception, where precisely this happened). Clearly, this is an undesirable outcome.

So how can the bug be corrected properly? By separating rounding from type conversion. It is IMHO a design error to conflate the two. A rounded floating-point number is very typically used in further FP computations (I already gave an example: the fdlibm sine/cosine functions). So there ought to be something like

ceilingf :: (Real a) => a -> a

floorf :: (Real a) => a -> a

truncatef :: (Real a) => a -> a

roundf :: (Real a) => a -> a

The existing round/floor/ceiling may be implemented by adding a type-conversion step (if they should be implemented at all, which I am not sure about, other than for backward compatibility with existing software). It is in this step that an exception on NaN may properly be raised, as per IEEE 754 sec. 7.1.
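
A minimal sketch of the rounding-without-conversion idea, specialised here to Double for concreteness (the names come from the comment above; the polymorphic (Real a) => a -> a versions would need more care, and the NaN/infinity pass-through shown is one plausible reading of the IEEE 754 intent, not a settled design):

```haskell
-- Round toward negative infinity, staying in Double; special values
-- propagate unchanged instead of turning into integer garbage.
floorf :: Double -> Double
floorf x
  | isNaN x || isInfinite x = x
  | otherwise               = fromIntegral (floor x :: Integer)

-- Round toward zero, with the same pass-through behaviour.
truncatef :: Double -> Double
truncatef x
  | isNaN x || isInfinite x = x
  | otherwise               = fromIntegral (truncate x :: Integer)

main :: IO ()
main = do
  print (floorf 2.7)            -- 2.0
  print (truncatef (-2.7))      -- -2.0
  print (isNaN (floorf (0/0)))  -- True: NaN propagates
```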

There are possible further improvements to NaN handling beyond fixing this bug, too.

comment:22 Changed 8 years ago by simonpj

Thanks. Your suggestions are indeed helpful. What this really needs is someone willing to take up the cudgels and propose, debate, and carry through a NaN proposal. Are there any numerical analysts who care about this stuff?

comment:23 Changed 7 years ago by igloo


comment:24 Changed 6 years ago by thoughtpolice


Moving to 7.10.1.

comment:25 Changed 5 years ago by carter

Blocked By: 9276 added

comment:26 Changed 5 years ago by thoughtpolice


Moving to 7.12.1 milestone; if you feel this is an error and should be addressed sooner, please move it back to the 7.10.1 milestone.

comment:27 Changed 5 years ago by thoughtpolice

Moving to 7.12.1 milestone; if you feel this is an error and should be addressed sooner, please move it back to the 7.10.1 milestone.

comment:28 Changed 4 years ago by thomie

See also #10754.

comment:29 Changed 4 years ago by bgamari

While this doesn't really address the crux of the problem, one option here would be to introduce another pair of floating point types which ensure proper treatment of all IEEE 754 constructs at the expense of performance. I really don't like the fact that this would make you choose between correctness and speed (especially since arguably the default should be correctness), but it is (I think) an option on the table.

As far as I can tell, this could be strictly a library change (with the types wrapping the unboxed Float# and Double# types; we would just need to double-check that the constant folding rules in GHC are correct with respect to the special floating-point values).
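
One way such a wrapper might look, as a hedged sketch (the names SafeDouble and safeFloor are invented here for illustration; a real library version would cover the whole RealFrac/RealFloat surface, not a single function):

```haskell
-- A checked wrapper around Double: operations that have no meaningful
-- result on the IEEE 754 special values make that explicit in their type.
newtype SafeDouble = SafeDouble Double
  deriving (Eq, Ord, Show)

-- Nothing for NaN and the infinities, instead of silently producing garbage.
safeFloor :: SafeDouble -> Maybe Integer
safeFloor (SafeDouble x)
  | isNaN x || isInfinite x = Nothing
  | otherwise               = Just (floor x)

main :: IO ()
main = do
  print (safeFloor (SafeDouble 3.7))    -- Just 3
  print (safeFloor (SafeDouble (0/0)))  -- Nothing
```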

comment:30 Changed 4 years ago by tristes_tigres

Correctly implemented FP would actually be faster on (most) modern processors, because they implement IEEE 754 in hardware.

By the way, the whole model of FP computations in Haskell needs to be rethought from scratch: FP operations are influenced by the state of the rounding flags and may raise exceptions, but the Haskell type system treats them as pure.

comment:31 Changed 4 years ago by thoughtpolice


Milestone renamed

comment:32 Changed 4 years ago by thomie

Milestone: 8.0.1

comment:33 Changed 2 years ago by cblp

Cc: cblp added