
Floats are typically used to approximate reals, so a uniform distribution over individual float values is usually not what is needed. What is wanted is a uniform distribution over equal-sized intervals.

2^53 equally-spaced values within [0, 1) is plenty for many use cases, but it's still fundamentally a set of fixed-point rather than floating-point numbers. For a truly random floating-point value, all the available precision should be used, such that every representable value in the domain has some probability mass (except maybe subnormals, but you can usually forget about subnormals). Especially with 32-bit floats, it's not that difficult to run out of precision once you start doing math and only have a fixed-point subset of size 2^23 available.
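A rough sketch of the two approaches (the full-precision construction below is one known technique, drawing the exponent geometrically; the function names are mine, and subnormals are ignored as suggested above):

    import random

    def uniform_fixed_point():
        # The common approach: 53 random bits scaled by 2^-53.
        # Yields only 2^53 equally spaced values -- a fixed-point subset.
        return random.getrandbits(53) * 2.0**-53

    def uniform_full_precision():
        # Sketch: pick the binade [2^e, 2^(e+1)) geometrically -- each
        # halving of [0, 1) carries half the remaining probability mass.
        exponent = -1
        while random.getrandbits(1) == 0 and exponent > -1022:
            exponent -= 1
        # Fill the 52-bit mantissa uniformly within the chosen binade.
        mantissa = random.getrandbits(52)
        return (1.0 + mantissa * 2.0**-52) * 2.0**exponent

With this construction, tiny results like 1e-300 are possible (just very unlikely), whereas the fixed-point method can never produce a nonzero value below 2^-53.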



The floats are a surprisingly bad approximation of the reals. They're barely better than the integers, while their shortcomings are much harder to understand, so I'd say it's like if you want approximately a spaceship and you're choosing between an F-14 Tomcat and a Fiat Punto.

Neither of these things is a spaceship, but it will be obvious to almost anybody why the Punto isn't a spaceship, whereas you are less likely to know enough about the F-14 to see why it's not "good enough".


I think the truly surprising thing is just how well floating point numbers work in many practical applications despite how different they are from the real numbers. One could call it the "unreasonable effectiveness of floating point mathematics".


Why is that surprising? No one ever uses real numbers for anything.


There are many situations where you want to compute something to within a low number of units in the last place (ULPs), and the problem seems fairly involved, but very often there are clever methods that let you do it without having to go to extended or arbitrary precision. Maybe not that surprising, but it's something I find interesting.
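One classic example of such a clever method (my choice of illustration, not something the comment names) is Kahan's compensated summation, which recovers most of the rounding error of a running sum without ever widening the precision:

    def kahan_sum(values):
        # Track the rounding error of each addition in a separate
        # compensation term, so small addends aren't silently lost.
        total = 0.0
        compensation = 0.0
        for v in values:
            y = v - compensation
            t = total + y
            compensation = (t - total) - y
            total = t
        return total

For input like [1e16, 1.0, 1.0, -1e16], a naive sum loses both 1.0 terms to rounding and returns 0.0, while the compensated version returns the exact answer 2.0 -- all in plain doubles.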


> The floats are a surprisingly bad approximation of the reals. They're barely better than the integers

They are integers. Each float is a pattern of 64 discrete bits. Discreteness means integers.


In float32 land, if

    x   = 0.5   (with bit pattern 0x3f000000)
    y   = 0.25  (with bit pattern 0x3e800000)
then

    x+y = 0.75  (with bit pattern 0x3f400000)
But if we just added the bit patterns like integers, we would get 0x7d800000 (a float value of over 10^37). Just because they are discrete doesn't mean they are integers, only that they can be mapped one-to-one with integers. The bit pattern is not the semantic meaning of the value, and you cannot perform correct operations if you ignore the semantics.
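The example above can be checked directly by reinterpreting bits (a small demo of the point, using Python's struct module to round-trip float32 bit patterns):

    import struct

    def f32_bits(x):
        # A Python float, viewed as its 32-bit IEEE-754 bit pattern.
        return struct.unpack("<I", struct.pack("<f", x))[0]

    def bits_f32(b):
        # A 32-bit pattern, viewed as an IEEE-754 single-precision float.
        return struct.unpack("<f", struct.pack("<I", b))[0]

    x, y = 0.5, 0.25
    assert f32_bits(x) == 0x3F000000
    assert f32_bits(y) == 0x3E800000
    assert f32_bits(x + y) == 0x3F400000       # float addition: 0.75

    # Adding the bit patterns as integers gives nonsense:
    garbage = bits_f32(f32_bits(x) + f32_bits(y))  # pattern 0x7D800000
    assert garbage > 1e37                          # ~2.1e37, not 0.75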



