
Did you read the rest of my comment? You can't have a uniform distribution no matter what.


Ah, sorry, I didn't see the "or certain values will be far more likely than other values" part. But I don't understand why that wouldn't fall under the definition of a uniform distribution.

The way I would define a uniform distribution is the following:

For any two floating-point numbers r1 and r2 that form the range [r1,r2] over the real numbers, and any second pair of floating-point numbers s1 and s2 that form a range [s1,s2] contained in [r1,r2], the probability of getting a result in [s1,s2] when sampling from [r1,r2] must equal (s2-s1)/(r2-r1) computed with infinite precision.

This is obviously possible to achieve.
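As a rough illustration only (a minimal Python sketch; random.uniform is just a stand-in sampler and is not claimed to satisfy the property exactly), the definition can be read as a testable property: the fraction of samples landing in a sub-range should match the ideal ratio of lengths.

    import random

    def empirical_check(r1, r2, s1, s2, n=1_000_000):
        """Estimate P(sample in [s1, s2]) when drawing from [r1, r2],
        and return it next to the ideal ratio (s2 - s1) / (r2 - r1)."""
        hits = sum(s1 <= random.uniform(r1, r2) <= s2 for _ in range(n))
        return hits / n, (s2 - s1) / (r2 - r1)

    # Both values should come out close to 0.25 for this sub-range.
    print(empirical_check(0.0, 1.0, 0.25, 0.5))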


Given a uniform distribution over an interval (α, β) and a tolerance ε, sample the distribution twice. What is the probability that |x_1 - x_2| < ε?
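For reference (my own working of the intended computation, assuming an ideal continuous uniform): with width w = β − α and 0 ≤ ε ≤ w, the answer is 1 − ((w − ε)/w)², which is strictly positive for any ε > 0. A small Python sketch comparing that closed form to a Monte Carlo estimate with a floating-point sampler:

    import random

    def collision_prob(alpha, beta, eps, n=1_000_000):
        """Monte Carlo estimate of P(|x1 - x2| < eps) for two independent
        draws from a (nominally) uniform distribution on (alpha, beta),
        returned alongside the ideal continuous answer 1 - ((w - eps)/w)**2."""
        w = beta - alpha
        hits = sum(
            abs(random.uniform(alpha, beta) - random.uniform(alpha, beta)) < eps
            for _ in range(n)
        )
        return hits / n, 1 - ((w - eps) / w) ** 2

    # For (0, 1) and eps = 0.01, both values should be near 0.0199.
    print(collision_prob(0.0, 1.0, 0.01))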



