That is essentially just the fixed-point method mentioned here, and it is uniformly distributed across the range (and faster, since it does less work), but it does not fill the low bits of the result correctly: a double has 53 bits of mantissa precision, yet there are far more than 2^53 representable doubles between 0 and 1, and the fixed-point method can only produce the 2^53 evenly spaced multiples of 2^-53. If you look at the final distribution and you draw fewer than 1-10 million numbers, both that method and the division method will appear unbiased unless you hit a pathological case.
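
For concreteness, here is a minimal sketch of the fixed-point method being described, assuming C and a splitmix64-style stand-in for whatever 64-bit generator you actually use (rng64 and uniform_fixed_point are just illustrative names):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in 64-bit PRNG (splitmix64-style), seeded arbitrarily for
       illustration; substitute your real generator here. */
    static uint64_t rng64(void) {
        static uint64_t s = 0x9E3779B97F4A7C15ULL;
        uint64_t z = (s += 0x9E3779B97F4A7C15ULL);
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
        return z ^ (z >> 31);
    }

    /* Fixed-point method: keep the top 53 bits and scale by 2^-53.
       This only ever produces the 2^53 multiples of 2^-53 in [0, 1),
       so any result below 0.5 has trailing mantissa bits that are
       always zero, which is the "low bits" problem above. */
    double uniform_fixed_point(void) {
        return (rng64() >> 11) * (1.0 / 9007199254740992.0); /* 2^53 */
    }

    int main(void) {
        for (int i = 0; i < 5; i++)
            printf("%.17g\n", uniform_fixed_point());
        return 0;
    }

The point is visible in the output format: values in [0.5, 1) can use every mantissa bit, but smaller values are stuck on the same 2^-53 grid, so the extra precision a double has near zero never gets used.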