On Thu, 28 Nov 2002, Frank Barknecht wrote:
David N G McCallum wrote:
Maybe I'm missing something here, or it's already been said, but couldn't you just set up a really simple patch to compare the output of [random] with its previous output? If new == old, bang [random] again; if new != old, pass it through.
That's what I thought at first glance too, but as randomness goes, it is possible that you get a lot of equal numbers in a row, and that would stop your flow of numbers.
If you can afford to do N retries without desyncing your patch, and the probability of a good number is 80%, then the probability of a desync is 20%^N. For example, for N=10, that's about 1 in 10 million. That assumes your [random] generator is uniform enough -- if not, then strange things may happen.
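A minimal sketch of that bounded-retry idea in Python rather than Pd (the range size n and the retry cap are made-up parameters, not anything from the thread):

import random

def next_different(prev, n=5, max_retries=10):
    # Draw from range(n); retry up to max_retries times if the draw
    # repeats prev. With n=5, a single draw repeats prev with
    # probability 20%, so all 10 retries failing has probability
    # 0.2**10 -- about 1 in 10 million, the desync case above.
    x = prev
    for _ in range(max_retries):
        x = random.randrange(n)
        if x != prev:
            break
    return x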
Mathieu's solution is a very elegant algorithm that I didn't know before, but I'll remember that one now.
Thanks =)
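The algorithm itself isn't quoted above, so this may or may not be the one meant, but one standard retry-free trick for "uniform, yet never the same value twice in a row" is to draw from a range one smaller and shift past the previous result:

import random

def next_different_noretry(prev, n=5):
    # Uniformly pick from range(n) excluding prev, in one draw:
    # choose one of the n-1 remaining values, then shift results
    # >= prev up by one so prev itself can never come out.
    x = random.randrange(n - 1)
    return x + 1 if x >= prev else x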
I think it was a Swiss scientist who tested how "random" humans behave. He rolled a die, and test subjects were told to guess the result without seeing the die. The humans unintentionally tried to avoid repeating numbers: subconsciously they must think that "true randomness" excludes repeated results. Of course it doesn't.
I remember a prof asking half of the class to flip a coin 100 times and note the results, and the other half to fake them. He then tried to figure out which sheets were faked just by looking at them. His criterion was that real results typically include a run of six of the same value (TTTTTT or HHHHHH), while faked results typically avoid those.
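That six-in-a-row criterion is easy to sanity-check by simulation (a sketch; the figure it prints is whatever the simulation converges to, not a number from the thread):

import random

def longest_run(flips):
    # Length of the longest run of identical consecutive outcomes.
    best = run = 1
    for a, b in zip(flips, flips[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

trials = 10000
hits = sum(longest_run([random.choice("HT") for _ in range(100)]) >= 6
           for _ in range(trials))
print("P(run of >= 6 in 100 flips) ~", hits / trials)  # comes out near 0.8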
Mathieu Bouchard http://artengine.ca/matju