Alternate drawing method fails test -- Scientific Computing with Python Projects - Probability Calculator

Tell us what’s happening:
There is an implementation of Hat.draw that picks balls one by one: get a random index, copy the ball at that index, and pop it from Hat.contents. I assume that is the intended approach, because it passes all the tests without issue.
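For reference, here is a minimal sketch of that index-and-pop approach (the class and method names follow the project spec, but the body is my own paraphrase, not necessarily the reference solution):

import random

class Hat:
    def __init__(self, **balls):
        # e.g. Hat(red=2, blue=1) -> contents == ['red', 'red', 'blue']
        self.contents = [color for color, count in balls.items() for _ in range(count)]

    def draw(self, number):
        # If more balls are requested than remain, return everything.
        if number >= len(self.contents):
            drawn = self.contents[:]
            self.contents.clear()
            return drawn
        drawn = []
        for _ in range(number):
            # Pick a random index, copy that ball, and pop it from the hat.
            index = random.randrange(len(self.contents))
            drawn.append(self.contents.pop(index))
        return drawn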

However, I initially tried a slightly simpler method, as follows:

def draw(self, number):
    random.shuffle(self.contents)
    n = min(number, len(self.contents))
    result = self.contents[:n]
    self.contents = self.contents[n:]
    return result

This one will fail test_hat_draw in the test module, giving the following error:

AssertionError: Lists differ: ['red', 'red'] != ['blue', 'red']

First differing element 0:
'red'
'blue'

- ['red', 'red']
?   ^ -

+ ['blue', 'red']
?   ^^^
 : Expected hat draw to return two random items from hat contents.

Even though I did not change the random seed, my implementation consumes the random number generator differently, so its result differs from the "expected" random result that the intended implementation would produce.

Is there something necessarily wrong with the implementation I provided, or is the test module just not robust enough to handle the difference?

Your code so far
My code on replit is available here: fcc-probability-calculator - Replit

Your browser information:

User Agent is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36

Challenge: Scientific Computing with Python Projects - Probability Calculator

Link to the challenge:

Is there something necessarily wrong with the implementation I provided, or is the test module just not robust enough to handle the difference?

The test depends on unspecified implementation details. Shuffling is as unbiased as the RNG itself, so the two results are equally random and equally correct. In my opinion, though, changing the order of the contents is an unexpected side effect, and it isn't worth the slightly cleaner code. That's a judgment call for you, though.
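A quick illustration of that side effect, using a hypothetical hat of four balls (the seed is arbitrary and only there to make the runs repeatable):

import random

contents = ['red', 'red', 'blue', 'green']

# Shuffle-then-slice: the balls left in the hat end up in shuffled order.
random.seed(0)
hat = contents[:]
random.shuffle(hat)
drawn, remaining = hat[:2], hat[2:]
print(drawn, remaining)

# Index-and-pop: the balls left in the hat keep their original relative order.
random.seed(0)
hat = contents[:]
drawn = [hat.pop(random.randrange(len(hat))) for _ in range(2)]
print(drawn, hat)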


I think the most relevant discussion is here, outside the Python source.

As for the tests, there are only a few ways to repeatably test random outcomes. The easiest is to fix the random seed. Shuffling consumes many random numbers per shuffle, whereas choice consumes one per pick, and that difference is what makes your version fail this test.
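To see why a seeded test can diverge, compare how much of the random stream each strategy consumes (the seed here is arbitrary, not necessarily the one the test module uses):

import random

balls = ['red', 'red', 'blue']

# Single index pick: one call to the RNG.
random.seed(42)
pick_by_index = balls[random.randrange(len(balls))]

# Shuffle then take the first ball: roughly one RNG call per element,
# so the random stream advances differently from the version above.
random.seed(42)
shuffled = balls[:]
random.shuffle(shuffled)
pick_by_shuffle = shuffled[0]

# Even with the same seed, the two strategies can pick different balls,
# and everything drawn afterwards sees a different RNG state as well.
print(pick_by_index, pick_by_shuffle)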

You can also detect probability differences in these simulations when using shuffles versus choice, to the point of failing some of the probability tests (I've run several variations against the current and older tests). This is likely a consequence of the number of trials the tests use.


Yeah, as far as I can tell, a fairly low number of iterations is used, so the observed result can differ from what you would expect with a very large sample size.
