[ML] Cats and Dogs Test Set .evaluate() only 1 image?

Hi, I have been working on the second ML with Python project, Cats and Dogs. So far everything seems to work. However, when I reach the model.predict() cell, I feel like something is not working correctly.

I trained the model and now I want to grab the test set and get the predictions, so I use:

model.evaluate(test_data_gen) - the output only shows 1/1, which I think means it takes only one test image instead of all 50?! I'm not sure about that, though.

And I get this output for this cell:

1/1 [==============================] - 0s 232ms/step - loss: 0.0000e+00 - accuracy: 0.0000e+00

[0.0, 0.0]

The transformation to a list is also missing, but I guess the problem above should be resolved first.

It would be great if someone could clarify this; any hint would be very welcome.

You can find my colab here: Google Colab

Compare evaluate and predict to make sure you are using the one you intend. evaluate returns the loss and metric values, while predict returns one prediction per image, so you should get 50 results. They should look something like

[[4.86431420e-01 5.13568521e-01]
 [6.01261914e-01 3.98738027e-01]
 [7.98121810e-01 2.01878160e-01]
 [1.52471483e-01 8.47528458e-01]
 [5.39969265e-01 4.60030705e-01]
 [4.93960142e-01 5.06039798e-01]
...
]

if you print them. You'll need to process this data before doing the comparison against the answers in the last cell, as it expects binary values.
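For reference, collapsing those probability pairs into binary class labels can be sketched like this (the values below are hypothetical, just mirroring the printed output above; np.argmax picks the index of the larger probability in each row):

```python
import numpy as np

# Hypothetical probability pairs like the ones model.predict() returns,
# one row per test image: [P(class 0), P(class 1)]
probs = np.array([
    [0.486, 0.514],
    [0.601, 0.399],
    [0.798, 0.202],
])

# Take the index of the highest probability in each row -> 0 or 1
binary = np.argmax(probs, axis=-1)
print(binary)  # [1 0 0]
```

With the real model this becomes np.argmax(model.predict(test_data_gen), axis=-1), giving one 0/1 label per test image.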

Thank you very much! Looking at the two links made it clear to me, and I implemented it.

I used np.argmax(model.predict(test_data_gen), axis=-1) to convert the predicted classes into binary values and got 66% accuracy in the end.

Thank you.
