That link won’t let me view the notebook (it looks like an editing link). Click the ‘Share’ button in the top right corner, then copy the ‘anyone can view’ link from that dialog and post it.
Looks like the problem is in the code where you are training your model:
history = model.fit(train_data_gen, steps_per_epoch=16, epochs=epochs, validation_data=val_data_gen, validation_steps=20)
When it threw this warning:
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 20 batches). You may need to use the repeat() function when building your dataset.
You have to make sure you have enough data to supply both training and validation. For training, for example, you have 2,000 images, a batch size of 128, and are running 15 epochs. An epoch can only have as many steps as you have batches of data, so you should calculate the steps_per_epoch argument from the ratio of total images to batch size, and do the same for validation_steps.
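For example, here’s a minimal sketch of that calculation (the 1,000 validation images and the variable names are my assumptions; substitute your own counts):

import math

total_train = 2000   # training images, per the example above
total_val = 1000     # assumed validation image count
batch_size = 128

# An epoch can consume at most ceil(total_images / batch_size) batches
# before the generator runs out of data.
steps_per_epoch = math.ceil(total_train / batch_size)   # 16
validation_steps = math.ceil(total_val / batch_size)    # 8

history = model.fit(
    train_data_gen,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=validation_steps,
)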
Thank you for your advice.
So I changed the steps_per_epoch argument, and that removed the warning message I was getting previously. However, val_accuracy is still stuck at a single value. Although I was able to pass this project by stacking more layers in my model, I still don’t understand how to fix this problem.
There have been a couple of other posts here about val_accuracy being constant. Right now, mine is constant in the model I used to get above the threshold accuracy. I’m not even sure it’s a problem, but I suspect it is. I think it is caused by one of three things: how the validation data is generated (something with the ordering, class mode, or shuffling of the images; this is the most likely culprit), the final layers of the model (after all the convolutions, specifically the final dense layers and activations), or the optimizer choice (its interplay with the relu and sigmoid activations and binary cross entropy). I intend to investigate when I get the chance, but it may be a while.
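For reference, here is a minimal sketch of the pieces I mean, assuming the usual ImageDataGenerator/flow_from_directory setup from this project (the directory path, image size, and the tiny model are placeholders, not anyone’s actual solution):

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Validation generator: shuffle=False keeps the image order fixed, which
# rules out shuffling/ordering as the cause of a frozen val_accuracy.
val_image_gen = ImageDataGenerator(rescale=1.0 / 255)
val_data_gen = val_image_gen.flow_from_directory(
    "data/validation",        # hypothetical path
    target_size=(150, 150),   # assumed image size
    batch_size=128,
    class_mode="binary",      # one 0/1 label per image
    shuffle=False,
)

# For binary cross entropy the head should be a single sigmoid unit; a
# mismatched head (e.g. softmax over one unit) is a common way for
# accuracy to get pinned at a constant value.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])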
Regardless, your training metrics seem to be trending in the right direction, so you may be able to hit the accuracy target if you haven’t already. Good luck.