fcc_cat_dogML: Can't get val_accuracy and val_loss as arrays

I can't get the right results from model.fit: I only get a single number for val_accuracy and val_loss. Do you know how I can fix my code?
Here is what I did:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()

model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(), metrics=['accuracy'])
model.summary()

history = model.fit(train_data_gen, steps_per_epoch=16, epochs=epochs,
                    validation_data=val_data_gen, validation_steps=20)

Welcome to the forums, @dchang.

I’ll take a look at it if you can post a link to your notebook on Google Colab.

Hi Jeremy,
Here is the link to my notebook on Google Colab.

Thank you for your help in advance.

That link won’t let me view the notebook (I think it was an editing link). Look in the top right corner for the ‘Share’ button, click that and then copy the ‘anyone can view’ link from that dialog and post it.

Here is the copied link, based on your instructions.

Thanks!!

Looks like the problem is in the code where you are training your model:

history = model.fit(train_data_gen, steps_per_epoch=16, epochs=epochs,
                    validation_data=val_data_gen, validation_steps=20)

It threw this warning:

WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 20 batches). You may need to use the repeat() function when building your dataset.

You have to make sure that you have enough data to supply both the training and the validation. For training, for example, you have 2,000 images, a batch size of 128, and are running 15 epochs. Each epoch can only have as many steps as you have batches of data, so you should calculate the steps_per_epoch and validation_steps arguments from the ratio of the total number of images to the batch size, for both training and validation.
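To make that concrete, here is a minimal sketch of the calculation. The image counts (2,000 training / 1,000 validation) and the batch size of 128 are assumptions based on this project; adjust them to match your own generators:

```python
import math

# Assumed counts for this project: 2,000 training images,
# 1,000 validation images, batch size 128.
total_train = 2000
total_val = 1000
batch_size = 128

# An epoch can contain at most as many steps as the generator has
# batches, so round the image/batch ratio up to include the final
# partial batch.
steps_per_epoch = math.ceil(total_train / batch_size)   # 16
validation_steps = math.ceil(total_val / batch_size)    # 8

print(steps_per_epoch, validation_steps)
```

You could then pass these computed values to model.fit instead of hard-coded numbers, so the steps always stay in sync with your data.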

Thank you for your advice.
I changed the steps_per_epoch argument, and that removed the warning message I had previously. However, I still get a single value for val_accuracy. Although I could pass this project by stacking more layers onto my model, I still don't understand how to fix this problem.
[Screenshot: training log showing a constant val_accuracy]
It looks like this model is not being trained properly.

Do you have any idea?

Thanks, and happy new year!

There’s been a couple of other posts here about the val_accuracy being constant. Right now, mine is constant in the model I used to get above the threshold accuracy. I’m not even sure it’s a problem, but I think it is. I think it is caused by either how the validation data is generated (something with the order, class, or shuffling of the images; this is probably the problem), the final layers of the model (after all the convolutions, specifically the final dense layers and activations), or the optimizer choice (interplay with relu and sigmoid activations and binary cross entropy). I intend to solve this problem when I get the chance, but it may be a while.

Regardless, your training metrics seem to be trending in the right direction, so you may be able to hit the accuracy target if you haven't already. Good luck.