TensorFlow freeCodeCamp project - help

For the Cat and Dog image classification project from freeCodeCamp, I cannot seem to get past this one issue:

For the project, I have to create a convolutional neural network, which uses an ImageDataGenerator as input data to train the model, and I can’t seem to resolve the error that I am getting which has to do either with the data I’m passing in or an improperly built model.

Code:

# Get project files

!wget https://cdn.freecodecamp.org/project-data/cats-and-dogs/cats_and_dogs.zip

!unzip cats_and_dogs.zip

import os
import tensorflow as tf

PATH = 'cats_and_dogs'

train_dir = os.path.join(PATH, 'train')

validation_dir = os.path.join(PATH, 'validation')

test_dir = os.path.join(PATH, 'test')

# Get number of files in each directory. The train and validation directories

# each have the subdirectories "dogs" and "cats".

total_train = sum([len(files) for r, d, files in os.walk(train_dir)])

total_val = sum([len(files) for r, d, files in os.walk(validation_dir)])

total_test = len(os.listdir(test_dir))

# Variables for pre-processing and training.

batch_size = 128

epochs = 15

IMG_HEIGHT = 150

IMG_WIDTH = 150

train_image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)

validation_image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)

test_image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)

train_data_gen = train_image_generator.flow_from_directory(directory=train_dir, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='binary', batch_size=batch_size, shuffle=True, color_mode='rgb')

val_data_gen = validation_image_generator.flow_from_directory(directory=validation_dir, target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode='binary', batch_size=batch_size, shuffle=True, color_mode='rgb')

test_data_gen = test_image_generator.flow_from_directory(directory=PATH, target_size=(IMG_HEIGHT, IMG_WIDTH), classes=['test'], class_mode='input', batch_size=batch_size, shuffle=False, color_mode='rgb')

model = tf.keras.models.Sequential()

model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))

model.add(tf.keras.layers.MaxPooling2D((2,2)))

model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))

model.add(tf.keras.layers.MaxPooling2D((2, 2)))

model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))

model.add(tf.keras.layers.Flatten())

model.add(tf.keras.layers.Dense(32))

model.add(tf.keras.layers.Dense(16))

model.add(tf.keras.layers.Dense(1, activation='sigmoid'))

model.summary()

model.compile(optimizer='Adam', loss='binary_crossentropy', metrics=['accuracy'])

Error:

Does anyone know how I can resolve this error? Thanks in advance.

The error refers to shapes. Try reducing your layers and see when it works.
You are probably reducing the internal "image size" to such a low number that the following layer can't handle it anymore.

I kept running into that error in this project until I realized how the layers internally reshape the "image" (really the generated feature map).
For example, a MaxPooling2D(2, 2) halves both width and height, shrinking the feature map to 1/4 of its size.
Without padding, a Conv2D with a (3, 3) kernel reduces the feature map by 2 pixels in both width and height.

TensorFlow, however, does not notice such issues until the training process starts.
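You can trace that shrinkage with plain arithmetic, no TensorFlow needed. Here is a minimal sketch (the helper names `conv_out` and `pool_out` are made up for this illustration) following a 150x150 input, which is what the generators above actually produce, through the layers of the model:

```python
# Hypothetical helpers modeling how 'valid'-padded convolution and
# non-overlapping max-pooling shrink one spatial dimension.

def conv_out(size, kernel=3):
    # 'valid' padding, stride 1: lose (kernel - 1) pixels per dimension
    return size - (kernel - 1)

def pool_out(size, pool=2):
    # non-overlapping pooling: integer-divide by the pool size
    return size // pool

s = 150
s = conv_out(s)  # Conv2D(3, 3):        150 -> 148
s = pool_out(s)  # MaxPooling2D(2, 2):  148 -> 74
s = conv_out(s)  # Conv2D(3, 3):         74 -> 72
s = pool_out(s)  # MaxPooling2D(2, 2):   72 -> 36
s = conv_out(s)  # Conv2D(3, 3):         36 -> 34
print(s)  # 34x34 feature map going into Flatten
```

Run the same arithmetic starting from the 32 declared in `input_shape=(32, 32, 3)` and you'll see why stacking more pooling layers quickly hits a size the next layer can't handle.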
