Machine Learning Python Help Please!

I just started the Cat and Dog classifier project in Python, but when I run the first part of the code, it won't let me download the data from the URL that freeCodeCamp gave me :frowning: I'm not sure what the error means or where I'm going wrong. Please help, thank you so much!

(This is what the error message says:)

Downloading data from https://cdn.freecodecamp.org/project-data/cats-and-dogs/cats_and_dogs.zip

HTTPError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/data_utils.py in get_file(fname, origin, untar, md5_hash, file_hash, cache_subdir, hash_algorithm, extract, archive_format, cache_dir)
277 try:
--> 278 urlretrieve(origin, fpath, dl_progress)
279 except HTTPError as e:

8 frames
HTTPError: HTTP Error 403: Forbidden

During handling of the above exception, another exception occurred:

Exception Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/data_utils.py in get_file(fname, origin, untar, md5_hash, file_hash, cache_subdir, hash_algorithm, extract, archive_format, cache_dir)
278 urlretrieve(origin, fpath, dl_progress)
279 except HTTPError as e:
--> 280 raise Exception(error_msg.format(origin, e.code, e.msg))
281 except URLError as e:
282 raise Exception(error_msg.format(origin, e.errno, e.reason))

Exception: URL fetch failure on https://cdn.freecodecamp.org/project-data/cats-and-dogs/cats_and_dogs.zip: 403 -- Forbidden

Welcome to the forums @haikyuuu. If you google around for a while, you'll find reports of an issue with the CDN and the Python utilities used to fetch the file. (The 403 Forbidden itself just means the server refused the request, so it points at the CDN rejecting the download rather than at a bug in your code.) I worked around the problem with

# The original download cell fails with a 403, so it is commented out:
# URL = 'https://cdn.freecodecamp.org/project-data/cats-and-dogs/cats_and_dogs.zip'
# path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=URL, extract=True)

import os

# Fetch and decompress the archive by shelling out instead.
!wget https://cdn.freecodecamp.org/project-data/cats-and-dogs/cats_and_dogs.zip
!unzip cats_and_dogs.zip

# wget saves the zip in the working directory, so os.path.dirname() is ''
# and PATH resolves to the extracted cats_and_dogs folder.
PATH = os.path.join(os.path.dirname('cats_and_dogs.zip'), 'cats_and_dogs')

train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
test_dir = os.path.join(PATH, 'test')

in my Jupyter notebook, which is essentially the method described in those reports I found by googling.

If you are working on these from a Python script (as I did), you can use urllib, shutil, and zipfile to fetch and manipulate the data file without shelling out to wget and unzip, as sketched below.
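For example, here is a minimal sketch of that script-based approach. The browser-like User-Agent header is my own precaution (some CDNs reject Python's default user agent, which may be what triggers the 403 here), and the directory layout assumes the archive unpacks to cats_and_dogs as above:

import os
import shutil
import urllib.request
import zipfile

URL = 'https://cdn.freecodecamp.org/project-data/cats-and-dogs/cats_and_dogs.zip'
ZIP_NAME = 'cats_and_dogs.zip'

# Send a browser-like User-Agent in case the CDN rejects Python's default
# one (an assumption about why urlretrieve gets a 403).
request = urllib.request.Request(URL, headers={'User-Agent': 'Mozilla/5.0'})

# Stream the response to disk so the whole archive is never held in memory.
with urllib.request.urlopen(request) as response, open(ZIP_NAME, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)

# Decompress into the working directory, mirroring what !unzip does.
with zipfile.ZipFile(ZIP_NAME) as archive:
    archive.extractall()

PATH = 'cats_and_dogs'
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
test_dir = os.path.join(PATH, 'test')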

Regardless, this problem affects all of the machine learning projects, so you will have to use !wget url to fetch the data and !unzip file as needed to decompress it.

Good luck.

Thank you so much!!!