Hello, I am working on a project and I want to improve the page load time. I can see that my images are not optimized enough. The SEO-friendly method would be to compress them and then resize them to the size at which they are actually displayed.
My course says: “It is therefore essential to resize it before putting it on your site”.
But with what method? Using apps? Because resizing them in CSS will not make them any lighter.
If you want to optimise your images, you need to compress them before uploading them. I can’t recommend any specific software for this, but I mostly use IrfanView for resizing/cropping images. For websites, a resolution of 80-100 ppi is perfectly fine (anything higher is pointless).
There’s no CSS method that will decrease the size of your image files, but you can have different images for different viewports by using the <picture> element. It lets you specify different image files for different viewports:
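A minimal sketch of what that can look like (the file names and breakpoint widths here are just placeholders for illustration):

    <picture>
      <!-- served when the viewport is at least 800px wide -->
      <source media="(min-width: 800px)" srcset="photo-desktop.jpg">
      <!-- served when the viewport is at least 500px wide -->
      <source media="(min-width: 500px)" srcset="photo-tablet.jpg">
      <!-- fallback for smaller viewports and for browsers without <picture> support -->
      <img src="photo-mobile.jpg" alt="Description of the photo">
    </picture>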
Very well, yes, indeed my document says what you said: “You can compress your images using free applications like ImageOptim, PNGGauntlet, or via websites like https://compressor.io/. All you have to do is select the image to compress, in JPEG, PNG, GIF or SVG format.” On compression it also says: “The most common mistake is to put images that are too large for their container, and then adapt them using CSS.”
“Once you have resized your images to the correct size, you can further optimize them by compressing them.”
Thank you, one last question please:
I am working in Visual Studio Code. Once my images are compressed, do I download them, or can I select the images directly in the img tag? I am a beginner.
I’m not sure what you mean by “downloading vs img tag”. The usual workflow would be this:
1. Find an image and download it to your computer (or take a photo yourself).
2. Save different versions/sizes for different viewports (saving as JPG will automatically compress them).
3. On your website, use the <picture> tag and provide different images for different viewports.
The idea behind the <picture> tag is that someone who opens your page with a smartphone (usually a width of around 400px) doesn’t need to be served an image that is 1980px wide. That’s why you save different versions/sizes of your image. Then you upload them all to the server where your page is hosted.
When a smartphone user opens your page, their browser picks the matching source and downloads only the smartphone version of your image from the server; a desktop user’s browser would fetch the big desktop version instead.
Alternatively, if you don’t want different versions/sizes of your image, you’d use just one <img> tag.
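For that simpler case it would just be something like this (the file name is made up for the example):

    <img src="photo-1000px.jpg" alt="Description of the photo">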
Is this some sort of exercise, or are we talking about a real website?
I am a student working on a real project. I need to improve the loading speed of the website, and I could see that the images are slowing the site down. The goal of the review is to improve the SEO of the website.
I work with Visual Studio Code and I check all the problems with the Lighthouse tool in Chrome DevTools. I have already started an audit.
Thank you. I want to improve the loading speed of my project’s pages. I understood that the good methods to adopt are: optimize your images and compress your images. I have the right tools, like https://pnggauntlet.com/ and https://compressor.io/. But I am stuck: where should I put my compressed files? At the root of my directory? (I am a student working on a site whose SEO and accessibility I need to improve.) How do I select the compressed images in my HTML? We are told “Avoid the beginner’s mistake: size your images to the right dimensions”, and there are tools like https://squoosh.app/.
It doesn’t matter where your file is located; it can be in the root directory or (more commonly) in an img folder.
First you save different sizes of your image (for example, one with a width of ~400px for the smartphone version, one with a width of ~800-1000px for the tablet version, then a large ~1200-1980px width for the desktop version).
After saving those different versions, you can try to compress them further with some tool (although saving them as JPG will already make them pretty much as small as you can get).
Then you add a <picture> tag to your HTML and provide different sources for the different versions; here’s another link explaining that tag: https://www.w3schools.com/TAGS/tag_picture.asp
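To tie the steps together, here is a rough sketch assuming the three sizes mentioned above were saved into an img folder; the exact file names and breakpoints are made up for the example:

    <picture>
      <!-- desktop version (~1200-1980px wide) -->
      <source media="(min-width: 1000px)" srcset="img/photo-1600.jpg">
      <!-- tablet version (~800-1000px wide) -->
      <source media="(min-width: 500px)" srcset="img/photo-900.jpg">
      <!-- smartphone version (~400px wide), also the fallback for older browsers -->
      <img src="img/photo-400.jpg" alt="Description of the photo">
    </picture>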
It’s difficult to help without more context, or the code of your site, or ideally a link to a working live version. But this would be the usual workflow.
I’d like to point out that knowing it’s a JPEG file doesn’t tell you anything about the amount of compression used or any of the other related settings, such as the Huffman tables, the DCT method, and the chroma subsampling. There are other little things, like removing metadata (e.g. Exif), that can give small gains as well with no loss of quality.
You can easily make an image larger than the original without any quality gain; it’s even possible to make it both worse in quality and larger. With lossy compression (like JPEG) you can’t get back data that has already been lost, so the more you know about the image before you try to compress it, the better.
I mainly use XnView because I know its compression tool (or NConvert as a stand-alone), and another tool I like to use is JPEGsnoop, for analyzing images.
Another format worth a look at besides JPEG is WebP.
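If you want to try WebP while keeping a JPEG fallback for browsers that don’t support it, the <picture> tag works for that too; the file names are placeholders:

    <picture>
      <!-- browsers that understand WebP download this (usually smaller) file -->
      <source type="image/webp" srcset="img/photo.webp">
      <!-- everyone else falls back to the JPEG version -->
      <img src="img/photo.jpg" alt="Description of the photo">
    </picture>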