Why would we want to chop the size of an image in half to use it on a higher resolution display?

Tell us what’s happening:
I know how to execute the instructions perfectly fine. What I don’t understand is why we should cut the height and width in half if we want to optimize the images for a higher-resolution display. Isn’t the purpose of the optimization to keep the image the same apparent size, despite being displayed on a higher-resolution screen? If we do this, not only are we cutting the size of the image in half on a lower-resolution display, but, if viewed on a higher-resolution one, it will appear smaller because there are more pixels.

Your code so far


<style>
  img {
      /* The source image is 200x200 pixels; displaying it in a
         100x100 box keeps it sharp on a 2x (Retina) screen. */
      height: 100px;
      width: 100px;
  }
</style>

<img src="https://s3.amazonaws.com/freecodecamp/FCCStickers-CamperBot200x200.jpg" alt="freeCodeCamp sticker that says 'Because CamperBot Cares'">

Your browser information:

User Agent is: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36.

Link to the challenge:
https://learn.freecodecamp.org/responsive-web-design/responsive-web-design-principles/use-a-retina-image-for-higher-resolution-displays

I had the same question, so I googled it. Apparently Apple’s Retina displays mean that images are automatically rendered at double size by the browser. So, if you don’t want your image to appear doubled in size, you should chop it in half.

Google is a great resource for answering all kinds of things…

A Retina display has four times as many pixels as a standard screen. If you have a 400 x 300 image (120,000 pixels), you’d need to use an 800 x 600 alternative (480,000 pixels) to render it well on a high-density display. (Source: https://www.sitepoint.com/support-retina-displays/)
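
To make that concrete, here is a minimal sketch of the idea (the filename is hypothetical): the browser downloads the 800 x 600 file but lays it out in a 400 x 300 box.

<!-- Hypothetical 800x600 source displayed at 400x300 CSS pixels.
     On a 2x display each CSS pixel is a 2x2 block of device pixels,
     so all 480,000 source pixels end up on screen. -->
<img src="photo-800x600.jpg" alt="Example photo" width="400" height="300">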

You’re not really chopping it in half; you’re just displaying it at half of its real size.
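
A more flexible way to get the same effect, beyond what this challenge covers, is the srcset attribute, which lets the browser pick the best file for the screen it is on. A rough sketch with hypothetical filenames:

<!-- The browser downloads sticker-200.jpg on a 2x (Retina) screen
     and sticker-100.jpg on a standard screen; both are laid out
     in the same 100x100 box. -->
<img src="sticker-100.jpg"
     srcset="sticker-100.jpg 1x, sticker-200.jpg 2x"
     width="100" height="100"
     alt="freeCodeCamp sticker that says 'Because CamperBot Cares'">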

OK, let’s say that there’s a computer out there that has, oh, I don’t know, twice the amount of pixels that mine does. I would code the website and put in the image. Then, when I optimize the website for a higher-resolution display, I would make the image 1/4 of its original size, if I were to go according to these instructions. After that, because that computer has twice the amount of pixels that mine does, its pixels would have to be smaller, which means that the image would be yet again 1/4 as small as I had intended it to be in my code on that computer. That means the result would end up being 1/16 as large as it is on a normal-resolution computer. The instruction here was to make the image half of its original size so that it can be displayed well on a higher-resolution screen, which seemed backwards to me, as it made more sense that we should double it in size, like you said.

However, yes, it is important to take into consideration that Apple automatically doubles the size of the image in its browsers when you are programming your website. This might cause some confusion, since other higher-resolution displays might not do the same thing.
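
If you want to check what a particular browser actually does, browsers expose the scaling factor as window.devicePixelRatio; a quick sketch:

<script>
  // Logs 1 on a standard screen, 2 on a Retina screen, and other
  // values (e.g. 1.5 or 3) on various high-DPI devices.
  console.log(window.devicePixelRatio);
</script>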

Actually, that’s not quite correct. ‘Retina display’ is a term coined by Apple for Apple products, so the challenge topic is really only about Apple hardware. If you are coding for Apple devices, you should (for the time being, until the hardware or software changes) continue to display your image at half size. Other products may be similar in nature, but they are not called ‘Retina displays’, so you should confirm what works best for each type of browser and environment your application will run on.
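
One vendor-neutral way to handle this in code is a resolution media query, which targets any high-density screen rather than Apple’s branding specifically. A rough sketch, assuming hypothetical logo-100.png and logo-200.png files:

<style>
  .logo {
    /* Standard-density asset by default. */
    background-image: url("logo-100.png");
    width: 100px;
    height: 100px;
  }

  /* Swap in the double-size asset on screens with a device pixel
     ratio of 2 or more (Retina and other high-DPI displays). */
  @media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
    .logo {
      background-image: url("logo-200.png");
      background-size: 100px 100px;
    }
  }
</style>

<div class="logo"></div>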