To put it simply, the problem is that we think of numbers in base-10 but computers store them in base-2. For example, 0.8 is really 4/5. 5 divides evenly into 10, so in decimal you get a terminating expansion, 0.8. But 5 does not divide evenly into any power of 2, so in binary you get a repeating "decimal" ("binamal"? OK, a repeating binary fraction): 0.110011001100…, going on forever. The computer can't store infinitely many digits, so it has to round off somewhere.

We do the same thing in decimal with 1/3. We can write 0.3, or 0.33, or 0.333, and so on, but we have to choose somewhere to round it off. If we then converted 0.333 back to a fraction, we'd get 333/1000: close, but not the same thing. It is impossible to exactly represent 1/3 (and infinitely many other numbers) in a finite number of decimal digits. Binary has the same problem, just for a different set of numbers.
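You can see this for yourself in a few lines. Here's a quick Python sketch (any language with standard IEEE 754 doubles behaves the same way): it asks what value the computer actually stored for 0.8, and shows the rounding error leaking out of an ordinary addition.

```python
from fractions import Fraction

# 0.8 has no exact binary representation, so the computer stores the
# nearest representable double instead. Printing more digits reveals it:
print(format(0.8, ".20f"))   # 0.80000000000000004441

# Fraction(0.8) recovers the exact fraction that was actually stored --
# not 4/5, but a nearby fraction whose denominator is a power of 2:
print(Fraction(0.8))         # 3602879701896397/4503599627370496

# The tiny rounding errors surface when results "should" be equal:
print(0.1 + 0.7 == 0.8)      # False
```

Each of 0.1, 0.7, and 0.8 gets rounded independently to its nearest double, and the rounded 0.1 plus the rounded 0.7 lands on a different double than the rounded 0.8, which is why the comparison fails.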