I’m not a Python guy, but…
First of all, it’s not random. Nothing in programming is random. Computers can’t do anything truly random, even if you want them to. I realize you probably meant that colloquially, but the word “random” gets thrown around a lot, usually to mean “unexpected” or at best “chaotic” (in the mathematical sense), so it’s worth noting.
Next, keep in mind that numbers are being stored in binary.
Think about base 10. What fractions work finitely in base 10? The prime factors of 10 are 2 and 5, so all the fractions that work have denominators built only from 2s and 5s: 1/2, 1/4, 1/5, 1/8, 1/10, etc. Any fraction that doesn’t has some other prime factor in the denominator: 1/3, 1/6, 1/7, 1/9, 1/11, etc. Those will all be infinitely repeating decimals. They cannot be written accurately in a finite number of decimal digits.
Binary numbers have the same issue. If the denominator is not built from the base’s prime factors, the fraction can’t be accurately represented in finitely many digits. And 2 has only one prime factor: 2. So the denominator must be a power of 2: 2, 4, 8, 16, etc.
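Since the question mentions Python, you can actually see this for yourself. This is just a sketch in standard Python, but any language with IEEE doubles behaves the same:

```python
# 0.1 = 1/10, and 10 has a prime factor of 5, so it can't terminate in binary.
# Printing with extra digits reveals the stored approximation:
print(f"{0.1:.20f}")  # 0.10000000000000000555

# The hex form shows the repeating pattern directly.
# Repeating 9s in hex = a repeating bit pattern in binary, cut off at 53 bits:
print((0.1).hex())    # 0x1.999999999999ap-4
```

The literal `0.1` you typed never exists in memory; the closest representable double does.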
So, we now know that any binary floating point number whose fractional form has a denominator that is not a power of 2 has to be an approximation. 1/2, 1/4, 1/8, etc. are fine, but 1/3, 1/5, 1/6, etc. are not. They will always carry some error.
So, classically with computers, if you add 0.1 + 0.2 (that’s 1/10 + 1/5, neither of which is representable), the result has a small error built into it.
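Here’s the classic demonstration (again a sketch, using only the standard library):

```python
import math

# Two tiny approximation errors add up to a visible one:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Which is why you compare floats with a tolerance, not ==:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```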
Various languages have band-aids to deal with this, and there are libraries for higher or exact precision if you truly need it (you almost never do).
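In Python’s case, two of those “band-aids” ship in the standard library: `decimal` for exact base-10 arithmetic and `fractions` for exact rational arithmetic. A quick sketch:

```python
from decimal import Decimal
from fractions import Fraction

# Decimal does base-10 arithmetic, so 0.1 and 0.2 are exact
# (note: constructed from strings, not floats):
print(Decimal("0.1") + Decimal("0.2"))   # 0.3

# Fraction keeps exact numerator/denominator pairs:
print(Fraction(1, 10) + Fraction(1, 5))  # 3/10
```

Both are far slower than native floats, which is part of why you reach for them only when you genuinely need exactness (e.g. money).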
If you google “0.1 + 0.2”, you’ll find a lot of discussion on this.