Why is my code randomly adding 0's to the number

So I have made this short bitcoin simulation game where you roll a 10000-faced die and get some fake bitcoin corresponding to the dice number.
But when I rolled a few times it added 0000000000000# at the end of the number.
My code:

import random

btc = 0

print("Copyright 2022 Fappy and Jason TM")
print("Type help to begin.")

game_is_running = True

while game_is_running:
    cmd = input(">>> ").upper()
    if cmd == "HELP":
        print("Type roll to roll the wheel")
        print("You can roll every 1 minute")
        print("Type risk to play the multiplier game.")
    elif cmd == "ROLL":
        num = random.randint(0,10000)
        if num <= 3000:
            btc += 0.001
        elif num >= 3001:
            btc += 0.1
        elif num == 10000:
            btc += 1
        print("You rolled", num, "you have", btc)

Computers cannot accurately represent fractional numbers.

Wasted 9 minutes of my time and I still do not know how to fix this

I’m not a Python guy, but…

First of all, it’s not random. Nothing in programming is random. Computers can’t do anything truly random, even if you want them to. I realize you probably meant that colloquially, but the word “random” gets thrown around a lot - usually to mean “unexpected” or at best “chaotic” (in the mathematical sense) - but it’s worth noting.

Next, keep in mind that numbers are being stored in binary.

Think about base 10. What fractions terminate in base 10? The factors of 10 are 5 and 2, so all the fractions that work have denominators that are products of 2s and 5s: 1/2, 1/4, 1/5, 1/8, 1/10, etc… Any that doesn’t has some other factor in the denominator: 1/3, 1/6, 1/7, 1/9, 1/11, etc. Those will all be infinitely repeating decimals. They cannot be written exactly in a finite number of decimal digits.

Binary numbers have the same issue. If the denominator is not a product of the base’s factors, the fraction can’t be represented exactly in finitely many digits. Of course, 2 only has one prime factor: 2. So any denominator must be a power of 2: 2, 4, 8, 16, etc.

So, we now know that any binary floating point number that (written as a fraction) has a denominator that is not a power of 2 has to be an approximation. So, 1/2, 1/4, 1/8, etc. are fine, but 1/3, 1/5, 1/6, etc. are not. They will always have errors.
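You can check this in Python with float.as_integer_ratio, which shows the exact fraction a float really stores — the denominator is always a power of 2:

```python
# Exact values: denominators are powers of two.
print((0.5).as_integer_ratio())   # (1, 2)
print((0.25).as_integer_ratio())  # (1, 4)
# 1/10 has a factor of 5 in its denominator, so it cannot be stored
# exactly; the nearest representable value is a huge
# power-of-two fraction instead:
print((0.1).as_integer_ratio())
```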

So, classically with computers, if you add 0.1 + 0.2 (1/10 + 1/5), it will have some errors built into it.
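You can see it straight away at the Python prompt:

```python
# Neither 1/10 nor 1/5 is a sum of powers of two, so both are stored
# as approximations, and the error surfaces in the last digit.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```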

Various languages have some bandaids to deal with it. There are libraries for higher precision if you truly need it (you almost never do).
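In Python, for instance, the standard library ships the decimal module for exact base-10 arithmetic (slower, but exact — as long as you construct the values from strings, not from floats):

```python
from decimal import Decimal

# Decimal works in base 10, so 0.1 and 0.2 are stored exactly.
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
print(0.1 + 0.2)                        # 0.30000000000000004
```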

If you google “0.1 + 0.2”, you’ll find a lot of discussion on this.

What number would you prefer to see instead of 0.302000000000000005? Once you figure that out, then you have a chance to move forward with a solution.

0.302 of course
or if it is different like
9.20100000…001 I would like to see 9.201
and if it is 0.1000000000…01
then I would like to see 0.1

If you think learning how computers work is a waste of time, then you may want to consider something else.

As to a solution, can you round it off? Do you need accuracy into the several trillionths?

I need accuracy to the thousandths. :grinning:

Then use round(number, 3)?
Floating point operations will always run the risk of a small representation error, which shows up as these tiny stray digits at the end. So you gotta tell the program what level of precision you actually want.

Just keep in mind, if you go further into complex programming you might have to pay attention to this and how it can influence and break certain operations. After all, computers can make billions of operations in a second and thus even such a small error can add up quickly.
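For the game above, a sketch of what that rounding looks like (the sequence of rolls here is made up to mirror the OP's rewards):

```python
# Simulate a few rolls: each reward (0.1 or 0.001) is not exactly
# representable in binary, so the raw total picks up stray digits.
btc = 0.0
for reward in (0.1, 0.1, 0.1, 0.001, 0.001):
    btc += reward
print(btc)            # raw float, carries a tiny error at the end
print(round(btc, 3))  # 0.302
```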


but what if it is not 3?

Then you’ll round it to whatever precision you need.

I don’t know how Python works, but in JS, you can round it to 10 decimal places (or whatever) because it will just ignore the trailing zeroes - something rounded to 1.1230000000 will show up as 1.123 (because they are stored in memory the same).
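Python's round behaves the same way, since a float doesn't store trailing zeroes; forcing a fixed number of displayed digits is a job for string formatting:

```python
x = round(1.1230000000, 10)
print(x)           # 1.123 -- trailing zeroes are not part of the value
print(f"{x:.6f}")  # 1.123000 -- only formatting can pad them back
```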

Edit: just a tl;dr here. The value you are seeing is the computer trying to display the numeric value as a decimal (in this case ⅒, stored as a floating point number in a base-2-derived format, not stored as a decimal). What you want to display is some formatted representation of that: you don’t need or want to display that underlying numeric value, you want a rounded and formatted value.

If you do repeated important calculations that require decimal precision (eg monetary transactions) you need to be super aware of rounding issues (eg one of the crypto exchanges fell prey to this a few years ago), and there are libraries designed to help. For very many things, what the computer is doing is totally fine: here you just need to format the number.

Right, in which case 0.30200000000000005 is, for your purposes, exactly the same number as 0.302

Numbers like 0.1 will always be rounded, as explained above (same as ⅓ in decimal). So repeatedly applying calculations involving rounded numbers to numbers that have already been rounded will cause you to accrue errors (this isn’t the case here, you’re always just doing the operation once).
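A quick illustration of how repeated operations on already-rounded values accumulate error:

```python
# Adding 0.1 ten times: each addition rounds to the nearest
# representable binary value, and the errors pile up.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)         # 0.9999999999999999
print(total == 1.0)  # False
```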

Computers have finite storage; floats in Python are stored in 8 bytes (64 bits). So 0.001, which is one of the two possible values you add (the other being 0.1, which has exactly the same issue, & your program logic stops you ever adding 1), looks kinda like this if I print it out as a string:


Not going to be literally what the computer uses, but close enough. In maths, in base-2, that just repeats infinitely. But as you can’t do that on a computer, it always gets rounded.
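The printed string didn't survive the copy-paste, but you can reproduce the idea in Python: asking for more decimal digits than usual exposes the approximation, and float.hex shows the stored value in base-2 (hexadecimal) form:

```python
# 0.1 shown to 25 decimal places: the stored value is not exactly 1/10.
print(f"{0.1:.25f}")
# The exact stored value in base-2 (hex) notation; in pure maths this
# expansion would repeat forever, but storage is finite, so it's rounded.
print((0.1).hex())  # 0x1.999999999999ap-4
```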

What you are talking about is formatting, i.e. what you display to the end user. You don’t want to display the actual number 0.30200000000000005, you want to display 0.302. In a similar vein, 0.1 could be displayed as 0.100. It makes no difference that there is some rounding, because what you want to show the user is a string representation of your number, rounded & formatted to a given precision.
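In Python that final step is one format spec; a sketch of how the game's print line could format the balance (the three-decimal precision is an assumption from earlier in the thread):

```python
btc = 0.30200000000000005
num = 2995  # hypothetical roll
# :.3f rounds and formats for display only; the stored float is unchanged.
print(f"You rolled {num}, you have {btc:.3f}")  # You rolled 2995, you have 0.302
```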