# College Algebra with Python: Decimals Conversion

In the College Algebra with Python course, the following code is given as an example of converting decimal numbers into fractions:

``````
# Get string input, which will include a decimal point
digits = input("Enter a decimal number to convert: ")

# Get number of decimal places as an integer
exponent = int(len(digits))-1

# Convert the input to a float number
n = float(digits)

# Use the exponent to get the numerator
numerator = int(n * 10**exponent)

# Use the exponent to get the denominator
denominator = 10**exponent

# Multiply by 100 to express the decimal as a percent
percent = n * 100

# Output
print("The decimal is ", n)
print("The fraction is ", numerator, "/", denominator)
print("The percent is ", percent, " %")
``````

While it’s a clever approach, I notice that the way the decimal is entered can influence the output. Say we input `.24`: the length of the input string is 3, so the exponent is 2, and the output is:

``````
Enter a decimal number to convert: .24
The decimal is 0.24
The fraction is 24 / 100
The percent is 24.0 %
``````

But when we input `0.24`, the length of the input string becomes 4, the exponent becomes 3, and the output changes to:

``````
Enter a decimal number to convert: 0.24
The decimal is 0.24
The fraction is 240 / 1000
The percent is 24.0 %
``````

While 240/1000 is of course equivalent to 24/100, the inconsistent output makes me uneasy. Would it be better to find the number of decimal places by counting from the end, i.e. using the `index` method to find the position of the decimal point in the input string and subtracting it from the length? The corresponding line of code would be:

``````
exponent = len(digits) - 1 - digits.index(".")
``````
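As a quick sanity check, here is a sketch of that idea wrapped in a helper (the function name `to_fraction` is my own, not from the course); both spellings of the input now give the same fraction:

``````
# Sketch: convert a decimal string to a fraction using the
# index-based exponent proposed above.
def to_fraction(digits):
    # Count only the characters after the decimal point
    exponent = len(digits) - 1 - digits.index(".")
    n = float(digits)
    # round() guards against float truncation, e.g. int(0.29 * 100) == 28
    numerator = round(n * 10**exponent)
    denominator = 10**exponent
    return numerator, denominator

print(to_fraction(".24"))   # (24, 100)
print(to_fraction("0.24"))  # (24, 100)
``````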

Does your solution handle input of the form `00.24`?

Ultimately, it is “better” to accept decimal input as decimals and fraction input as fractions.
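For what it’s worth, the standard library already does this parsing: `fractions.Fraction` accepts a decimal string directly and reduces the result to lowest terms (a sketch of an alternative, not the course’s approach):

``````
from fractions import Fraction

# Fraction parses decimal strings directly and reduces to lowest terms
print(Fraction("0.24"))  # 6/25
print(Fraction(".24"))   # 6/25
``````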

Yes. Because it only counts how many digits (or, strictly, characters) come after the decimal point, the number of digits before the decimal point does not matter.
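A quick check (my own sketch) confirms this, including the `00.24` case raised above:

``````
# The exponent depends only on what follows the decimal point,
# so extra leading digits (or none at all) make no difference.
for digits in (".24", "0.24", "00.24", "000.24"):
    exponent = len(digits) - 1 - digits.index(".")
    print(digits, "->", exponent)  # every line ends with 2
``````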