Sea Level Predictor, wrong test data?

Hi,

I am doing the Sea Level Predictor task, and I noticed some weird test failures that don't make sense to me.

While testing the data points plotted on the graph, it asserts a difference between my plotted data points and the expected ones. My values are exactly the same as in the test data, but the values they are checked against have, in some cases, lost precision. For example:

[1881.0, 0.220472441] <- this is the value from the test file and also my result
[1881.0, 0.22047244100000002] <- this is what it is tested against, which has lost precision.

Should I artificially add digits? I don't see how I can artificially lose precision to match what the expected test data is asking for. :thinking:

All I do is plot the data points, so I can't see what to fix here:

    plt.scatter(data=df, x='Year', y='CSIRO Adjusted Sea Level')
    Traceback (most recent call last):
      File "/home/runner/boilerplate-sea-level-predictor/test_module.py", line 30, in test_plot_data_points
        self.assertEqual(actual, expected, "Expected different data points in scatter plot.")
    AssertionError: Lists differ: [[188[26 chars]72441], [1882.0, -0.440944881], [1883.0, -0.23[2982 chars]951]] != [[188[26 chars]7244100000002], [1882.0, -0.440944881], [1883.[3226 chars]951]]

    First differing element 1:
    [1881.0, 0.220472441]
    [1881.0, 0.22047244100000002]

The difference comes from the version of pandas being used. Since pandas 1.2.0, read_csv uses a more precise parser for floating-point numbers by default. The values in the tests reflect how pandas represented these numbers by default at the time the tests were written. Remember that accurately representing floating-point numbers on a computer isn't as easy as it is for, say, integers.
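You can see the two parsers disagree even on a tiny example (a minimal sketch using a hypothetical one-row CSV; the exact digits you get depend on your pandas version and platform):

    import pandas as pd
    from io import StringIO

    # Hypothetical one-row CSV mimicking the sea level data file
    csv = "Year,CSIRO Adjusted Sea Level\n1881,0.220472441\n"

    # Default parser ('high' precision since pandas 1.2.0)
    high = pd.read_csv(StringIO(csv))
    # The older, less precise parser that the tests were written against
    legacy = pd.read_csv(StringIO(csv), float_precision='legacy')

    print(high.at[0, 'CSIRO Adjusted Sea Level'])    # e.g. 0.220472441
    print(legacy.at[0, 'CSIRO Adjusted Sea Level'])  # e.g. 0.22047244100000002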

You can either pin pandas to version 1.1.5 in the pyproject.toml file and update the dependencies, or add the float_precision='legacy' argument to the read_csv call when reading the data from the file.
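For example, the second option would look something like this (a sketch; I'm assuming the data file is named epa-sea-level.csv, as in the project boilerplate):

    import pandas as pd

    # Use the pre-1.2.0 float parser so the parsed values match the test fixtures
    df = pd.read_csv('epa-sea-level.csv', float_precision='legacy')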

Hi sanity,

Great, thanks, that helped.

Cheers