Arithmetic Formatter

I developed and tested my code in Atom and it works great when I run it on the command prompt, but when I paste it into the arithmetic_arranger.py file on the test page it just blows up everywhere. I’ve read through some of the errors, and it looks like I’m even getting tracebacks for lines that are inside of a try/except block that’s there specifically to prevent them. I really don’t get what’s happening there.

Edit: I rewrote the entire code into the auto-grader file in case there were just some weird spacing issues caused by pasting it over from Atom, and I’ve tweaked the code a little bit. It no longer throws tracebacks, at least, but I’m still failing 4/6 tests even though everything lines up perfectly on the console. It claims my solutions aren’t aligned as expected, but even on the web terminal it looks good to me. I mean, look at this picture. I don’t know where the +’s running down the left side of the screen are coming from (I suspect they’re used as reference points), but it looks great to me:

Open the test suite of the project and go to the test that fails. Pick out the string that represents the result the test is expecting and compare it with what your code returns when you feed it the test input data.

Run it where? If I run it from the command prompt on my PC, everything looks great. If I run it on the test page, it always runs main.py and tells me it’s wrong without showing my output. Here’s a comparison between the error message I get from running it in the unit test vs. from my command prompt:

Your posted error tells you everything you need to know: you’re not getting any output from your program (hence the empty - lines in the diff). So you need to determine why.

If you modify the beginning of your program like this:

    for problem in problems:
        if '*' in problem or '/' in problem:
            print("hello")  # temporary debug print: does this branch ever run?

you will see that this never prints: your program never gets past the conditional in the first loop, so nothing is processed and there is no output to format in the second loop.

Thank you, that helps a lot. I didn’t realize that my output was displayed above all of the error statements, so it seemed like I had no way of getting any feedback at all except the breakdown of which tests failed.

Alright, so I have it down to just the last 2/6 errors. I stuck prints inside the same blocks that return the final solutions, so it’s definitely completing. The first set of numbers returns both the print and the function’s return value, which I believe indicates success. But everything else fails even though it looks exactly the same. I’ve clicked and dragged my mouse across the screen to confirm that there are exactly 4 spaces between each problem. I just don’t understand what it doesn’t like about this:

nvm, I think I see it. Too many spaces at the end.

To anyone else whose first experience with the test module is with this problem and who doesn’t see what I’m talking about, it’s right here. The left side of the != is your function’s output; the right side is what’s expected:
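If you can’t see the image, here’s a minimal made-up example of the same kind of mismatch (not my actual output). The ? line of the diff marks the extra trailing spaces:

    AssertionError: '45 + 43    ' != '45 + 43'
    - 45 + 43    
    ?        ----
    + 45 + 43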

That’s it. Sometimes if you paste the error text (instead of an image) into a code block in a comment here on the forums, it will highlight the spaces with colors, which is very handy. It’s hard to tell in the console or terminal that the spaces are there if they aren’t highlighted. Some editors (Emacs, for instance) have a built-in terminal (or can be run in a terminal) and will let you highlight spaces so you can see them easily.
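Another trick that needs no editor support at all: print the repr of each output line, or swap the spaces for a visible character. A minimal sketch, assuming your arithmetic_arranger returns one string joined with \n:

    result = arithmetic_arranger(["45 + 43", "123 + 49"])
    for line in result.split("\n"):
        print(repr(line))              # the closing quote exposes trailing spaces
        print(line.replace(" ", "·"))  # or make every space visible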

I’ve looked around at the various color-highlighting options for pytest and unittest, hoping to color trailing spaces by default so that this problem could be addressed, but I didn’t find a simple answer. If someone has an idea for either a different test runner or a unittest plug-in that highlights spacing, it should be simple to add it to the project’s pyproject.toml and drop in an appropriate configuration file.
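For what it’s worth, here is a rough sketch of the kind of thing I mean (a hypothetical helper, not part of the project). It just makes every space visible before comparing, so the failure diff can’t hide trailing whitespace:

    import unittest

    def mark_spaces(s):
        # swap spaces for middle dots; equality is preserved as long as
        # the original strings never contain "·" themselves
        return s.replace(" ", "·")

    class SpacingTestCase(unittest.TestCase):
        def assertEqualVisible(self, actual, expected, msg=None):
            # the unittest failure diff now shows "·" wherever a space was
            self.assertEqual(mark_spaces(actual), mark_spaces(expected), msg)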

If you look at the posts for this project, you’ll see that spacing is a recurring problem.

I meant you can go to test_module.py, the file that runs the FCC tests. If you have done the Quality Assurance curriculum, you will be able to follow the test functions; they are quite similar. Navigate to the test function that gives you the error and pick out the string that represents the expected output. There you can see in detail what the test expects your function to return: every space, line break, etc. You can’t always get that level of detail from the console log. Compare that string to what your function returns for the given input data (the one that is fed to your function).
For example:

    def test_solutions(self):
        actual = arithmetic_arranger(["32 - 698", "1 - 3801", "45 + 43", "123 + 49"], True)
        expected = "   32         1      45      123\n- 698    - 3801    + 43    +  49\n-----    ------    ----    -----\n -666     -3800      88      172"
        self.assertEqual(actual, expected, 'Expected solutions to be correctly displayed in output when calling "arithmetic_arranger()" with arithmetic problems and a second argument of `True`.')

The first line of the function body calls your function with the test’s input data. The expected value is what the test expects your function to return.
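So for the test above, you can reproduce the comparison yourself in a scratch file, something like this (assuming your function lives in arithmetic_arranger.py):

    from arithmetic_arranger import arithmetic_arranger

    actual = arithmetic_arranger(["32 - 698", "1 - 3801", "45 + 43", "123 + 49"], True)
    expected = "   32         1      45      123\n- 698    - 3801    + 43    +  49\n-----    ------    ----    -----\n -666     -3800      88      172"
    print(actual == expected)  # True only when every character matches
    print(repr(actual))        # repr makes each space and \n explicit
    print(repr(expected))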