Efficiency of for-loops in functions

I just solved the arithmetic arranger project as part of the curriculum, and while working on it I came across the following question:
Within one function I have two for loops that iterate over the same items of the same array.
The first one basically checks for valid input; the second one creates the output.

Looking at this, I wondered (even though at this level it’s probably close to irrelevant) which approach is more efficient: combining the work into one loop or keeping the two loops separate. I came to the conclusion that it depends.

If I expect the input to be valid in most cases, it would be more efficient to do everything in one loop. However, if I expect the majority of inputs to be invalid, I should keep the loops separate, so that bad input fails early in the validation pass before any output work is done.
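Roughly, these are the two shapes I’m comparing (a minimal sketch with placeholder helpers, not my actual project code):

```python
def looks_valid(p):
    # Stand-in for the real validation (e.g. operator and operand checks).
    return isinstance(p, str) and p.strip() != ""

def format_item(p):
    # Stand-in for the real formatting work.
    return p.strip()

def two_loops(problems):
    # First pass: validate every item before doing any work.
    for p in problems:
        if not looks_valid(p):
            return "Error: invalid input"
    # Second pass: build the output, knowing everything is valid.
    return [format_item(p) for p in problems]

def one_loop(problems):
    # Single pass: validate and build the output item by item.
    output = []
    for p in problems:
        if not looks_valid(p):
            return "Error: invalid input"
        output.append(format_item(p))
    return output
```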

Is my thinking correct on this, or is the answer different? Can someone perhaps provide more details?

Potential follow-up: if my thinking is correct, would that imply that when the input has to meet a number of conditions, and I can expect one condition (or some specific subset of them) to fail substantially more often than the others, it would be more efficient to give that condition (or set of conditions) its own loop?

Thanks!

As a general rule of thumb, the fewer passes over your array, the better.

But, as a general rule of thumb, the less conditional logic in loops the better.

In this sort of case, I’d write code that is readable and works, and then if I need it to be faster, I’d look at the performance and speed up the slowest parts of the code first.
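For example, a quick made-up micro-benchmark with Python’s built-in timeit module is usually enough to see whether the difference even registers:

```python
import timeit

setup = "data = list(range(100_000))"

one_pass = """\
valid = True
out = []
for x in data:
    valid = valid and x >= 0
    out.append(x * 2)
"""

two_passes = """\
valid = all(x >= 0 for x in data)
out = [x * 2 for x in data]
"""

# Time each version over 100 runs; only bother optimizing if the gap matters.
print("one pass:  ", timeit.timeit(one_pass, setup=setup, number=100))
print("two passes:", timeit.timeit(two_passes, setup=setup, number=100))
```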

Note: In a different language, I’d just chain a filter and a map together.
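In Python, one way to express roughly the same idea is to compose the built-in filter and map (the is_valid and format_problem helpers here are just hypothetical stand-ins):

```python
# Hypothetical helpers standing in for the real validation/formatting logic.
def is_valid(problem):
    return "+" in problem or "-" in problem

def format_problem(problem):
    return problem.replace(" ", "")

problems = ["32 + 8", "1 - 3801", "9 * 9"]

# Keep only the valid problems, then transform each of them.
formatted = list(map(format_problem, filter(is_valid, problems)))
print(formatted)  # ['32+8', '1-3801']
```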


I’m having a hard time answering this without specifics.

The topic you want to look up is “big O notation”. It deals with the efficiency of algorithms.

So, it sounds like you have an optimistic solution and a pessimistic one. Yeah, sometimes you have to make some assumptions.

Now, reading your question again, I see that you are talking about sequential loops, not nested ones. That is less of a problem. I thought you were choosing between O(n^2) and O(n), but really you have 2 × O(n), which is still just O(n).
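To make that concrete, here is a toy sketch (the loop bodies are filler; only the number of iterations matters):

```python
def two_sequential_loops(items):
    # Two passes over n items: about 2 * n steps, which is still O(n).
    total = 0
    for x in items:      # pass 1
        total += x
    count = 0
    for x in items:      # pass 2
        count += 1
    return total, count

def nested_loops(items):
    # A loop inside a loop: about n * n steps, which is O(n^2).
    pairs = 0
    for a in items:
        for b in items:
            pairs += 1
    return pairs
```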

I think you have to find a balance.

The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.
- Donald Knuth

It is really easy to get carried away with optimization. Like you said, it really doesn’t make much of a difference. Unless there are tens of thousands of inputs, this isn’t going to make any difference in the user experience. So, don’t get too carried away with it. It’s good to think about and not make “dumb” choices, but also don’t get obsessed.

Jeremy got his answer in, and I agree with what he’s saying.

Like he says, readability is important. So I say: write good, clean code, and don’t obsess about optimization until it becomes a problem.


Thanks! Makes sense.
I was wondering whether there is perhaps some general efficiency principle I’m lacking, but it seems that at best it is the principle of “don’t worry too much about efficiency principles” :smiley:

Learn about them and make smart choices, but don’t obsess until it becomes a problem. Learning the basics of big O notation will help you understand what to think about.

