function solution(n) {
  const arr = [];
  for (let i = 0; i < n; i++) {
    if (i % 3 === 0 || i % 5 === 0) {
      arr.push(i);
    }
  }
  // The initial value 0 avoids a TypeError on an empty array (e.g. n = 0)
  return arr.reduce((a, b) => a + b, 0);
}
My code works perfectly, but I’ve been thinking of ways to optimise it. Would this algorithm have a runtime of O(n)? Is there a way to do better?
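For reference, the loop above is O(n). A closed-form version (a sketch not from the thread, using the arithmetic-series formula plus inclusion–exclusion) computes the same sum in O(1):

```javascript
// Sum of all multiples of k below n: k * (1 + 2 + ... + m) where
// m = floor((n - 1) / k), using the arithmetic-series formula.
function sumMultiplesOf(k, n) {
  const m = Math.floor((n - 1) / k);
  return k * m * (m + 1) / 2;
}

// Inclusion-exclusion: multiples of 15 are counted by both the 3s
// and the 5s, so subtract them once.
function solutionClosedForm(n) {
  return sumMultiplesOf(3, n) + sumMultiplesOf(5, n) - sumMultiplesOf(15, n);
}
```

No loop at all, so the runtime no longer depends on n.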
I mean, what is “significantly”?
Anyway, you don’t seem to understand what’s going on under the hood.
A computer stores data in memory. Memory consists of cells, each with a fixed size and a unique address. For anything to have a variable size (like an array you can push elements into), some bookkeeping has to run in between to manage that.
That’s a LOT of wasted time when it comes to optimization - compared to an integer, which can hold huge values in a single cell and thus almost never needs that in-between work.
While it doesn’t change the O() class, it’s still significant for optimization, and it’s part of the reason low-level languages like C++ are widely used where high performance matters.
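To make that “in-between work” concrete, here’s a toy sketch of how a growable array might be implemented on top of fixed-size storage (real JS engines differ; the doubling strategy here is just one common scheme):

```javascript
// Toy growable array over a fixed-size backing store. When the store
// fills up, allocate a bigger one and copy everything across - that
// copy is the hidden "in between" cost that a plain integer never pays.
class GrowableArray {
  constructor() {
    this.capacity = 4;
    this.length = 0;
    this.store = new Array(this.capacity);
  }

  push(value) {
    if (this.length === this.capacity) {
      this.capacity *= 2; // double the capacity
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) {
        bigger[i] = this.store[i]; // copy every existing element
      }
      this.store = bigger;
    }
    this.store[this.length++] = value;
  }
}
```

Doubling keeps the copies rare enough that push is cheap on average (amortized O(1)), but each individual resize still does real work behind the scenes.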
Significantly as in anything less than O(n), like O(log n) or something. My understanding of complexities needs some brushing up, and yes, I’m completely oblivious to most of what happens under the hood haha, thank you.
Big O notation deals with time complexity, but there is also space complexity to think about - what you did, removing the array, reduced the space complexity of your algorithm, making it more space-efficient.
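A sketch of what that array-free version presumably looks like (assuming “removing the array” means keeping a running sum instead of collecting values and reducing): still O(n) time, but O(1) extra space.

```javascript
// Same algorithm, but accumulate directly instead of building an
// array first - time stays O(n), extra space drops to O(1).
function solutionNoArray(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    if (i % 3 === 0 || i % 5 === 0) {
      sum += i;
    }
  }
  return sum;
}
```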
The thing with Big O notation is, it’s VERY broad, because it only deals with “complexity” - meaning how the cost grows relative to the input, in a general way.
You can add a waiting time of 10 minutes and copy 1 GB of data every time the function is called - and it would still be O(n), because that cost is unrelated to the input size (i.e. constant overhead).
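As a toy illustration of that point (the fixed busy-work below is a stand-in for the hypothetical “10 minutes + 1 GB copy”, not anything from the thread):

```javascript
// A big fixed cost per call: the amount of work here never depends
// on n, so asymptotically it's just a constant.
function simulateFixedCost() {
  let waste = 0;
  for (let i = 0; i < 1e6; i++) waste += i; // 1e6 is arbitrary, not tied to n
  return waste;
}

// Much slower in practice than the plain loop, yet still O(n):
// O(n + C) collapses to O(n) for any constant C.
function slowSolution(n) {
  simulateFixedCost(); // constant overhead, independent of n
  let sum = 0;
  for (let i = 0; i < n; i++) {
    if (i % 3 === 0 || i % 5 === 0) sum += i;
  }
  return sum;
}
```

Both versions land in the same Big O class even though their wall-clock times differ enormously, which is exactly the gap the notation doesn’t capture.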
I’d say 10 minutes and 1GB of data would be significant.
While this is an arbitrary example, it shows that Big O notation is not the be-all and end-all of optimal code. But of course, depending on your future career, it might suffice.