What's more efficient?

I’m trying to figure out which version of Project Euler’s summation of primes is better. I’m using a slightly improved sieve of Eratosthenes: I added a first for loop that sets all even values to false, which lets the next loop increment by 2 and reduces the calls to isPrime (see the code and comments). I’m no expert at calculating time complexities, but I think this code and the version without the first loop (where the second loop increments by 1 instead of 2) should have about the same complexity. There’s also a run_function below for timing, but my computer isn’t consistent at all, especially with other things running. So which one is more efficient, with or without the extra loop?
thanks in advance :slightly_smiling_face:
also any tips for managing memory better without diving too deep?

Your code so far


function primeSummation(n) {
  var arr = Array(n).fill(true);
  arr[0] = false;
  arr[1] = false;
  for (let i = 2; i < arr.length; i += 2) {
    arr[i] = false;
  }
  /* I'm doing this loop so that the next loop can increment by 2,
     which reduces the calls to isPrime. But is it worth it?
     isPrime would return false for an even number almost immediately anyway. */
  arr[2] = true; // we flipped it in the loop above, but 2 is prime
  for (let i = 3; i <= Math.sqrt(n); i += 2) {
    if (isPrime(i)) { /* I think this significantly reduces the work,
       since once we've sieved with 3, sieving again with 9 (or 6 or 18)
       would be a waste of time */
      for (let j = i * i; j < arr.length; j += i) {
        arr[j] = false;
      }
    }
  }
  return arr.reduce((sum, k, i) => (k ? sum + i : sum), 0);
}

function isPrime(n) {
  for (let i = 2; i <= Math.sqrt(n); i++) {
    if (n % i === 0) return false;
  }
  return true;
}
function run_function(func, test_values) {
  for (let i in test_values) {
    console.log('Test value:', test_values[i]);
    var t0 = performance.now();
    console.log('Output:', func(test_values[i]));
    var t1 = performance.now();
    console.log("Took " + (t1 - t0) + " ms");
  }
}

var a = [17, 2001, 140759, 2000000];
run_function(primeSummation, a);

Your browser information:

User Agent is: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36.

Challenge: Problem 10: Summation of primes

Link to the challenge:

Without diving too deeply into your code…

Complexity is weird. For example, you expect that incrementing by 2 instead of 1 will change the complexity, but multiplicative factors like that are dropped in complexity analysis. Complexity is a measure of how the runtime grows as n increases. It’s still a good way to think about it, though.

But there are other factors to consider. For example, the modulus operator is relatively expensive. I would rather go through and mark the non-primes (as in the sieve of Eratosthenes). I might also calculate the square root once instead of recalculating it on each iteration.
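For example, here’s a small sketch of that square-root hoisting applied to the posted isPrime (the function name is mine, just for illustration; the same change applies to the outer loop condition in primeSummation):

function isPrimeHoisted(n) {
  // Compute the bound once per call instead of re-evaluating
  // Math.sqrt(n) in the loop condition on every iteration.
  const limit = Math.sqrt(n);
  for (let i = 2; i <= limit; i++) {
    if (n % i === 0) return false;
  }
  return true;
}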


But yeah, benchmarking JS can be tough - there are whole articles written on it.

One measure may be to count the number of passes it does and see how that changes as you increase n.
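For instance, something like this counts the inner marking passes instead of relying on the timer. It’s just a sketch: it reuses the isPrime from the original post, and the passes counter and function name are made up for illustration.

let passes = 0;

function primeSummationCounted(n) {
  passes = 0;
  var arr = Array(n).fill(true);
  arr[0] = false;
  arr[1] = false;
  for (let i = 2; i < arr.length; i += 2) {
    arr[i] = false;
    passes++; // marking an even number is one unit of work
  }
  arr[2] = true;
  for (let i = 3; i <= Math.sqrt(n); i += 2) {
    if (isPrime(i)) {
      for (let j = i * i; j < arr.length; j += i) {
        arr[j] = false;
        passes++; // each sieve marking is one unit of work
      }
    }
  }
  return arr.reduce((sum, k, i) => (k ? sum + i : sum), 0);
}

// Compare how passes grows, with and without the even-marking loop:
[2001, 140759, 2000000].forEach(n => {
  primeSummationCounted(n);
  console.log('n =', n, '-> passes =', passes);
});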


It’s more efficient to only loop once through your data. You can make your array only represent odd numbers and get a nice speedup.

I’ve got some code and discussion here
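To give a rough idea of what the “odds only” array looks like (this is not the code from the linked post, just a quick sketch of the idea, where index i stands for the odd number 2*i + 1):

function primeSummationOdds(n) {
  if (n < 3) return 0; // no primes strictly below n when n <= 2
  // sieve[i] === true means the odd number 2*i + 1 is still considered prime
  const sieve = new Array(Math.floor(n / 2)).fill(true);
  sieve[0] = false; // 1 is not prime
  const limit = Math.sqrt(n);
  for (let i = 3; i <= limit; i += 2) {
    if (sieve[(i - 1) / 2]) {
      // mark the odd multiples of i, starting from i*i
      for (let j = i * i; j < n; j += 2 * i) {
        sieve[(j - 1) / 2] = false;
      }
    }
  }
  let sum = 2; // 2 is the only even prime
  for (let i = 1; i < sieve.length; i++) {
    if (sieve[i]) sum += 2 * i + 1;
  }
  return sum;
}

primeSummationOdds(2000000) should give the same result as primeSummation(2000000), while allocating only half the array.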


I think you’re missing a key idea of the sieve here. The value in arr[i] will be true if i is prime.

Right, and muuuuch faster. For harel_avv’s benefit, that comes from the sieve already having recorded (memoized) the non-prime multiples of earlier numbers. If you follow the sieve and mark off the multiples of each prime you encounter, every new number that hasn’t been marked as non-prime (i.e. is still true) will always be prime.
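Concretely, it could look something like this (my own sketch, not anyone’s exact code): the isPrime helper disappears and the sieve array itself is the only primality check.

function primeSummationSieveOnly(n) {
  var arr = Array(n).fill(true);
  arr[0] = false;
  arr[1] = false;
  var limit = Math.sqrt(n);
  for (let i = 2; i <= limit; i++) {
    if (arr[i]) { // i survived every earlier marking pass, so it is prime
      for (let j = i * i; j < arr.length; j += i) {
        arr[j] = false;
      }
    }
  }
  return arr.reduce((sum, k, i) => (k ? sum + i : sum), 0);
}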

But it’s interesting: that gets into a type of efficiency that big O notation doesn’t really measure. Big O looks at the efficiency of the algorithm and how the time taken grows as a function of n, whereas these little adjustments won’t affect big O; they only change the result by a constant factor, I would think. To put it another way, big O measures the efficiency of the algorithm, while we’re talking about finding efficiencies in the implementation, which should only have a linear effect on the outcome. Since constant multipliers get factored out of big O, it won’t show up there. Sorry for rambling, this is all still formulating in my head.

Using the sieve correctly and checking the boolean value in the array instead of calling the isPrime function drops the complexity from O(n sqrt(n)) to O(n log(log(n))), but some of the smaller optimizations, like only using the odd numbers, go from O(n log(log(n))) to O(n log(log(n))/2), which is the same big O.

