I was trying out this Codility demo test (once clicked in, scroll down to the bottom and click on “skip registration”).

Once you submit your solution and skip on their feedback request, you’ll be directed to a test result page where there are some performance metrics on the bottom right.

I managed to solve it and got the below performance results, but I’m wondering if there are smarter ways to solve it with better performance?

But looking at the problem, I would guess that the best solution would be O(n) for time complexity and O(1) for space complexity (or maybe O(n) for both - I’m still thinking about it). But I don’t know, there may need to be a sort - in which case we’re getting into O(n*log(n)) territory. Using a sort would certainly be “easier” but may not be the most efficient.

But again, we’d really need to see your code to be sure.

I think the simplest approach is a sort followed by a linear scan, which is O(n*log(n)) time with O(1) space; using a hash set instead brings it down to O(n) for both time and space.
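For what it’s worth, here’s a quick sketch of the hash-based variant, using a `Set` (this is just an illustration, not anyone’s submitted code):

```javascript
// O(n) time, O(n) space: record every value, then walk 1, 2, 3, ...
// until we hit the first positive integer that was never seen.
function solution(A) {
  const seen = new Set(A);
  let i = 1;
  while (seen.has(i)) i++;
  return i;
}
```
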

I think my approach is O(n) time and O(1) space by reusing the original array’s indices.

First, check for 1 and return 1 if it’s not found.

Then change all impossible values to 1 (arr[i] <= 0 || arr[i] > arrLen).

For each arr[i], I add arrLen to arr[(arr[i] - 1) % arrLen]; this way each value present in the range 1…arrLen bumps the element at its matching index above arrLen.

Finally, I go through the array and return i + 1 for the first index whose value is still at most arrLen (meaning i + 1 never appeared); if every index got bumped, the answer is arrLen + 1.
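A minimal sketch of those four steps (my own reconstruction of the description, not the poster’s actual code; `firstMissingPositive` is just a name I made up):

```javascript
// In-place O(n) time / O(1) extra space, reusing the array's indices.
function firstMissingPositive(arr) {
  const n = arr.length;

  // Step 1: if 1 is absent, it is the answer.
  if (!arr.includes(1)) return 1;

  // Step 2: values that can never be the answer become 1.
  for (let i = 0; i < n; i++) {
    if (arr[i] <= 0 || arr[i] > n) arr[i] = 1;
  }

  // Step 3: for each value v in 1..n, add n to arr[v - 1].
  // (arr[i] - 1) % n recovers the original value even after increments.
  for (let i = 0; i < n; i++) {
    const v = (arr[i] - 1) % n;
    arr[v] += n;
  }

  // Step 4: the first index whose value was never bumped above n
  // corresponds to the missing integer i + 1.
  for (let i = 0; i < n; i++) {
    if (arr[i] <= n) return i + 1;
  }
  return n + 1; // every value 1..n was present
}
```
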

I think this was what I did, but slightly different. Somehow I also got slightly better results than first posted, but that should be within the margin of error.

function solution(A) {
  // Mark every positive value we see; the sparse array's indices
  // double as the values themselves.
  const checked = [];
  A.forEach(x => {
    if (x > 0) checked[x] = true;
  });
  // The first unmarked index >= 1 is the answer.
  for (let i = 1; i <= checked.length; i++) {
    if (!checked[i]) return i;
  }
  return 1; // no positive values at all
}

I did try the test multiple times and this was what I came up with in the end. My first few solutions simply couldn’t pass some of the performance tests. I got errors like “maximum call stack size exceeded” or “hard limit reached”, etc…

Yeah, that seems on some level to have a complexity of O(n), but I’m not so sure. First, the initial pass through the array is O(n). But then you create an array for memoizing the data. OK, but how long does it take to traverse that array? How big can it be? In reality, the largest safe integer in JS (Number.MAX_SAFE_INTEGER) is just north of 9 quadrillion if I remember correctly, so double that because of negative numbers. That is a huge potential size bump, for both time and space complexity. Granted, the problem limits it to plus and minus a million, but still…

A lot of these kinds of things are tradeoffs, trading time, space, ease of coding, readability, flexibility, etc. In the “real world”, most people would probably just do the easy, reasonably efficient one - sort, then index through the array until you find the first positive value that doesn’t have itself + 1 as the next element. It would be O(n*log(n)) for time and O(1) for space. It may not be sexy, but it gets the job done and won’t have a noticeable performance hit until things get huge. There is a caveat against premature optimization in coding - don’t optimize until you have to.
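The sort-then-scan version might look something like this (a rough sketch, not the actual 100% submission; note the numeric comparator, since the default sort is lexicographic):

```javascript
// O(n*log(n)) time: sort numerically, then walk up from 1 and
// advance the expected value each time we find it in the sorted array.
function solution(A) {
  const sorted = [...A].sort((a, b) => a - b); // numeric comparator!
  let expected = 1;
  for (const x of sorted) {
    if (x === expected) expected++;      // found it, look for the next
    else if (x > expected) break;        // gap: `expected` is missing
    // x < expected: duplicate or non-positive, just skip it
  }
  return expected;
}
```
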

I just ran a “sort first” solution on the codility site. It got 100%, so it can’t be that bad. The performance scores were slightly worse than the screenshot that you posted, but not by much.

I think it is generally accepted that you can consider the JS Array.sort implementation to be O(n*log(n)), so there’s no need to implement your own sorting algorithm (unless you like doing that sort of thing for fun). Most modern JS engines use a hybrid stable sort (V8, for example, uses TimSort), and may fall back to something like insertion sort for very small arrays.
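One caveat when leaning on the built-in sort for a problem like this: with no comparator, Array.sort converts elements to strings and sorts lexicographically, which silently breaks numeric inputs:

```javascript
// Default sort compares numbers as strings - 10 sorts before 2.
const a = [10, 2, 1];
const lexicographic = [...a].sort();           // [1, 10, 2]
const numeric = [...a].sort((x, y) => x - y);  // [1, 2, 10]
```
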