I made a video showing how to use console.time() to compare the execution speed of two JS functions. Specifically, I compared two Chunky Monkey solutions, the beginner and the intermediate one. The comparison was done in Firefox.
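For anyone who hasn't used it, here's a minimal sketch of the console.time() / console.timeEnd() pattern (the chunkArrayInGroups body below is just a typical beginner-style Chunky Monkey solution, not necessarily the exact one from the video):

```javascript
// Beginner-style Chunky Monkey solution: split an array into chunks of `size`.
function chunkArrayInGroups(arr, size) {
  const result = [];
  for (let i = 0; i < arr.length; i += size) {
    result.push(arr.slice(i, i + size));
  }
  return result;
}

console.time("beginner"); // start the "stopwatch" under the label "beginner"
chunkArrayInGroups([0, 1, 2, 3, 4, 5], 2);
console.timeEnd("beginner"); // stop it and print the elapsed time for that label
```

The label passed to console.timeEnd() must match the one passed to console.time(), so you can run several labeled stopwatches at once.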
The result? The beginner solution outperformed the intermediate solution in average execution time. However, the two sets of execution times are not statistically different from one another: I ran a t-test on them, and at the 95% confidence level they cannot be distinguished. Here is the LibreOffice Calc data below.
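For reference, the two-tailed, unequal-variance (Welch) t-test on two columns of timings can be done in LibreOffice Calc with the TTEST function; the cell ranges here are just illustrative:

```
=TTEST(A2:A31; B2:B31; 2; 3)
```

The third argument is the number of tails (2 = two-tailed) and the fourth is the test type (3 = two-sample, unequal variance). The result is the p-value directly.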
Comments: Yeah, both solutions will execute effectively instantaneously for the average user, so why do this? I wanted a way to compare two pieces of code statistically, for when the code is more complex and actually needs to be compared for speed.
The p-value of 0.41 is well above my significance threshold (the p-value must be < .05 for the difference between the two distributions to be considered real rather than a result of chance). So "according to the books," the difference in averages could just be random chance.
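If you'd rather not leave JS at all, the t-statistic itself is easy to compute in code. This is a sketch of Welch's two-sample t-statistic (the helper names are mine, and getting an actual p-value from t needs the t-distribution CDF, which is why a spreadsheet or stats library is still handy):

```javascript
// Arithmetic mean of an array of numbers.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Unbiased sample variance (divides by n - 1).
function sampleVariance(xs) {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1);
}

// Welch's t-statistic for two independent samples with possibly unequal variances.
function welchT(a, b) {
  const se = Math.sqrt(sampleVariance(a) / a.length + sampleVariance(b) / b.length);
  return (mean(a) - mean(b)) / se;
}
```

Compare |t| against the critical value for your degrees of freedom, or just paste the raw timings into Calc's TTEST for the p-value.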
Also, please let me know if there's a better way to measure the performance of a JS script. The video is not great and the mic is awful lol. And it has a lot of mistakes. It is my first desktop vid, so I figured what the hey, I'm not gonna try that hard…
I hope you all got something out of the video and stats! Happy coding.
I must have missed something in the video, but what do the individual times represent? The time all the test cases took to run? A specific test case?
Yes, the times are basically a "stopwatch" from when console.time() is called to when console.timeEnd() is called. Since those calls sit just before and after the function call, they time its execution. I now realize I forgot to actually call the function with a specific test case… I think I'll have to do another trial.
So you did not actually call the function with an array and a specified size? If that's the case, the timings are not very useful information. I would recommend testing with some worst-case scenarios, such as 5 million elements in the array and a chunk size of 1. Then you will get a better idea of the relative efficiency of the two solutions. I would recommend 100 simulations of each solution with the large array and a chunk size of 1 or 2. Also, instead of manually writing down the data, you could have each simulation output its time to the screen and then copy/paste all the times into your spreadsheet for the calculations. Better yet, why not do the calculations in your own function and avoid the manual labor?
If you want to do some proper benchmarking to test efficiency, you can check out @p1xt’s benchmarking script from this old post: Algorithm Pairwise
I also wrote a Medium thing for FCC last year about benchmarking functions, with examples of how to do it: https://medium.freecodecamp.org/what-i-learned-from-writing-six-functions-that-all-did-the-same-thing-b38fd48f0d55
@JacksonBates That is a really interesting article. It is pretty surprising that introducing the algebraic equation increased the speed by so much! I guess at this point I do not think my coding skills are good enough to compare two solutions that I wrote myself. I am usually satisfied with simply finishing a challenge rather than redoing it with other loop types, methods, etc.
That being said, I am going to download Mocha and Chai and test my code against the basic, intermediate, and advanced solutions. I would really like to see that comparison… my Roman Numeral Converter, for instance, has a pretty long switch statement. I will give Mocha and Chai a go to benchmark its performance vs. the other solutions.
Just to clarify, Mocha and Chai aren't for benchmarking; only the other script is. Mocha and Chai are for testing that your code behaves correctly.
Glad you found it useful.