Hi, I was just wondering how I can measure the time it takes to execute my bubbleSort function in a Node server, because the current code I have doesn't work with Node. I would appreciate any help or suggestions, thank you!
class Shoe {
  constructor(name, price, type) {
    this.name = name;
    this.price = price;
    this.type = type;
  }
}

// to generate a random LARGE list
function randomName(n) {
  let letters = "abcdefghijklmnopqrstuvwxyz";
  let name = "";
  for (let i = 0; i < n; i++) {
    name += letters[Math.floor(Math.random() * letters.length)];
  }
  return name;
}

function randomNumber(min, max) {
  return Math.floor(Math.random() * (max - min) + min);
}

var shoes = [];
for (let i = 0; i < 100; i++) {
  shoes.push(new Shoe(randomName(20), randomNumber(50, 5000), randomName(7)));
}

// bubble sort
function bubbleSort(shoes) {
  var swapped;
  do {
    swapped = false;
    for (var i = 0; i < shoes.length - 1; i++) {
      // converting prices to numbers
      if (+shoes[i].price > +shoes[i + 1].price) {
        var temp = shoes[i];
        shoes[i] = shoes[i + 1];
        shoes[i + 1] = temp;
        swapped = true;
      }
    }
  } while (swapped);
  return shoes;
}

const t0 = performance.now();
bubbleSort(shoes);
console.log('Bubble Sort:\n', shoes);
const t1 = performance.now();
console.log(`Call to bubbleSort took ${t1 - t0} milliseconds.`);
Benchmarking in JavaScript is a complicated subject. How do you know that some other unrelated background processes aren't slowing it down? But if you google "benchmarking javascript" you can find a lot of discussions.
Personally, I think people can get carried away with fetishizing the minutiae of performance. Knuth said, "Premature optimization is the root of all evil." There are often much more important things than saving a microsecond. I remember getting into a debate with a junior dev because he wanted some super slick implementation. I pointed out that there was a simpler solution that was a few lines and much more readable, maintainable, and secure. And since it was only making a single pass over a hundred elements, the difference in performance was meaningless. In web dev, anything less than 1/10th of a second is unnoticeable to the user. And the app spends 99.99% of its time waiting for the user. And if you're worried about minute optimizations, then JS probably isn't the right language anyway.
With something like this, I think you are better off understanding performance with big-O notation. That is a more meaningful measure of how the algorithm will perform as it scales. Beyond that, looking for micro performance gains is usually a waste of time.
I remember when I was working on a hard algorithm challenge. I came up with a brute force solution. I tried it out and the testing program tried it out. My computer hummed for about 4 hours before I shut it off. When I woke up in the morning, I realized that I had an O(n^2 log(n)) solution. I thought about it and came up with an O(log(n)) solution. It took about 20 seconds. But again, that was being tested with an extremely large data set.
oh ok wow that was a lot of feedback, thank you! It's just a high school assignment. I'm supposed to do a computational complexity analysis, so do you have any suggestions for how I can do that with my program specifically?
Well, “computational complexity analysis” isn’t the same thing as benchmarking. That sounds more like big O analysis to me.
But as far as getting your code to run in node, I was able to load it into a file, add:
const { performance } = require('perf_hooks');
to the top of the file and then run the file from the command line.
Also, from the command line you can add `time` before any command, like:
time node myFile
and it will time it - but you run into the benchmarking problems I mentioned. I don't know the performance library you are using, but it probably does a better job.
As to big O, if you are using a bubble sort, you are around O(n^2) - there are faster sorting algorithms you can look into. You can check out some YouTube videos explaining big O, but basically it is a measure of how your performance degrades as the number of elements increases. With yours, you would expect that if the number of elements increased 100X, it would take about 10,000X longer (100^2).
One thing I notice is that the console.log slows things down quite a bit. And you'd probably want to run it several times and take the average. Even better would be alternating with the algorithm to which you are comparing it.
ok thank you so much for your help!