How to use reduce instead of map and filter

Hello everyone!

I’ve written this:

let seriesToWatch = [
    {
        "Title": "Breaking bad",
        "Year": "2007",
        "Seasons": "8",
        "Episodes": "130",
        "Actor": "Walter White",
        "Rate": "10",
        "Director": "Mandiaye Ndiaye"
    },
    {
        "Title": "Casa de papel",
        "Year": "2018",
        "Seasons": "3",
        "Episodes": "50",
        "Actor": "Profesor",
        "Rate": "7",
        "Director": "Mandiaye Ndiaye"
    },
    {
        "Title": "Game of Throne",
        "Year": "2011",
        "Seasons": "8",
        "Episodes": "90",
        "Actor": "Jon snow",
        "Rate": "9",
        "Director": "Mandiaye Ndiaye"
    }
];

I want to get the titles and ratings of the series that have a rating greater than 8.
As a solution, I use map and filter:

let filteredList = seriesToWatch
                      .map(e => ({
                          title: e["Title"],
                          rating: e["Rate"]
                      }))
                      .filter(e => e.rating > 8);
    console.log(filteredList);

Good, it worked, but a post by freeCodeCamp said: "If you chain map and filter together you are doing the work twice. You filter every single value and then you map the remaining values. With reduce you can filter and then map in a single pass.

Use map and filter but when you start chaining lots of methods together you now know that it is faster to reduce the data instead."

So I don’t know how to do it with the reduce method.
I start with this, but I have no idea what to put in.

const film = seriesToWatch.reduce((title, actor) => {
    // help please!
}, {});

Yes, that is true. But does it matter? When we worry about “how much work” an algorithm takes, we tend to think of it as a function of how many elements the data has (n), so we can see how the work grows as the data grows. This is called “big O” notation. Since both approaches are (worst case) just n passes each, the total is 2n, which reduces to n, so the complexity is O(n) - one of the best ones. Yes, for most data sets the reduce version will be a microsecond faster, but who cares - a web user can only detect delays longer than about 1/10 of a second.

Using map and filter makes the code easier to read, which is usually a bigger consideration for me. If I were to use reduce, I’d wrap it in a well-named function to make it clear what it is doing. To me, reduce is a bad choice for this because it is typically used to “reduce” an array to a single value, not to build a complex data structure. It would work, but to me the typical use implies something else.
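To put a rough number on “doing the work twice”, here is a quick sketch with made-up data (not the FCC set), just counting how many times the callbacks actually run:

const nums = [1, 2, 3, 4, 5];
let calls = 0;

// filter + map: the filter callback runs 5 times, the map callback runs on the 3 survivors
nums.filter(n => (calls++, n % 2)).map(n => (calls++, n ** 2));
console.log(calls); // 8

calls = 0;
// reduce: a single callback, run 5 times total
nums.reduce((acc, cur) => (calls++, cur % 2 ? [...acc, cur ** 2] : acc), []);
console.log(calls); // 5

Same order of magnitude either way - which is the point.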

I start with this, but I have no idea what to put in.

OK, do you understand how reduce works? That is probably the most confusing of the prototype methods. If you don’t know reduce well, look it up, watch some videos.

That being said, with (title, actor), the first parameter is the one that accumulates the value, and the second is the current iterated value. For that reason they are often called “acc” and “cur”, or even just “a” and “c”. You don’t have to use those names if you have clearer ones that make sense.

The second argument to reduce is a starting value. If you don’t give one, it just uses your first element as the starting value, but we don’t want that here. So, you gave it {}. But don’t you want it to be an array?

Once that is done, you just write the body of the function - you look at the current value and, based on it, return what you want the next accumulator to be.

I don’t want to write this one out because it is an FCC problem, but let’s make up our own. Let’s say we want to take an array of numbers and take only the odd numbers and square them. The “standard” solution would be:

const data = [1, 2, 3, 4, 5];

const answer = data.filter(n => n % 2).map(n => n ** 2);

console.log(answer);
// [1, 9, 25]

Or, if you like:

const data = [1, 2, 3, 4, 5];

const answer = data.reduce((acc, cur) => {
  if (cur % 2) {
    return [...acc, cur ** 2];
  } else {
    return acc;
  }
}, []);

// same thing, just "sexier"
// const answer = data.reduce((a, c) => c % 2 ? [...a, c ** 2] : a, []);

console.log(answer);
// [1, 9, 25]

Does that make sense? Does any of that need clearing up?

I tend to prefer the first one; I think it is more obvious what it is doing - filtering and mapping. If I gave it well-named callback functions, it would be even clearer. The second one, I have to read through to figure out what it is doing. When you’re reading through thousands of lines of code, you want it as easy as possible. I would only worry about it “doing the work twice” if the data set could get large enough to affect performance in a way the user would notice. My rule - make smart choices, but don’t obsess about efficiency unless you need to.

1 Like

Thank you very much for your time, you taught me a lot of things I didn’t know.

You are right because the filter and map methods are very understandable and readable.

I have solved the map and filter challenges, but the reduce one is very difficult for me. That’s why I want to practice the reduce method, to at least understand how it works. Of course, I will use the filter and map methods if I have the choice.

Based on your very clear example, I still can’t figure out the solution to my problem. It returns an empty object:

const films = seriesToWatch.reduce((acc, cur) => {
        if (cur === "Title") {
            return [...acc, cur + "Rate"];
        } else if (cur > 8){
            return [cur + "Rate"];
        } else {
            return acc;
        }
    }, {});
    console.log(films);
    // {}

OK, I think you are misunderstanding what cur is. It is the current element of the array. Your elements are objects containing movie data.

So, for the first iteration, cur is going to be:

{
    Title: "Breaking bad",
    Year: "2007",
    Seasons: "8",
    Episodes: "130",
    Actor: "Walter White",
    Rate: "10",
    Director: "Mandiaye Ndiaye"
}

And then it will be the next object and then the next one after that. Then you return what you want the new acc to be.

So, the way I see this, you need to take that object and check if the Rate prop is above 8. If it is, return an array with the new object added on. You will be creating the new object with the properties you want from the cur object. If the Rate is 8 or below, you just return the acc unchanged, because you have nothing to add on this iteration. That is basically what I did in my example, just that the condition is different and the data is different - but the basic plan is the same.

And once again, why are you starting out with an empty object? As you define the problem, I expect you to end up with an array - an array of movie objects that contain a title and rating. Sure, the elements of the array are objects, but the whole thing is still an array. So, instead of passing {} as the second argument of reduce, I would expect it to be [].

1 Like

You might also want to look at things from a logical point of view: what map and filter do, and how, with reduce, you can achieve the same result faster (the array is iterated once, not twice).
filter goes through every array element and only returns those that pass a certain criterion. map iterates through every array element and transforms each one however you like. It is good practice to run filter first and then map, because filter reduces the number of elements in the array, so map works with a smaller array, which means less work.
Like I said, reduce makes things even faster, as it iterates over the array only once. reduce can be applied in different ways; in your case you want to create a new array which holds only the elements that pass a certain criterion, and only the data you require (stripping off the unnecessary object keys in your case). So you iterate through your array of objects, check each object's rating value to see if it is above 8, and if it is, take its title and rating properties and push them as an object into the array you return from reduce. Kevin has a good explanation and example above of how reduce works in a similar pattern.

1 Like

You’re right Kevin, I didn’t quite understand cur. Now I do, thank you very much.
As you explained, I tried to check whether the cur object has a rating above 8. But I think my problem is with the array I return with the new object when the check is true.
My console returns an array with the ratings above 8, but without the movie titles.

const films = seriesToWatch.reduce((acc, cur) => {
        if (cur.Rate > 8) {
            return [...acc, cur.Rate];
        } else {
            return acc;
        }
    }, []);
    console.log(films);
    // output : [ '10', '9' ]

Is the accumulator the empty array? The empty object?
If so, why do I need to use the spread operator?

Thanks for the explanation.
Actually, what’s weird is that I understand perfectly what I have to do, but I’m still stuck on how to do it. I tried to push the title and rating onto the array that reduce returns, but it throws an error (syntax error).

Your last example is very close. There is only one line that is the problem:

            return [...acc, cur.Rate];

You are returning what you want the new accumulator to be. The ...acc is copying the old accumulator so we can add to it, and the cur.Rate is telling what you want to add to it. Do you want an array of ratings? I thought you wanted an object with the title and rating. So, this should be an object with what you want.

So, for example if I wanted my result to be an array of objects with director and actor, I would have:

    return [...acc, { director: cur.Director, actor: cur.Actor }];

You need to use this part of this line to tell it what you want the elements in your final array to be.
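Putting that together, a runnable sketch of that director/actor version (using the seriesToWatch data from the top of the thread - the variable name is just mine) would look something like:

const directorsAndActors = seriesToWatch.reduce((acc, cur) => {
  // build a small object from the props we care about and add it to the accumulator
  return [...acc, { director: cur.Director, actor: cur.Actor }];
}, []);

console.log(directorsAndActors);
// [
//   { director: 'Mandiaye Ndiaye', actor: 'Walter White' },
//   { director: 'Mandiaye Ndiaye', actor: 'Profesor' },
//   { director: 'Mandiaye Ndiaye', actor: 'Jon snow' }
// ]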

1 Like

Oh God! Thank you!
I understood everything. In fact, you just gave me the result :smile:. Thank you very much, very clear explanation. That was very helpful. I think I understand the reduce method (not everything yet). What was difficult was acc and cur. Now I understand how it works.

spoiler
const films = seriesToWatch.reduce((acc, cur) => {
        if (cur.Rate > 8) {
            return [...acc, {title: cur.Title, rating: cur.Rate}];
        } else {
            return acc;
        }
    }, []);
    console.log(films);
    // [
    //     { title: 'Breaking bad', rating: '10' },
    //     { title: 'Game of Throne', rating: '9' }
    // ]

@Sylvant Thank you too

1 Like

For what it’s worth:
In some programming languages, a filter + map combo gets combined into a single loop under the hood. In Rust, the combo can actually be faster than writing the logically equivalent reduce. Given that, I would not expect “reduce is always faster than filter + map” to hold true in all cases in JS. I could see JS engines optimizing this common case.

2 Likes

Cool, good job. And if you really want to understand reduce (or any other prototype method), write your own version. That is what really made me understand what was happening and some functional programming concepts.
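As a rough idea of what that looks like, a stripped-down home-made reduce (ignoring things the real Array.prototype.reduce handles, like a missing initial value and holes in the array) might be something like:

// Simplified reduce: walk the array once, feeding each element and the
// running accumulator into the callback, and return the final accumulator.
function myReduce(arr, callback, initialValue) {
  let acc = initialValue;
  for (let i = 0; i < arr.length; i++) {
    acc = callback(acc, arr[i], i, arr);
  }
  return acc;
}

console.log(myReduce([1, 2, 3, 4, 5], (a, c) => (c % 2 ? [...a, c ** 2] : a), []));
// [ 1, 9, 25 ]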

2 Likes

So with JS it is better to use filter + map?

In the functional programming challenges there are many exercises that require the use of map, filter, or reduce. But what about for loops? When it comes to performance, I read a post that said: “All the results clearly show that for loops are more proficient than forEach and than map/reduce/filter/find.”
What do you think?

Thank you dear! I will do that, it really makes sense.

I would generally prefer a map, filter, or reduce over raw for loops for readability. Most JS engines nowadays seem to have handled any serious performance differences.

Edit: Huh, there are apparently still some performance issues with map, filter, and reduce. That is baffling to me.
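For comparison, here is the odd-squares example from earlier in the thread rewritten as a plain for loop - one pass, same result, just more verbose:

const data = [1, 2, 3, 4, 5];

// imperative version of filter(odd) + map(square)
const answer = [];
for (const n of data) {
  if (n % 2) {
    answer.push(n ** 2);
  }
}

console.log(answer);
// [ 1, 9, 25 ]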

2 Likes

Yeah, readability is big for me. Imagine if, in my example, instead of:

const answer = data.filter(n => n % 2).map(n => n ** 2);

I extracted those callbacks into named functions and it was:

const answer = data.filter(acceptOnlyOdds).map(squareIt);

That tells me exactly what is happening. It may be overkill for a trivial example like this, but if those callbacks were more complex, this cleans things up, allows me to put those callbacks in another file, and makes this extremely easy to read.
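Those named callbacks would just be the same arrow functions pulled out, something like:

const acceptOnlyOdds = n => n % 2; // keep only the odd numbers
const squareIt = n => n ** 2;      // square each remaining number

const answer = data.filter(acceptOnlyOdds).map(squareIt);
// [ 1, 9, 25 ]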

Yes, I can read:

const answer = data.reduce((acc, cur) => {
  if (cur % 2) {
    return [...acc, cur ** 2];
  } else {
    return acc;
  }
}, []);

or

const answer = data.reduce((a, c) => c % 2 ? [...a, c ** 2] : a, []);

But it takes me a few seconds to visually parse it. And again, if you are reading thousands of lines of code, that can make a big difference.

One of my favorite compliments is when someone tells me my code is easy to read.

My philosophy is that good code tells a story. You should be able to look at it and almost instantly tell what it is doing. Along with that, I say that if I ever want to add a comment, I first double check that I can’t solve it by making the code more readable. 99% of the time, I find I didn’t need the comment.

1 Like

Yeah, I totally understand you. And this means writing more readable code, which sometimes lets you write fewer comments (because the code is already readable) - which is what map and filter allow us to do, while reduce is weaker in this area (weaker meaning less readable).

1 Like

Right, I don’t mean to say that we shouldn’t use reduce, just that it implies something different. To me, those each have semantic meanings:

filter = select which elements you want to keep

map = create a new array where each element is based on the corresponding element in the original

reduce = take an array and reduce it down to a single value (like summing numbers or counting the number of elements where a certain flag is set, etc.)

Yes, you can do your task with reduce. For that matter, you can duplicate any array prototype method with reduce. But I’d rather use the one that says exactly what I’m doing.
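For example, this is the kind of job where reduce reads most naturally (made-up data, just to illustrate):

const nums = [3, 7, 2, 8];

// summing numbers down to one value
const total = nums.reduce((sum, n) => sum + n, 0); // 20

// counting the elements where a certain flag is set
const tasks = [{ done: true }, { done: false }, { done: true }];
const doneCount = tasks.reduce((count, t) => (t.done ? count + 1 : count), 0); // 2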

2 Likes

Cheers, the code looks good. You can actually get rid of the else block and just leave return acc; after the if block, because whenever your if is true, it returns a value from the current iteration, which means any code after it won’t be executed (it proceeds with the next iteration).
You could also modify your solution a bit and use acc.push() or acc.concat() instead of the spread operator, just to see how the reduce accumulator behaves. It’s a fun method to play with.

1 Like

Thank you very much for the explanation, it really helps. I agree with you, it’s really fun to play with this method. I love it.

I can take acc and concatenate it with the objects, but I can’t push these objects onto acc. It returns TypeError: push is not a function. Maybe it’s a parenthesis problem? I don’t know. I’ll try to see what it is.

spoiler
const films = seriesToWatch.reduce((acc, cur) => {
        if (cur.Rate > 8) { 
            //return acc.concat([{title: cur.Title, rating: cur.Rate}]);
            return acc.push([{title: cur.Title, rating: cur.Rate}]);
        };
        return acc;
    }, []);

The problem is that you are returning the return value of the push. But push does not return the array, it returns the new length. Since you are returning that in your callback, you are telling it that the next accumulator should be the length of the pushed array.
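If you want to keep push, the usual pattern is to push and then return the accumulator itself (and drop the extra [] around the object from the earlier attempt) - roughly:

const films = seriesToWatch.reduce((acc, cur) => {
  if (cur.Rate > 8) {
    acc.push({ title: cur.Title, rating: cur.Rate }); // mutate the accumulator...
  }
  return acc; // ...but always return the array itself, not push's return value
}, []);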

2 Likes