Please Critique my Reduce example

Hello everyone. I would like it if someone critiqued my reduce example. I will be using this as one of the examples on my blog.

I’m basically extracting items that are out of stock.

Is this practical for reduce? Is my code pure? Could I have done this better? etc…

Please advise if it is readable, as I’ve been trying to focus a little more on functional programming.

  • In addition to the above, I’m also really trying to focus on making my code readable for other developers.
let inventory = [
  { breads: {rye: 0, wholeWheat: 6, white: 45, rasin: 0} },
  { fruits: {apple: 34, pears: 0, pineApple: 29, plums: 0} },
];

let getOutOfStock = inventory.reduce(function(acc, obj) {
  // Get item category
  let itemCategory = Object.keys(obj).join('');

  // Get products out of stock
  let item = Object.keys(obj[itemCategory])
    .filter(e => obj[itemCategory][e] === 0);

  return acc.concat({ [itemCategory]: item });
}, []);

In my opinion, since inventory is already an array, the reduce method is not needed. Instead, the map method is simpler.

const inventory = [
  { breads: {rye: 0, wholeWheat: 6, white: 45, rasin: 0} },
  { fruits: {apple: 34, pears: 0, pineApple: 29, plums: 0} },
];

const getOutOfStock = inventory.map(obj => {
  // Get item category
  const itemCategory = Object.keys(obj)[0];
  // Get products out of stock
  const item = Object
    .keys(obj[itemCategory])
    .filter(item => !obj[itemCategory][item]);

  return {
    [itemCategory]: item
  };

});

Also, I would make sure you are either using all arrow functions or none at all. You were mixing the syntax in your code.

I think a better example would be if inventory were an object and you reduced it to an array of category objects, each containing an array of out-of-stock items.

const inventory = {
  breads: { rye: 0, wholeWheat: 6, white: 45, rasin: 0 },
  fruits: { apple: 34, pears: 0, pineApple: 29, plums: 0 }
};

const getOutOfStock = Object
  .keys(inventory)
  .reduce((outOfStock, itemCategory) => {
    //Get products out of stock
    const zeroItems = Object
      .keys(inventory[itemCategory])
      .filter(item => !inventory[itemCategory][item]);
    return outOfStock.concat({ [itemCategory]: zeroItems });
  }, []);

map is much more practical here (you are literally using reduce as map). Or flatMap if you want to filter out categories that have no items out of stock.
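
For instance, a rough sketch of the flatMap version, keeping the original data shape and dropping categories with nothing out of stock:

const getOutOfStock = inventory.flatMap(obj => {
  const itemCategory = Object.keys(obj)[0];
  const items = Object
    .keys(obj[itemCategory])
    .filter(item => obj[itemCategory][item] === 0);
  // flatMap flattens one level, so returning [] drops the category entirely
  return items.length ? [{ [itemCategory]: items }] : [];
});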

edit: what @RandellDawson said; IMO demonstrate reduce by creating an output of a different type – either going from object -> array as suggested, or from array -> object.

Just an aside, and something that is slightly unrealistic: the objects aren’t well-formed; every type of thing in that array is a different shape, which makes your logic complicated. This isn’t a bad thing, as it’s an example of remapping data to a more suitable shape (eg the inventory database data is junk, you need to fix it). But compare:

{ breads: {rye: 0, wholeWheat: 6, white: 45, rasin: 0} }

With this, which is effectively what you are constructing with the keys iteration, and is what I would expect a sensible API response to look something like:

{ category: "breads",
  products: [
    { type: "rye", amount: 0 },
    { type: "wholeWheat", amount: 6 },
    { type: "white", amount: 45 },
    { type: "raisin", amount:  0 },
  ],
}

Technically, you can do anything with reduce(), including filter and map. In fact, let me show you “map” as a function implemented with reduce:

const reduceMap = (arr, func) =>
  arr.reduce((accum, item) => accum.concat(func(item)), []);
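
For example, a trivial usage sketch:

reduceMap([1, 2, 3], x => x * 2); // [2, 4, 6]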

Looks kind of similar to what your code is doing in its reducer, doesn’t it? Notice how it starts with an empty list as an accumulator, then returns the accumulator with the new element appended. That is map in terms of reduce. I was going to show you how your code transforms, but I took my sweet time writing this and got ninja’d in slow-motion :upside_down_face:

Don’t take it as a goof though – you’ve stumbled on an isomorphism, and one that leads to some pretty interesting insights later. And finding isomorphisms is what a lot of programming is about: saying “hey, this problem looks just like this other one, I wonder if I could solve them both with one piece of generic code?”.

If you want another challenge, try implementing filter() in terms of reduce() next.


Interesting, I never considered that; I didn’t think much of it. Is that more of a conventional thing or personal preference?

You know, I thought my object looked weird, I just didn’t know what else to do with it.

I will keep this in mind. I’m still learning how and when to use certain data structures. More often than not, I get confused as to how to build them so they look more organized and readable. I’ll use that example of the API response as a reference.
Can you recommend a book that will teach me how to properly build them?

I’ve never heard of isomorphism, I’ll need to read up on that.


I will also give this a shot, should be fun.
:thinking:… Outta curiosity, is that something developers would actually do? Or would they simply use filter()?

I also have another question for you guys. What are your thoughts on chaining? Is there some ‘unwritten limit’ to how many methods one should chain together to achieve a given task? Or is it personal preference as long as it is readable?

Thanks for all your responses, much appreciated.

It has a performance impact. Say you chain a load of array functions: each one will iterate over the whole length of the current array. This can be mitigated slightly by changing the order (for example, filter first, then map, so the resulting array that map runs on is smaller), or the logic can be inlined into a reduce. Also bear in mind that you are normally creating a new intermediate array with each additional chained function.
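
A minimal sketch of that ordering point, with made-up numbers:

const nums = [1, 2, 3, 4, 5, 6];

// map first: map walks all 6 items, then filter walks all 6 results
nums.map(n => n * 10).filter(n => n > 30); // [40, 50, 60]

// filter first: filter walks all 6 items, but map only walks the 3 survivors
nums.filter(n => n > 3).map(n => n * 10); // [40, 50, 60]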

In some other languages, chained calls of iterator methods can act lazily, on each element in turn, so they’re extremely efficient (off the top of my head, Microsoft’s Linq does this, and in other languages Clojure [via transducers], Rust, Haskell et al). ie if you did collection.map().filter().reduce(), it would apply all the operations to the first element, then the second element, etc, so the perf difference between that and just collection.map() would not be much. JS isn’t like that; it does this:

collection.map().filter().reduce()
// iterate over whole collection, mapping it
mappedCollection.filter().reduce()
// iterate over whole collection, filtering it
mappedAndFilteredCollection.reduce()
// ...apply reduce to the whole collection

(note there is a way to get around this by either using a thing [sometimes] called transducers, or by using generator functions).
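
For illustration, a rough sketch of the generator-function approach (lazyMap and lazyFilter are hypothetical helpers, not built-ins):

function* lazyMap(iterable, fn) {
  for (const item of iterable) yield fn(item);
}

function* lazyFilter(iterable, pred) {
  for (const item of iterable) {
    if (pred(item)) yield item;
  }
}

// Each element flows through the map and the filter in turn; no
// intermediate arrays are built until the spread at the end:
const result = [...lazyFilter(lazyMap([1, 2, 3, 4], n => n * 10), n => n > 20)];
// [30, 40]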

For example, a function to convert a number to a Roman numeral (this is just the first thing I could find where I’ve used a lot of chaining):

const numeralGroups = [
  ['', 'I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII', 'IX'],
  ['', 'X', 'XX', 'XXX', 'XL', 'L', 'LX', 'LXX', 'LXXX', 'XC'],
  ['', 'C', 'CC', 'CCC', 'CD', 'D', 'DC', 'DCC', 'DCCC', 'CM'],
  ['', 'M', 'MM', 'MMM'],
]

function toRoman(number) {
    return number
                 .toString() // 1
                 .split('') // 2
                 .map((char) => parseInt(char, 10)) // 3
                 .reverse() // 4
                 .map((v, i) => numeralGroups[i][v]) // 5
                 .reverse() // 6
                 .join('') // 7
}

So the first two operations take a number, convert it to a string, then split the string into an array. Then 3, the first map, iterates over the whole string array and parses the characters to actual numbers (1, 2 and 3 in combination just take a number and produce an array of its digits). Then reverse (4) iterates over the array again. Then a second map (5) iterates over the array again. The second reverse (6) is another iteration. And join is the final iteration (7). So by chaining, the logic is split down, but afaics I’m iterating over a collection six times there.

Once I had an array of digits, I could have just used a loop and iterated over the characters once (or used math instead of splitting into digits). Or reduce (or reduceRight).
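
For illustration, a sketch of the reduce version using the same numeralGroups table (toRomanReduce is just a name made up for this example):

function toRomanReduce(number) {
  const digits = [...number.toString()];
  return digits.reduce(
    // the digit's position from the right picks the numeral group
    (acc, char, i) => acc + numeralGroups[digits.length - 1 - i][Number(char)],
    ''
  );
}

toRomanReduce(1994); // "MCMXCIV"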

It’s context-dependent; sometimes it makes sense to take, for example, a map and a filter and convert all the logic into a single reduce (or just a loop; shoving loads of logic into a reduce callback often makes things very difficult to read).
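
E.g. the filter-then-map from the numbers sketch above, inlined into one reduce pass:

nums.reduce((acc, n) => (n > 3 ? acc.concat(n * 10) : acc), []); // [40, 50, 60]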

Arrow functions for all the callbacks is normally a good rule of thumb; there’s no real point using the function syntax & it makes things noisy to read. IMO use named functions at the top level, as anonymous functions at that level are harder to scan-read, YMMV though. Ex:

export function outOfStock (inventory) {
  return inventory.reduce((acc, curr) => .........

It will come naturally the more you use [APIs built on] databases in particular. For example, the reason this doesn’t look right:

{ breads: { rye: 0, wholeWheat: 6, white: 45, rasin: 0 }}

Is that in a database, this doesn’t quite make sense – and this is regardless of whether it’s a relational DB or a normal setup for a NoSQL DB. If you have a collection of entries in a table, it’s maybe going to look a bit like this for a relational DB:

| id | category |
|----|----------|
| 1  | breads   |
| 2  | fruits   |

| id | type        |amount | inventory_id |
|----|-------------|-------|--------------|
| 1  | rye         | 0     | 1            |
| 2  | wholeWheat  | 6     | 1            |
| 3  | white       | 45    | 1            |
| 4  | raisin      | 0     | 1            |

The query to the DB is maybe going to ask for each of the items in the first table, which whatever is being used to convert the result into some data format like JSON will translate into an array like

[{ id: 1, category: "breads" }, { id: 2, category: "fruits" }]

And it’s going to join the second table, which references the first table via the inventory_id key, so you get

[
  { id: 1,
    category: "breads",
    products: [
      { id: 1, type: "rye", amount: 0 },
      { id: 2, type: "wholeWheat", amount: 6 },
      { id: 3, type: "white", amount: 45 },
      { id: 4, type: "raisin", amount: 0 },
    ]
  },
  { id: 2,
    category: "fruits",
    ...

I can’t really recommend a specific book: I mainly built a feel for this the more I built things that depended on databases & the more I consumed REST APIs. NOTE a lot of the stuff you’ll immediately get if you Google for good API design will be specifically on designing good REST APIs, which is extremely useful and important. But possibly what you’d want is resources on designing databases/queryable data structures.


Sorry, that was a bit long. Just one more thing, here is the annotated source of the Underscore library. It’s a utility library that provides a load of stuff for working with collections (so versions of forEach, map, filter etc + loads and loads of stuff built on top of that). It’s very useful to see how it’s implemented. As a utility library, Lodash replaced it a while ago (it is a fork of Underscore), but Lodash doesn’t have that annotated source code. There is also a book, Functional Javascript, which IMO is the best book on JS I’ve read, that uses the Underscore lib throughout, which may be useful.

Edit: this just popped up in my YouTube feed, literally covers what you’re talking about https://youtu.be/CZO3AxdkgqM

Interesting. I’m so used to running code and seeing immediate results in my editor that I never even considered performance as a factor. Can I assume that I will start to notice performance issues once I start interacting with APIs?
– That statement, alone, will shape the way how I go about coding; thanks for that by the way –

I will look into this.

This is awesome, every so often I would wonder how such functions are built; nice resource to have.

I recently purchased Kyle Simpson’s “Functional-Light JavaScript” (still going through it); will check out “Functional Javascript” as well and compare the two.

Looking at that structure and comparing it with what I did, I can clearly see how my version is incomplete.

Yea, I’ve found the function syntax a bit harder to read sometimes. Which is funny, because the book on FP by Kyle Simpson that I mentioned above seems to be against arrow functions. Arrows are easier on my eyes and allow me to quickly comprehend the code I’m reading; and they’re prettier :grin:

Please, no, don’t apologize. I’m very grateful that you took the time to provide a response with such detail; and thank you for your time.

Watching this now. “Fun Fun Function” is one of my fav YouTube channels, lol.

That’s the main reason why reduce() is the hardest to read and some people don’t like it.

To me your problem can be described as follows: findAll categories such that when you findAll products out of stock in a given category there is at least one product. findAll in JavaScript is represented by the .filter() method, but unfortunately you cannot nest a filter inside another filter, resulting in one extra run:

.map(/* filter entries */)
.filter(/* remove empty */)

If you want to avoid that extra run you would simply need a simple for loop (or .forEach() if you want it to look “functional”).
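
Sketched out, assuming the object-shaped inventory from earlier in the thread:

// Two runs: build the entries, then drop the empty ones
const outOfStock = Object.keys(inventory)
  .map(category => ({
    category,
    products: Object.keys(inventory[category])
      .filter(item => inventory[category][item] === 0),
  }))
  .filter(entry => entry.products.length > 0);

// One outer run with forEach, pushing only the non-empty entries
const outOfStock2 = [];
Object.keys(inventory).forEach(category => {
  const products = Object.keys(inventory[category])
    .filter(item => inventory[category][item] === 0);
  if (products.length > 0) outOfStock2.push({ category, products });
});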

Those two kids speak for themselves, but I’m not convinced. When I see a reduce, I know I’m seeing a function of [a] -> b. When I see a bunch of loops, conditions, and assignments to do the same, I don’t know what I’m looking at until I run it all in my head.

Using reduce() where a different higher-order function would be clearer is bad for readability, as is using recursion when higher-order functions would be clearer. As is reinventing basic algorithms because one doesn’t understand them and finds them “scary”.

Well, I partly agree. For me .reduce() is the twin brother of .forEach(), with one drawback: it might be really cryptic to an unprepared eye, as those two kids pointed out. But it also has one advantage those kids forgot to mention: it’s chainable, as the accumulator is part of the function. So for me the choice is simple: use .reduce() if you need it as part of a chain, use .forEach() otherwise (of course, we are talking about scenarios where the other methods do not apply).
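
A quick sketch of that difference, using a hypothetical orders array:

// reduce slots into a chain because it returns a value:
const total = orders
  .filter(order => order.paid)
  .reduce((sum, order) => sum + order.amount, 0);

// forEach returns undefined, so it has to sit at the end,
// with the accumulator living outside:
let total2 = 0;
orders
  .filter(order => order.paid)
  .forEach(order => { total2 += order.amount; });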

Hm. Some interesting points made in this thread…quite a lot to digest as a noob, :thinking:

I really appreciate all the responses.

Probably not, but it’s something that may creep up cumulatively in an application, or suddenly become apparent in a specific part of a codebase. It’s something that you should just bear in mind. Generally, don’t preemptively try to optimise things. Make it work first, often there is no need to optimise. This doesn’t mean don’t bother with writing efficient code in the first place. But preemptively optimising code is often pointless (trying to second-guess the compiler/interpreter) and generally results in inflexible, complex code.

The one place you may encounter the need for performance is in code that has to be fast, and that you would see if you play around writing HTML5 games. You probably want to avoid functional iteration stuff in that case, you generally want tightly controlled loops.

Yeah, some of his personal style biases seem slightly at odds with how it’s commonly written. I’d say it’s often easy to provide good answers for why some pattern/style is the correct style. It’s just that there are often multiple correct answers, or the benefits are tiny, and maybe the trade-off is that nobody else writes code like that, or there’s a specific coding style used within an organisation where you work that becomes normal to you, etc.

Like there was a big discussion on Reddit & Twitter recently about const vs let. Gist of it was that const is pointless most of the time and just use let, and the arguments for that are reasonable - see https://overreacted.io/on-let-vs-const/. But :man_shrugging: – personally I would use const everywhere, because I think that for the two extra characters it takes to type it makes the code clearer and avoids a few bugs, and a lot of people agree with that, but neither way is wrong.

Also with respect to comprehension, people are better at reading what they are used to reading (which incidentally is why almost every opinion article about the readability of serif vs. sans-serif fonts is almost guaranteed to be bullshit, but anyway), and that kinda applies to code as well. It’s often easy to provide good answers for why some pattern/style is correct. It’s just that there are often multiple correct answers. But the more you get used to one pattern, the more other ones might seem…wrong.

It’s all dependent on context though – it’s very easy to go down the route of trying to stick to reduce (for example) and shoehorning in complicated logic, and you then get the same issue, when it would have been much easier and clearer to write a loop in the first place. As an example, I just ripped all uses of reduce out of a part of my app’s codebase. All the calls were marshalling various lists of data from an API into object structures (this is highly simplified, and there are other subtleties that meant more logic than I’m showing here):

foo = [
  { id: 1, title: "foo", code: "foo1", status: "claimed", expiry: "2020/06/11" },
  { id: 2, title: "bar", code: "bar1", status: "unclaimed", expiry: "2020/06/11" },
  { id: 3, title: "baz", code: "baz1", status: "unclaimed", expiry: "2020/06/11" },
  { id: 4, title: "quux", code: "quux1", status: "claimed", expiry: "2019/06/11" },
  { id: 5, title: "foobar", code: "foobar1", status: "unclaimed", expiry: "2019/06/11" },
];

// to

foo = {
  claimed: {
    1: { id: 1, title: "foo", code: "foo1", status: "claimed", expiry: "2020/06/11" },
    4: { id: 4, title: "quux", code: "quux1", status: "claimed", expiry: "2019/06/11" },
  },
  unclaimed: {
    2: { id: 2, title: "bar", code: "bar1", status: "unclaimed", expiry: "2020/06/11" },
    3: { id: 3, title: "baz", code: "baz1", status: "unclaimed", expiry: "2020/06/11" },
  },
  expired: {
    5: { id: 5, title: "foobar", code: "foobar1", status: "unclaimed", expiry: "2019/06/11" },
  },
}

And that was fine as a reduce operation, but by just setting up the output data structure object and then using a for...of loop to populate it, the code was 100% easier to read (and much easier to type when I added TS to the application, but anyway). :man_shrugging: I always used to reach for the Array iteration functions as a matter of course, but I definitely tried too hard to avoid for/while/etc loops in favour of stuffing logic into reduce (or similar) functions that became nigh-on unreadable.
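
As a rough sketch of that for...of version (the expiry check here is my guess at the omitted subtleties):

const grouped = { claimed: {}, unclaimed: {}, expired: {} };
const isExpired = (item) => new Date(item.expiry) < new Date();

for (const item of foo) {
  if (item.status === "claimed") {
    grouped.claimed[item.id] = item;
  } else if (isExpired(item)) {
    grouped.expired[item.id] = item;
  } else {
    grouped.unclaimed[item.id] = item;
  }
}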

As another example, the current recommended way to use Redux (relevant due to the necessity for immutability & therefore heavy use of methods like map/filter/reduce) has an imperative API for reducers (it comes via the Immer library that’s bundled with the toolkit to provide immutability), for very good reason, because you often are dealing with nested object structures. So, for example, instead of updating a key like this (but it could be some map/filter/reduce logic to rebuild the state):

return {
  ...state,
  someObj: {
    ...state.someObj,
    [action.payload.keyToTarget]: action.payload.data,
  },
};

You just do

state.someObj[action.payload.keyToTarget] = action.payload.data;

Funny you say that, as I find myself doing the same thing without a good reason…at times it makes coding more difficult.

It’s Greek for “same shape”, and it has a pretty precise meaning in mathematics, but I’m using it pretty loosely, as computer scientists tend to do. Abstractly, it means no matter how much you bend and twist something up, you end up with more or less the same thing, or at least something you can get the original form back from. For instance, a trapezoid is isomorphic to a square, because it’s just the same thing tweaked a little. But a triangle or a pentagon isn’t, because you’d have to add or remove vertices. I probably should have just said “equivalence”, but I guess I’ve been hanging out with a strange crowd :wink:

They’d just use filter. What does happen all the time is that they’ll accidentally write something simpler (like filter) in terms of something more complex (like reduce). A whole lot of programming involves turning a dozen lines of code into one or two higher-level bits of code that do the same thing.

Seems like something that wouldn’t compose very well. Composition is what gives rise to selectors, which, when I was last using Redux (it’s now been a few years), were the way to go there. If it’s doing something magic with assignment, well, all the proxy magic is one of the things about Vue I’m not too fond of (that and the crufty template syntax, when I’d rather just use JSX).

It already bundles reselect, but what’s in the reducer logic doesn’t have any overlap with that anyway. I’m not sure what you mean by wouldn’t compose well: this is logic to update the state in a reducer – that’s isolated. It’s an alternative to an immutability library like Immutable.js (which has problems re size and TS typings, and critically needs serialization/deserialization to/from native JS data structures).

Eh, I suppose assignment composes just fine: do { foo <- bar; blah <- woof; }, presto composition. It’s just the whole “side effect” thing in general that seems hinky. Or is state a deep copy that’s assigned back to the original at some later point? I’m not such a purist that I won’t use local state when it’s faster/more readable, but it shouldn’t escape the scope of a single function.

Anyway we’ve drifted off topic quite a bit, and I almost said the “M” word :nerd_face: so maybe we should move this to #general :slight_smile:


Yeah, sorry, drifting. Immer’s main export is a function that uses a Proxy (if available, otherwise a comparatively much slower deep merge) to merge a draft state (which is what you apply imperative updates to) into the original state, returning a new version of the original state.

function myReducer (state, action) {
  return produce(state, (draftState) => {
    switch (action.type) {
      case "foo":
        draftState.foo = 1;
        break;
      case "bar":
        ....
    }
  });
}