How to reduce array of objects to array

Greetings. Say, for example, I don't wish to use a for loop or forEach, and I also don't wish to map and then do another pass to clear undefined values. How would I convert an array of objects into a different array? I want this input:

[
  {a:1},
  {name: 'Jane'},
  {},
  {b:2},
  {name: 'Smith'},
  {name: 'Fatima'},
]

to produce this output:

['Jane', 'Smith', 'Fatima']


Can it be done using reduce?

const New = Original.reduce((acc, obj) => {
  if (obj.hasOwnProperty('name')) {
    // add val to new arr
  }
  // ?
}, initialAccumulatorValue);

How do I fill out the rest of this? That's as far as my thinking gets.

A few things to keep in mind:

  • The reducer callback function returns a value. That value is what is used for acc when the reducer is called again on the next item, so you will want to return something from your callback (see the small sum example after this list). Based on what you said you want the output to be above, what would make sense to return?
  • The initialAccumulatorValue should be the same type as what you are returning in the callback. Based on what you said you want the output to be above, what would be a good default value for initialAccumulatorValue?
  • Hopefully answering the two points above will help you figure out what you should do in the if statement.
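
To see how the returned value feeds back into acc, here is a tiny sum example; it has nothing to do with your name problem, just the mechanics of reduce:

const sum = [1, 2, 3].reduce((acc, n) => {
  // whatever we return here becomes acc on the next call
  return acc + n;
}, 0); // 0 is the initialAccumulatorValue, same type as the result

console.log(sum); // 6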

The final result of running the reducer across all elements of the array is a single value (copied from MDN), but here you are supposed to end up with more than a single value in the final array. filter fits the challenge better than reduce, and if you can use Array.every with filter, that would be like a very loving couple.

@ghulamshabirbaloch “an array” is a single value.


I think the definition from MDN meant literally a single value. For example, if an array has numbers, then the single value might be their sum.

The initialAccumulatorValue (acc in the callback) should be an array, the first commented line should push obj.name to the accumulator, and the second commented line with the question mark needs to return the accumulator. That’s all that’s needed.
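
Put together, that might look something like this (just a sketch, reusing the sample data and the Original/New names from the first post):

const Original = [
  { a: 1 },
  { name: 'Jane' },
  {},
  { b: 2 },
  { name: 'Smith' },
  { name: 'Fatima' },
];

const New = Original.reduce((acc, obj) => {
  if (obj.hasOwnProperty('name')) {
    acc.push(obj.name); // push the value onto the accumulator
  }
  return acc; // this becomes acc for the next element
}, []); // the initial accumulator is an empty array

console.log(New); // ['Jane', 'Smith', 'Fatima']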

flatMap can do exactly the same thing without an accumulator, with very slightly different logic in the callback: if there's a property with the key "name", return [obj.name]; otherwise, return an empty array [].

Also, as @kevinSmith says, the required end result needs more clarification; there are too many edge cases as things stand.


An array would be an example of a single value, as would an object, a number, a map, or a boolean, or whatever. It just means one instance of a specific type of thing; it doesn't make any discrimination regarding which specific type of thing.
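
For example, this reduce returns a single value that happens to be an object (a name tally, purely for illustration):

const tally = ['Jane', 'Smith', 'Jane'].reduce((acc, name) => {
  acc[name] = (acc[name] || 0) + 1; // count occurrences of each name
  return acc;
}, {}); // the single final value: one object

console.log(tally); // { Jane: 2, Smith: 1 }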


oh I got it thanks, :smiley: @colinthornton


I did not see what he said.

Thanks, I did not know this about flatMap. You want:

response.flatMap(curr => {
  if (curr.hasOwnProperty('name')) {
    return [curr.name];
  }
  return [];
});

But there's no way it's as efficient as a standard loop, right? Compare:

const processed = [];
for (let i = 0; i < response.length; i++) {
  if (response[i].hasOwnProperty('name')) {
    processed.push(response[i].name);
  }
}

// and
const holder = [];
arr.forEach(obj => {
  if (obj.hasOwnProperty('name')) {
    holder.push(obj['name']);
  }
});

And the reduce will have the same efficiency as the standard loop, yes?

response.reduce((acc, obj) => {
  if (obj.hasOwnProperty('name')) {
    acc.push(obj['name']);
  }
  return acc;
}, []);

I think the map version is inefficient because of the two passes:

Original.map(x => x.hasOwnProperty('name') ? x.name : null).filter(x => x)

I'll do it:

// would the final array need to be:
// ['Jane', 'Smith', 'Fatima']
const response=[
  {a:1},
  {name: 'Jane'},
  {},
  {b:2},
  {name: 'Smith'},
  {name: 'Fatima'},
  {name: 'Jane'}
]

const names = {};
const processed = response.reduce((acc, obj) => {
  // keep the first occurrence of each name, skip objects without one
  if (obj.hasOwnProperty('name') && !names.hasOwnProperty(obj.name)) {
    acc.push(obj.name);
    names[obj.name] = true;
  }
  return acc;
}, []);
console.log(processed);

Be very careful about assuming this: unless you know exactly what the implementation in a given JS engine is, you have no idea whether your assumption is correct, or whether, if it is, it matters in any way. Also, CPU cache hits/misses are at least as important as time complexity, and you're unlikely to be able to figure out their effects.

The answer is technically yes, it is less efficient (same with reduce), simply because of the overhead of creating extra objects (arrays/functions). And flatten is a recursive operation with a runtime check at each stage (at least it is recursive in SpiderMonkey; not sure about Ignition/TurboFan). Whether that makes any practical difference to most JS code written IRL is doubtful; JS engines are very, very fast.

“Is it the fastest theoretically possible” is rarely a useful question. Is it fast enough for your application? Is it creating a performance bottleneck? Is it unnecessarily complex or otherwise difficult to maintain?
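
If it ever does matter for your data, measure rather than guess. A rough sketch using performance.now (absolute numbers are meaningless outside your own environment, and micro-benchmarks like this are easy to get wrong):

const data = Array.from({ length: 1_000_000 }, (_, i) =>
  i % 2 ? { name: 'person' + i } : { a: i }
);

let t0 = performance.now();
const viaLoop = [];
for (let i = 0; i < data.length; i++) {
  if (data[i].hasOwnProperty('name')) viaLoop.push(data[i].name);
}
console.log('for loop:', (performance.now() - t0).toFixed(1), 'ms');

t0 = performance.now();
const viaFlatMap = data.flatMap(o =>
  o.hasOwnProperty('name') ? [o.name] : []
);
console.log('flatMap:', (performance.now() - t0).toFixed(1), 'ms');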

Filter before map is usually considered a best practice. I'm surprised JS doesn't have a filterMap yet; I wonder if that is an optimized path in some engines, since it is so common. Though it seems that using flatMap as a filterMap, or building a helper function on top of reduce (sketched below), is common.
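
A reduce-based helper along those lines is short. This is just a sketch; the filterMap name and signature are made up:

// hypothetical helper: keep elements that pass pred, transformed by fn
const filterMap = (arr, pred, fn) =>
  arr.reduce((acc, x) => {
    if (pred(x)) acc.push(fn(x));
    return acc;
  }, []);

filterMap(
  [{ a: 1 }, { name: 'Jane' }, { name: 'Smith' }],
  o => o.hasOwnProperty('name'),
  o => o.name
); // ['Jane', 'Smith']

One pass, no intermediate array, at the cost of a little indirection.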


That makes sense, fewer elements to map over; I did not think of that. Is this what you mean:

  1. map->filter
    N + N = 2N
    Best case or worst case doesn't matter: you always perform two passes over an input of length N (dropping the constant as N → infinity, that is the same as just saying N time).

  2. filter->map
    filter: you still have to visit every element of the input of length N
    map: the input you map over has been shrunk by whatever was filtered out before
    N + (N - filteredElements)


This is my understanding: in the map->filter version, the filter is only there to clean up the leftovers from map, whereas this way it's just slightly more efficient:

Original.filter(x => x.hasOwnProperty('name')).map(x => x.name)

So, from my understanding:

map->filter: 2N : approx. N
filter->map: N + (N - filteredElements) : approx. N

Disregarding the constants they are basically the same, and one isn't really more efficient than the other where very large data is concerned.
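
A quick way to sanity-check those pass counts is to count how many times the callbacks actually run:

let calls = 0;
const counted = fn => x => (calls++, fn(x)); // wrap a callback, count each call

const Original = [
  { a: 1 }, { name: 'Jane' }, {}, { b: 2 }, { name: 'Smith' }, { name: 'Fatima' }
];

calls = 0;
Original.map(counted(x => x.hasOwnProperty('name') ? x.name : null))
        .filter(counted(x => x));
console.log('map->filter:', calls); // 12 = N + N

calls = 0;
Original.filter(counted(x => x.hasOwnProperty('name')))
        .map(counted(x => x.name));
console.log('filter->map:', calls); // 9 = N + (N - 3 filtered out)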


How would the standard loop cause more cache hits/misses than flatMap? Can you please provide an example of what you mean?

I'm talking about this in general: you're making guesses at micro-optimisations. For example, taken in complete isolation, a given loop is likely to be more efficient than a flatMap operation in many situations, as the latter uses a closure & array constructors. However, that's in isolation, and a best guess.

What is the rest of the program doing, and how did you get there? Time complexity really comes into play on an arbitrarily large array; what happens when you run it on a very small array? Is there actually any difference? What are you doing in the loop? Is the JS engine designed to optimise certain situations, and does it just completely bypass the expensive operations you think it's doing? How does the low-level system code of the engine deal with everything? Etc., etc.
