That copies the old state into a new anonymous object. The ...state ensures that any other properties are unaffected (same values/references), and data: [...state.data, action.payload] tells it to create a new array, add the new value, and save it in that property.
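To make that concrete, here is a small sketch of that update outside of a reducer (the state shape and values are made up for illustration):

const state = { filter: 'all', data: [1, 2] }
const action = { type: 'ADD_ITEM', payload: 3 }

const next = {
  ...state,                              // other properties keep the same values/references
  data: [...state.data, action.payload]  // new array with the old items plus the new one
}

console.log(next.filter)  // 'all' - untouched property carried over
console.log(next.data)    // [1, 2, 3]
console.log(state.data)   // [1, 2] - the original array was not mutated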
I am planning on using this to create a website that can be used for 3D modeling purposes. The shapes themselves could be composed of thousands of coordinate values.
Copying this entire state every time would hurt performance. Some items, such as Three.js objects, cannot be stringified and then parsed due to circular references.
Should I just create a global object and use context to edit it from any sub-component?
I haven’t really used Redux Toolkit (I think I tested it briefly long ago), but I believe you have to use createSlice or createReducer if you want to write reducers as mutations (it uses Immer under the hood). Otherwise, as said, the reducer has to be pure. I would suggest you read the Redux Toolkit docs and look at the examples.
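For reference, a rough sketch of what that looks like with createSlice (the slice name and state shape here are just for illustration); Immer lets the reducer be written as if it were mutating:

import { createSlice } from '@reduxjs/toolkit'

const itemsSlice = createSlice({
  name: 'items',
  initialState: { data: [] },
  reducers: {
    // Reads like a mutation, but Immer produces a new immutable state under the hood
    addItem(state, action) {
      state.data.push(action.payload)
    }
  }
})

export const { addItem } = itemsSlice.actions
export default itemsSlice.reducer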
Keep in mind that you are copying less than you think you are. If I do a { ...state }, I am only copying that first level of values and references; it is not recursively going through and copying everything.
This gets into a tricky area. Yes, you are doing a shallow copy. With this, newState is a different reference than state, but newState.data will be the same reference as state.data. So, when you push onto newState.data, you are also mutating state.data.
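A quick sketch of that pitfall with made-up data:

const state = { data: [1, 2] }
const newState = { ...state }              // shallow copy: top level only

console.log(newState === state)            // false - new outer object
console.log(newState.data === state.data)  // true  - same inner array

newState.data.push(3)                      // mutates the shared array...
console.log(state.data)                    // [1, 2, 3] - ...so the "old" state changed too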
In theory you could do it if you did a deep copy, with something like lodash’s _.cloneDeep.
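For example, something along these lines (assuming lodash is installed); cloneDeep walks the whole structure, so every level becomes independent:

import cloneDeep from 'lodash/cloneDeep'

const state = { data: [{ x: 1 }, { x: 2 }] }
const newState = cloneDeep(state)   // copies every level, not just the first

newState.data.push({ x: 3 })
newState.data[0].x = 99

console.log(state.data.length)  // 2 - original array untouched
console.log(state.data[0].x)    // 1 - original nested object untouched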
In my scenario, I will be working with Three.js and React Three Fiber, creating and storing models on the screen. These models may contain thousands of coordinates (it’s a voxel/Minecraft-style creator). Deep copying the state every time, for all of those thousands of objects (even with an instanced mesh, the individual coordinates are still stored), would be costly for performance.
What would you recommend I do here? Objects may collide with other objects, so they must be stored in a global React store where collisions can be checked.
You don’t necessarily need to deep copy the state. You only need to shallow copy the parts that are changing.
case ADD_ITEM:
  return {
    ...state,
    data: [...state.data, action.payload]
  }
The first part of this:
return {
  ...state,
}
does not copy the entire state. It is a shallow copy. Let me expand it a little to make it easier to discuss:
const newState = {
  ...state,
}
return newState
This did not copy all of state. It allocated a new object (with its own reference), but the reference stored in data does not change. So state.data and newState.data point to the same spot in memory - that data did not get copied. This is a shallow copy, and it is what we want. Then we can change the parts that we want:
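newState.data = [...state.data, action.payload]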
So now we are overwriting newState.data with a new reference, while any other sibling properties are left unchanged. In newState.data we are creating a new array and copying the old state.data into it, but that is also a shallow copy. In other words, if the items are objects or arrays, only their references are copied - the underlying data is not.
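In other words, something like this (illustrative data):

const oldData = [{ x: 1 }, { x: 2 }]
const newData = [...oldData, { x: 3 }]

console.log(newData === oldData)        // false - a new array was created
console.log(newData[0] === oldData[0])  // true  - the element references are shared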
The larger the data structure, the more expensive it will be. It also depends on how you are using spread and on the runtime.
For web development, this usually isn’t a huge issue, and if you are using JS for massive data crunching you have probably picked the wrong language. There is a reason why code that needs high performance still uses pointers, or languages even closer to the metal like assembly.
If you identify an actual performance issue, sure, start looking for solutions. Until then, prefer readable, non-mutating code, even when it is slower.
It depends on the details. My point is simply that people often overestimate the performance cost of immutability.
I also believe the old adage that premature optimization is the root of all evil in coding. We often obsess over things we don’t need to. Make smart choices, but don’t worry about it until it becomes an issue.