React Redux mutation error

I am trying to set up a react redux application that simply pushes “hello” to the state.

However, attempting to push this value results in a mutation error.

const ADD_ITEM = "ADD_ITEM";

export const initialState = {
  data: []
};

export function reducer(state, action) {
  // let newState = JSON.parse(JSON.stringify(state));
  console.log("Reducer called");
  console.log(state);
  switch (action.type) {
    case ADD_ITEM:
      state.data.push(action.payload);
      console.log("Pushing ADD_ITEM");
      console.log(state);
      break;
    default:
      console.log("None hit");
  }
  return state;
}

export const mapStateToProps = (state) => {
  return {
    state
  };
};
export const mapDispatchToProps = (dispatch) => {
  return {
    addItem(payload) {
      dispatch({
        type: ADD_ITEM,
        payload: payload
      });
    }
  };
};

Why is this happening? Stringifying and then parsing the state does not solve the issue either.
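The behavior can be reproduced outside of Redux. This sketch (based on the reducer above) shows that the reducer hands back the very same, mutated object it received:

```javascript
// Sketch: the mutating reducer returns the same reference it was given.
const ADD_ITEM = "ADD_ITEM";

function reducer(state, action) {
  switch (action.type) {
    case ADD_ITEM:
      state.data.push(action.payload); // mutates the existing array in place
      break;
    default:
      break;
  }
  return state; // same object reference as the input
}

const prev = { data: [] };
const next = reducer(prev, { type: ADD_ITEM, payload: "hello" });

console.log(next === prev); // true: same reference, so Redux sees "no change"
console.log(prev.data);     // ["hello"]: the old state object was modified
```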

That push changes the existing state object, i.e. it mutates it. What you have to do instead is create a new state and return that. That is immutability. Your reducer should look like:

export function reducer(state, action) {
  switch (action.type) {
    case ADD_ITEM:
      return {
        ...state,
        data: [...state.data, action.payload]
      };
    default:
      return state;
  }
}

That copies the old state's top-level properties into a new object literal. The ...state ensures that any other properties are unaffected (same values/references), and data: [...state.data, action.payload] tells it to create a new array, add the new value, and save it in that property.
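A quick sketch of that behavior (the property names here are made up for illustration): untouched siblings keep their old references, while data becomes a brand-new array:

```javascript
const state = { data: ["a"], meta: { author: "me" } };

const next = {
  ...state,                   // copies top-level values/references only
  data: [...state.data, "b"], // new array: old items plus the new one
};

console.log(next === state);           // false: new top-level object
console.log(next.meta === state.meta); // true: untouched sibling keeps its reference
console.log(next.data);                // ["a", "b"]
console.log(state.data);               // ["a"]: the original array is unchanged
```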

Yeah, it’s weird until you get used to it.


I am planning on using this to create a website for 3D modeling purposes. The shapes themselves could be composed of thousands of coordinate values.

Copying this entire state every time would hurt performance. Some items, such as Three.js objects, cannot be stringified and then parsed because they contain circular references.

Should I just create a global object and use context to edit it from any sub-component?

I haven't really used Redux Toolkit (I think I tested it briefly long ago), but I believe you have to use createSlice or createReducer if you want to write mutation-style updates (they use Immer under the hood). Otherwise, as said, the reducer has to be pure. I would suggest you read the Redux Toolkit docs and look at the examples.

Keep in mind that you are copying less than you think you are. If I do a {...state}, I am only copying that first level of values and references; it is not recursively going through and copying everything.


How come I can’t use

export function reducer(state, action) {
  let newState = Object.assign({}, state);
  switch (action.type) {
    case ADD_ITEM:
      newState.data.push(action.payload);
      break;
    default:
      console.log("None hit");
  }
  return newState;
}

Without a mutation error?

I am re-assigning all of the top-level data. What if data were a 2D matrix? Would I need to re-create all of it manually using the spread operator?

How does ...state work in your code if the state is an array?

This gets into a tricky area. Yes, you are doing a shallow copy. With this, newState is a different reference than state, but newState.data will be the same reference as state.data. So, when you push onto newState.data, you are also mutating state.data.
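Here is a sketch of that aliasing: after Object.assign, the nested array is shared, so pushing through the copy is visible through the original:

```javascript
const state = { data: [] };
const newState = Object.assign({}, state); // shallow copy: one level deep

console.log(newState === state);           // false: distinct top-level objects
console.log(newState.data === state.data); // true: the nested array is shared

newState.data.push("hello");
console.log(state.data); // ["hello"]: the "old" state was mutated too
```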

In theory you could do it if you did a deep copy, like with something like lodash’s _.cloneDeep.
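As an aside, the built-in structuredClone (available in Node 17+ and modern browsers) is one deep-copy option. Unlike the JSON.stringify/parse round trip, it handles circular references, though it can't clone functions, and class instances lose their prototypes, so it still won't help with live Three.js objects. A sketch:

```javascript
const state = { data: [["x"], ["y"]] };
state.self = state; // circular reference: JSON.stringify would throw on this

const copy = structuredClone(state); // deep copy; the cycle is preserved

console.log(copy === state);                 // false: everything was copied
console.log(copy.data[0] === state.data[0]); // false: nested arrays copied too
console.log(copy.self === copy);             // true: cycle points at the clone
```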


In my scenario, I will be working with three.js and react-three-fiber, creating and storing models on the screen. These models may contain thousands of coordinates (it's a voxel/Minecraft-style creator). Deep copying the state every time, for all those thousands of objects (even with instanced meshes, the individual coordinates are still stored), would be costly.

What would you recommend I do here? Objects may be colliding with other objects, so the objects must be stored in a global react database so they can be checked.

You don’t necessarily need to deep copy the state. You only need to shallow copy the parts that are changing.

    case ADD_ITEM:
      return {
        ...state,
        data: [...state.data, action.payload]
      }

The first part of this:

      return {
        ...state,
      }

does not copy the entire state. It is a shallow copy. Let me expand it a little to make it easier to discuss:

      const newState = {
        ...state,
      }
      return newState

This did not copy all of state. It allocated a new reference that points to a new object, but the reference to data does not change. So, state.data and newState.data point to the same spot in memory - that did not get copied. This is a shallow copy. This is what we want. Then we can change the parts that we want:

      const newState = {
        ...state,
        data: [...state.data, action.payload]
      }
      return newState

So, now we’re overwriting newState.data with a new reference. But if there were other sibling properties, they would not change. In newState.data we are creating a new array and we are copying the old state.data, but that is also a shallow copy. In other words, if those are objects or arrays, only the references will be copied - the data is not copied.

There is a lot less data copying than you think.
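A sketch with larger data (the sizes here are arbitrary): spreading an array of 500,000 coordinate triples copies 500,000 references, not the triples themselves:

```javascript
// Build an array of many [x, y, z] triples.
const data = Array.from({ length: 500000 }, (_, i) => [i, i, i]);

const next = [...data, [9, 9, 9]]; // shallow copy: only references are copied

console.log(next === data);       // false: new outer array
console.log(next[0] === data[0]); // true: the inner triples are shared
console.log(next.length);         // 500001
```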


Why not just use RTK, as is recommended, with the APIs that let you write Immer-powered "mutations"?

What if state.data had 500,000 elements in it, each of which was an array [x, y, z]?

Would this cause a massive performance decrease, or does JavaScript know to just copy the pointer to the block of memory storing each array?

The larger the data structure, the more expensive the copy will be. It also depends on how you are using spread and on the runtime.


For web development this usually isn't a huge issue, and if you are using JS for massive data crunching you probably picked the wrong language. There is a reason why code that needs high performance still uses pointers, or languages even closer to the metal like assembly.

If you identify an actual performance issue, sure start looking for solutions. Until then prefer readable non-mutating code, even when it is slower.


It depends on the details. My point is simply that people often overestimate the performance cost of immutability.

I also believe the old adage that premature optimization is the root of all evil in coding. We often obsess over things we don't need to. Make smart choices, but don't worry about it until it becomes an issue.
