Storing data on server vs database

Hey all, I’m having a bit of confusion between two different methods of storing data and was hoping someone could help. I’ll use a super basic example: let’s say I’m keeping track of how many times a page has been visited, using Express on Node.js.

const Counter = require('./models/Counter');//a mongoose model
const express = require('express');
const app = express();
const port = 3000;

// METHOD 1: keep the count in process memory (a plain variable)
let myCount = 0;
const incCount = (req, res) => {
  res.send('Count: ' + ++myCount);
};

// METHOD 2: keep the count in the database
const incDBCount = async (req, res) => {
  const myDBCounter = await Counter.findOne(); // findOne() returns a single document; find() would return an array
  myDBCounter.count++;
  await myDBCounter.save();
  res.send('DBCount: ' + myDBCounter.count);
};

app.get('/server-count', incCount)
app.get('/db-count', incDBCount)

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

So I’m pretty sure that on a small scale these two apps would do the same thing, as long as the server stayed running, but I don’t really understand the pros and cons of keeping data either way.

Let’s say you were making a game where players’ positions updated frequently; you would never store data like the player position in a database, would you? I say this because retrieving from a database seems like it would be slower, and if the player disconnected you could afford to lose it because they can just reconnect at the start (depending on the game obviously, but that’s how the game works for this example).

Any help understanding this would be greatly appreciated.


For the game where players’ positions update frequently, you would store positions neither on the server nor in a database. You would store them on the client (in the browser).

I probably should have specified a multiplayer game with websockets. In this case, if you were storing the data on the client, you would get discrepancies between the data on each client. I’m pretty sure that introduces the need to store data on the server.

I would still do that on the client :slight_smile: The server is your connector: if one client changes position, it’s the server’s job to update the other clients. It’s certainly something that doesn’t need to be stored in the db; you can store the final score at the end of the game…

Let’s say you’re doing it on the client, and it’s a multiplayer race game. On my client, I cross the finish line, but on your client, you cross the finish line before you receive the update sent from my client. Do we both win, or do we both lose? :stuck_out_tongue:

If you were doing it on the clients there would be a bunch of problems like this. The solution is to store the data on the server, and everyone updates and uses that data: the first person whose update reaches the server would be the winner, and you don’t run into any data inconsistencies between players (a rough sketch of what I mean is below).
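Something like this, just to illustrate the idea (a minimal sketch, assuming the 'ws' npm package and a made-up { type: 'finish', player } message format, neither of which is part of my actual app):

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

// The race result lives in the server's process memory: whichever finish
// message arrives first wins, so every client sees the same winner.
let winner = null;

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    const msg = JSON.parse(data);
    if (msg.type === 'finish' && winner === null) {
      winner = msg.player;
      // Broadcast the single, authoritative result to every connected client.
      for (const client of wss.clients) {
        if (client.readyState === WebSocket.OPEN) {
          client.send(JSON.stringify({ type: 'result', winner }));
        }
      }
    }
  });
});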

In the race game scenario each player would have their own timer, and when you finish, your message would be PLAYER X FINISHED. TIME: xx:xx:xx.x rather than “I won!!!”

Okay, so you can do something that basic in other ways. But imagine I wanted to nudge you off the road: there needs to be one consistent data store, otherwise on one client I’ll be knocking you off the road and on the other I’ll be missing you completely.

If one player has connection problems and cannot see updates at at least 30 fps, then a server will not solve this issue anyway. Actually it will make it worse. Consider this: I make a move on the client and send a ‘slow’ message to the server, then the server needs to calculate my new position and update my view - that’s two slow trips instead of one :slight_smile:

In your case I would actually look at WebRTC to make it completely peer-to-peer.

It would solve the issue. There may still be lag, that’s why you need a decent connection to play online games, but at least now lag won’t cause complete inconsistencies between players. I know that peer-to-peer is a solution, but it’s definitely not the only or simplest one.

I mean… there are no rules for how to do this stuff. If you feel that everyone should be synced to the slowest connection to keep 100% consistency, then you should probably do it the way you’re proposing :slight_smile: I personally think it’s a very bad idea. I strongly suggest throttling your updates to 60 times per second, and if someone cannot keep up, it’s their problem, definitely not a joint problem.

You think it’s a very bad idea to use a dedicated server over peer-to-peer? I’m no expert, but I’m pretty sure there are a load of use cases for a dedicated server over peer-to-peer.

WebRTC is not true peer-to-peer, as it still needs a server (for signalling) - to me that sounds exactly like what you need :slight_smile:

Bump - still don’t really have an answer to the original question.

You can use 4 types of storage on the server:

  1. Process memory - save something to a variable, and while the process (server) is running this value will persist.
  2. Computer RAM - use Redis or Memcached to save the value, so while the machine is up and running it will persist*.
  3. File system (fs) - save the value on disk, into, let’s say, a JSON file. The value will always persist, as long as you only use one machine.
  4. Database.

*in most cases Redis / Memcached will do their best to save their data to a file on disk for reload situations (a small sketch contrasting options 1 and 2 follows below)
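For example (a rough sketch, assuming the 'redis' npm package v4 and a local Redis instance - neither is required, it just shows the difference in lifetime):

const { createClient } = require('redis');

// Option 1: process memory - gone as soon as the node process exits.
let processCount = 0;
processCount++;

// Option 2: Redis - the value lives in a separate Redis server process,
// so it survives a restart of your node process.
async function incrementInRedis() {
  const client = createClient(); // defaults to localhost:6379
  await client.connect();
  const redisCount = await client.incr('page-views'); // atomic increment
  await client.quit();
  return redisCount;
}

incrementInRedis().then((n) => console.log('Redis count:', n, '| process count:', processCount));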

From the least to the most reliable solution:

  1. Process memory
  2. RAM
  3. File system
  4. DB

From the slowest to the fastest solution:

  1. File system
  2. DB
  3. RAM
  4. Process

Did you notice how fs and db switched positions? That’s why people almost always choose a database.

General conventions:

  • Store large data objects like media files in the file system
  • Store data you’d like to persist in a database
  • Cache frequently accessed data from the database in RAM for faster reads (a small read-through cache sketch follows this list)
  • Only store data that is temporary and needed for computation in process memory, e.g. memoization
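For the caching point, a minimal read-through cache could look like this (the TTL, the names, and the `name` field on the model are illustrative, not from your code):

const Counter = require('./models/Counter'); // the mongoose model from your example

const cache = new Map();   // in-RAM cache, keyed by counter name
const TTL_MS = 5000;       // how long a cached value counts as fresh

async function getCount(name) {
  const hit = cache.get(name);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) {
    return hit.value;                            // fast path: served from RAM
  }
  const doc = await Counter.findOne({ name });   // slow path: hit the database
  const value = doc ? doc.count : 0;
  cache.set(name, { value, fetchedAt: Date.now() });
  return value;
}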

In your examples:

  1. Store the page/article view count in the database
  2. Store player positions in process memory (a tiny sketch follows)
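For the positions, something as simple as this is enough (the data shape is just an illustration):

// Player positions in process memory: a plain Map keyed by player id.
const positions = new Map();

function updatePosition(playerId, x, y) {
  positions.set(playerId, { x, y, updatedAt: Date.now() });
}

function removePlayer(playerId) {
  // When a player disconnects the position is simply dropped - nothing to
  // clean up in a database, which is why process memory fits this case.
  positions.delete(playerId);
}

updatePosition('player-1', 12.5, 7.25);
console.log(positions.get('player-1')); // { x: 12.5, y: 7.25, updatedAt: ... }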

In your specific case, what you’re talking about is physics, keeping it in sync over a network, and afaik there are three ways to deal with it (this is taken from some notes on my computer, apologies if I’ve missed anything). Say you have two players, and the two clients are connected:

You can run the simulation on one side at a time:

  • You keep the simulation (positions, velocity etc) on the client only, and send only the player inputs between the two players. So player one sends a command to turn left, and that’s reflected in the other player’s client. This is called deterministic lockstep, and what it depends upon is the physics simulation being deterministic, ie the game starts on either client and, given the same set of inputs, the engine behaves in exactly the same way (no tiny differences, exactly the same, because if there are even tiny differences the two clients will steadily diverge from each other). This means you don’t need to store data anywhere; it’s all on the clients. But floating point is a massive issue here. And latency is also an issue, because player2 can’t do anything until player1’s input comes in and vice versa.
  • You send a snapshot of the game state [the important bits, like where the cars are etc, not the entire game] from player1’s client to player2’s client and interpolate it with their game state, and vice versa. This is called, surprisingly, snapshot interpolation. And it means big packets of data (whereas deterministic lockstep means tiny packets of data). This is very effective, but you need strategies to make the data as small as possible, which is where the complexity comes in (otherwise it is super laggy). It has the advantage that if an update fails for whatever reason, then the next one can be used, or the next one, and so on; the game state should just be smoothly interpolated between previous and current. It has the disadvantage that it doesn’t know about the physics, so if there is a gap between updates, players will [more often than with other approaches afaik] see weird behaviour, like stuff going through walls as the cars update from one position to another. There’s a tiny interpolation sketch just after this list.
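A tiny sketch of the interpolation part (the snapshot shape and the timings are made up, just to show the idea): the client keeps the two most recent snapshots it has received and renders a position blended between them.

function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Each snapshot is assumed to look like { time, x, y } for a single car.
function interpolate(previous, current, renderTime) {
  const span = current.time - previous.time;
  // Clamp t to [0, 1] so we never extrapolate past the newest snapshot.
  const t = span > 0 ? Math.min(Math.max((renderTime - previous.time) / span, 0), 1) : 1;
  return {
    x: lerp(previous.x, current.x, t),
    y: lerp(previous.y, current.y, t),
  };
}

// Snapshots arrive ~100ms apart; the client renders slightly in the past so
// there is always a pair of snapshots to blend between.
const prev = { time: 1000, x: 0, y: 0 };
const curr = { time: 1100, x: 10, y: 5 };
console.log(interpolate(prev, curr, 1050)); // { x: 5, y: 2.5 }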

You can run the simulation on both sides at the same time:

  • Both clients send each other their state and inputs, and the game synchronises the state on each client based on that (state synchronisation). This has the massive advantage of things still running for each of the players between updates. Less data has to be sent, less often, than in snapshot interpolation, and the physics does not have to be deterministic like in deterministic lockstep. But what you lose is exactness: everything becomes an approximation.

So those are the three approaches, and they don’t need a server. But in practice they kinda do, because there generally needs to be some authority about what is correct behaviour in the game; latency is the killer, and just having two clients communicating directly is likely to result in a bad game experience (there are situations where it’s not an issue, eg RTS games, especially where it is one player’s turn, then another player’s, and it’s fine to just go back and forth between clients even on bad connections, as it has no real bad effect on gameplay).

So you have a server that knows the state of the game, and one of the three approaches described is used (or some combination thereof), with the server as kinda the referee in the middle. There are now an extra two hops for the data to make between the two players’ clients, but that can be masked on the client side using various tricks.

As for storing data: the game is effectively being played on the server. A database on its own is not really any use here, although there are some databases optimised to be very fast for game usage; they would be supporting the application code running on the server.
