React Router issue with State not being fetched before loading

What’s happening: When I type a deep URL into the browser without first navigating to it from the main page, the page won’t render because it doesn’t have the data.

In my server logs I see a fetch sent to my API with a status code of 304, but if I do it in an incognito window, I get the error that others have reported:
Cannot GET /topics/glascow-coma-score

I’ve read this article, but it didn’t help, or I’m not doing it correctly.

For example, if you go here:

In the console you’ll first see an error because the state (resources) hasn’t loaded. Half a second later, the resources are populated in the console, but there is no rerender.

I’m using browser history, and on the server side I’m sending all uncaught requests to my index.html page. That should load the index page first, then react will route, and the state should be loaded.

I’m not sure if this is a component lifecycle error, if I have an error in my server file, or if I need to eject my create-react-app and do something with the webpack dev server.

I really don’t know; I’m just sifting through various things I’ve read online.

I’m not even sure that I’m on the right path, but I know the problem can be solved, because that’s one of the things react-router is supposed to let you do: link to deep pages, and with some magic, do some SEO as well.

Here’s the relevant server code:

app.use('/api/resources/', resourceRoutes)


app.get("/api", function (req, res) {
  res.json({api: "This is your api"})
})

app.get("/api/resources/*", function (req, res) {
  console.log("Index page sent")
  res.sendFile(path.resolve(__dirname, 'build', 'index.html'), function (err) {
    if (err) {
      res.status(500).send(err)
    }
  })
})
And the resources routes:

const express = require('express'),
  router = express.Router(),
  Resource = require('../models/resource')

router.get('/', function (req, res) {
  Resource.find({}, function (err, resources) {
    if (err) console.log(err)
    res.json(resources)
  })
})

router.get('/:id', function (req, res) {
  console.log("I'm here in the backend API getting " + req.params.id)
  Resource.findById(req.params.id, function (err, resource) {
    if (err) console.log(err)
    res.json(resource)
  })
})

router.post('/', function (req, res) {
  Resource.create(req.body)
    .then(function (newResource) {
      res.json(newResource)
    })
})

module.exports = router
And my repo

That article says that every route handled by react router should return the index.html. This means you should only have one route:

app.get('/*', function (req, res) {
  res.sendFile(path.join(__dirname, 'path/to/your/index.html'), function (err) {
    if (err) {
      res.status(500).send(err)
    }
  })
})
Then when react-router loads from that request, it will pull the actual url used to make the navigation.

You correctly redirected the /api/resources/* route. But that will only work when someone physically enters that route in the browser. If you type an /api/resources/... url into the browser, you will see it try to render the index.html page.

The browser can only send the url you enter, because it doesn’t know react and react-router are listening until it loads the index.html.

Which is why we have to hack it: we let the browser send the url and store it in its history, but we always return the index.html so that react-router can handle the navigation.

I can sort of follow what you’re describing, but if we redirect all server requests to the index.html file, how does the server ever send the JSON data from the API?

And thank you for the explanation!

Since you’re using react-router to create a single-page app, all the rendering logic will live on the client side. This means the client must handle requesting any data, including the initial data.

The current best practice is to fetch data in componentDidMount. This method is only called on the initial mount, so it’s the best place to request any data you need.
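As a minimal sketch of that pattern (React is stubbed here with a bare-bones base class so the snippet runs standalone; in your app you’d extend React.Component, and fetchResources is a hypothetical prop wrapping your client-side api call):

```javascript
// Stand-in for React.Component so this file runs without React installed.
class Component {
  constructor(props) {
    this.props = props
    this.state = {}
  }
  setState(update) {
    Object.assign(this.state, update)
  }
}

class App extends Component {
  constructor(props) {
    super(props)
    // start with an empty list so the first render always has something
    this.state = { resources: [] }
  }

  componentDidMount() {
    // called once, after the initial mount: the right place to request data
    return this.props.fetchResources().then((resources) => {
      this.setState({ resources })
    })
  }
}
```

Returning the promise from componentDidMount isn’t required by React, but it makes the method easy to await in tests.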

When you use routes with react-router, you have to treat each route as if it actually is an index.html page. In other words, you have to mount your components and fetch data as if it was your root url.

This is in line with how static pages work without react, where the server sends a new html page. The only difference is that react handles all the html now.

Let me try to explain a bit the logic behind react-router in case you’re still not clear.

  • the browser makes a request to a dns server to fetch a url
  • the browser stores this url in browserHistory
  • your server returns an html page (your index.html)
  • when the page loads, it mounts react (because your html page says to)
  • react loads and calls react-router to render a route
  • react-router now looks at your browserHistory and renders the last route added

The issue you ran into is that the server was sending an html file, but only for the api route. That meant the address had to be called directly, either by the user in the browser, or by you through code.

The best solution is to redirect all react-router routes to /* (these are the urls customers will use) and call your api directly in componentDidMount.

Keep in mind that express, like react-router, returns the first matching route. So make sure your catch-all is the last route defined.
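The ordering rule can be sketched with a toy first-match router (this is not Express itself, just an illustration of why the catch-all has to be registered last):

```javascript
// Toy first-match router: like express, the first registered route that
// matches the url wins.
const routes = []
const get = (pattern, handler) => routes.push({ pattern, handler })

function handle(url) {
  for (const { pattern, handler } of routes) {
    // a trailing '*' means "match this prefix plus anything after it"
    const prefix = pattern.endsWith('*') ? pattern.slice(0, -1) : null
    if (url === pattern || (prefix !== null && url.startsWith(prefix))) {
      return handler(url)
    }
  }
}

// api routes first, so data requests are answered with json...
get('/api/resources/*', (url) => 'json for ' + url)
// ...and the catch-all last, so every other url gets index.html
get('/*', () => 'index.html')
```

If the catch-all were registered first, every request (including api calls) would receive index.html, which is exactly the symptom of a misordered express app.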

You’re welcome! I know it seems like a song and dance just to render a route, so let me know if there’s anything else I didn’t make clear enough.

OK, so my resources routes that serve the JSON data at /api/resources are what the App component will call through the fetch in my api.js file (client side), and everything else is the “/*” path that goes back to index?

I think that’s what I’m doing here, but it still isn’t working:

My heroku deployment is up to date with the GitHub commit, so you can test it as well (if you like and have time!)

Also I’m thrilled I got my routes and some functional JS working to display categories correctly!

Finally… this is essentially a SPA since it’s all loaded up front. How well will this scale if I have 1000+ resources? (Off topic from this thread, I know.)

Your resources routes are catching your root, so the request never makes it to your catch-all:

router.get('/', function (req, res) {
  Resource.find({}, function (err, resources) {
    if (err) console.log(err)
    res.json(resources)
  })
})

Try removing it and see if it works. If it doesn’t, let me know if you’re ok with me deploying it to a free heroku instance to troubleshoot it tomorrow. I’ll need to do this so that I can tweak the code on my end; this way you won’t have to give me access.

Nice work. Your code looks clean on first scan through. You also have a nice separation of concerns.

The first question to ask when you think about scaling is, “do I have to access all that data at the same time?”

Usually the answer is a resounding no. So if you want it to scale with thousands of resources, only send the required resources for initial mount. Then load the rest lazily while the user scans the page.
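One way to sketch that lazy loading (the fetchPage callback and the limit/skip paging params are assumptions here; your api would need to support them):

```javascript
// Lazy loading sketch: grab the first page for the initial mount, then
// keep fetching pages in the background until the server runs out.
// fetchPage is a hypothetical callback wrapping your api, e.g. a request
// to /api/resources?limit=...&skip=... if you added paging server-side.
async function loadAll(fetchPage, pageSize) {
  const resources = []
  let page = 0
  while (true) {
    const batch = await fetchPage({ limit: pageSize, skip: page * pageSize })
    resources.push(...batch)
    if (batch.length < pageSize) break // last (partial or empty) page
    page += 1
  }
  return resources
}
```

The first await resolves quickly because it only carries pageSize documents; the rest arrive while the user is still scanning the page.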

Time to first paint is the most important metric in whether a user perceives your site as slow. On average a person spends at least 1-2 seconds on a page before navigating away, which gives you plenty of time to decide what to begin loading in the background.

Also, round trips to a server are extremely quick. So if you separate your data to minimize payload size and the number of requests, you’ll be on the right track.

Architecting a webapp will always center around how you structure your data tbh.


In the server file I have this line, so the “root” route in my resources.js routes file is actually /api/resources. But I see what you’re saying. I’ll play with it a bit and feel free to play with it on your end when you have time.

app.use('/api/resources/', resourceRoutes)

Nice work. Your code looks clean on first scan through. You also have a nice separation of concerns.

Thank you very much! This is probably my 10th or 12th React app, but by far the most involved up to this point. I feel like last year I was literally crawling with react, like using react to show a static single page that didn’t really need react, but I could write it. Then I bought an Andrew Farmer ebook that was really great, and I made half a dozen more simple apps. Finally I took Colt’s bootcamps, and I feel like the pieces are starting to come together… yet everything I want to add for this real-life application is so tedious! Lots of things going on all at once.

Architecting a webapp will always center around how you structure your data tbh.

Again, that sounds like a piece of wisdom learned through experience! Before med school I worked as a database analyst for a small healthcare software company. It was all SQL database work, and it was fun. So learning mongo / non-relational DBs was a huge eye opener for me, and is as much freeing as it is challenging. So far I’m loving it.

BTW if you pull down a local repo, there is a resources.json in the data directory that you can pull in, or I can PM you the .env variables for the database.

I’ve updated my issue here. I can now link directly to a single nested route, but any deeper than that and I get an error. The index.html page (App.js) is still loading, but not until after the linked page/route tries to update, and I get an error.

Can anyone help with this question on the MERN stack / react-router? I’ve been told it’s a server (i.e. node) issue, but I’ve tried all the fixes I’ve read and still no luck. Details and code in the question:

This isn’t as simple as just troubleshooting your code. So I’m downloading a repo and troubleshooting this today. Should get back to you shortly.

Thank you! I’m tearing my hair out over this, thinking it may be other issues, trying to code minimum viable code samples and breaking things!

Ok, so I have a working version of your app with a working development environment. This should help you feel more secure in what’s going on with your app.

First I’ll answer your question.

This is because you didn’t guard for the “unhappy path”: the path the code takes when you don’t get what you want.

/*
 * In react, it's better if we ALWAYS return something.
 * Even if it's just an empty object.
 * This way you always have something to render.
 */
const resource =
  this.props.resources.find(({ friendly }) => friendly === val) || {}

/*
 * You can then check if it's available to log
 * or display a message to the user
 */
if (!resource.type) {
  console.log(
    `No match found for id: ${val} with these resources:`,
    this.props.resources
  )
}

This fixes this particular issue, but it results in others in your Resource component. Which leads me to the next section.

Before moving on, download my updated repo at GitHub - JM-Mendez/emquick: quick resources for emergency medicine students/residents/physicians/pas/nps/rns, and see if you’d find this structure useful.

Everything works from environment variables, but I set up git not to check in the .env or the database files. So they have to be recreated on each machine. This keeps your api secrets private.

To actually see the full benefit, my suggestion is to take 15 minutes and set it up according to these 5 steps:

I’m going to guess it all seems daunting at first, but once you set it up, you won’t have to worry about it any longer. I’m willing to help you every step of the way; you can PM me if you’d like as well.

Sweet, thank you so much for this. I’ve cloned your repository and am going through the 5 steps. Looking forward to having a new way to work on software to make cool stuff. I’ll update (or PM) when I’m ready to move on, although I suppose others can benefit, so I’m happy to keep posting in this thread.

OK, it’s running and logging stuff!

Why not include the debug tools in the package.json as opposed to installing them later?

These are not in your version of my server file. Where do I connect to my db or choose local vs mlab?

const db_url = process.env.MLAB_URI

OK, finally, I’m not sure what to do with the logger.js file. Do I use one copy for each component I’m working on?

They are in the package.json. What I meant was that if you only wanted to use logger.js, and not the full boilerplate, you could install them directly.

I didn’t know you needed two separate databases.

The way this setup works is that

  • you use a local .env file with your local url in the DB_URL env var
  • in production, the code will use whatever .env file or environment variables you supply for DB_URL
  • mongoose will connect to the correct url according to the .env file

This is so that the code you write in development is the exact same code you deploy. Only your environment variable values change.
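A sketch of that connection logic (the DB_URL name follows this thread’s convention; the localhost fallback value is an assumption):

```javascript
// The same code runs everywhere: only the value of DB_URL changes.
// In development, a local .env file supplies it; in production, the host
// (e.g. heroku) injects it as a real environment variable.
function getDbUrl(env) {
  // fall back to a local dev database if nothing is configured
  return env.DB_URL || 'mongodb://localhost:27017/emquick-dev'
}

// in the app you'd then do something like:
//   mongoose.connect(getDbUrl(process.env))
```

Because the lookup is a pure function of the environment, swapping databases never requires a code change, only a different env var value.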

If you need to swap between databases for your app, then describe the workflow you need, I’ll put in a fix for it.

If you just need a development database and a production one, the module will default to your environment vars.

Ideally, yes, since you’d want to add logging to each file. And since you decide what to see, you can throw them in anywhere you think you’d need to know what’s going on.

Months later, when you wonder what’s going wrong, you’ll be glad you did it, because it will help you follow the flow of your code without resorting to random console.logs.

So for each file:

//                                           |-- this is the namespace that shows up
const log = require('path-to-logger.js')('Topics')
log.info('....')

// then in the console you'd see
Topics:info .....

The namespace you assign determines whether it shows when filtering. I prefer to use this format:

type-file or just file

So if I know I’m working with routing and want to see that in the logs, I’d require like this

const log = require('path-to-logger.js')('routes-topics')

// which gives
routes-topics:info .....

And if you want to shut off the logging you have a few choices

# show nothing. Use for production
DEBUG=

# only show Topics
DEBUG=Topics:*
# show all except Topics, using wildcard for all (*), and exclusion sign (-)
DEBUG=* -Topics:*

# only show errors for Topics, and all from Resource
DEBUG=Topics:error Resource:*

# You can filter even more by logging level

# no level filter

# only show above level info (the precise values are in the .env sample file)

# the above will show everything at or above info level

# but will not show

If you’d like, I could write up some documentation on all the most important parts. I was waiting to see if you wanted to use it first.

I should have looked in the package file! I was just quickly following the steps after my work shift, and by that time my brain was tired.

RE DBs: I don’t “need” two DBs, but I guess it depends how you think about it. Here is what I have been doing with this particular app. I started with a local db and a seed file while I was developing the structure of each topic. Once that was fairly settled, I seeded my local DB and then just used those 3 topics.

I got excited and wanted to see if I could deploy this, and so moved to a mlab db, seeded it, then spent about a solid week on the “add new” components, so I could start adding topics to the DB.

So on my development machine, I have been using the mlab / remote db because it has more topics and because I can start adding topics which can remain a permanent part of my db as I continue to work on features.

So, it wasn’t clear to me where in my app that connection would occur. I added my mlab db string in the .env file. But for this specific app, because of where I currently am with development, I want to keep adding topics that come to me as ones I’ll use often, so I can have a substantial amount of resources and calculators ready when I’m ready to share it with the world (as well as use it myself). With mlab, I can pull up my heroku app at work, and even though it’s not production ready, I’m the only one using it.

I am sure there are also “proper” ways to work around this, like deploying a development database to production when ready, that I have yet to learn (it seems like it should be simple though?).

I was confused by not seeing any mongoose calls in the server.js file. Now that I look at the code carefully, I see that you’ve created a DB Helpers file where those actions take place.

I still don’t fully understand what docker and kitematic help me with, so when I saw these lines I assumed they were specific to some use of those tools, as opposed to being my actual database.

# Database docker variables

I tried renaming the DB name to match my local DB that’s currently populated, but it didn’t pull up the resources db. I also added my mlab url and uncommented that line in the database.js file, but am unable to connect to mlab at this point.

If I’m understanding one of the functions of docker, it should start up mongod for me so I don’t have to worry about that? And then kitematic should let me browse that DB?

The logging info does seem helpful as well; I just need more time to sort out what I’m seeing in the browser logs vs the server-side logs. As I look through the code and view the app (minus data), it’s becoming clearer to me.

I guess my immediate questions are:
- how do I connect to my existing local db OR seed a new db with my seed file? (I’ve reviewed the code in seeds.js and am still trying to grok it)
- how can I use my mlab db in development and/or deploy my local db to production when ready to deploy?
- what does docker do again? lol

Alright! Hot stuff! I got the local DB set up and working with the seeded data. Very cool. I’m getting to like the debug tools as well; it’s like console.log on steroids. It’s still up to the programmer to insert the debug statements where indicated, but it’s so much more informative.

When I try to connect to the mlab db, however, I get a “URL Malformed” error. I’ll keep troubleshooting.


Wow! I was not expecting you to jump straight in! Sorry I didn’t give you all the info right away. Right now I’m going to answer your questions, but later I’ll sit down and write up some documentation to better guide you.

I updated the .env.sample to better match your requirements. Which means I also had to update other files, so you’ll have to clone my updated master. GitHub - JM-Mendez/emquick: quick resources for emergency medicine students/residents/physicians/pas/nps/rns

I’m not sure if you’re using git (I recommend it), but for now it’s not important. First let’s get you set up properly. Then I’ll walk you through git. My main goal isn’t to make you an expert in these areas, but to get you productive. So I’ll only show you as much as you will need.

In the updated files, you now set your DB_HOST, DB_PORT, and DB_NAME. This is how you’ll connect to either your local (docker database), or your mlab one. The sample has the docker defaults, but you can change it to match your local install.

The beauty of having a dev setup is that you can connect straight to your production environment from it, without having to change much besides the env vars.

And if you use git, your secrets will never upload if you exclude them in the gitignore file. I already set it up so that your env file never gets uploaded.

So the way I’d work, if it were me: I’d do all my work with the local database. Then, when I feel the database is set and have verified that it seeds properly, I’d change the env vars to mlab and seed the new documents.

This should go smoothly since your development environment matches your production environment.

I wasn’t clear on this earlier, but if you set the SEED env var to the name of a model in your models folder, it will seed the array in resources.json according to the model you chose.
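A guess at what that seeding step looks like internally (the function name, model lookup, and json shape here are assumptions based on the description, not the boilerplate’s actual code):

```javascript
// Env-driven seeding sketch: SEED names a model, and the seeder picks the
// matching array out of resources.json. Everything here is illustrative.
function pickSeedData(env, seedJson, modelNames) {
  const name = env.SEED
  if (!name) return null // SEED unset: don't seed anything
  if (!modelNames.includes(name)) {
    throw new Error('SEED does not match any model: ' + name)
  }
  return seedJson[name] || []
}
```

Failing loudly on an unknown model name keeps a typo in the env var from silently seeding nothing.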

If you detail how you’d like to organize the resources you’d like to seed, I can write it up in a way where you can seed from the env vars.

docker is software that isolates a container from the rest of the machine. It’s useful because you can think of each container as its own little computer. And in that computer you can run a server, load a database, even install a different OS.

All that docker is doing for you at the moment is setting up a mongodb in one container, and nosqlclient in another. And it will launch and destroy them reliably to keep the work area clean.

By the way, docker will store the database files in /docker/mongodb_data. This also will not get uploaded to github.

Once your containers launch, you can navigate to localhost:2000 to view your admin board. You’ll have to create a connection to the database you created if it’s your first time. The hostname is always MONGO so that docker can talk between containers. The format is:

hostname: MONGO
port: process.env.DB_PORT
database: process.env.DB_NAME

From there you can inspect your database, similar to how you can in mlab.

Kitematic is just an app that lets you manage your containers. You can start, stop, remove, etc. You will only need to use it if for some reason your containers stop working. You’ll be able to see the logs there. Otherwise, just keep it in the back of your head for now.

Keep in mind that you don’t have to use docker if you don’t want to. You can spin up your own local db and connect to it. The boilerplate just makes it so you don’t have to do it manually each time.

Hopefully this answers your questions. I’ll write up some documentation later detailing these steps. And as always, feel free to contact me. Happy coding!