How to improve our current dev and prod workflow using git

I’m trying to come up with a solution that would help us alleviate constant issues within our company.

We have dozens of websites that are not very consistent: their file structures differ and they require different PHP versions (5.3, 7.0, and 7.4).

When we need to make changes to a project, we usually modify files stored on our dev server (Windows), where Apache, MySQL, and PHP are running.
Our projects are then available at http://{PROJECT_NAME}.example.com.

The pros are that all projects are served by the dev server, so developers don’t need to waste time installing and configuring Apache, MySQL, and different versions of PHP on their machines. If something needs to be modified, they simply make the change directly over the LAN and can immediately see the result.

The cons are that it’s very difficult to keep track of changes, especially when multiple people are involved. When we work on an old project for a few weeks or months, there are so many changes across so many files that it’s very easy to forget to upload something.
Some of our developers also tend to download files to their own computers, forget to upload the changed files back to the dev server, and then, in a hurry, upload them directly to the prod server. That leads to complete madness when we later try to synchronize the dev server with the prod server.

Currently, we have two prod servers (Linux and Windows) and transfer files via FTP.

I would like to set up a git server, but I haven’t figured out a way to overcome these issues:

  1. Some of our projects are quite large and have a lot of data (user files), 20 GB+ (.gitignore)
  2. How to deal with different PHP versions and their configurations? (custom installations and configurations or docker)
  3. Keeping the workflow as simple as possible for minor changes, e.g. replacing a logo, changing an address, etc.
  4. Our clients should still be able to access the dev server and look at the work in progress before everything is pushed to the master branch

What you’re describing is essentially an integration problem. Everything complicated that humans build runs into this issue at some level and at some point: you’re hitting problems with the actual integration of all the different pieces. There has been significant progress on how to tackle this in a software environment, and the key insight is that software is cheap to integrate. If you were building houses, every “build” would cost physical resources, but a software build just needs CPU cycles on some machine and some time. It’s cheap, but it takes coordination and commitment to get to that point.

Generally it sounds like you currently have the following:

  1. Dev-server everyone works on at the same time (!!!)
  2. No source control
  3. Manual deployments

Each of these has its own solution, and the solutions build on each other, so your overall approach should address them in order. The general idea behind each one is to act as a “filter” that protects your end users from pain in production (i.e. the madness you mention).


First, I’d look into how/where you want your developers to run their own builds locally. This prevents developers from stepping on each other’s toes on the single dev server. (Unless I misunderstood and each developer already has their own dev server, in which case you’ve already mostly done this step.)

You ask how to manage local environments, and Docker is one option. Virtual machines are another. The simplest is actually rock-solid documentation: write down how to set up a local environment and have all your developers follow it. If someone sees a problem with the documentation, they can fix it. Over time, setting up a local environment becomes extremely well documented. It takes time, but it will be worth it.

You could also Dockerize your entire dev environment, though whether that’s easier depends on how much experience your team has with the technology. Either way, the docs you wrote earlier can serve as a starting point.
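For a rough idea of what that could look like (this is only a sketch, not your actual setup: the image tags, port, and paths are assumptions, and a very old version like PHP 5.3 may need a custom or third-party image since the official ones don’t go back that far), a single project could carry a small docker-compose file pinning the PHP version that site needs:

```yaml
# docker-compose.yml for one hypothetical project
services:
  web:
    image: php:7.4-apache        # or php:7.0-apache for an older site
    ports:
      - "8080:80"                # the site is then at http://localhost:8080
    volumes:
      - ./:/var/www/html         # mount the project source into Apache's docroot
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:
```

Then `docker compose up` gives any developer that project’s environment on their own machine, without touching the shared dev server.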

Alternatively, you could use something like GitHub Codespaces, which lets developers work on a pre-imaged, pre-configured machine as a service! A similar tech is code-server, which you can host yourself on-prem. However, these usually rely on external resources. If your developers have their own machines, having them run everything locally through a mix of documentation + Docker might be the most sensible option.


Second, source control like git + GitHub will let you and your team manage changes to the codebase over time. Manual workflows feel easier, and git can get nasty when conflicts arise… but that’s the point: if you’re going to have issues, dealing with them as early in the process as possible helps your team manage them, rather than later when things are more complex and costly.
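Day to day that mostly looks like one small branch per change; the remote and branch names below are purely illustrative:

```sh
git clone git@git.example.com:company/myproject.git
cd myproject
git switch -c fix/replace-logo        # one small, focused branch per change
# ...edit files, test against your local environment...
git add assets/logo.png
git commit -m "Replace outdated logo"
git push -u origin fix/replace-logo   # then open a merge/pull request for review
```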


Third, once your developers can work locally and track their changes with version control, your codebase should be more manageable and traceable. Your code might be clean, but you still need to manage deployments. This is where a “CI/CD” pipeline, or more generally automation, helps. CI/CD stands for continuous integration and continuous delivery; most people who use the term focus only on the CI part, and you’d probably do the same.

Traditionally this would be something like Jenkins; newer options include GitHub Actions.

In general this means writing code to test/check your own code. That takes extra effort and resources, so you could start with just the basics: automate your pipeline to deploy to a test/dev server where developers/QA can check that their changes work as expected after being integrated with everyone else’s. The goal, again, is to catch issues before they reach production and end users.
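As a sketch of what “just the basics” might look like with GitHub Actions (the branch name, paths, and secret names are made up, and the deploy step is plain rsync over SSH; swap in whatever matches your servers):

```yaml
# .github/workflows/deploy-dev.yml (hypothetical)
name: Deploy to dev server
on:
  push:
    branches: [develop]            # every merge to develop refreshes the dev site

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Very light "CI": fail the run if any PHP file has a syntax error.
      # (The runner's PHP version may differ from the project's.)
      - name: PHP lint
        run: find . -name '*.php' -print0 | xargs -0 -n1 php -l

      # Copy the checked-out code to the dev server over SSH.
      - name: Deploy via rsync
        run: |
          echo "${{ secrets.DEV_SSH_KEY }}" > key && chmod 600 key
          rsync -az --delete --exclude '.git' --exclude 'key' \
            -e "ssh -i key -o StrictHostKeyChecking=no" \
            ./ deploy@${{ secrets.DEV_HOST }}:/var/www/myproject/
```

With something like that in place, nobody has to remember which files to upload; whatever gets merged is what ends up on the dev server.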


What do you mean by “user files”? Usually you don’t track that kind of file in git at all, or you use something like git-lfs to keep track of larger binary files (like images). If this is actual dynamic end-user content, you could use other technology to keep track of it rather than relying on git alone.
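As a sketch (the directory names are just guesses at where your uploads live): keep user uploads out of the repo entirely, and only reach for git-lfs if some large binaries genuinely belong in version control (your git server also has to support LFS):

```sh
# Keep user-uploaded content out of git entirely
cat >> .gitignore <<'EOF'
/uploads/
/user_files/
EOF

# Only if some large binaries really must be versioned:
git lfs install
git lfs track "*.pdf"
git add .gitattributes
```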

Documentation + technology. Effort put into improving these pays for itself by cutting down on inconsistencies and speeding up onboarding. There is a DevOps principle that the most important work to do is work that cuts down on future work; documentation and automation are key to that.

The workflow for your codebase should be the same either way: complex code changes aren’t much different from simple ones, and your pipeline should handle both with the same level of ease. It’s worth mentioning, though, that the human element (writing and reviewing code) should focus on keeping changes as small as possible, since smaller changes/commits/pushes/builds/deployments are easier to debug and keep track of than bigger ones. Your build + deployment pipeline won’t care much about what you’ve changed or how much, but your humans probably will.

If you can automate deployments to a single dev server, then you can automate them to any number of environments. I highly recommend keeping a manual review step before anything goes to production. My team uses a single button in a manual workflow to take a given git tag and “promote” it to production, using the same pipeline/steps that deploy to every other environment.
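The promotion step itself can stay tiny; as a rough sketch (the tag name is just an example), the pipeline, or a manual button such as a workflow_dispatch trigger in GitHub Actions, watches for a tag and deploys that exact commit:

```sh
git tag -a v1.4.2 -m "Release 1.4.2 to production"
git push origin v1.4.2    # the deploy job picks this up (or a human presses the button)
```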


Finally, it’s possible you’re already using it, but I want to throw it out there: the cloud could manage your dev/test/prod servers instead of having everything on-prem. That usually means you can define your infrastructure as code (IaC) to some degree, which also lets you version it over time, just like your application code. It may well be overkill, though; if your on-prem setup is working fine, stick with it and look into the cloud later.

Hopefully that helps at least a little, good luck, keep building, keep learning :+1:


Thank you for the reply!

You are right with these:

  1. Dev-server everyone works on at the same time (!!!)
  2. No source control
  3. Manual deployments

By user files, I meant files uploaded by the client/users. E.g. we work on a project where there are thousands of PDF files.

I feel like using Docker would be overkill for now, so it might be easier for our developers to install and set up their own dev server locally.
It’s not such a big deal to write good documentation.

We’ll need to get used to working differently, but these changes will make our lives easier in the future.

Thank you for the tips!

Hahaha, that’s a great joke!

Seriously though, good documentation is time consuming but very important.

