What you’re describing is essentially an integration problem. Everything complicated that humans build runs into this issue at some level and at some point. You’re running into issues with the actual integration of all the different pieces. There has been significant progress on how to tackle this problem in a software environment, and the key part is that software is cheap to integrate. If you were building houses, every “build” would cost physical resources, but software just requires CPU cycles on some machine and some time. It’s cheap, but it requires coordination and commitment to get to that point.
Generally it sounds like you currently have the following:
- Dev-server everyone works on at the same time (!!!)
- No source control
- Manual deployments
Each of these has its own solution, and the solutions build on each other, so your approach should address each one in turn. The general idea behind each solution is to act as a “filter” that protects your end users from pain in production (i.e. the madness you mention).
First, I’d look into how/where you want your developers to run their own builds locally. This will prevent developers from stepping on each other’s toes on the single dev server. (Unless I misunderstood and each developer already has their own dev server, in which case you’ve already mostly done this step.)
You ask about how to manage local environments, and Docker is one option. Virtual machines are another. The simplest is actually rock-solid documentation: write down how to set up a local environment, and have all your developers go through it. If someone sees a problem with the documentation, they can edit it. Over time the process for setting up a local environment becomes extremely well documented. It takes time, but it will be worth it.
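As a rough illustration, the documented setup might eventually boil down to a handful of steps like the sketch below (every script and file name here is a placeholder for whatever your stack actually uses):

```sh
# Hypothetical local-setup checklist; all paths and script names are placeholders.
# 1. Get the code (from source control once you have it, or a copy for now)
# 2. Install the runtime and dependencies
./scripts/install-deps.sh           # placeholder: whatever installs your dependencies
# 3. Configure local settings
cp config.example.ini config.ini    # placeholder: local DB/connection settings
# 4. Run the app locally
./scripts/run-dev.sh                # placeholder: starts the app on localhost
```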
You could also try to Dockerize your entire dev environment, but this may or may not be easier depending on how much experience your team has with such technologies. Regardless, the docs you wrote earlier can be used as a starting point.
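If you go that route, a minimal Docker Compose file is often enough to get started. This is just a sketch assuming a web app plus a database; the image names, ports, and paths are placeholders:

```yaml
# docker-compose.yml (hypothetical sketch: adjust services to your stack)
services:
  app:
    build: .                  # assumes a Dockerfile describing your app's runtime
    ports:
      - "8080:8080"           # placeholder port
    volumes:
      - .:/app                # mount the source so local edits show up immediately
    depends_on:
      - db
  db:
    image: postgres:16        # placeholder: swap for whatever database you use
    environment:
      POSTGRES_PASSWORD: devpassword   # local-only credentials
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

Then a single `docker compose up` gives every developer the same environment from the same definition, instead of everyone hand-building their own.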
Alternatively you could use something like GitHub Codespaces, which lets developers develop on a machine that is imaged and pre-set-up as a service! A similar tech is code-server, which you can host yourself on-prem. However, these usually rely on having external resources. If your developers have their own machines, having them run everything locally through a mix of documentation + Docker might be the most sensible.
Second, source control like git + GitHub will allow you and your team to manage changes within the codebase over time. Manual workflows are easier to start with, and git can get nasty when conflicts arise… but that’s the point: if you’re going to have issues, dealing with them as early in the process as possible will help your team manage them, rather than later when things are more complex and costly.
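Day to day, a flow might look something like this (the branch name and commit message are placeholders; the exact branching model is up to your team):

```sh
# Hypothetical feature-branch flow; names are placeholders.
git checkout -b fix-invoice-rounding    # work on an isolated branch
# ...edit code...
git add .
git commit -m "Fix rounding error in invoice totals"
git push -u origin fix-invoice-rounding # then open a pull request on GitHub for review
# after review/approval, merge into the main branch and delete the feature branch
```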
Third, once your developers are able to work locally and keep track of their changes with version control, your codebase should be much more manageable and traceable. However, even if your code is clean, you still need to manage deployments. This is where a “CI/CD” pipeline helps, or more generally, automation. CI/CD stands for continuous integration and continuous delivery. Most people use the term but only focus on the CI part, and you’d likely do the same at first.
Traditionally this could be something like using Jenkins, or newer options like GitHub Actions.
In general this means writing code to test/check your own code. That requires extra effort and more resources, so you could start with the basics: automate your pipeline to deploy to a test/dev server, where developers/QA can check that their changes work as expected once they are integrated with everyone else’s. The goal, again, is to catch issues before they reach production/end users.
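For example, a bare-bones GitHub Actions workflow might look like the sketch below. The test and deploy commands are placeholders for whatever your project actually uses; the point is that every push to the main branch gets checked and pushed to the test server automatically:

```yaml
# .github/workflows/ci.yml (hypothetical sketch: swap in your real build/deploy steps)
name: CI
on:
  push:
    branches: [main]
jobs:
  build-and-deploy-to-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: ./scripts/run-tests.sh        # placeholder: your test command
      - name: Deploy to test server
        run: ./scripts/deploy.sh test      # placeholder: your deploy script/target
```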
What do you mean by “users files”? Usually you either don’t track such files in git at all, or you use something like git-lfs to keep track of larger binary files (like images). If this is actual dynamic end-user content, you could use other technology to keep track of changes rather than relying only on git.
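If git-lfs does fit your case, the setup is small. A sketch (the tracked file patterns are placeholders for whatever large assets you actually have):

```sh
# Hypothetical git-lfs setup; the tracked patterns are placeholders.
git lfs install            # one-time setup per machine
git lfs track "*.png"      # route matching files through LFS instead of plain git
git lfs track "*.psd"
git add .gitattributes     # the tracking rules live in .gitattributes, so commit it
git commit -m "Track large binary assets with git-lfs"
```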
Documentation + technology. Effort put into improving these two aspects pays for itself by cutting down on inconsistencies and speeding up onboarding. There is a DevOps principle that the most important work to do is work that cuts down on future work. Documentation and automation are key to that principle.
The workflow for your codebase should be the same. Complex code changes aren’t much different from simple code changes; your workflow pipeline should handle both with the same level of ease. However, it’s worth mentioning that the human element (reviewing, writing code) should focus on keeping changes as small as possible over time, since smaller changes/commits/pushes/builds/deployments are easier to debug and keep track of than bigger ones. Your build + deployment pipeline won’t care much about what you’ve changed or how much, but your humans probably will.
If you can automate deployments to a single dev server, then you can automate them to any number of environments. I highly recommend keeping a manual review step in the pipeline before production. My team personally uses a single button in a manual workflow to take a given git tag and “promote” it to production, using the same pipeline/steps that got it to the earlier environments.
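In GitHub Actions terms, that manual “promote” button could look something like the sketch below (the tag input and deploy script are placeholders, and you could layer a protected environment requiring approval on top):

```yaml
# .github/workflows/promote.yml (hypothetical sketch of a manual promote-to-prod button)
name: Promote to production
on:
  workflow_dispatch:
    inputs:
      tag:
        description: "Git tag to promote"
        required: true
jobs:
  deploy-prod:
    runs-on: ubuntu-latest
    environment: production               # can be configured to require manual approval
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.tag }}   # deploy exactly the tagged commit
      - name: Deploy to production
        run: ./scripts/deploy.sh prod     # placeholder: same deploy script as the test stage
```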
Finally, it’s possible you’re already using it, but I want to throw it out there: the cloud could be used to manage your dev/test/prod servers rather than having everything on-prem. This usually means you can define your infrastructure as code (IaC) to a degree, which also helps you manage versions of your infrastructure over time, just as you would your own code. However, it’s possible this is overkill and your on-prem setup is working fine; if so, stick with it, but do look into the cloud later.
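Just to make “infrastructure as code” concrete, here is a tiny, hypothetical Terraform-style sketch (assuming AWS purely for illustration; every value is a placeholder):

```hcl
# Hypothetical IaC sketch; the AMI id, instance type, and names are placeholders.
resource "aws_instance" "test_server" {
  ami           = "ami-0123456789abcdef0"   # placeholder image id
  instance_type = "t3.small"
  tags = {
    Name        = "test-server"
    Environment = "test"
  }
}
```

The point is that the definition of the server lives in a file you can review, diff, and version, exactly like application code.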
Hopefully that helps at least a little. Good luck, keep building, keep learning!