I don't know what Docker is for or how it might help in my workflow
Whenever someone asks this I think:
Yeah, I don’t know, even though I use it.
I mean, you can always find an alternative to Docker, but the one thing most alternatives can't replace is that Docker lets you share the environment itself.
If you have a XAMPP environment, for example, you can't easily share it with your colleagues. Maybe a colleague uses another OS, doesn't know how to install or use XAMPP, or their configuration isn't compatible with yours. Docker solves this problem just by being installed, which doesn't require great technical skill (at least for development).
You could have a production environment ready in seconds and be able to test it on your machine or, even better, distribute that environment to your colleagues so everyone can test it.
With Docker (and Docker Compose) you can create the entire stack, configure the parts you need, and you're done; then you just share it. Doing it manually would require you to install the software, harden it (make it more secure), add the initial data (if required), and start and configure your app on each environment.
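As a sketch of what "the entire stack in one file" means, here's a minimal docker-compose.yml for a hypothetical PHP + MySQL setup (the service names, images, ports, and credentials are all illustrative, not from any real project):

```yaml
# docker-compose.yml -- hypothetical PHP + MySQL stack, for illustration
services:
  web:
    image: php:8.2-apache        # official PHP + Apache image
    ports:
      - "8080:80"                # host port 8080 -> container port 80
    volumes:
      - ./src:/var/www/html      # mount your code into the web root
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # dev-only credentials, never for production
      MYSQL_DATABASE: app
    volumes:
      - dbdata:/var/lib/mysql    # persist data between restarts
volumes:
  dbdata:
```

A colleague checks this file out of version control and runs `docker compose up`; no XAMPP install, no OS-specific setup steps.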
If you have more specific questions, please ask.
Anyway, I hope this helps.
Seriously though, Docker is the bee's knees. Or the cat's pajamas. Or something like that. Installation instructions to set up dev environments at Big Enterprisey Business Enterprises Inc used to look like this:
"If you're on Linux,
sudo add-apt-repository blahblah
then go ahead and hit the OK prompt after the key warning; that'll get fixed once you
apt-get update
… 20 more lines of installation instructions … For the staging site, be sure $ORACLE_HOME/network/admin/tnsnames.ora contains blahblah … 100 more lines about Oracle library configuration … Sorry, we don't have a config that works on Mac environments yet, we're still working on it."
Followed by pages more of how to set up your own DB instance configured for your dev environment. Then a section on how to glue it all together with Apache, god help you if you’re already using nginx for something else…
If you’re really lucky, someone already scripted a lot of these steps in a bash script or a Makefile or whatnot, but if your local config is just a wee bit different, you can watch this brittle chain of dependencies all crash and result in your coworkers wandering over to your cube saying things like “I can’t get my dev environment set up since you changed the library dependencies last month”.
My job as a build manager was to deal with all this crap. You know what the job consists of now?
git clone http://github.com/our-org/our-app
Now sit there and watch your entire dev environment build itself then spin up all the servers. Or just go grab coffee at the place down the block instead, wanna grab me a mocha while you’re there? Cool. Now I can get back to work as a developer, build management isn’t supposed to be my full-time job. On my end, that involves writing Dockerfiles, which is really just a super-primitive config file that runs scripts to put a system image together. Then I write a
docker-compose.yml file that puts multiple Docker images together: a DB server, a web server, and the API server. It starts them all up, gives them names on the network, and maps any network ports they use to ones on the local machine.
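To give a feel for how primitive (in a good way) a Dockerfile is, here's a sketch for a hypothetical Node API server; the base image, port, and filenames are assumptions for the example, not anyone's actual build:

```dockerfile
# Dockerfile -- hypothetical API server image, for illustration only
FROM node:20-slim           # start from the official Node base image
WORKDIR /app
COPY package*.json ./       # copy dependency manifests first, for better layer caching
RUN npm ci                  # install the exact locked dependencies
COPY . .                    # then copy the rest of the source
EXPOSE 3000                 # document the port the server listens on
CMD ["node", "server.js"]   # command run when the container starts
```

You could build and run it by hand with `docker build -t our-app .` and `docker run -p 3000:3000 our-app`, but in practice docker-compose does that for you as part of bringing up the whole stack.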
The interim solution between the utter hell that was manual config and the nuclear power tool of Docker used to be virtual machines. VMs are still useful in their own ways, but VM images get to be multiple gigabytes, and if the build process changes, it means starting over from the new image. Spinning up a half dozen VMs on a laptop is a tall order, and connecting them to each other is a tedious process. Docker does it all automatically and nearly instantaneously.
It's pretty kewl. And if you really want to get your mind blown, take a look at Kubernetes, which is kind of like docker-compose on a grand scale; it grew out of Borg, the system Google uses to run its own servers.