Ideas for managing workflow in a test environment?

So I have set up a web development (PHP, JS) testing environment on a Raspberry Pi 3 running LAMP. I'm using the Pi because it can just sit on the local network using almost no power.
However, it feels like I'm missing a step when it comes to actually editing the files Apache is serving. By that I mean: I want to sit in the living room and write code in my IDE (Sublime at the moment) on my laptop. Currently that involves keeping all the files locally on my laptop, making changes to whatever, FTPing the files to the Pi, SSHing into the Pi, moving the files to their respective locations under Apache, and then overwriting the existing file(s).

That just doesn’t seem like the most efficient way to manage it, but I haven’t been able to google up a better process. For little things like typos I can just use vi/nano on the host directly, but I don’t want to remote in and run an IDE on the Pi.

In short, is there a better way to work on the host files remotely?

Flightplan is great for this. You can set up different plans for testing, staging, deployment, or anything else you can think of. Then, using NPM scripts or gulp tasks or whatever you want to use, you can update your remote server using a single command. You could even use something like npm-watch to execute your plan whenever you make a change to your source.


Why not just install a local server stack on your laptop and do all your development and testing there? Edit the files locally, preview them locally.

Then just deploy on your Raspberry when it’s done. (Treat your RPI as your “production server.”)


I’ll look into Flightplan, that’s kinda what I was envisioning. Preferably I would like to be able to simply edit the hosted files directly, remotely…

When it comes to real web development work, what is the most common approach to updating existing sites? Do developers typically use something like AMPPS on their local machine and then deploy everything to the host when they’re done?

That’s what I do… MAMP (I’m on OSX), or IIS/ASP.NET/SQL Server running on a Windows virtual machine. Or a dedicated test server (a duplicate setup of the production server).

If you’re continually touching the production site, then all the changes/errors/half-finished pages you make will be seen by visitors to the site.

And there are features you can’t finish developing in a few minutes… what are you gonna do? Leave the production site in that limbo state? I guess if you’re the only visitor to the website it wouldn’t matter. But if your site serves hundreds of thousands of sessions every day, you can’t let that happen.

Do all your development, testing on your local environment and just deploy to production (i.e. live server) when it’s all tested, done and ready (and approved/authorized by client).


Hey @dennisavo I’m going through this myself right now. My setup is similar and different.

Because I’ve started to work on the host Digital Ocean, and because my setup there is Ubuntu and Nginx, and because my home computer is a Windows machine, I didn’t want to mix Ubuntu and Windows in a local development environment. Instead I decided to use Vagrant and VirtualBox to create a VM on my local PC, isolating Ubuntu from Windows. Works very well. This puts me in the same scenario as you — just substitute my VM for your Pi.
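For anyone who wants to try the same thing, a Vagrant setup like that is just a few lines of config — here’s a minimal sketch (the box name, port, and folder paths are only examples):

```ruby
# Minimal Vagrantfile sketch for a local Ubuntu VM.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  # Forward the VM's web server to the host so you can browse it locally.
  config.vm.network "forwarded_port", guest: 80, host: 8080
  # Share the project folder into the VM so edits on Windows show up instantly.
  config.vm.synced_folder "./site", "/var/www/html"
end
```

`vagrant up` builds the VM, `vagrant ssh` drops you into it, and the synced folder means you can often skip the upload step entirely.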

Okay, so I was using PuTTY to SSH and FileZilla to SFTP files. Download a file, edit it, upload it, and test. TEDIOUS!

However, I use PhpStorm as my editor, and I discovered that it will connect to a remote host (VM, Pi, etc.), so I can now edit directly on my VM (or on a droplet at Digital Ocean) just as if it were on my native Windows file system! It is most excellent — I have all the bells and whistles of the fancy IDE while working directly on my host site.

I realize you may not want to buy PhpStorm, but it is an excellent solution if you have the money to do so. And perhaps someone here can suggest a free or cheaper solution that is capable of doing the same thing.


There are typically four environments: development, testing, staging, and production. You (the developer) work in the development and testing environments, which are on your local system. There are plenty of tools for getting a simple HTTP server, and others that take care of things like transpiling and bundling code, so development is pretty well streamlined. Once that work is “done”, it gets built and moved to the staging environment. Staging is meant to be as similar to production as possible so as to ensure the code will work when it goes live. If it works in staging, it’s uploaded to production where users can interact with it. This description smooths over a lot of details, but the point is that in “real” webdev, you don’t touch the production code until you’re sure that everything works and you’re not going to bring business to a screeching halt. You should never have to edit files directly on your server.

The wrinkle here is that each environment can differ so much that there’s a large overhead to moving the code around. Maybe 70% of the dev team is on macOS, 29% is on various flavors of Linux, and Dale refuses to use anything but Windows. With a bit of work, everyone can work from the same repositories without much problem, but how and where things get installed can be different enough that other special tools will be required - thanks to Dale, the project needs to depend on rimraf, for instance. The latest and greatest in development is to use Docker containers, which you can think of as tiny, not-quite-complete virtual machines (technically they’re isolated processes sharing the host’s kernel, but the mental model works). The project owner or manager can create a Dockerfile which will spec out the container and normalize the environment. Once the project is complete, the container can be shipped off to a server and it will run the exact same as it did on the development machine. Now, no matter which operating system a person chooses to “use” (shakes fist at Dale), they’re developing in a *nix environment and there won’t be any surprises when the app goes live. So, in the future, you’ll be uploading containers rather than files.
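To make the Dockerfile idea concrete, here’s a minimal sketch for a PHP/Apache app like the one being discussed (the image tag and paths are only examples):

```dockerfile
# Minimal sketch: official PHP image with Apache baked in.
FROM php:8.2-apache
# Copy the site into Apache's document root inside the container.
COPY ./src/ /var/www/html/
EXPOSE 80
```

Something like `docker build -t mysite .` followed by `docker run -p 8080:80 mysite` would then give everyone on the team — Dale included — the same server environment.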


Gotcha. That helps my perspective.

My original intent was to offload the development and testing environment to the Pi so I could keep it independent of my machines (desktop, laptop, Mac laptop) and just use them interchangeably for writing and managing code. But the Pi ends up being the de facto production and staging environment as well, given what I’m trying to accomplish (building a website that does things, on my local network).

If I were only going to work from a single machine, it would have been much easier to go the VM route. Maybe that’s the easiest answer: stop bouncing around different machines, pick one, and set up a testing environment on it?

@PortableStick and @dennisavo just to give a different perspective. I think PortableStick’s description is pretty much hitting the nail on the head if you are working in a bigger shop, but a lot of developers work for smaller shops or are self-employed. In this case your client may very well not have a staging server to put at your disposal. If this is the case, and you are asked to work on an already-functional website, you will most likely be editing a live site. Scary, but as long as you do a complete backup first you can get away with it. If you get into freelance work for one of the large freelance sites like Upwork, very often the client wants something done “right now”, and because the request is usually limited in scope, again you find yourself editing a live website (again, do a backup).

In a small shop, or working on your own, if you are asked to build a website from scratch, again you may not have access to a staging server. It’s not practical if your clients are using all sorts of hosting companies, and their builds consist of all manner of operating systems, server flavors, different database types, etc.

So for some of us the “development, testing, staging, and production” model becomes more like “local development, local testing, push to client’s hosting company” — and pray… PortableStick’s suggested way of doing things is by far the best way, but for some developers it is just not practical.

Last, if you develop skills using your local Pi setup, and become skilled with SSH and sftp, they are skills that may serve you very well down the road, depending on what you end up doing in your career.


Yeah, I guess “typical” may not be the best word to describe the process, since I don’t know how often anyone adheres to it (I think “agile” stands for “Ain’t Got Enough Time To Do Things Right” for some people). Maybe “archetypal” is a better descriptor.

I totally get what you’re trying to do here, and I wish there were a solid technological solution because I’ve wanted one myself for a while. There are plenty of solutions if you’re willing to put your faith in the cloud, but I assume you want to keep it on the Pi. You do have options.

One is to create a Samba share on the Pi. You’ll be able to map that share as a drive on all of your computers, and it should act just like any other. On the Pi’s side, you’ll have to configure Apache/Nginx to serve the files from that share. If you don’t have a Windows box, you’re not constrained to using Samba shares, but I’ve not had any problems with Samba for the last decade or so. It’s easy to set up, fast, and stable, but you’d have a heck of a time if you want to code outside of your network.
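For the Samba route, the share itself is just a stanza in the Pi’s `smb.conf` — here’s a hypothetical example (the share name, path, and user are placeholders; check the smb.conf docs for your distro):

```ini
; Hypothetical share added to /etc/samba/smb.conf on the Pi.
; Restart smbd after editing for the change to take effect.
[webroot]
   path = /var/www/html
   browseable = yes
   read only = no
   valid users = pi
```

You’d also need to give the `pi` user a Samba password (`sudo smbpasswd -a pi`) and make sure that user has write permission on the web root; then the share can be mapped as a network drive from Windows.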

A second option is to set up a self-hosted cloud service like Nextcloud. You’re still able to edit your files as though they were local, but since there’s a layer of abstraction that handles syncing, versioning, and sharing through a web service, you’re able to access it more easily from outside your network. I was using ownCloud a while back on a Banana Pi, but that thing kept ruining hard drives so I gave it up. The downside here is a more complicated, error-prone setup, and you have to maintain the cloud service. It’s also not nearly as fast.

The most interesting option is to host your own Cloud9 service. Not a lot of people know that C9 has released their software under an open source license, but you can actually get a web-based IDE with a built-in terminal without having to pay or have your files hosted in the cloud. This is super cool, but the downsides are that the setup is not simple and, since it’s still in alpha, there could be crippling bugs.

Realistically, you’re going to be using tools like sftp and scp to shuttle files to and from your server, or just relying entirely on an awesome Vim setup on your Pi (much geek cred to be had there). Even if you were to start using Docker, you’d need to move that container to your server. What I now do with multiple computers is pretty standard - Git. Push your commits from one computer, then fetch/merge from another. This is a good habit to build anyways. I’ve been badly burned more than a few times because I didn’t fetch, review, and merge before starting work that day. You can keep your environments in line by using Docker, or you can experience the wonders of managing cross-platform development.
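The multi-computer Git routine mentioned above is nothing exotic — something like this on each machine (the remote name, branch, and commit message are just examples):

```shell
#!/bin/sh
# Hypothetical routine for hopping between machines with Git.

# On the machine you just finished working on:
git add -A
git commit -m "WIP: contact form validation"
git push origin main

# On the next machine, BEFORE touching anything:
git fetch origin
git merge origin/main    # or `git pull`, which is fetch + merge in one step
```

The fetch-and-review step before merging is the part that saves you from the “badly burned” scenario: you get to see what changed before it lands in your working tree.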

I should specify that by “testing environment”, I mean an environment that runs automated tests. The staging environment is more what people think of as “testing” an app - boot it up and poke at it. For your sanity, dev and testing environments should be the same (unless you’re using continuous integration like some sort of cool person).


Okay, so I found a pretty simple solution to this. Notepad++ has a plugin called NppFTP that creates an SFTP (SSH) connection to the host server (my Pi in this case) and basically makes the Pi act like a mapped network drive. You can open any file on the remote host from within Notepad++, make any changes you want, and save the file directly to the Pi; it overwrites the existing file automatically. Super simple and nothing fancy.

Basically it does exactly what @rickstewart mentioned PhpStorm did. But free.


Hey @dennisavo nice find! I’m a big Notepad++ fan, and it’s great that a plugin is available to do what you need it to do.