Out in the wild!
Here you are. You've just completed weeks of hard work. It's time to clear the old coffee mugs, empty cans, and takeaway containers off your desk. Finally, you free your computer from the purgatory of 30+ Chrome tabs of Stack Overflow posts, Google searches, documentation, and half-finished YouTube tutorials. Your brand new Todo-App is complete!

So you've just finished writing the code, and the app works on your machine. You punch in http://localhost:3000/, and it shows up in the browser. You feel so proud and want to share it with the world. But just how are you going to do that?
The App
There are many different ways to deploy an application to the web, and we will look at just one of many approaches. Let's take a 10,000 ft view of how we will deploy this app. First, we will have a brief look at the application itself, and then at how to get it onto the World Wide Web so anyone can see it. Check it out here if you'd like: http://172.105.103.106/
Todo App
- Node
- Express
- EJS (templating engine)
- MongoDB
- Mongoose (Helps interface with the DB using object mapping)
You can look at the code here if you want to dig further into it. We have an Express backend that serves up HTML with a list. You can add items to the list or clear off the whole list. The list is stored in a MongoDB (NoSQL) database.
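As a rough sketch of what the backend boils down to, here it is in plain JavaScript. Note the assumptions: an in-memory array stands in for the MongoDB collection, and the route names in the comments are invented for illustration — see the linked repo for the real thing.

```javascript
// Minimal sketch of the Todo app's two core operations.
// An in-memory array stands in for the Mongo collection here;
// in the real app, a Mongoose model backs these functions.
const todos = [];

function addItem(text) {
  todos.push({ text, createdAt: new Date() });
  return todos.length;
}

function clearList() {
  todos.length = 0;
}

// In the Express app these would be wired to routes, roughly:
//   app.post('/add', (req, res) => { addItem(req.body.item); res.redirect('/'); });
//   app.post('/clear', (req, res) => { clearList(); res.redirect('/'); });

addItem("buy milk");
addItem("deploy the app");
console.log(todos.map(t => t.text));
```

The EJS templating engine then renders the current contents of the list into the HTML that Express serves up.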
The Approach
We will use a series of tools to deploy this app to the internet.
The first step is to use Docker to set up controllable and easily deployable isolated environments. The best way to think about Docker is that it neatly packages up everything that your application needs to work and sets it up in an environment with everything in place. There is no need to worry about whether your environment has Node installed, the correct version of your DB, etc. We use a Dockerfile to list everything your application needs.
Next, it's time to share your code in an online repository, a home where your code can live. This is where GitHub comes into play. GitHub lets us manage our code and its different versions. For example, you're at work writing all of your code on the company machine, but you need to head out of town for some meetings, and you'll be bringing your personal computer. Just write the code at work and push it up to GitHub, where you can then pull it down onto your personal computer when you're stuck at that Holiday Inn Express continental breakfast.
Lastly, we will use Linode as our cloud computing vendor of choice. Why choose Linode when we have a vast array of cloud computing services (Microsoft Azure, Google Cloud Platform, Amazon Web Services)? They offer $100 in computing credit for listeners of my favourite web dev podcast, Syntax, so that's as good a reason as I need to show support for creators that I enjoy. We will use Linode to host a Linux server, which will become the home of our application, which we will be able to access through our web browsers anywhere in the world!
Dockerize!
Here is our Dockerfile. This recipe tells Docker how to build a container for the application.
FROM: Tells Docker which image to use as a base template. Here we use a Node image based on the Alpine Linux OS with Node.js installed, which is super lightweight and fast.
WORKDIR: Sets the working directory in which all of the following instructions will run.
COPY: Copies files from the directory where the Dockerfile is located into a directory in the container.
RUN: Instructions for the container to execute from the command line.
EXPOSE: Tells Docker that the application is listening on the named port.
CMD: There can only be one CMD instruction in a Dockerfile. It tells the container what to execute when it starts up.
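Putting those instructions together, a Dockerfile for this app might look like the following. This is a sketch: the Node version and the app.js entry point are assumptions based on a typical Express setup, so check your own package.json.

```dockerfile
# Lightweight Node.js image on Alpine Linux
FROM node:16-alpine

# All following instructions run from this directory in the container
WORKDIR /app

# Copy dependency manifests first so Docker can cache the install layer
COPY package*.json ./
RUN npm install

# Copy the rest of the application code into the container
COPY . .

# The Express app listens on port 3000
EXPOSE 3000

# Start the app when the container starts (entry point is an assumption)
CMD ["node", "app.js"]
```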
The following docker-compose file will be used to build our two Docker containers on the Linode server. We use one container to hold the Node application and a second container to hold the DB.
To summarize this file quickly, it tells Docker that we will be running two services (node, mongo), with instructions on how to build each image (using the Dockerfile we already wrote) or grabbing it from a publicly available image registry. The volumes let data persist beyond the lifetime of the containers. Containers can be created or destroyed easily, and volumes help ensure that the data created in them can be accessed again as we make new containers.
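A docker-compose.yaml along those lines could look like this sketch. The service names, port mapping, and volume name are illustrative rather than taken from the repo:

```yaml
version: "3"
services:
  app:
    build: .             # build the Node image from the Dockerfile above
    ports:
      - "80:3000"        # expose the app on the server's port 80
    depends_on:
      - mongo
  mongo:
    image: mongo         # pulled from the public Docker Hub registry
    volumes:
      - todo-data:/data/db   # persist DB data beyond the container's lifetime
volumes:
  todo-data:
```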
Store in Github
We will use Git/GitHub to store the code. Follow these steps to store our code.
- Initialize a local repository
- Add files to the repository
- Commit the files
- Create an online repository (on Github.com)
- Connect local git to online git
- Push local repo to Github.com
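The steps above map to a handful of Git commands. Here is a runnable sketch — a local bare repository stands in for GitHub.com so the whole sequence works end to end; swap its path for the URL of the repository you created online:

```shell
# Stand-in for the online repository created on Github.com
git init --bare /tmp/todo-remote.git

# 1. Initialize a local repository
mkdir -p /tmp/todo-app && cd /tmp/todo-app
git init
git config user.email "you@example.com"
git config user.name "Your Name"

# 2-3. Add files to the repository and commit them
echo "node_modules/" > .gitignore
git add .gitignore
git commit -m "Initial commit"

# 5. Connect local git to online git (use your GitHub repo URL here)
git remote add origin /tmp/todo-remote.git

# 6. Push the local repo up
git push -u origin HEAD
```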

We have our application ready with Dockerfiles, the code is stored online, and now we are prepared to run this application on the server!
Linode time
Now we must set up our Linode server. Sign in to your account if you've been following along and signed up for the $100 free Linode credit.
Our first step will be to create a server. Search through the marketplace and select the Docker option. This lets us get going with a compute instance that already has Docker installed and is ready to go.



Set a command to run when the server starts up and select a Debian 10 Linux distro for the server to run.


Choose a plan; to get this app up, I selected the cheapest shared CPU plan. This can be changed later if you end up requiring more resources.

After entering a root password for the server, spin it up!!
Final Steps
Now that we have the server up and running, the Dockerfiles set, and the code stored in GitHub, it's time to get things running.
Start by opening a terminal and entering the following command. This will connect you to the server that was started up with Linode. There will be a prompt to enter the root password, and you'll be connected!
ssh root@<IP-Address from linode>
Once connected, you can run the docker ps command, and you should see the docker/getting-started container running. To stop it, run:
docker stop <first two digits of the container ID>
Next up, we will pull in the code we stored on GitHub using the command:
git clone https://github.com/<username>/<reponame>.git
This will bring in the code from your online repository and store it on the server. Move into the directory, and you will be greeted with all of your files.

Now here is where all of the hard work pays off. With one simple command, the application will start up! You will be able to enter the server's IP address into a browser's address bar, and your site will be live! Let's take a look at the command.
docker-compose up -d --build
The docker-compose up command will use the .yaml file we wrote to build the containers. The -d flag runs the application in 'detached' mode, and --build builds the images from the Dockerfile, which is helpful if you have new code in the application that you want to be updated within the containers. If you look here, you can see the application that I deployed.
The next step would be buying a domain name and hooking it up to your server's IP, making the site much easier to get to rather than a string of numbers. For now, I'll leave that for another post.
To wrap things up, this is not the best/easiest/only way to deploy a project, but I found it a great way to introduce deployment. Many new developers spend so much time working on their projects that deployment becomes an afterthought. I hope this is a helpful introduction to how you can get your projects out into the wild. Deployment represents a culmination of several different tools, techniques and approaches. Here we leveraged Docker, GitHub and Linode together to achieve the goal of hosting a Node app publicly on the web. I hope this can provide you with insight and knowledge while, most importantly, sparking your curiosity to try this on your own.
Keep on learning, keep on developing and keep growing.