Personal Project Overview — Infrastructure

Martin Mička
May 29, 2023

Continuing with my series of tech articles, today I will discuss my thought process regarding the infrastructure of my project. As mentioned previously, there are several moving parts to consider: a frontend app (built with Next.js and TypeScript) that communicates with the backend API (built with Node.js and Nest framework, also with TypeScript), and a few other dependencies such as a database.

The main requirements that guided my infrastructure planning were as follows:

  • I can easily replicate the infrastructure on different servers and set up multiple environments
  • I can keep the infrastructure as versioned configuration
  • I can scale easily
  • The server environment is similar to the local environment

After experimenting with different approaches, I decided to use Docker for both the server environments and local development. A managed Kubernetes cluster would be unnecessary overhead and more expensive to run, and since this project is meant for personal or small-scale use, going full cloud doesn’t make much sense.

Docker is perfect for my needs. It can scale, provides sufficient isolation and abstraction, and solves many of the issues I would otherwise face with a non-containerized setup. I can also keep the full configuration in versioned Docker Compose files, spin it up on multiple servers, and use the same setup for local development.

Backend and Frontend Applications

I began by dockerizing both parts of my application for local development. Each project has a versioned docker-compose.yml file. Let’s take a look at them.

Local backend docker-compose.yml

I am using Traefik as a reverse proxy here because it is convenient, has service discovery, and works well with minimal configuration. To learn about all the cool stuff you can do with it, I suggest browsing their docs. Traefik takes care of routing requests and other conveniences, such as SSL. I generated a self-signed certificate, so the setup is closer to how things run on the server. The Traefik configuration is also pretty straightforward here: the container has ports 80 and 443 forwarded, and the basic host, TLS, and Traefik Dashboard exposure settings live in labels. The container also has the Docker socket bound, so Traefik can listen for and discover other containers as services, plus one mounted configuration file.
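
The actual file lives in the project repository; as a minimal sketch (the image tag, host name, and paths are illustrative assumptions, not the exact file), the Traefik service might look like this:

    services:
      traefik:
        image: traefik:v2.10              # illustrative tag
        ports:
          - "80:80"
          - "443:443"
        volumes:
          # bound Docker socket, so Traefik can discover containers as services
          - /var/run/docker.sock:/var/run/docker.sock:ro
          # the Traefik configuration file
          - ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
          # dynamic configuration and self-signed certificates (see below)
          - ./traefik/dynamic:/etc/traefik/dynamic:ro
          - ./certs:/certs:ro
        networks:
          - proxy
        labels:
          - "traefik.enable=true"
          # host, TLS, and dashboard exposure, configured via labels
          - "traefik.http.routers.dashboard.rule=Host(`traefik.localhost`)"
          - "traefik.http.routers.dashboard.tls=true"
          - "traefik.http.routers.dashboard.service=api@internal"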

Traefik configuration

Configuring Traefik is fairly straightforward. First, I turned on the Dashboard to monitor its activity and troubleshoot any issues. Then, I set up two providers: a Docker provider for the services that Traefik needs to serve, and a file provider for managing TLS.
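
A static configuration along these lines would cover the dashboard and both providers, as well as the entrypoints and logging discussed a little further below (all values are illustrative, not the exact file):

    # traefik.yml: static configuration (sketch)
    api:
      dashboard: true

    entryPoints:
      web:
        address: ":80"
        http:
          redirections:
            entryPoint:
              to: websecure
              scheme: https
      websecure:
        address: ":443"

    providers:
      docker:
        exposedByDefault: false
      file:
        directory: /etc/traefik/dynamic
        watch: true

    log:
      level: DEBUG
    accessLog: {}

    serversTransport:
      insecureSkipVerify: true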

Dynamic Traefik configuration — file provider for TLS
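
The dynamic configuration loaded by the file provider can be as small as this sketch (the paths assume minica’s per-domain output layout):

    # dynamic/tls.yml: registers the self-signed certificate with Traefik
    tls:
      certificates:
        - certFile: /certs/localhost/cert.pem
          keyFile: /certs/localhost/key.pem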

Returning to the Traefik configuration: after configuring the providers, I set up a redirect of all unsecured traffic to HTTPS, as shown in the web entrypoint. For development purposes, I have turned on logging and access logs, and I have disabled TLS verification, which would otherwise fail since the certificates are self-signed. To generate the certificates, I am using minica.

In addition to Traefik, I use a MySQL database (I switched from MongoDB since my last update). My approach is simple: I create one volume to persist data, set up a separate network (so I don’t have to expose MySQL to containers that don’t need to interact with it), set a deterministic database name and root password in the environment, and forward a port so third-party database clients can access it locally. In production, of course, I do not forward the port.
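
The corresponding service fragment could look roughly like this (the database name and password are illustrative, not my real values):

    mysql:
      image: mysql:8.0                   # illustrative tag
      environment:
        MYSQL_DATABASE: quizae           # deterministic name (illustrative)
        MYSQL_ROOT_PASSWORD: root        # fine for local development only
      volumes:
        - mysql-data:/var/lib/mysql      # one volume to persist data
      networks:
        - database                       # separate network, invisible to Traefik
      ports:
        - "3306:3306"                    # local access for database clients; dropped in production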

Lastly, I also containerized the backend application itself. I use a multi-stage build and run its development target locally. The backend container is connected to two networks: one that exposes it to Traefik, and a second one for interacting with the database. Additionally, the container has a bind mount for the source code.
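
In compose terms, the service might look something like this (the host name and paths are illustrative assumptions):

    backend:
      build:
        context: .
        target: development              # development target of the multi-stage build
      volumes:
        - ./:/app                        # bind mount for the source code
      networks:
        - proxy                          # exposes the container to Traefik
        - database                       # allows it to reach MySQL
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.backend.rule=Host(`api.localhost`)"   # illustrative host
        - "traefik.http.routers.backend.tls=true"

Below is my Dockerfile: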

Quizae backend Dockerfile
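
The full file lives in the repository; here is a condensed sketch of the three-stage structure described next (the base image tag, stage names, and commands are my assumptions):

    # Sketch of a three-stage build (names and commands are illustrative)
    FROM node:18 AS build
    WORKDIR /app
    # copy only the npm package files and the Prisma schema first, to cache the install layer
    COPY package*.json ./
    COPY prisma ./prisma
    RUN npm ci
    COPY . .
    RUN npm run build

    FROM node:18 AS production
    WORKDIR /app
    ENV NODE_ENV=production
    # copy the built application from the previous stage
    COPY --from=build /app/dist ./dist
    COPY --from=build /app/node_modules ./node_modules
    # run as the rootless user that ships with the Node image
    USER node
    CMD ["node", "dist/main.js"]

    FROM node:18 AS development
    WORKDIR /app
    COPY package*.json ./
    COPY prisma ./prisma
    RUN npm ci
    COPY . .
    CMD ["npm", "run", "start:dev"]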

The setup process is straightforward. In the first stage, only the necessary files, such as the npm package files and the Prisma configuration, are copied, followed by the dependency installation and the build itself. In the second stage, the built application from the previous stage is copied over and assigned to a rootless user, then run. At the time of writing, I ran into some extension issues on the Alpine image, so I used the standard base image instead. The third stage, the development target, is also simple: it does the basic setup, installs dependencies, and copies the application code. Let’s take a look at the frontend next:

Frontend docker-compose.yml

It builds the development target from my own image, based on the Node image. It binds the local context (the source code) inside the container and sits on the same network as the backend and Traefik. There’s a Traefik router configuration in the labels, and some basic environment variables are set, mostly for the authentication libraries and the links to the backend. I’ve also disabled Node’s TLS verification to prevent errors caused by the self-signed certificates.
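
Sketched out, with the host names, variable names, and network names as illustrative assumptions, the file could look like this:

    services:
      frontend:
        build:
          context: .
          target: development            # my own image based on the Node image
        volumes:
          - ./:/app                      # bind the source code into the container
        networks:
          - proxy                        # shared network with the backend and Traefik
        environment:
          - NEXTAUTH_URL=https://quizae.localhost    # auth library config (illustrative)
          - API_URL=https://api.localhost            # link to the backend (illustrative)
          - NODE_TLS_REJECT_UNAUTHORIZED=0           # accept self-signed certs in dev
        extra_hosts:
          # route the API host back outside of Docker (see the note below)
          - "api.localhost:host-gateway"
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.frontend.rule=Host(`quizae.localhost`)"
          - "traefik.http.routers.frontend.tls=true"

    networks:
      proxy:
        external: true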

It’s also worth mentioning the additional hosts that need to be resolvable outside of Docker, so that API communication from inside the container works correctly and HMR and websockets in the Next.js framework function properly.

Continuous Integration

I need a production-ready image of my app in order to deploy it. For this, I have decided to use GitHub Actions, which I have found to be a helpful tool. I will demonstrate the process on the backend app, as the frontend app is not yet fully ready to be deployed. I have two workflows in place: one that runs, and must pass, on every pull request, and a second that is triggered automatically on release. Since my main branch is protected, I can be certain that any code heading into a release has passed the tests.

Test CI Workflow for Quizae Backend

Pretty neat, right? Since the backend is headless and Nest offers some useful functionality for end-to-end testing, the only dependencies required are a Node runtime and a MySQL database.
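
The workflow itself is short; a sketch of what it might contain, with versions, script names, and credentials all being assumptions:

    name: Tests
    on:
      pull_request:

    jobs:
      e2e:
        runs-on: ubuntu-latest
        services:
          mysql:
            image: mysql:8.0
            env:
              MYSQL_DATABASE: quizae_test
              MYSQL_ROOT_PASSWORD: root
            ports:
              - 3306:3306
            options: >-
              --health-cmd="mysqladmin ping"
              --health-interval=10s
              --health-timeout=5s
              --health-retries=5
        steps:
          - uses: actions/checkout@v3
          - uses: actions/setup-node@v3
            with:
              node-version: 18
          - run: npm ci
          # assuming Prisma migrations need to run before the e2e suite
          - run: npx prisma migrate deploy
            env:
              DATABASE_URL: mysql://root:root@127.0.0.1:3306/quizae_test
          - run: npm run test:e2e
            env:
              DATABASE_URL: mysql://root:root@127.0.0.1:3306/quizae_test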

Image Build Workflow for Quizae Backend

In terms of the build process, I still rely on Docker; I have already shared the Dockerfile and some of my reasoning behind that choice. As for the workflow, I do not use any proprietary tools, since only basic functionality is required. Instead, I use the official Docker actions to generate image metadata with semantic versioning, which aligns with how I version my projects. After that, I set up QEMU and Buildx, log in to the GitHub Container Registry, and build my image for the production target. If everything goes smoothly, the image is then pushed to my container registry.
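
A sketch of such a release workflow, with the action versions and image name as assumptions:

    name: Build image
    on:
      release:
        types: [published]

    jobs:
      build:
        runs-on: ubuntu-latest
        permissions:
          contents: read
          packages: write
        steps:
          - uses: actions/checkout@v3
          # derive semver tags for the image from the release tag
          - id: meta
            uses: docker/metadata-action@v4
            with:
              images: ghcr.io/${{ github.repository }}
              tags: |
                type=semver,pattern={{version}}
          - uses: docker/setup-qemu-action@v2
          - uses: docker/setup-buildx-action@v2
          - uses: docker/login-action@v2
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}
          # build the production target and push it to the registry
          - uses: docker/build-push-action@v4
            with:
              context: .
              target: production
              push: true
              tags: ${{ steps.meta.outputs.tags }}
              labels: ${{ steps.meta.outputs.labels }}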

Server Environments

For server environments, I have created a separate repository that contains a few Docker Compose files and Traefik configuration. Although this introduces a bit of code duplication, it gives me full control over how I want to orchestrate the containers on the server.

I have decided to use Portainer as my container management tool. It works well with Docker and allows me to control my apps through a visual UI, instead of using complicated scripts on the server. My initial compose file includes the basics: Traefik as my server-wide proxy and Portainer, which will orchestrate my backend and frontend.

Base environment — Traefik and Portainer services

I don’t use Traefik in a significantly different way from my local setup. However, I added automatic HTTPS, which Traefik handles well: it generates and renews certificates for all domains and subdomains used with the server, without requiring any additional effort on my part.

Additionally, I have configured an HTTP Basic Auth middleware for the Traefik Dashboard, which serves as an adequate security measure for now. I only enable the dashboard when I need it for debugging.
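
Put together, the base stack could look roughly like this (the domains, auth hash, and version tags are placeholders, not my real values):

    # The server-side traefik.yml additionally defines an ACME certificate
    # resolver (certificatesResolvers.letsencrypt.acme) for automatic HTTPS.
    services:
      traefik:
        image: traefik:v2.10
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
          - ./letsencrypt:/letsencrypt   # persisted ACME certificates
        networks:
          - proxy
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.dashboard.rule=Host(`traefik.example.com`)"
          - "traefik.http.routers.dashboard.service=api@internal"
          - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
          # HTTP Basic Auth middleware guarding the dashboard
          - "traefik.http.routers.dashboard.middlewares=dashboard-auth"
          - "traefik.http.middlewares.dashboard-auth.basicauth.users=admin:$$apr1$$..."

      portainer:
        image: portainer/portainer-ce:latest
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - portainer-data:/data
        networks:
          - proxy
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.portainer.rule=Host(`portainer.example.com`)"
          - "traefik.http.routers.portainer.tls.certresolver=letsencrypt"
          - "traefik.http.services.portainer.loadbalancer.server.port=9000"

    volumes:
      portainer-data:

    networks:
      proxy: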

Regarding the second compose file for the “backend” part of Quizae, it includes two service definitions: one for the database and the other for the backend application. Although containerizing databases is not ideal, for the scope of my project, I am willing to accept some performance penalty in exchange for the convenience this setup provides. Furthermore, the project scope does not justify spending additional money on a managed database server.

Backend docker-compose.yml for server environment
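
The real file sits in my infrastructure repository; a sketch with illustrative image names and domains:

    services:
      mysql:
        image: mysql:8.0
        environment:
          MYSQL_DATABASE: quizae
          MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}   # supplied via Portainer
        volumes:
          - mysql-data:/var/lib/mysql
        networks:
          - database
        # note: no ports section, so the database is not exposed in production

      backend:
        image: ghcr.io/OWNER/quizae-backend:latest      # image built by CI (name illustrative)
        environment:
          DATABASE_URL: mysql://root:${MYSQL_ROOT_PASSWORD}@mysql:3306/quizae
        networks:
          - proxy                                       # reachable by Traefik
          - database                                    # reachable by MySQL
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.quizae-api.rule=Host(`api.example.com`)"
          - "traefik.http.routers.quizae-api.tls.certresolver=letsencrypt"

    volumes:
      mysql-data:

    networks:
      database:
      proxy:
        external: true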

This compose file is deployed and fully managed through Portainer. What I appreciate about it is the ability to easily manage my service: for instance, if I need to add more containers to the backend, I can do so from the UI. Portainer also offers webhook functionality that can be utilized for Continuous Deployment.

Currently, to apply backend changes on the server, I have to open the Portainer UI and instruct it to pull the latest backend image and redeploy it. The only downside of my current setup is that Portainer, in combination with plain Docker, lacks any zero-downtime deployment feature. Considering the scope of this project, I can accept this limitation. If I needed zero-downtime deployments, I would have to upgrade to Swarm or switch to Kubernetes; if that introduced too much overhead, PaaS offerings such as the DigitalOcean App Platform or Platform.sh could be viable alternatives.

Thank you for reading my article! If you want to see all the code mentioned here, the Quizae project is publicly available on my GitHub.

