Setting up a Production-Ready VPS from Scratch
Recently, I've been working on a brand new Micro SaaS project that's been quite enjoyable. One thing I've really appreciated is how easy it is to deploy applications to the cloud, with numerous Platform-as-a-Service (PaaS) options making deployment straightforward.
While these platforms are excellent, they're not perfect for every use case. Due to their underlying business model, they're not well-suited for long-running tasks or transferring large amounts of data, which can sometimes result in unexpectedly high bills.
This is where a VPS (Virtual Private Server) comes in. VPS solutions often provide more consistent billing while mitigating some of the limitations that come with serverless platforms. Despite these benefits, I've always been hesitant to use a raw VPS for production services due to the perceived difficulty of making it production-ready.
But is it actually that difficult? To find out, I gave myself a challenge: set up a production-ready VPS from scratch. As it turns out, it's a lot easier than I originally thought...
The Challenge: A Production-Ready Guestbook
For this challenge, I built a simple guestbook web app to deploy on my VPS. If you're following along at home, you can find the complete source code for this project in my GitHub repository.
To define what "production-ready" meant, I created a list of requirements:
- DNS record pointing to the server
- Application deployed and running
- Security best practices:
  - HTTPS/TLS with automatic certificate provisioning and renewal
  - Hardened SSH to prevent unauthorized access
  - Firewall blocking unnecessary ports
- High availability (as much as possible on a single node)
- Load balancing to distribute traffic across multiple instances
- Automated deployments for smooth updates
- Website monitoring and notifications if the site goes down
I also set some technical constraints:
- Use simple tooling without requiring too much domain expertise
- Avoid heavyweight solutions like Kubernetes (K3s, MicroK8s)
- Skip full-featured solutions like Coolify
- No infrastructure-as-code tools (Terraform, Pulumi, OpenTofu)
Step 1: Obtaining and Setting Up the VPS
I used a Hostinger VPS instance with 2 vCPUs and 8GB of memory. When setting up the VPS through Hostinger's UI, I:
- Selected Ubuntu 24.04 LTS as my operating system
- Disabled the Monarx malware scanner
- Set a strong root password
- Added my SSH public key for secure login
After the VPS was deployed, I tested the SSH login, which worked correctly.
Step 2: Creating a Non-Root User
It's not advisable to work as the root user, so my first step was creating a regular user account:
# Add a new user
adduser elliott
# Add the user to the sudo group
usermod -aG sudo elliott
# Test sudo permissions
su elliott
sudo echo "I have sudo permissions"
If you're following along at home, you might want to install Tmux on your VPS and work inside of it. This way, if your SSH connection drops, you can easily reattach to your session when you reconnect.
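A minimal tmux workflow, assuming Ubuntu's stock tmux package, looks something like this (the session name is just an example):
# Install tmux and start a named session
sudo apt-get install -y tmux
tmux new -s vps-setup
# If your connection drops, SSH back in and reattach to the same session
tmux attach -t vps-setup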
Step 3: DNS Configuration
I purchased the domain zenful.cloud from Hostinger and configured it to point to my VPS:
- First, I cleared the existing A and CNAME records
- Added a new A record for the root domain pointing to my server's IP address
To find your server's IP address, use:
ip addr
DNS propagation can take a few hours, so I moved on to security improvements while waiting.
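If you want to check on propagation from your local machine, a quick dig lookup (substituting your own domain) shows whether the new A record is being served yet:
# Should print your server's IP once the record has propagated
dig +short zenful.cloud A
# Query a specific public resolver to compare answers
dig +short @1.1.1.1 zenful.cloud A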
Step 4: Hardening SSH
To improve the security of my VPS, I needed to harden SSH access:
# First, copy your SSH key to the new user account from your local machine
ssh-copy-id elliott@your-server-ip
# Test that key-based login works
ssh elliott@your-server-ip
# Edit the SSH config file
sudo vim /etc/ssh/sshd_config
Make the following changes in the sshd_config file:
PasswordAuthentication no
PermitRootLogin no
UsePAM no
On Hostinger, I also needed to modify (or remove) the following file:
sudo vim /etc/ssh/sshd_config.d/50-cloud-init.conf
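Before reloading, it's worth validating the configuration so a typo can't lock you out of the server; sshd has a built-in test mode for exactly this:
# Check the sshd configuration for errors (prints nothing if everything is valid)
sudo sshd -t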
Apply the changes by reloading the SSH service (on Ubuntu the systemd unit is named ssh):
sudo systemctl reload ssh
Test that the changes worked:
# This should fail
ssh root@your-server-ip
Step 5: Deploying the Web Application
Naive Approach: Building on the VPS
First, I tried the direct approach of cloning and building the application on the VPS:
# Install Go
sudo snap install go --classic
# Clone the repo (replace with your repo URL)
git clone https://github.com/your-username/guestbook.git
# Build the application
cd guestbook
go build
# Run the application
DATABASE_URL=postgres://user:password@host:port/database ./guestbook
This worked, but I wasn't a fan of compiling applications on my production server.
Better Approach: Docker Containerization
Instead, I decided to use Docker with a pre-built image from GitHub's container registry:
# Install Docker and Docker Compose on Ubuntu
# Add Docker's official GPG key
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add Docker repository to Apt sources
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# Install Docker packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add your user to the docker group (log out and back in, or run `newgrp docker`, for this to take effect)
sudo usermod -aG docker $USER
# Create a directory for Postgres password
mkdir -p db
# Create a password file for Postgres
echo "your-secure-password" > db/postgres-password.txt
# Deploy with Docker Compose
docker compose up -d
My compose.yaml file included services for both the guestbook application and PostgreSQL:
services:
  db:
    image: postgres:16
    restart: always
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres-password
    volumes:
      - postgres-data:/var/lib/postgresql/data
    secrets:
      - postgres-password

  guestbook:
    image: ghcr.io/yourusername/guestbook:prod
    restart: always
    environment:
      DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@db:5432/postgres?sslmode=disable
    ports:
      - "8080:8080"
    depends_on:
      - db

secrets:
  postgres-password:
    file: ./db/postgres-password.txt

volumes:
  postgres-data:
This successfully deployed the containerized application with a PostgreSQL database.
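Before moving on, a couple of quick checks confirm that both containers are healthy and that the app answers on the mapped port (8080, as in the compose file above):
# Show the state of the services defined in compose.yaml
docker compose ps
# Hit the guestbook from the VPS itself
curl -i http://localhost:8080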
Step 6: Setting Up a Firewall
To enhance security, I set up the UFW (Uncomplicated Firewall) to restrict inbound traffic:
# Disable all incoming traffic by default
sudo ufw default deny incoming
# Allow all outgoing traffic by default
sudo ufw default allow outgoing
# Allow SSH (CRITICAL - do this before enabling the firewall)
sudo ufw allow 22/tcp
# Check the rules before enabling
sudo ufw status verbose
# Enable the firewall
sudo ufw enable
Important Note: Docker manipulates iptables directly, so ports published by a container (like 8080 above) can bypass UFW rules entirely. This is a known issue. A better solution is to route traffic through a reverse proxy rather than exposing container ports directly.
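You can see the problem for yourself: with UFW only allowing port 22, a port published by Docker is typically still reachable from outside, because Docker inserts its own iptables rules ahead of UFW's:
# From your LOCAL machine - this will usually still succeed even though UFW never allowed 8080
curl -i http://your-server-ip:8080
# On the VPS, Docker's rules live in their own iptables chain
sudo iptables -L DOCKER -n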
Step 7: Setting Up Traefik as a Reverse Proxy
Instead of directly exposing the application port, I set up Traefik as a reverse proxy:
services:
  reverse-proxy:
    image: traefik:v3.1
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080" # The Traefik dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  db:
    # ... (as before)

  guestbook:
    # ... (other configuration as before)
    ports: [] # Remove the port mapping
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.guestbook.rule=Host(`zenful.cloud`)"
I updated the firewall to allow HTTP traffic:
sudo ufw allow 80/tcp
sudo ufw allow 8080/tcp
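With the rules in place, you can check that Traefik is routing by Host header; the first curl fakes the header in case DNS hasn't propagated yet, the second assumes it has:
# Ask Traefik for the guestbook explicitly by Host header
curl -i -H "Host: zenful.cloud" http://your-server-ip/
# Once DNS points at the server, this works directly
curl -i http://zenful.cloud/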
Step 8: Setting Up Load Balancing
One of Traefik's great features is built-in load balancing. I scaled up my application to three instances:
docker compose up -d --scale guestbook=3
To make this configuration permanent, I updated the compose file:
services:
  # ... (other services)

  guestbook:
    # ... (other configuration)
    deploy:
      replicas: 3
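After updating the file, re-running Compose applies the new replica count, and listing the service confirms the three containers are up:
# Apply the new replica count and list the guestbook containers
docker compose up -d
docker compose ps guestbook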
Step 9: Enabling HTTPS with Automatic TLS Certificates
Traefik also makes it easy to set up automatic TLS certificate generation and renewal using Let's Encrypt:
services:
  reverse-proxy:
    image: traefik:v3.1
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.email=your-email@example.com"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - letsencrypt:/letsencrypt

  guestbook:
    # ... (other configuration)
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.guestbook.rule=Host(`zenful.cloud`)"
      - "traefik.http.routers.guestbook.entrypoints=websecure"
      - "traefik.http.routers.guestbook.tls.certresolver=myresolver"

volumes:
  # ... (other volumes)
  letsencrypt:
I also needed to update the firewall to allow HTTPS traffic:
sudo ufw allow 443/tcp
To redirect HTTP to HTTPS, I added the following labels to the guestbook service:
- "traefik.http.routers.guestbook-http.rule=Host(`zenful.cloud`)"
- "traefik.http.routers.guestbook-http.entrypoints=web"
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
- "traefik.http.routers.guestbook-http.middlewares=redirect-to-https"
Step 10: Setting Up Automated Deployments
For automated deployments, I used Watchtower, which automatically updates running containers when their images change:
services:
  # ... (other services)

  watchtower:
    image: containrrr/watchtower
    command:
      - "--label-enable"
      - "--interval"
      - "30"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  guestbook:
    image: ghcr.io/yourusername/guestbook:prod
    # ... (other configuration)
    labels:
      # ... (other labels)
      - "com.centurylinklabs.watchtower.enable=true"
To enable rolling updates (one container at a time), I added the --rolling-restart flag:
watchtower:
  image: containrrr/watchtower
  command:
    - "--label-enable"
    - "--interval"
    - "30"
    - "--rolling-restart"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
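To check that Watchtower is actually polling, you can follow its logs after pushing a new :prod image; it reports its update sessions and any containers it restarts:
# Watch Watchtower poll the registry and roll out updates
docker compose logs -f watchtower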
Step 11: Setting Up Monitoring
For uptime monitoring, I used Uptime Robot, which offers a free tier. I simply added my site URL to their monitoring dashboard, and they'll send email notifications if the site goes down.
Final Deployment
With all components in place, I deployed the final stack:
docker compose up -d
Conclusion
Setting up a production-ready VPS was much easier than I initially thought. By using tools like Traefik and Watchtower, I was able to quickly set up a robust environment with:
- ✅ DNS pointing to the server
- ✅ Application deployed in Docker containers
- ✅ HTTPS with automatic certificate management
- ✅ Hardened SSH
- ✅ Firewall protection
- ✅ Load balancing across multiple instances
- ✅ Automated deployments with rolling updates
- ✅ Uptime monitoring
While a VPS solution may not be as simple as using a PaaS, it offers more control and potentially lower costs for certain types of applications, especially those with high data transfer needs or long-running processes.
The complete source code for the guestbook application and deployment configuration is available on GitHub.