Self-Hosting over Cloud Services: My Journey to VPS Independence

When I first launched my projects, I took the well-traveled path of using managed cloud services: Cloudflare Pages, Cloudflare CDN and DNS management (well, I still do manage DNS with it to this day, but anyways...), Render, and Vercel for application hosting. They made deployment effortless and scaling automatic. It was convenient, way too convenient for my own good.
But as time went on, I began to feel the limitations of these platforms. Limited customization options, unexpected costs for exceeding free tiers (I'm looking at you, AWS), and most importantly, the lack of complete control over my infrastructure. This led me to explore the world of self-hosting on a VPS (Virtual Private Server).
The Migration to Self-Hosting
Moving from platforms like Vercel to a self-managed VPS was both liberating and daunting. I chose Coolify as my deployment platform because it offered a user-friendly interface while still providing the benefits of containerization through Docker. It was plug-and-play. Like Vercel but free and better.
My application, a server for a document generation website, previously ran on my university's dedicated servers: the start script simply launched it under the PM2 process manager. For context, it was a Hono + Bun application, compiled down into a single executable with Bun before PM2 ran it via the start script. Migrating this app to Coolify meant writing a Dockerfile:
FROM oven/bun:latest
WORKDIR /app
# Copy package files and install dependencies
COPY package.json bun.lockb ./
RUN bun install --production
# Copy application code
COPY . .
# Run database migrations (drizzle-kit)
RUN bun run db:generate
RUN bun run db:migrate
# Build the executable
RUN bun run build
RUN chmod +x ./app
# Create a start script for better reliability
RUN printf '#!/bin/bash\n./app\n' > start.sh
RUN chmod +x start.sh
# Expose your application port
EXPOSE 3000
# Use ENTRYPOINT for more reliable execution
ENTRYPOINT ["/app/start.sh"]
And to orchestrate my service, I created a docker-compose.yml:
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
    volumes:
      - app_data:/app/data
    deploy:
      resources:
        limits:
          memory: 1G
volumes:
  app_data:
This got me through the problem of actually deploying the application, but little did I know that the worst was yet to come...
The Reality of VPS Security
While platforms like Vercel and Cloudflare abstract away security concerns, self-hosting throws you into the deep end of server security management. When I saw my first deployment logs in Coolify, I encountered strange errors left, right, and center, and realized I needed to dig deeper into how my server was configured.
The jarring realization hit me: my VPS's actual IP address was completely exposed to the public internet. This was a security hazard and I had no idea it was happening until a friend of mine pointed it out to me. Unlike with Cloudflare, where your origin server can be hidden, my VPS was now a visible target for potential attacks such as:
- Direct targeting for DDoS attacks
- IP-based scanning and fingerprinting
- Easier reconnaissance by bad actors
- No protection layer between the internet and my actual server
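The exposure in the list above is easy to see for yourself. This is a minimal sketch, assuming the iproute2 `ss` tool (present on most modern distros): anything bound to 0.0.0.0 or [::] is reachable from the internet unless a firewall steps in.

```shell
#!/bin/sh
# public_listeners: filter a socket listing down to the header line plus
# sockets bound to all interfaces (0.0.0.0 / [::]) -- the ones that are
# publicly reachable unless a firewall blocks them.
public_listeners() {
    awk 'NR == 1 || /0\.0\.0\.0|\[::\]/'
}

# On a real server (assumes iproute2's `ss` is installed):
#   ss -tln | public_listeners
```

Anything this prints besides the header is a service the whole internet can knock on.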
I needed to secure my self-hosted kingdom, so I took to ChatGPT to tell me how I could do it.
Cloudflare's Orange Cloud Proxy
If I had to name the biggest contributor to my self-hosted architecture's security back in the early days and maybe even until now, it would have to be Cloudflare's Orange Cloud Proxy.
When the proxy is enabled, all traffic to that DNS record is routed through Cloudflare's global network instead of going directly to the origin server. This provided me with several benefits, the main one being THAT MY SERVER'S IP ADDRESS WAS FINALLY HIDDEN FROM THE PUBLIC, protecting it from direct attacks like DDoS. Truly a lifesaver at this point in my journey.
It also applies performance optimizations like caching, image compression, and smart routing, and it enables security features such as the Web Application Firewall (WAF), bot protection, and SSL termination. Essentially, the orange cloud turns on Cloudflare's full suite of performance and security services for whichever DNS record it is enabled on.
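You can sanity-check that the proxy is actually masking the origin by comparing what the domain resolves to against the server's real address. A rough sketch (`example.com` and the `203.0.113.10` origin IP are placeholders, and the live command assumes `dig` from dnsutils/bind-utils):

```shell
#!/bin/sh
# check_hidden: warn if the DNS answer for the domain contains the origin
# IP, i.e. if the orange cloud is off and the VPS is directly exposed.
check_hidden() {
    resolved="$1"    # newline-separated A records returned for the domain
    origin="$2"      # the VPS's real IP address (placeholder in the example)
    if echo "$resolved" | grep -qx "$origin"; then
        echo "WARNING: origin IP $origin is publicly visible"
        return 1
    fi
    echo "OK: origin IP not present in DNS answer"
}

# On a real setup (assumes dnsutils' `dig`):
#   check_hidden "$(dig +short example.com A)" "203.0.113.10"
```

With the orange cloud on, the answer should contain only Cloudflare edge IPs, never your own.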
Building My Security Fortress with UFW
To further address vulnerabilities, I implemented UFW (Uncomplicated Firewall), since iptables is way too tiring. Having to understand what everything did took a lot of effort, but it was very rewarding.
# Set default policies - deny all incoming connections by default
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow SSH on custom port (more secure than default port 22)
sudo ufw allow 2213/tcp
# Allow basic web traffic
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Allow my application port
sudo ufw allow 3000/tcp
# Enable the firewall
sudo ufw enable
Each command served a specific purpose in my security strategy:
Default Policies
By denying all incoming connections by default, I established a baseline of security that Vercel and Cloudflare had previously handled for me. This "deny-first" approach meant only explicitly allowed traffic could reach my server.
Custom SSH Port
I learned that changing the default SSH port from 22 to a custom port (2213) adds a layer of security through obscurity. It doesn't make my server invulnerable, but it does prevent automated bots from targeting the standard SSH port.
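The UFW rule alone doesn't move SSH; the daemon's config has to change too. A sketch of that step, assuming the Debian/Ubuntu layout where the file lives at /etc/ssh/sshd_config (a temp copy is edited here so the sketch is safe to run as-is):

```shell
#!/bin/sh
# Sketch: move sshd off port 22. Allow 2213/tcp in UFW *first*, and keep
# your current SSH session open until a second login on the new port works.
CONFIG=$(mktemp)
printf '#Port 22\nPermitRootLogin no\n' > "$CONFIG"   # stand-in for sshd_config

# Replace the (possibly commented-out) Port directive with the custom port.
sed -i 's/^#\{0,1\}Port .*/Port 2213/' "$CONFIG"
grep '^Port' "$CONFIG"    # -> Port 2213

# On the real file:
#   sudo sed -i 's/^#\{0,1\}Port .*/Port 2213/' /etc/ssh/sshd_config
#   sudo systemctl restart ssh    # the unit is `sshd` on some distros
```

The order matters: open the firewall port, change the config, restart sshd, and only close your old session once a fresh login on 2213 succeeds.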
Web and Application Traffic
Opening specific ports for HTTP, HTTPS, and my application was necessary, but I became conscious that each open port represents a potential entry point. This made me more intentional about which services I exposed.
Unexpected Discoveries
While implementing these security measures, I discovered something concerning: my n8n workflow automation triggers (which I was also self-hosting) were publicly accessible! This was particularly alarming as I was also running a VPN on the same server.
This would never have happened with separate cloud services, where each platform handles its own security. But on a single VPS, all services potentially affect each other's security posture.
To address this, I had to:
- Block public access to n8n's port using UFW
- Configure n8n to only listen on my VPN's internal network interface
- Implement proper authentication for webhook endpoints
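In docker-compose terms, the second fix looked roughly like this. This is a sketch under assumptions: 10.8.0.1 stands in for your VPN interface's address, and 5678 for n8n's port; check your own setup before copying.

```yaml
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      # Publish the port on the VPN-internal address only (placeholder IP),
      # instead of the default 0.0.0.0, which exposes it to the internet.
      - "10.8.0.1:5678:5678"
```

If n8n runs outside Docker, its `N8N_LISTEN_ADDRESS` environment variable serves the same purpose, as I understand it; either way, a host-level `sudo ufw deny 5678/tcp` adds a second layer of defense.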
Cloud Convenience vs. VPS Control: My Takeaways
Making the switch from cloud services to self-hosting has taught me valuable lessons:
- Cloud platforms abstract complexity - Services like Vercel and Cloudflare handle security, scaling, and configuration automatically, which is convenient but limits control and understanding. And that is not necessarily a bad thing: if you just want your applications out there, go for it. But somewhere down the line you will have to learn Linux, DevOps, and architecture, so why not do it on your own VPS that costs $5 a month?
- Self-hosting demands security awareness - You become responsible for every aspect of security that was previously managed for you
- Every open port is a responsibility - In cloud services, you rarely think about ports; on a VPS, each one needs justification
- Custom configurations require deeper knowledge - Setting up services like custom SSH ports forces you to understand how things actually work
- Independence comes with accountability - There's no support team to call when things go wrong
The Road Ahead
My journey from cloud services to self-hosting continues to evolve. I'm now exploring:
- Setting up intrusion detection systems
- Regular security audits and monitoring
- Creating more sophisticated reverse proxy configurations
- Automated backup systems
- Self-hosting tons of random useful applications (including this very blog!)
While I miss some of the "set it and forget it" convenience of platforms like Vercel and Cloudflare, the control, knowledge, and independence I've gained through self-hosting have been worth the additional effort.
Have you made the jump from cloud services to self-hosting? What challenges and benefits have you discovered? Share your experiences in the comments below!