Setting Up a VPS from Scratch for Web Hosting

Dave Cilluffo
infrastructure · VPS · Ubuntu · web hosting · DevOps

A complete walkthrough from bare Ubuntu to serving production websites — no managed hosting, no hand-holding, just the real process.


We didn’t start with VPS. Like most people, we started with shared hosting because it was cheap and easy. NixiHost shared plans, cPanel, one-click WordPress installs — the whole package. And for a while, that was fine.

But once you’re managing more than a couple of sites, or you need to run anything beyond stock PHP, shared hosting becomes a straitjacket. You’re fighting resource limits. You’re sharing an IP with who-knows-what spam operation. You can’t install what you need. And the kicker: at scale, it’s actually more expensive than running your own server.

I now run 14+ websites off a single VPS — client sites, portfolio projects, ARG sites, static Astro builds — and the monthly cost is less than what I used to pay for three shared hosting accounts. This post is the walkthrough I wish I’d had when I made the jump.


Why VPS Over Shared Hosting

Let me be blunt: if you have one WordPress blog and you never want to think about servers, shared hosting is fine. Stay there. This post isn’t for you.

But if any of these apply, you should be on a VPS:

  • You’re running more than 2-3 websites
  • You need to deploy anything that isn’t PHP (static site generators, Node apps, Python services)
  • You want SSH access that actually lets you do things
  • You’re tired of “resource limit reached” emails
  • You want to control your own SSL, caching, and server config
  • You care about performance and don’t want to share a box with 200 other accounts

The real argument is economic. Shared hosting runs maybe $5-15/month per account, and each account has limits. A VPS capable of hosting a dozen static sites might cost $5-10/month total. The per-site cost drops to nearly nothing. You get root access, dedicated resources, and complete control over the software stack.

The trade-off is real, though: you are the sysadmin now. There’s no support ticket to file when Nginx won’t start at 2 AM. You need to be comfortable in a terminal, or willing to get comfortable.


Choosing a Provider

I use Hetzner. I’ll tell you why, and then I’ll briefly mention the alternatives.

Hetzner’s advantages:

  • Price/performance ratio — Hetzner’s entry-level VPS boxes punch well above their weight. You get more RAM, more CPU, and more bandwidth per dollar than almost any US-based provider.
  • European infrastructure — Servers in Germany and Finland, under EU/German data privacy regulations. If you or your clients care about data sovereignty, this matters.
  • Network quality — Excellent peering, fast connections globally. I’m in Pennsylvania serving sites to a mostly US audience and the latency is not a problem, especially with Cloudflare in front.
  • No nonsense — Clean interface, straightforward billing, no upsell circus.

The alternatives you’ll hear about:

  • DigitalOcean — Good product, well-documented, but more expensive for equivalent specs. Their tutorials are excellent and worth reading even if you host elsewhere.
  • Linode (now Akamai) — Solid, been around forever. Similar pricing to DO. Good if you want US-based datacenters.
  • Vultr — Competitive pricing, lots of datacenter locations. A reasonable alternative to Hetzner.

I’ve used several of these. Hetzner gives me the best value for what I need, which is hosting a bunch of mostly-static websites. Your mileage may vary if you need specific datacenter locations or compliance requirements.


Initial Server Setup

You’ve provisioned an Ubuntu 24.04 LTS server. You’ve got an IP address and a root password in your email. Here’s what happens next.

First Login

ssh root@your.server.ip

You’ll be prompted to accept the host fingerprint and enter the root password. First order of business: update everything.

apt update && apt upgrade -y

Create an Admin User

Don’t run everything as root. Create a regular user with sudo privileges:

adduser admin
usermod -aG sudo admin

Pick a strong password. You’ll switch to SSH keys shortly and can disable password auth after that.

SSH Key Authentication

On your local machine (not the server), generate a key pair if you don’t have one:

ssh-keygen -t ed25519 -C "your-email@example.com"

Copy the public key to your server:

ssh-copy-id admin@your.server.ip

Test that you can log in with the key:

ssh admin@your.server.ip

If that works, disable password authentication. Edit /etc/ssh/sshd_config:

sudo nano /etc/ssh/sshd_config

Find and set these values. On recent Ubuntu images, also check /etc/ssh/sshd_config.d/ for drop-in files (cloud-init often creates one) that can silently override them:

PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no

Then restart the SSH service (on Ubuntu the unit is named ssh, not sshd):

sudo systemctl restart ssh

Important: Do this in a separate terminal session while keeping your current session open. If you lock yourself out because of a typo, you’ll still have a way in.

Firewall with UFW

Ubuntu’s Uncomplicated Firewall lives up to its name:

sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'
sudo ufw enable

That opens ports 22 (SSH), 80 (HTTP), and 443 (HTTPS). Everything else is blocked by default. You can verify with:

sudo ufw status

Automatic Security Updates

Enable unattended upgrades for security patches:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

This keeps your system patched without you having to remember to log in and run apt upgrade every week.
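If you want to tune the behavior, the configuration lives in two apt files. As a sketch (these paths are the Ubuntu defaults; verify against your install), 20auto-upgrades controls the schedule and 50unattended-upgrades controls what gets upgraded and whether the server may reboot itself:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
// Set to "true" only if you accept automatic reboots after kernel updates
Unattended-Upgrade::Automatic-Reboot "false";
```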


Web Server: Nginx

I use Nginx. It’s fast, lightweight, and handles static files like a champion. Apache works fine too — it’s battle-tested and has a massive ecosystem — but for serving static sites, Nginx has less overhead and a configuration style I prefer.

Installation

sudo apt install nginx
sudo systemctl enable nginx
sudo systemctl start nginx

Hit your server’s IP in a browser. You should see the default Nginx welcome page.

Server Block Structure

Each site gets its own server block (Nginx’s version of Apache’s virtual hosts). Create a config file for your domain:

sudo nano /etc/nginx/sites-available/yourdomain.com

A basic static site config looks like this:

server {
    listen 80;
    listen [::]:80;
    server_name yourdomain.com www.yourdomain.com;
    root /var/www/yourdomain.com/public_html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # Optional: cache static assets
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
}

Enable the site and test the config:

sudo ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

SSL with Let’s Encrypt

There’s no excuse for not having SSL in 2025. Let’s Encrypt makes it free and Certbot makes it automatic.

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

Certbot will modify your Nginx config to add the SSL directives and set up a redirect from HTTP to HTTPS. It also sets up automatic renewal (on Ubuntu this runs via a systemd timer, certbot.timer, rather than a cron job). You can test the renewal process with:

sudo certbot renew --dry-run

That’s it. Free, automated SSL.


DNS with Cloudflare

I manage all my DNS through Cloudflare. Even if you don’t use their CDN/proxy features, the DNS management interface is clean and fast, and propagation is near-instant.

Pointing Your Domain

In Cloudflare’s DNS settings for your domain, create an A record:

Type   Name   Content          Proxy
A      @      your.server.ip   DNS only
A      www    your.server.ip   DNS only

Proxy vs DNS-Only Mode

Cloudflare offers a proxy mode (orange cloud) that routes traffic through their CDN. I run most of my sites in DNS-only mode (grey cloud) for a few reasons:

  • I handle SSL directly on the server with Let’s Encrypt
  • I don’t need the extra complexity of Cloudflare’s SSL modes
  • DNS-only gives me straightforward end-to-end encryption without worrying about Cloudflare’s “Flexible” vs “Full” vs “Full (Strict)” SSL settings
  • For mostly-static sites with low traffic, the CDN doesn’t add much

If you do use Cloudflare’s proxy, make sure your SSL mode is set to Full (Strict) — this ensures the connection between Cloudflare and your server is also encrypted with a valid certificate. “Flexible” mode means Cloudflare talks to your server over plain HTTP, which is… not great.


Deploying Your First Site

You’ve got a server, Nginx is running, DNS is pointed. Time to put a site on it.

Create the Directory

sudo mkdir -p /var/www/yourdomain.com/public_html
sudo chown -R admin:admin /var/www/yourdomain.com

Upload Your Files

From your local machine, use rsync to push your built site:

rsync -avz --delete ./dist/ admin@your.server.ip:/var/www/yourdomain.com/public_html/

The --delete flag removes files on the server that no longer exist locally, keeping your deployment clean. Note the trailing slash on ./dist/: it tells rsync to copy the directory’s contents rather than the directory itself. If you’re deploying an Astro site, this is the dist/ directory after running astro build.

For one-off uploads, scp works fine:

scp -r ./dist/* admin@your.server.ip:/var/www/yourdomain.com/public_html/

But rsync is smarter — it only transfers changed files, which makes subsequent deployments much faster.

Permissions

Make sure Nginx can read the files:

sudo chown -R admin:www-data /var/www/yourdomain.com/public_html
sudo chmod -R 755 /var/www/yourdomain.com/public_html

Reload Nginx if you’ve made config changes, and your site should be live.

Automating Deployment

Once you’re doing this regularly, you’ll want a simple deploy script. Nothing fancy — a bash script that builds your site locally and rsyncs it to the server:

#!/bin/bash
set -e  # abort on any failure so a broken build never overwrites the live site
npm run build
rsync -avz --delete ./dist/ admin@your.server.ip:/var/www/yourdomain.com/public_html/
echo "Deployed."

You could wire this up to GitHub Actions or a git hook, but honestly, for most small sites a manual ./deploy.sh works fine.


Control Panel or No Control Panel

This is where opinions diverge. Some developers insist on bare CLI for everything. Some can’t live without cPanel. I’m somewhere in the middle.

I use HestiaCP — an open-source control panel forked from VestaCP. Here’s my honest take:

Why I use a control panel:

  • Managing 14+ domains, each with their own Nginx config, SSL certs, and directory structure, gets tedious fast in pure CLI
  • HestiaCP handles Let’s Encrypt certificate issuance and renewal for every domain automatically
  • Adding a new site takes about 30 seconds through the panel vs. several minutes of manual config
  • Email management, if you need it, is dramatically easier with a panel
  • Backups can be configured and scheduled through the UI

Why you might skip it:

  • A control panel is another piece of software to maintain and keep updated
  • It makes assumptions about your directory structure and Nginx config
  • If you only have 2-3 sites, the overhead isn’t worth it
  • You learn more about your system by doing it manually

If you’re comfortable in the terminal and you only have a handful of sites, go bare. You’ll understand your system better. But if you’re managing a dozen or more domains, a panel like HestiaCP pays for itself in time saved within the first week.

I wrote a dedicated comparison of HestiaCP vs cPanel if you want the full breakdown. For now, just know it’s worth evaluating if you’re running more than a few sites.


What I Wish I’d Known

After migrating from shared hosting to VPS and running this setup for a while, here are the things I learned the hard way.

Backups Are Not Optional

Your VPS provider probably offers snapshots. Use them. But also set up your own backup process — rsync critical data to a separate location, or at minimum, keep local backups of your site files and any databases.

The day your server has a disk failure or you accidentally rm -rf the wrong directory is the day you’ll either restore from backup in 10 minutes or spend a week rebuilding from memory. Don’t be the second person.
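As a minimal sketch of that backup process (the paths and two-week retention are assumptions; adjust to your setup), a function like this makes a dated tarball of the web root and prunes old archives. For the off-server copy, rsync the archive directory to a second machine the same way you push deployments:

```shell
#!/bin/bash
# Minimal local-archive sketch. The example paths below are assumptions;
# point them at whatever holds your sites and wherever archives should live.

backup_sites() {
    local src="$1" dest="$2" stamp
    stamp=$(date +%F)
    mkdir -p "$dest"
    # Dated tarball of the whole web root
    tar -czf "$dest/sites-$stamp.tar.gz" -C "$(dirname "$src")" "$(basename "$src")"
    # Prune anything older than two weeks
    find "$dest" -name 'sites-*.tar.gz' -mtime +14 -delete
}

# Example: backup_sites /var/www /var/backups/sites
```

Run it nightly from cron or a systemd timer and you’ll always have something recent to restore from.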

Monitor Disk Space

This sounds obvious, but it sneaks up on you. Log files grow. Old backups accumulate. Package caches fill up. Make a habit of checking df -h when you log in. If you want to automate it, set up a cron job or a simple monitoring tool that pings you when disk usage crosses 80%. The specifics depend on your notification setup, but the point is: don’t find out you’re out of space when a deploy fails at 2 AM.
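As a sketch of that check (the 80% threshold and the alert line are assumptions; swap in mail, a webhook, or whatever already pings you), something like this dropped into cron does the job:

```shell
#!/bin/bash
# Hypothetical disk-usage check. Threshold and notification are placeholders.

# Print the use% (integer, no % sign) for a given mount point.
disk_usage_pct() {
    df -P "$1" | awk 'NR==2 { gsub("%", ""); print $5 }'
}

THRESHOLD=80
usage=$(disk_usage_pct /)

if [ "$usage" -ge "$THRESHOLD" ]; then
    # Replace this echo with your real alert (mail -s, curl to a webhook, etc.)
    echo "WARNING: / is at ${usage}% (threshold ${THRESHOLD}%)"
fi
```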

Keep a Runbook

Document everything you do to your server. Every package you install, every config change, every firewall rule. Keep it in a markdown file, a wiki, a notes app — I don’t care where, just keep it somewhere.

When you need to rebuild the server, or set up a second one, or debug something at midnight, you’ll thank yourself. “How did I configure that Nginx redirect again?” is a question you should never have to answer from memory.

The Learning Curve Is Real, But Worth It

I won’t sugarcoat it: the first time you set up a VPS, it takes hours. Things break and you don’t know why. Error messages are cryptic. You’ll Google more than you code.

The second time, it takes 30 minutes. You know the steps. You have your runbook. You’ve already solved the weird edge cases. And from that point on, you have a skill that pays dividends every time you deploy anything.

Don’t Over-Engineer It

It’s tempting to set up Docker, Kubernetes, CI/CD pipelines, monitoring stacks, and all the other infrastructure tooling that DevOps Twitter tells you is mandatory. For a handful of static sites? You don’t need any of that.

A VPS with Nginx, Let’s Encrypt, and rsync is a production-ready hosting stack. It’s boring, it’s reliable, and it works. Add complexity when you actually need it, not because a blog post (including this one) told you to.


Wrapping Up

Moving from shared hosting to a VPS was one of the best infrastructure decisions I’ve made. It’s cheaper at scale, faster, and gives me complete control over how my sites are served. The initial learning curve is real, but manageable for anyone who’s comfortable in a terminal.

The setup I described here — Ubuntu 24.04, Nginx, Let’s Encrypt, Cloudflare DNS, rsync deployments — is what runs every site in the TwelveTake Studios network. It’s not exotic. It’s not cutting-edge. It’s just solid, predictable hosting that I understand from top to bottom.

If you’re sitting on shared hosting wondering if it’s time to make the jump, it probably is. Start with one site, get comfortable, and migrate the rest once you trust the setup.

And if you mess something up? That’s what snapshots are for.


Some links in this post are affiliate links. I only recommend services I actually use.

If this was useful, consider buying us a coffee.