N8N Self-Hosted Setup Guide: Docker, Caddy, and Production Tips
If you've been paying $49/month or more for n8n Cloud and wondering whether you could just run it yourself: you absolutely can. I self-host n8n on the same VPS that runs this blog, and it costs me about $12/month total. But there are real pitfalls that can burn you if you don't set things up right from the start.
This guide walks you through a complete n8n self-hosted setup using Docker Compose, PostgreSQL, and Caddy for automatic SSL. I'll share the exact configs I use, the mistakes I made so you don't have to, and the production hardening steps that keep everything running at 2 AM without waking you up.
What Is n8n?
n8n (pronounced "nodemation") is an open-source workflow automation platform. Think Zapier or Make, but you own the code, the data, and the infrastructure. You build workflows visually by connecting nodes; each node represents an action like sending an email, querying a database, calling an API, or running custom JavaScript.
What makes n8n stand out from other automation tools:
- Fair-code license: source-available, self-hostable, and free for individual use
- 200+ integrations built in, plus the ability to create custom nodes
- Code when you need it: every node can include custom JavaScript or Python
- No per-execution fees: self-hosted means unlimited runs
- Full API access: trigger and manage workflows programmatically
For developers, n8n sits in a sweet spot between "I could write a script for this" and "I need a full orchestration platform." It handles the boring parts (retries, scheduling, credential management, error handling) while letting you drop into code whenever the visual builder isn't enough.
Why Self-Host n8n?
n8n Cloud works great, but self-hosting makes sense when:
Cost control. n8n Cloud starts at $24/month for 2,500 executions. If you're running automations at any real scale (say, syncing data between services, processing webhooks, or running scheduled tasks), you'll blow past that quickly. Self-hosting on a $10-15/month VPS gives you unlimited executions.
Data sovereignty. Your workflows often handle sensitive data: API keys, customer information, internal service credentials. Self-hosting means that data never leaves your infrastructure. For GDPR compliance or working with enterprise clients, this matters.
Network access. Self-hosted n8n can talk to internal services, databases behind firewalls, and other containers on the same Docker network. No need to expose internal APIs to the internet just so your automation tool can reach them.
Customization. You control the n8n version, the update schedule, the resource allocation, and the backup strategy. You can run custom nodes, pin specific versions, and integrate with your existing monitoring stack.
The tradeoff is real though: you're responsible for uptime, security, backups, and updates. If you're not comfortable with Linux, Docker, and basic sysadmin tasks, n8n Cloud is the better choice.
Setting Up n8n on Your VPS
Here's the full setup process. I'm assuming you have a VPS running Ubuntu 22.04 or later with SSH access and a domain name pointed at your server's IP.
Install Docker and Docker Compose
If you don't already have Docker installed:
# Update system packages
sudo apt update && sudo apt upgrade -y
# Install Docker using the official script
curl -fsSL https://get.docker.com | sh
# Add your user to the docker group (log out and back in after this)
sudo usermod -aG docker $USER
# Verify installation
docker --version
docker compose version
Docker Compose v2 comes bundled with modern Docker installations, so you shouldn't need to install it separately.
Create the Docker Compose Configuration
Create a dedicated directory for your n8n setup:
mkdir -p ~/n8n && cd ~/n8n
Here's the docker-compose.yml I use in production:
version: "3.8"
services:
n8n:
image: n8nio/n8n:1.76.1
container_name: n8n
restart: unless-stopped
ports:
# Bind to localhost only; Caddy terminates TLS and proxies to this port
- "127.0.0.1:5678:5678"
environment:
- N8N_HOST=n8n.yourdomain.com
- N8N_PORT=5678
- N8N_PROTOCOL=https
- WEBHOOK_URL=https://n8n.yourdomain.com/
- GENERIC_TIMEZONE=America/New_York
- TZ=America/New_York
# Database
- DB_TYPE=postgresdb
- DB_POSTGRESDB_HOST=n8n-db
- DB_POSTGRESDB_PORT=5432
- DB_POSTGRESDB_DATABASE=n8n
- DB_POSTGRESDB_USER=n8n
- DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
# Security - CRITICAL: set this once and never lose it
- N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
# Execution pruning
- EXECUTIONS_DATA_PRUNE=true
- EXECUTIONS_DATA_MAX_AGE=168
# Metrics
- N8N_METRICS=true
volumes:
- n8n_data:/home/node/.n8n
depends_on:
n8n-db:
condition: service_healthy
networks:
- n8n-network
n8n-db:
image: postgres:16-alpine
container_name: n8n-db
restart: unless-stopped
environment:
- POSTGRES_USER=n8n
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=n8n
volumes:
- n8n_db_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U n8n"]
interval: 10s
timeout: 5s
retries: 5
networks:
- n8n-network
volumes:
n8n_data:
n8n_db_data:
networks:
n8n-network:
driver: bridge
A few things to note about this config:
- Pinned image version (1.76.1): never use latest in production. You want updates to be intentional.
- PostgreSQL, not SQLite: SQLite is fine for testing, but PostgreSQL handles concurrent workflow executions properly and won't corrupt under load.
- Health check on Postgres: n8n won't start until the database is actually ready.
- Execution pruning: keeps only 7 days (168 hours) of execution data. Without this, your disk fills up fast.
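One detail that trips people up: EXECUTIONS_DATA_MAX_AGE is measured in hours, not days. If you want a different retention window, it's worth computing rather than guessing:

```shell
# EXECUTIONS_DATA_MAX_AGE is in hours; derive it from a day count
DAYS=7
echo "EXECUTIONS_DATA_MAX_AGE=$((DAYS * 24))"   # prints EXECUTIONS_DATA_MAX_AGE=168
```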
Create the Environment File
Create a .env file in the same directory:
# Generate a strong encryption key - save this somewhere safe!
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "N8N_ENCRYPTION_KEY=$N8N_ENCRYPTION_KEY" > .env
# Generate a strong database password
POSTGRES_PASSWORD=$(openssl rand -hex 24)
echo "POSTGRES_PASSWORD=$POSTGRES_PASSWORD" >> .env
# Print the values so you can save them
cat .env
Save these values in a password manager immediately. The encryption key is especially critical โ if you lose it, all your stored credentials become permanently unrecoverable.
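Since .env now holds live secrets, it's also worth tightening its file permissions so only your user can read it. A small hardening step, not required for n8n to run:

```shell
# Restrict the secrets file to the owner only (the guard just ensures the
# file exists if you haven't run the generation step above yet)
[ -f .env ] || touch .env
chmod 600 .env

# Confirm the mode: should print 600
stat -c '%a' .env
```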
Set Up Caddy as a Reverse Proxy
I use Caddy because it handles SSL certificates automatically with zero configuration. No certbot cron jobs, no renewal scripts, no expired certificate emergencies.
Install Caddy:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
Add this to your /etc/caddy/Caddyfile:
n8n.yourdomain.com {
reverse_proxy localhost:5678 {
flush_interval -1
}
}
The flush_interval -1 is important: it disables response buffering, which n8n needs for server-sent events (SSE) used in the workflow editor. Without it, the UI can feel laggy or miss real-time updates.
# Reload Caddy to pick up the new config
sudo systemctl reload caddy
Start n8n
cd ~/n8n
docker compose up -d
Check the logs to make sure everything started correctly:
docker compose logs -f n8n
You should see n8n initialize, connect to PostgreSQL, and start listening on port 5678. Open https://n8n.yourdomain.com in your browser, and you'll be greeted with the setup wizard to create your admin account.
Advanced Configuration and Production Hardening
Once the basic setup is running, there are several things you should configure for a production-grade deployment.
Essential Environment Variables
Beyond the basics in the docker-compose file, consider these additions:
# Keep user management enabled; new accounts can only be created via owner invite
N8N_USER_MANAGEMENT_DISABLED=false
# Set a default execution timeout to prevent runaway workflows (in seconds)
EXECUTIONS_TIMEOUT=300
# Enable queue mode for better performance (requires Redis)
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
# Binary data handling - store large files on disk instead of in the database
N8N_AVAILABLE_BINARY_DATA_MODES=filesystem
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
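Queue mode splits execution across dedicated worker processes, so it needs a Redis container and at least one worker alongside the main service. Here's a sketch of the extra services, assuming the same network, database, and .env values as the compose file above; the service names are illustrative, and you should check the n8n queue-mode docs for your version before relying on it:

```yaml
  redis:
    image: redis:7-alpine
    container_name: n8n-redis
    restart: unless-stopped
    networks:
      - n8n-network

  n8n-worker:
    image: n8nio/n8n:1.76.1
    container_name: n8n-worker
    restart: unless-stopped
    command: worker
    environment:
      # Must match the main n8n service, especially N8N_ENCRYPTION_KEY,
      # or workers cannot decrypt credentials
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    depends_on:
      - redis
      - n8n-db
    networks:
      - n8n-network
```

You can scale workers with docker compose up -d --scale n8n-worker=2 (dropping the fixed container_name first), which is the main point of queue mode.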
Automated Backups
Your n8n instance contains workflows, credentials, and execution history. Losing any of these is painful. Here's a simple backup script:
#!/bin/bash
# backup-n8n.sh
BACKUP_DIR="/backups/n8n"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p $BACKUP_DIR
# Dump PostgreSQL
docker exec n8n-db pg_dump -U n8n n8n | gzip > "$BACKUP_DIR/n8n_db_$TIMESTAMP.sql.gz"
# Backup n8n data volume
docker run --rm -v n8n_n8n_data:/data -v $BACKUP_DIR:/backup alpine \
tar czf "/backup/n8n_data_$TIMESTAMP.tar.gz" -C /data .
# Keep only last 7 days of backups
find $BACKUP_DIR -name "*.gz" -mtime +7 -delete
echo "Backup completed: $TIMESTAMP"
Add it to cron to run daily:
crontab -e
# Add this line:
0 3 * * * /root/backup-n8n.sh >> /var/log/n8n-backup.log 2>&1
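One line in that script is worth unpacking: find -mtime +7 matches files last modified more than seven full 24-hour periods ago. You can convince yourself in a scratch directory before trusting it with real backups (requires GNU touch for the -d flag):

```shell
# Demonstrate the retention rule without touching real backups
DEMO=$(mktemp -d)
touch -d "10 days ago" "$DEMO/old_backup.sql.gz"   # simulate a stale backup
touch "$DEMO/fresh_backup.sql.gz"                  # and a recent one

# Same expression the backup script uses
find "$DEMO" -name "*.gz" -mtime +7 -delete

ls "$DEMO"        # only fresh_backup.sql.gz should remain
rm -rf "$DEMO"
```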
Safe Updates
When it's time to update n8n, don't just pull latest. Here's the process I follow:
cd ~/n8n
# 1. Run a backup first
./backup-n8n.sh
# 2. Update the image version in docker-compose.yml
# Change n8nio/n8n:1.76.1 to the new version
# 3. Pull and restart
docker compose pull
docker compose up -d
# 4. Check logs for migration errors
docker compose logs -f n8n
Always check the n8n changelog before updating. Breaking changes happen, especially around credential encryption and database schema.
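A backup you've never restored is a hope, not a backup. Here's a hedged restore sketch matching the backup script above; the docker commands are commented out so you can review them before running anything against a live instance, and the path assumes the same BACKUP_DIR:

```shell
#!/bin/bash
# restore-n8n.sh - sketch; adjust paths and review before running for real
BACKUP_DIR=${BACKUP_DIR:-/backups/n8n}

# Pick the most recent database dump
LATEST=$(ls -t "$BACKUP_DIR"/n8n_db_*.sql.gz 2>/dev/null | head -1)
echo "Most recent dump: ${LATEST:-none found}"

# Stop n8n, load the dump, restart (uncomment to actually run).
# A plain pg_dump restores cleanly only into an empty schema, so
# reset it first if tables already exist:
# docker compose stop n8n
# docker exec n8n-db psql -U n8n -c 'DROP SCHEMA public CASCADE; CREATE SCHEMA public;' n8n
# gunzip -c "$LATEST" | docker exec -i n8n-db psql -U n8n n8n
# docker compose start n8n
```

Do a dry run against a throwaway Postgres container once, so the first time you restore isn't during an outage.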
Common Mistakes and How to Avoid Them
After running n8n in production for a while and helping others set it up, these are the mistakes I see most often:
Losing the encryption key. This is the number one disaster. If N8N_ENCRYPTION_KEY isn't explicitly set, n8n generates a random one on first startup. If you redeploy without that key, every stored credential becomes permanently unrecoverable. Set it explicitly, save it in a password manager, and test that you can recover it.
Using SQLite in production. SQLite works fine for a personal playground, but it doesn't handle concurrent writes well. When multiple workflows execute simultaneously, you can get database locks and corrupted data. Use PostgreSQL.
Not setting WEBHOOK_URL. If this doesn't match your public URL, webhooks will return internal Docker URLs that external services can't reach. Your triggers will silently fail, and you'll spend hours debugging.
Skipping execution pruning. n8n stores full execution data including input/output for every node. A busy instance can generate gigabytes of data per week. Without pruning, your disk fills up and n8n crashes. Set EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE to something reasonable.
Running as root. Create a dedicated user for your n8n deployment. Running Docker containers as root is a security risk, and if n8n is compromised, the attacker has full server access.
Not pinning image versions. Using n8nio/n8n:latest means any docker compose pull could introduce breaking changes. Pin to a specific version and update intentionally after reading the changelog.
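For the WEBHOOK_URL mistake specifically, a grep-based sanity check can catch a mismatch before an external service does. This sketch generates a sample file for illustration; point the grep commands at your real docker-compose.yml instead:

```shell
# Sample lines for illustration; use your real docker-compose.yml
cat > sample-compose.yml <<'EOF'
      - N8N_HOST=n8n.yourdomain.com
      - WEBHOOK_URL=https://n8n.yourdomain.com/
EOF

# Pull out both values and check that the webhook URL mentions the host
HOST=$(grep -o 'N8N_HOST=.*' sample-compose.yml | cut -d= -f2)
URL=$(grep -o 'WEBHOOK_URL=.*' sample-compose.yml | cut -d= -f2)

case "$URL" in
  *"$HOST"*) echo "OK: WEBHOOK_URL contains N8N_HOST ($HOST)" ;;
  *)         echo "MISMATCH: WEBHOOK_URL=$URL does not mention $HOST" ;;
esac
```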
FAQ
How much does it cost to self-host n8n?
A basic VPS from providers like Hetzner, DigitalOcean, or Linode costs $10-20/month for 2-4 GB RAM and 2 CPU cores. That's enough to run n8n, PostgreSQL, and your reverse proxy comfortably. Compare that to n8n Cloud at $24-60/month with execution limits. Self-hosting breaks even at roughly 20,000 executions per month; above that, the savings compound fast.
Can I run n8n alongside other services on the same VPS?
Yes, and this is one of the biggest advantages. I run n8n on the same server as this blog, a few other web apps, and their databases. Docker keeps everything isolated. Just make sure you have enough RAM: n8n itself needs about 512MB-1GB, plus whatever PostgreSQL uses. Budget at least 2GB total for a comfortable setup with room for other services.
How do I migrate from n8n Cloud to self-hosted?
Export your workflows from n8n Cloud (Settings > Export All Workflows). Set up your self-hosted instance, then import them. Credentials won't transfer; they're encrypted with n8n Cloud's key, so you'll need to re-enter API keys, OAuth tokens, and passwords manually. It's a one-time pain, but it's straightforward. Make a checklist of every credential beforehand so you don't miss any.
Conclusion
Self-hosting n8n is one of the best decisions I've made for my automation setup. The initial setup takes about 30 minutes with Docker Compose and Caddy, and after that, it mostly runs itself. The key is getting the foundation right: PostgreSQL for the database, pinned image versions, a proper encryption key you won't lose, execution pruning, and automated backups.
The cost savings are real (I went from $49/month on n8n Cloud to roughly $12/month on a shared VPS), but the bigger win is control. I can connect n8n to internal services, run unlimited executions, and customize everything to fit my workflow.
Start with the docker-compose setup in this guide, get your first few workflows running, and expand from there. If you hit issues, the n8n community forum is genuinely helpful, and the official docs cover edge cases well.