SOP-22: Self-Hosted n8n Deployment & Migration Protocol
The AutomationSurgeon's Production Infrastructure Guide
| Version | 1.2 |
| Owner | Founder |
| Purpose | This document provides a production-ready guide for deploying a secure, self-hosted n8n instance on an Azure VM and migrating from n8n Cloud. This infrastructure supports both client-hosted and managed platform service offerings. |
1. Protocol Overview
This SOP details the end-to-end procedure for establishing a secure, scalable, and self-hosted n8n instance. Self-hosting provides greater control, data privacy, and cost-effectiveness at scale. This guide uses a production-ready Docker Compose stack with best practices for security and maintenance.
Technology Stack:
- Infrastructure: Azure Ubuntu VM (22.04/24.04 LTS) with at least 4 GB RAM
- Containerization: Docker Engine & Docker Compose
- Application: n8n (latest version)
- Database: PostgreSQL for durable, production-grade data storage
- Reverse Proxy: Caddy for automatic HTTPS via Let's Encrypt
- Email Relay: SendGrid for reliable transactional email
2. Prerequisites
Before proceeding, ensure the following are in place:
- A registered domain or subdomain (e.g., n8n.automationsurgeon.com)
- An Azure Ubuntu 22.04/24.04 LTS virtual machine (recommended size: Standard_B2s with 4 GB RAM)
- Network Security Group rules on Azure allowing inbound traffic on ports 80 (HTTP), 443 (HTTPS), and 22 (SSH)
- A DNS A record pointing your chosen domain/subdomain to the public IP address of your Azure VM (a quick verification sketch follows this list)
- A SendGrid account with a generated API key
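Before moving on, it is worth confirming the DNS record and firewall rules from the command line. The following is a minimal sketch: the resource group and NSG names passed to the Azure CLI are placeholders you must replace, and the az command assumes the Azure CLI is installed and logged in (dig comes from the dnsutils package if it is missing).
# Confirm the DNS A record resolves to the VM's public IP
dig +short n8n.automationsurgeon.com
# (Optional) open HTTP/HTTPS via the Azure CLI instead of the portal
# <your-rg> and <your-nsg> are placeholders for your resource group and NSG names
az network nsg rule create --resource-group <your-rg> --nsg-name <your-nsg> \
  --name Allow-HTTP-HTTPS --priority 1000 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 80 443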
3. Step 1: Install Docker & Docker Compose
Connect to your Azure VM via SSH and install the Docker Engine and Compose plugin. This provides the containerization runtime for our application stack.
# Update package list and install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Set up the Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# Install Docker Engine, CLI, Containerd, and Compose plugin
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Verify installation
docker --version
docker compose version
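The remaining steps run docker compose without sudo. If you want that to work for your login user, add it to the docker group (optional; prefixing every command with sudo also works):
# Allow the current user to run Docker without sudo
sudo usermod -aG docker $USER
# Log out and back in (or start a new shell with `newgrp docker`) for the group change to apply
# Optional smoke test of the Docker installation
docker run --rm hello-world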
4. Step 2: Prepare Directory Structure
Create a dedicated directory structure to hold your configuration files and persistent data volumes.
# Create the main directory and subdirectories for persistent data
sudo mkdir -p /opt/n8n/{data,postgres,caddy}
# Take ownership of the directory to avoid permission issues
sudo chown -R $USER:$(id -gn) /opt/n8n
# Navigate into the new directory
cd /opt/n8n
5. Step 3: Create Environment File (.env)
Create a .env file to store all your secrets and configuration variables. Remember to replace all placeholder values.
# Create the .env file in /opt/n8n
cat > /opt/n8n/.env << 'EOF'
# ----- n8n Core Settings -----
N8N_HOST=n8n.automationsurgeon.com
N8N_PORT=5678
N8N_PROTOCOL=https
N8N_EDITOR_BASE_URL=https://n8n.automationsurgeon.com
WEBHOOK_URL=https://n8n.automationsurgeon.com
# ----- Security Settings -----
N8N_SECURE_COOKIE=true
N8N_SAMESITE_COOKIE=lax
# Generate a strong key with: openssl rand -hex 32
N8N_ENCRYPTION_KEY=REPLACE_WITH_YOUR_32_BYTE_HEX_KEY
# ----- Timezone -----
# Find your timezone from: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=America/New_York
# ----- PostgreSQL Database Settings -----
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_SCHEMA=public
DB_POSTGRESDB_PASSWORD=REPLACE_WITH_A_STRONG_DATABASE_PASSWORD
# ----- Email Settings (SendGrid) -----
N8N_EMAIL_MODE=smtp
N8N_SMTP_HOST=smtp.sendgrid.net
N8N_SMTP_PORT=587
N8N_SMTP_USER=apikey
N8N_SMTP_PASS=REPLACE_WITH_YOUR_SENDGRID_API_KEY
N8N_SMTP_SENDER=n8n@automationsurgeon.com
N8N_SMTP_SSL=false
EOF
CRITICAL: The N8N_ENCRYPTION_KEY is essential for securing your credentials. If you lose this key, you will lose access to all encrypted credentials.
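A minimal sketch for filling in the two placeholders and locking down the file, assuming you are happy to generate the database password with openssl as well. Record the printed encryption key somewhere outside the VM (e.g. your password manager) before continuing.
# Generate the encryption key and a database password, then inject them into .env
ENC_KEY=$(openssl rand -hex 32)
DB_PASS=$(openssl rand -base64 24)
sed -i "s|REPLACE_WITH_YOUR_32_BYTE_HEX_KEY|$ENC_KEY|" /opt/n8n/.env
sed -i "s|REPLACE_WITH_A_STRONG_DATABASE_PASSWORD|$DB_PASS|" /opt/n8n/.env
# Restrict the file to your user and print the key so you can store a copy off-server
chmod 600 /opt/n8n/.env
echo "$ENC_KEY"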
6. Step 4: Create Caddyfile for HTTPS
This file configures Caddy to act as a reverse proxy, automatically handling SSL/TLS certificate issuance and renewal from Let's Encrypt.
# Create the Caddyfile in /opt/n8n
cat > /opt/n8n/Caddyfile << 'EOF'
{
# Email for Let's Encrypt notifications
email admin@automationsurgeon.com
}
# Your public-facing domain
n8n.automationsurgeon.com {
# Enable gzip compression for better performance
encode gzip
# Proxy requests to the n8n container
reverse_proxy n8n:5678
}
EOF
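Optionally, validate the Caddyfile syntax before starting the stack. This sketch runs the same Caddy image the stack uses:
# Validate the Caddyfile without starting a server
docker run --rm -v /opt/n8n/Caddyfile:/etc/caddy/Caddyfile:ro caddy:2-alpine \
  caddy validate --config /etc/caddy/Caddyfile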
7. Step 5: Create docker-compose.yml
This file defines the three services (Postgres, n8n, Caddy), their networking, and their data volumes.
# /opt/n8n/docker-compose.yml
# (the obsolete top-level "version" key is omitted; Docker Compose v2 no longer requires it)
services:
postgres:
image: postgres:15-alpine
restart: unless-stopped
environment:
POSTGRES_DB: ${DB_POSTGRESDB_DATABASE}
POSTGRES_USER: ${DB_POSTGRESDB_USER}
POSTGRES_PASSWORD: ${DB_POSTGRESDB_PASSWORD}
volumes:
- ./postgres:/var/lib/postgresql/data
networks:
- internal
n8n:
image: n8nio/n8n:latest
restart: unless-stopped
depends_on:
- postgres
environment:
- N8N_HOST=${N8N_HOST}
- N8N_PORT=${N8N_PORT}
- N8N_PROTOCOL=${N8N_PROTOCOL}
- N8N_EDITOR_BASE_URL=${N8N_EDITOR_BASE_URL}
- WEBHOOK_URL=${WEBHOOK_URL}
- N8N_SECURE_COOKIE=${N8N_SECURE_COOKIE}
- N8N_SAMESITE_COOKIE=${N8N_SAMESITE_COOKIE}
- N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
- DB_TYPE=${DB_TYPE}
- DB_POSTGRESDB_HOST=${DB_POSTGRESDB_HOST}
- DB_POSTGRESDB_PORT=${DB_POSTGRESDB_PORT}
- DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE}
- DB_POSTGRESDB_USER=${DB_POSTGRESDB_USER}
- DB_POSTGRESDB_PASSWORD=${DB_POSTGRESDB_PASSWORD}
- DB_POSTGRESDB_SCHEMA=${DB_POSTGRESDB_SCHEMA}
- N8N_EMAIL_MODE=${N8N_EMAIL_MODE}
- N8N_SMTP_HOST=${N8N_SMTP_HOST}
- N8N_SMTP_PORT=${N8N_SMTP_PORT}
- N8N_SMTP_USER=${N8N_SMTP_USER}
- N8N_SMTP_PASS=${N8N_SMTP_PASS}
- N8N_SMTP_SENDER=${N8N_SMTP_SENDER}
- N8N_SMTP_SSL=${N8N_SMTP_SSL}
- TZ=${TZ}
volumes:
- ./data:/home/node/.n8n
networks:
- internal
- web
caddy:
image: caddy:2-alpine
restart: unless-stopped
ports:
- "80:80"
- "443:443"
environment:
- TZ=${TZ}
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
depends_on:
- n8n
networks:
- web
- internal
volumes:
caddy_data:
caddy_config:
networks:
  web:
  # Not routable outside Docker; only Caddy publishes ports on the host
  internal:
    internal: true
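Before launching, you can ask Compose to render the fully interpolated configuration; this catches YAML mistakes and missing .env variables early:
cd /opt/n8n
# Prints the resolved configuration, or an error if interpolation or YAML parsing fails
docker compose config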
8. Step 6: Launch the n8n Stack
With all configuration files in place, pull the latest Docker images and start the services.
cd /opt/n8n
# Pull the latest versions of the container images
docker compose pull
# Start the services in the background
docker compose up -d
# Check the status of the running containers
docker compose ps
Visit https://n8n.automationsurgeon.com. On the first launch, n8n will prompt you to create the Instance Owner (admin) account.
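If the page does not load, inspect the container logs and, optionally, probe n8n's health endpoint. Recent n8n versions expose /healthz; treat that path as an assumption if you run an older release.
# Follow the n8n and Caddy logs
docker compose logs -f n8n caddy
# Expect HTTP 200 once the stack is healthy
curl -s -o /dev/null -w "%{http_code}\n" https://n8n.automationsurgeon.com/healthz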
9. Step 7: Migrate from n8n Cloud
Migration is a two-part process: exporting workflows and recreating credentials.
9.1. Export Workflows from n8n Cloud
In your n8n Cloud account, open each workflow, click the "..." menu, and select "Download" to save it as a JSON file.
9.2. Import Workflows to Self-Hosted Instance
Transfer the downloaded JSON files to your Azure VM (e.g., using scp).
# Create a directory for the workflow files inside the mounted n8n data volume
mkdir -p /opt/n8n/data/import
# (Copy your JSON files into this directory, e.g. with scp)
# Run the import command inside the n8n container; the host path /opt/n8n/data
# is mounted at /home/node/.n8n, so the files are visible to n8n
docker compose exec n8n n8n import:workflow --separate --input=/home/node/.n8n/import
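To confirm the workflows landed, you can list them from the CLI (the list:workflow command is available in current n8n releases; treat it as an assumption on older versions):
# List imported workflows with their IDs
docker compose exec n8n n8n list:workflow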
9.3. Recreate and Re-map Credentials
Credential secrets cannot be exported from n8n Cloud, so each credential must be recreated in the self-hosted instance via the "Credentials" -> "New" section. After creating them, open each imported workflow and re-select the appropriate new credential in every node that requires one.
10. Step 8: Ongoing Operations
Establish a regular schedule for backing up your data and updating the application.
10.1. Backup Procedure
# Create a hot backup of the PostgreSQL database
# (single quotes make the variables expand inside the container, where
# POSTGRES_USER and POSTGRES_DB are set by docker-compose.yml)
docker compose exec -T postgres \
  sh -c 'pg_dump -U "$POSTGRES_USER" -d "$POSTGRES_DB" -F c' \
  > ~/n8n_pg_$(date +%F).dump
# Archive the n8n data directory (contains encryption key)
tar czf ~/n8n_data_$(date +%F).tgz -C /opt/n8n data
It is highly recommended to automate this backup script with a cron job and transfer the backup files to a secure, off-site location like Azure Blob Storage.
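A minimal automation sketch, assuming azcopy is installed on the VM and that the storage account, container, and SAS token in the upload URLs are placeholders you replace with your own:
# Wrap the backup commands in a script
cat > /opt/n8n/backup.sh << 'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd /opt/n8n
STAMP=$(date +%F)
docker compose exec -T postgres sh -c 'pg_dump -U "$POSTGRES_USER" -d "$POSTGRES_DB" -F c' > "$HOME/n8n_pg_$STAMP.dump"
tar czf "$HOME/n8n_data_$STAMP.tgz" -C /opt/n8n data
# Upload to Azure Blob Storage (placeholder account, container, and SAS token)
azcopy copy "$HOME/n8n_pg_$STAMP.dump" "https://<storage-account>.blob.core.windows.net/<container>?<sas-token>"
azcopy copy "$HOME/n8n_data_$STAMP.tgz" "https://<storage-account>.blob.core.windows.net/<container>?<sas-token>"
EOF
chmod +x /opt/n8n/backup.sh
# Run the backup every night at 02:00
(crontab -l 2>/dev/null; echo "0 2 * * * /opt/n8n/backup.sh") | crontab -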
10.2. Update Procedure
cd /opt/n8n
# Pull the latest container images
docker compose pull
# Restart the services with the new images
docker compose up -d
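Optionally confirm the version change and reclaim disk space afterwards:
# Print the running n8n version (compare before and after the update)
docker compose exec n8n n8n --version
# Remove superseded images once the new stack is confirmed healthy
docker image prune -f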
11. Security Best Practices
11.1. Infrastructure Security
- Access Control: Implement SSH key authentication and disable password authentication (see the hardening sketch after this list)
- Firewall Rules: Configure Azure NSG rules to allow only necessary ports
- Regular Updates: Keep all components updated with security patches
- Monitoring: Set up comprehensive logging and monitoring
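A minimal hardening sketch for the first two items. It assumes your SSH key login already works (test it in a second session before disabling passwords) and notes that Azure images may override sshd settings via drop-in files under /etc/ssh/sshd_config.d/.
# Disable SSH password authentication
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
# Check for drop-in overrides (e.g. from cloud-init) before restarting
grep -R "PasswordAuthentication" /etc/ssh/sshd_config.d/ || true
sudo systemctl restart ssh
# Enable unattended security updates
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades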
11.2. Data Protection
- Encryption: Traffic is encrypted in transit by Caddy's TLS termination; data at rest relies on Azure managed disk server-side encryption (enabled by default) plus n8n's credential encryption via N8N_ENCRYPTION_KEY
- Backup Security: Secure backup storage with access controls (see the encryption example after this list)
- Credential Management: Secure storage and rotation of all API keys and passwords
- Audit Logging: Maintain comprehensive logs of all administrative actions
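One simple option for backup security is to encrypt archives before they leave the VM. This sketch uses symmetric GPG encryption (gpg is typically preinstalled on Ubuntu) with a passphrase you manage separately:
# Produces n8n_data_<date>.tgz.gpg; store the passphrase in your password manager
gpg --symmetric --cipher-algo AES256 ~/n8n_data_$(date +%F).tgz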
12. Success Metrics & Monitoring
12.1. Infrastructure Performance
- Uptime: Target 99.9% availability
- Response Time: Sub-second response times for webhook processing
- Resource Utilization: Monitor CPU, memory, and disk usage
- Database Performance: Track query performance and connection pools
12.2. Security Metrics
- Vulnerability Scans: Regular security assessments
- Access Logs: Monitor for unauthorized access attempts
- Certificate Status: Ensure SSL certificates remain valid (a quick expiry check follows this list)
- Backup Success Rate: 100% backup completion rate
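Caddy renews certificates automatically, but a periodic external check catches renewal failures early. A minimal sketch using openssl:
# Print the expiry date of the certificate currently served
echo | openssl s_client -connect n8n.automationsurgeon.com:443 -servername n8n.automationsurgeon.com 2>/dev/null \
  | openssl x509 -noout -enddate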