# Bootstrap Node Setup
Bootstrap nodes run the full stack: `xe-node`, Caddy, explorer, wallet, and docs. They serve as the network's public-facing infrastructure with web interfaces and TLS. This guide covers setting up a new bootstrap node; for provider-only nodes (bare metal, no web interface), see Provider Node Setup.

Each node runs Ubuntu 24.04 with Caddy, Node.js, pm2, QEMU, and Lima installed. The `setup-node.sh` script automates the installation on a fresh host.
## Prerequisites
- Ubuntu 24.04 LTS (or compatible Debian-based system)
- Root SSH access
- Ports 80, 443, and 9000 open in the firewall
## setup-node.sh
The setup script (`explorer/deploy/setup-node.sh`) installs all required software:
```bash
#!/bin/bash
set -euo pipefail

# Caddy (reverse proxy + TLS)
apt-get install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | \
  gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | \
  tee /etc/apt/sources.list.d/caddy-stable.list
apt-get update && apt-get install -y caddy
systemctl disable caddy  # pm2 manages it
systemctl stop caddy

# Node.js 22 + pm2
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt-get install -y nodejs
npm install -g pm2
pm2 startup  # auto-start on boot

# QEMU + Lima (for compute leasing)
apt-get install -y qemu-system-x86 qemu-utils
LIMA_VERSION="1.0.6"
curl -fsSL "https://github.com/lima-vm/lima/releases/download/v${LIMA_VERSION}/lima-${LIMA_VERSION}-Linux-x86_64.tar.gz" | \
  tar xz -C /usr/local

# Firewall
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 9000/tcp

# Service user (non-root, required for Lima)
id xe 2>/dev/null || useradd -r -m -d /home/xe -s /bin/bash xe
usermod -aG kvm xe  # KVM access for VM provisioning

# KVM permissions (pm2 uid/gid drops supplementary groups)
echo 'KERNEL=="kvm", MODE="0666"' > /etc/udev/rules.d/99-kvm.rules
chmod 666 /dev/kvm 2>/dev/null || true

# Directories
mkdir -p /opt/xe/{deploy,web/explorer,web/wallet,web/docs}
mkdir -p /etc/caddy
mkdir -p /var/lib/xe-node
chown -R xe:xe /var/lib/xe-node
```
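After the script finishes, a quick sanity check confirms each installed tool is on `PATH`. This is an illustrative snippet, not part of `setup-node.sh`:

```shell
# Post-install sanity check: every tool setup-node.sh installs should resolve.
for cmd in caddy node pm2 qemu-system-x86_64 limactl; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok: $cmd"
  else
    echo "MISSING: $cmd"
  fi
done
```

Any `MISSING` line means the corresponding install step failed and should be re-run before continuing.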
## Directory layout
After setup, the host has the following structure:
```
/opt/xe/
├── deploy/
│   ├── .env                     # DOMAIN, CORE_DOMAIN, NODE_FLAGS
│   └── ecosystem.config.js      # pm2 process config
└── web/
    ├── explorer/                # Explorer SPA (rsync'd by CI)
    ├── wallet/                  # Wallet SPA (rsync'd by CI)
    └── docs/                    # MkDocs site (rsync'd by CI)
/etc/caddy/
└── Caddyfile                    # Caddy routing config
/usr/local/bin/
└── xe-node                      # Core node binary (scp'd by CI)
/var/lib/xe-node/
├── badger/                      # BadgerDB database
├── host.key                     # libp2p ed25519 identity
├── node.key                     # Node account ed25519 key
├── ssh_host_key                 # SSH gateway host key
├── images/                      # VM base images (auto-downloaded)
│   └── ubuntu-24.04-x86_64.img  # Ubuntu cloud image (~600 MB)
├── lima/                        # Lima VM state (LIMA_HOME)
│   └── xe-<hash>/               # Per-VM directory
└── lima-templates/              # Lima YAML templates
    └── xe-<hash>.yaml
/home/xe/                        # Service user home
```
## Per-node .env
Each node has a `.env` file at `/opt/xe/deploy/.env`. This file is created manually per node and is not managed by CI (rsync excludes it).
Standard node (no compute provider):
```
DOMAIN=ldn.test.network
CORE_DOMAIN=ldn.core.test.network
NODE_FLAGS=-port 9000 -api-port 8080 -api-bind 0.0.0.0 -data /var/lib/xe-node -dial /ip4/<peer1-ip>/tcp/9000/p2p/<peer1-id>,/ip4/<peer2-ip>/tcp/9000/p2p/<peer2-id>
```
Provider node (requires bare metal or KVM-enabled host):
```
DOMAIN=bm1.test.network
CORE_DOMAIN=bm1.core.test.network
NODE_FLAGS=-port 9000 -api-port 8080 -api-bind 0.0.0.0 -data /var/lib/xe-node -provide -vcpus 2 -memory 2048 -disk 20 -dial /ip4/<peer1-ip>/tcp/9000/p2p/<peer1-id>,/ip4/<peer2-ip>/tcp/9000/p2p/<peer2-id>
```
!!! warning "Provider mode requires KVM"

    The `-provide` flag enables Lima VM provisioning, which requires `/dev/kvm` on the host. Standard cloud VPS instances do not expose hardware virtualization extensions. Only add `-provide` on bare-metal servers or a VPS with nested virtualization enabled. Without KVM, lease acceptance will succeed on-chain but VM provisioning will fail.
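A quick pre-flight check helps decide whether `-provide` is safe on a given host. This is a sketch: the presence of `/dev/kvm` plus `vmx`/`svm` CPU flags are the usual indicators, and they match what Lima needs here:

```shell
# Decide whether -provide is safe on this host.
if [ -e /dev/kvm ] && grep -Eq 'vmx|svm' /proc/cpuinfo; then
  echo "KVM available: -provide should work"
else
  echo "no KVM: omit -provide (VM provisioning would fail)"
fi
```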
| Variable | Description |
|---|---|
| `DOMAIN` | Primary domain for Caddy TLS (e.g., `ldn.test.network`) |
| `CORE_DOMAIN` | Direct API domain (e.g., `ldn.core.test.network`) |
| `NODE_FLAGS` | All xe-node CLI flags as a single string |
Caddy reads `DOMAIN` and `CORE_DOMAIN` via `--envfile`. The `ecosystem.config.js` reads `NODE_FLAGS` and passes it to `xe-node` as its arguments.
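The deployed Caddyfile is managed by CI and not reproduced here; as an illustration of how these variables are consumed, a minimal Caddyfile using Caddy's `{$VAR}` environment placeholders might look like this (routes are a sketch; only the explorer root and API proxy are shown, with paths taken from the directory layout above):

```
{$DOMAIN} {
    root * /opt/xe/web/explorer
    file_server
}

{$CORE_DOMAIN} {
    reverse_proxy localhost:8080
}
```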
## Initial deployment
After running `setup-node.sh`:
1. Create `.env` at `/opt/xe/deploy/.env` with the node's domain and peer dial addresses
2. Copy the Caddyfile to `/etc/caddy/Caddyfile`
3. Copy `ecosystem.config.js` to `/opt/xe/deploy/`
4. Deploy the `xe-node` binary to `/usr/local/bin/xe-node` and set capabilities
5. Deploy the static sites to `/opt/xe/web/explorer/`, `/opt/xe/web/wallet/`, and `/opt/xe/web/docs/`
6. Start the services with pm2
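The exact startup commands are not shown here; a plausible sequence, assuming `ecosystem.config.js` defines both the Caddy and `xe-node` processes, is:

```shell
# Sketch: start everything under pm2 and persist the process list.
cd /opt/xe/deploy
pm2 start ecosystem.config.js
pm2 save  # record the process list so `pm2 startup` restores it on boot
```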
## Updating after deployment
CI handles all updates automatically. On push to master:
- **Core node**: CI builds the binary, SCPs it to each host, and restarts the process with `pm2 stop`/`pm2 start`. `setcap` is reapplied after SCP since file capabilities are lost when the binary is replaced.
- **Explorer/Wallet/Docs**: CI builds the static files and rsyncs them. No restart needed: Caddy serves them directly from disk.
- **Config changes**: The explorer CI rsyncs `deploy/` files (Caddyfile, ecosystem.config.js) and restarts Caddy if the config changed.
## Port summary
| Port | Protocol | Service | Firewall |
|---|---|---|---|
| 9000 | TCP | libp2p (peer-to-peer networking) | Open (`ufw allow`) |
| 8080 | TCP | HTTP API (xe-node, localhost only via Caddy) | Not exposed directly |
| 80 | TCP | Caddy (HTTP → HTTPS redirect, ACME challenges) | Open (`ufw allow`) |
| 443 | TCP | Caddy (HTTPS, TLS termination) | Open (`ufw allow`) |
| 2222 | TCP | SSH gateway (optional, if `-ssh-port` set) | Open if used |
!!! note "Firewall"

    Unlike Docker, which bypasses UFW via iptables `DOCKER` chains, native services require explicit firewall rules. The setup script adds `ufw allow` rules for ports 80, 443, and 9000. If the SSH gateway is enabled, its port must also be opened.
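For example, if the SSH gateway is enabled on port 2222 (the port is whatever `-ssh-port` is set to):

```shell
# Open the SSH gateway port; 2222 matches the port summary above.
ufw allow 2222/tcp
```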
## Peer identity
The node's peer ID is derived from the ed25519 private key at `{dataDir}/host.key`. If this file is deleted (e.g., during a data wipe), the node generates a new identity. The `-dial` flags in `NODE_FLAGS` must then be updated on all other nodes to reference the new peer ID.
To get a node's current peer ID:
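The exact command depends on what `xe-node` prints and exposes; two hedged options (the log format and the API route are assumptions, not documented here):

```shell
# 1. Grep the startup logs, assuming the peer ID is logged at startup:
pm2 logs xe-node --lines 200 --nostream | grep -i 'peer'
# 2. Query the local HTTP API (hypothetical endpoint; check the API docs):
curl -s http://localhost:8080/info
```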
## See also
- Deployment Overview -- architecture and CI/CD pipeline
- Configuration -- all node flags and environment variables
- Provider Node Setup -- bare-metal provider setup (no web interface)